Causation

Causation vs. Correlation: The Coffee and Baby Sleep Saga

Imagine you’re on your 24-hour call shift and you notice a strange trend: the nights you chug coffee after dinner, the clinic’s tiniest patients—babies—seem to cry more. Hmm, you think, “Does my coffee cause baby insomnia?”

Before you start banning coffee on call, let’s unpack two key concepts:

1️⃣ Correlation: This is when two things happen together. In this case, your coffee habit and baby tears seem to rise in tandem. Coincidence? Maybe. Correlation is like spotting two family members wearing plaid shirts at the same time—they’re connected somehow, but you can’t be sure it’s because they planned it.

2️⃣ Causation: This is when one thing directly causes another. If your coffee somehow made its way into those babies’ milk bottles, then you’d have causation.

Here’s the twist: just because two things are correlated doesn’t mean one causes the other. Maybe babies are just naturally fussier on stormy nights, and you drink coffee to power through those wild weather calls. The real culprit could be thunderclouds, not caffeine.

Key Takeaway: Don’t jump to conclusions! Always dig deeper, just like when a patient’s mysterious rash pops up. Is it an allergy (causation) or coincidence?
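The storm example above can be made concrete with a toy simulation: a hidden common cause (a stormy night) drives both coffee drinking and baby crying, so the two end up correlated even though neither causes the other. All the variable names and probabilities below are illustrative assumptions, not real data.

```python
# Toy simulation of a confounder: "stormy" raises the chance of both
# coffee drinking and baby crying, with no causal arrow between the two.
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

coffee, crying = [], []
for _ in range(10_000):
    stormy = random.random() < 0.3  # the hidden common cause
    # Coffee and crying each depend on the storm, never on each other.
    coffee.append(1 if (stormy and random.random() < 0.8) or random.random() < 0.2 else 0)
    crying.append(1 if (stormy and random.random() < 0.7) or random.random() < 0.1 else 0)

r = pearson(coffee, crying)
print(f"correlation between coffee and crying: {r:.2f}")  # clearly positive
```

The correlation comes out solidly positive even though, by construction, coffee never touches the babies: conditioning on the storm (the confounder) would make it vanish.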

Nonadherence

Background: In Canada, many patients face substantial out-of-pocket costs for prescription medication, which may affect their ability to take their medications as prescribed. We sought to conduct a comprehensive analysis of the burden and predictors of cost-related nonadherence in Canada.

Methods: Using pooled data from the 2015, 2016, 2018, 2019, and 2020 iterations of the Canadian Community Health Survey, we calculated weighted population estimates of the burden of cost-related nonadherence in the preceding 12 months and used logistic regression models to measure the association of 15 demographic, health, and health system predictors of cost-related nonadherence overall and stratified by sex.

Results: We included 223 085 respondents. We found that 4.9% of respondents aged 12 years or older reported cost-related nonadherence. Those who self-identified as female, as belonging to a racial or ethnic minority group, or as bisexual, pansexual, or questioning were more likely to report cost-related nonadherence. Younger age, higher disease burden, poorer health, non-employer prescription drug coverage, and not living in the province of Quebec were also associated with cost-related nonadherence.

Interpretation: Our nationally representative findings reveal inequities that disproportionately affect marginalized people at the intersections of sex, race, age, and disability, and vary by province. This foundational understanding of the state of cost-related nonadherence may be used to inform potential expansion of public drug coverage eligibility, premiums, and cost-sharing policies that address financial barriers to medication adherence.

Predictors of cost-related medication nonadherence in Canada: a repeated cross-sectional analysis of the Canadian Community Health Survey via CMAJ.
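The study above measures associations with logistic regression. As a rough sketch of what such a model reports: with a single binary predictor, the fitted coefficient is simply the log of the odds ratio from a 2×2 table. The counts below are invented for illustration and are not from the CCHS data.

```python
# Hypothetical 2x2 table: drug-coverage status vs. cost-related nonadherence.
import math

no_coverage = {"nonadherent": 120, "adherent": 880}  # invented counts
coverage = {"nonadherent": 40, "adherent": 960}      # invented counts

odds_exposed = no_coverage["nonadherent"] / no_coverage["adherent"]
odds_unexposed = coverage["nonadherent"] / coverage["adherent"]
odds_ratio = odds_exposed / odds_unexposed

# In a logistic regression of nonadherence on this one predictor,
# the fitted slope equals log(odds_ratio).
log_or = math.log(odds_ratio)

print(f"odds ratio: {odds_ratio:.2f}, logistic coefficient: {log_or:.2f}")
```

With these toy counts, people without coverage have roughly three times the odds of nonadherence; real analyses like the one above add many predictors and survey weights, but each coefficient is still read as an adjusted log-odds ratio.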

Patients & AI

“As artificial intelligence continues to develop in seemingly all facets of life — including health care — experts say it’s important for patients to know AI may be used in their care.

‘I think we’re going to see significant advances in AI use and AI capacity in the next few years,’ said Dr. Sian Tsuei, a family physician at Metrotown Urgent and Primary Care Centre in Burnaby, B.C.

‘I think we’re only seeing the start of it. So I would really encourage patients to continuously stay informed and for doctors to also be staying informed.’

Here are some AI risks and benefits Tsuei and other experts recommend you discuss with your health-care provider…”

Read more on What patients should know about doctor visit summaries by AI via CBC.

Fast-Forward

“Boredom is unpleasant, with people going to great lengths to avoid it. One way to escape boredom and increase stimulation is to consume digital media, for example watching short videos on YouTube or TikTok. One common way that people watch these videos is to switch between videos and fast-forward through them, a form of viewing we call digital switching. Here, we hypothesize that people consume media this way to avoid boredom, but this behavior paradoxically intensifies boredom.

Across seven experiments (total N = 1,223; six preregistered), we found a bidirectional, causal relationship between boredom and digital switching. When participants were bored, they switched (Study 1), and they believed that switching would help them avoid boredom (Study 2). Switching between videos (Study 3) and within a video (Study 4), however, led not to less boredom but more boredom; it also reduced satisfaction, reduced attention, and lowered meaning. Even when participants had the freedom to watch videos of personal choice and interest on YouTube, digital switching still intensified boredom (Study 5).

However, when examining digital switching with online articles and with nonuniversity samples, the findings were less conclusive (Study 6), potentially due to factors such as opportunity cost (Study 7). Overall, our findings suggest that attempts to avoid boredom through digital switching may sometimes inadvertently exacerbate it. When watching videos, enjoyment likely comes from immersing oneself in the videos rather than swiping through them.”

Read more on Fast-forward to boredom: How switching behavior on digital media makes people more bored via NIH.

Takeover

Background: Human society has entered the age of artificial intelligence (AI), and medical practice and medical education are undergoing profound changes. AI is now applied in many industries, particularly in healthcare and medical education, where the two deeply intersect. The purpose of this paper is to overview the current situation and problems of “AI + medicine/medical” education and to provide our own perspective on the current predicament.

Methods: We searched the PubMed, Embase, Cochrane, and CNKI databases to assess the literature on AI + medical/medical education from 2017 to July 2022. The main inclusion criterion was literature describing the current situation or predicament of “AI + medical/medical education”.

Results: Studies have shown that the current application of AI in medical education is focused on clinical specialty training and continuing education, with the main application areas being radiology, diagnostics, surgery, cardiology, and dentistry. Its main role is to assist physicians in improving their efficiency and accuracy. In addition, the field combining AI with medicine/medical education is steadily expanding, and the most urgent need is for policy makers and experts in medicine, AI, education, and other fields to come together to reach consensus on ethical issues and develop regulatory standards. Our study also found that most medical students are positive about adding AI-related courses to the existing medical curriculum. Finally, the quality of research on “AI + medical/medical education” is poor.

Conclusion: In the context of the COVID-19 pandemic, our study provides an innovative systematic review of the latest “AI + medicine/medical curriculum”. Since the AI + medicine curriculum is not yet regulated, we have made some suggestions.

More on Artificial intelligence for healthcare and medical education: a systematic review via Am J Transl Res.

C6H12O6

Early blood glucose control for people with type 2 diabetes is crucial for reducing complications and prolonging life

“These latest results from the UK Prospective Diabetes Study (UKPDS), one of the longest ever clinical trials in type 2 diabetes, were made feasible by incorporating NHS data.

Professor Rury Holman of Oxford’s Radcliffe Department of Medicine, the founding Director of the University of Oxford Diabetes Trials Unit and Chief Investigator of the UKPDS, said, ‘These remarkable findings emphasise the critical importance of detecting and treating type 2 diabetes intensively at the earliest possible opportunity.

‘People may have type 2 diabetes for several years before being diagnosed as they may have few symptoms until their blood sugars become substantially elevated.’”

Learn more via University of Oxford.

Rules

In his essay “Politics and the English Language” (1946), Orwell wrote about the importance of precise and clear language, arguing that vague writing can be used as a powerful tool of political manipulation. In that essay, Orwell provides six rules for writers:

  1. Never use a metaphor, simile or other figure of speech which you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

Machine Learning

“Machine learning hit the public awareness after spectacular advances in language translation and image recognition. These are typically problems of classification — does a photo show a poodle, a Chihuahua or perhaps just a blueberry muffin? Surprisingly, the latter two look quite similar (E. Togootogtokh and A. Amartuvshin, preprint at https://arxiv.org/abs/1801.09573; 2018). Less widely known is that machine learning for classification has an even longer history in the physical sciences. Recent improvements coming from so-called ‘deep learning’ algorithms and other neural networks have served to make such applications more powerful.”
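The poodle-versus-muffin problem the excerpt describes is classification: map a feature vector to a label. A minimal sketch of the idea, using a 1-nearest-neighbour rule on invented 2-D “image features” (the feature names, values, and labels are all illustrative assumptions, not a real dataset):

```python
# 1-nearest-neighbour classification on toy feature vectors.
import math

# (feature vector, label) pairs -- hypothetical training examples,
# features imagined as (fur-texture score, roundness score).
train = [
    ((0.9, 0.2), "chihuahua"),
    ((0.8, 0.3), "chihuahua"),
    ((0.2, 0.9), "muffin"),
    ((0.3, 0.8), "muffin"),
]

def classify(point):
    """Label a new point with the label of its nearest training example."""
    _, label = min(train, key=lambda ex: math.dist(ex[0], point))
    return label

print(classify((0.85, 0.25)))  # → chihuahua
print(classify((0.25, 0.85)))  # → muffin
```

Deep-learning classifiers mentioned in the article differ mainly in that they learn the feature representation themselves rather than being handed two tidy numbers, but the output is the same kind of decision.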

Read more on The Power of Machine Learning via Nature Physics.

“Recently, artificial intelligence and machine-learning algorithms have gained significant attention in the field of osteoporosis [1]. They are recognized for their potential in exploring new research fields, including the investigation of novel risk factors and the prediction of osteoporosis, falls, and fractures by leveraging biological testing, imaging, and clinical data [2]. This new approach might improve the performance of current fracture prediction models by including all possible variables such as the bone mineral density (BMD) of all sites as well as trabecular bone score (TBS) data [3]. Also, the new model could suggest novel factors that could influence the fracture by calculating all variables through a deep learning network. Although there are a few studies in osteoporosis and fracture prediction using machine learning [46], a fracture-prediction machine-learning model with a longitudinal, large-sized cohort study including BMD and TBS has not been developed [3].”

Read more on Clinical Applicability of Machine Learning in Family Medicine via Korean J Family Medicine.

Truth

“In their new paper ‘Do large language models have a legal duty to tell the truth?’, published in Royal Society Open Science, the Oxford researchers set out how LLMs produce responses that are plausible, helpful, and confident but contain factual inaccuracies, misleading references, and biased information. They term this problematic phenomenon ‘careless speech’, which they believe causes long-term harms to science, education, and society.

Lead author Professor Sandra Wachter, Professor of Technology and Regulation, Oxford Internet Institute, explains: ‘LLMs pose a unique risk to science, education, democracy, and society that current legal frameworks did not anticipate. This is what we call careless speech, or speech that lacks appropriate care for truth. Spreading careless speech causes subtle, immaterial harms that are difficult to measure over time. It leads to the erosion of truth, knowledge, and shared history and can have serious consequences for evidence-based policy-making in areas where details and truth matter, such as health care, finance, climate change, media, the legal profession, and education. In our new paper, we aim to address this gap by analysing the feasibility of creating a new legal duty requiring LLM providers to create AI models that, put simply, will tell the truth.’”

Read more on Large Language Models pose a risk to society and need tighter regulation via University of Oxford.