Patients & AI

“As artificial intelligence continues to develop in seemingly all facets of life — including health care — experts say it’s important for patients to know AI may be used in their care.

‘I think we’re going to see significant advances in AI use and AI capacity in the next few years,’ said Dr. Sian Tsuei, a family physician at Metrotown Urgent and Primary Care Centre in Burnaby, B.C.

‘I think we’re only seeing the start of it. So I would really encourage patients to continuously stay informed and for doctors to also be staying informed.’

Here are some AI risks and benefits Tsuei and other experts recommend you discuss with your health-care provider…”

Read more on What patients should know about doctor visit summaries by AI via CBC.

Fast-Forward

“Boredom is unpleasant, with people going to great lengths to avoid it. One way to escape boredom and increase stimulation is to consume digital media, for example watching short videos on YouTube or TikTok. One common way that people watch these videos is to switch between videos and fast-forward through them, a form of viewing we call digital switching. Here, we hypothesize that people consume media this way to avoid boredom, but this behavior paradoxically intensifies boredom.

Across seven experiments (total N = 1,223; six preregistered), we found a bidirectional, causal relationship between boredom and digital switching. When participants were bored, they switched (Study 1), and they believed that switching would help them avoid boredom (Study 2). Switching between videos (Study 3) and within videos (Study 4), however, led not to less boredom but to more boredom; it also reduced satisfaction, reduced attention, and lowered meaning. Even when participants had the freedom to watch videos of personal choice and interest on YouTube, digital switching still intensified boredom (Study 5).

However, when examining digital switching with online articles and with nonuniversity samples, the findings were less conclusive (Study 6), potentially due to factors such as opportunity cost (Study 7). Overall, our findings suggest that attempts to avoid boredom through digital switching may sometimes inadvertently exacerbate it. When watching videos, enjoyment likely comes from immersing oneself in the videos rather than swiping through them.”

Read more on Fast-forward to boredom: How switching behavior on digital media makes people more bored via NIH.

Takeover

As human society enters the age of artificial intelligence, medical practice and medical education are undergoing profound changes. Artificial intelligence (AI) is now applied across many industries, and it intersects particularly deeply with healthcare and medical education. The purpose of this paper is to review the current situation and problems of “AI + medicine/medical education” and to offer our own perspective on the current predicament.

Methods: We searched the PubMed, Embase, Cochrane and CNKI databases for literature on AI + medical/medical education published from 2017 to July 2022. The main inclusion criterion was literature describing the current situation or predicament of “AI + medical/medical education”.

Results: Studies show that the current application of AI in medical education is focused on clinical specialty training and continuing education, with the main application areas being radiology, diagnostics, surgery, cardiology, and dentistry. Its main role is to help physicians improve their efficiency and accuracy. The field combining AI with medicine/medical education is steadily expanding, and the most urgent need is for policy makers and experts in medicine, AI, education, and other fields to come together to reach consensus on ethical issues and to develop regulatory standards. Our study also found that most medical students are positive about adding AI-related courses to the existing medical curriculum. Finally, the quality of research on “AI + medical/medical education” is poor.

Conclusion: In the context of the COVID-19 pandemic, our study provides an innovative systematic review of the latest “AI + medicine/medical curriculum”. Since the AI + medicine curriculum is not yet regulated, we make some suggestions.

More on Artificial intelligence for healthcare and medical education: a systematic review via Am J Transl Res.

C6H12O6

Early blood glucose control for people with type 2 diabetes is crucial for reducing complications and prolonging life.

“These latest results from the UK Prospective Diabetes Study (UKPDS), one of the longest ever clinical trials in type 2 diabetes, were made feasible by incorporating NHS data.

Professor Rury Holman of Oxford’s Radcliffe Department of Medicine, the founding Director of the University of Oxford Diabetes Trials Unit and Chief Investigator of the UKPDS, said, ‘These remarkable findings emphasise the critical importance of detecting and treating type 2 diabetes intensively at the earliest possible opportunity.

‘People may have type 2 diabetes for several years before being diagnosed as they may have few symptoms until their blood sugars become substantially elevated.’”

Learn more via University of Oxford.

Rules

In his essay “Politics and the English Language” (1946), Orwell wrote about the importance of precise and clear language, arguing that vague writing can be used as a powerful tool of political manipulation. In that essay, Orwell provides six rules for writers:

  1. Never use a metaphor, simile or other figure of speech which you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

Machine Learning

“Machine learning hit the public awareness after spectacular advances in language translation and image recognition. These are typically problems of classification — does a photo show a poodle, a Chihuahua or perhaps just a blueberry muffin? Surprisingly, the latter two look quite similar (E. Togootogtokh and A. Amartuvshin, preprint at https://arxiv.org/abs/1801.09573; 2018). Less widely known is that machine learning for classification has an even longer history in the physical sciences. Recent improvements coming from so-called ‘deep learning’ algorithms and other neural networks have served to make such applications more powerful.”

Read more on The Power of Machine Learning via Nature Physics.
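The classification task described above can be sketched in a few lines. The snippet below is a minimal, illustrative example only: it trains a small neural network on scikit-learn's bundled 8×8 digit images as a stand-in for the dog-versus-muffin photo problem, which would require a labelled photo dataset and a deep convolutional network.

```python
# Minimal sketch of supervised image classification. The bundled digits
# dataset stands in for real photos so the example runs as-is.
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labelled 8x8 grayscale images, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small multilayer perceptron: a neural network in miniature.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```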

“Recently, artificial intelligence and machine-learning algorithms have gained significant attention in the field of osteoporosis [1]. They are recognized for their potential in exploring new research fields, including the investigation of novel risk factors and the prediction of osteoporosis, falls, and fractures by leveraging biological testing, imaging, and clinical data [2]. This new approach might improve the performance of current fracture prediction models by including all possible variables such as the bone mineral density (BMD) of all sites as well as trabecular bone score (TBS) data [3]. Also, the new model could suggest novel factors that could influence fracture risk by calculating all variables through a deep learning network. Although there are a few studies in osteoporosis and fracture prediction using machine learning [4-6], a fracture-prediction machine-learning model with a longitudinal, large-sized cohort study including BMD and TBS has not been developed [3].”

Read more on Clinical Applicability of Machine Learning in Family Medicine via Korean J Family Medicine.
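As a loose illustration of the fracture-prediction approach the excerpt describes, the sketch below fits a gradient-boosted classifier to a synthetic cohort. The feature names (age, BMI, spine and hip BMD, TBS) and the simulated outcome are assumptions made purely for demonstration, not data or code from the cited study, which used a deep learning network on a real longitudinal cohort.

```python
# Hypothetical fracture-risk classifier on synthetic data; every number
# below is invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(50, 90, n)
bmi = rng.normal(25, 4, n)
bmd_spine = rng.normal(0.95, 0.15, n)  # lumbar-spine BMD, g/cm^2
bmd_hip = rng.normal(0.85, 0.13, n)    # femoral-neck BMD, g/cm^2
tbs = rng.normal(1.30, 0.10, n)        # trabecular bone score

X = np.column_stack([age, bmi, bmd_spine, bmd_hip, tbs])
# Toy outcome: risk rises with age, falls with hip BMD and TBS.
logit = 0.05 * (age - 70) - 4 * (bmd_hip - 0.85) - 3 * (tbs - 1.30) - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier(random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC on synthetic data: {auc:.3f}")
```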

Truth

“In their new paper ‘Do large language models have a legal duty to tell the truth?’, published in Royal Society Open Science, the Oxford researchers set out how LLMs produce responses that are plausible, helpful and confident but contain factual inaccuracies, misleading references and biased information. They term this problematic phenomenon ‘careless speech’, which they believe causes long-term harms to science, education and society.

Lead author Professor Sandra Wachter, Professor of Technology and Regulation, Oxford Internet Institute, explains: ‘LLMs pose a unique risk to science, education, democracy, and society that current legal frameworks did not anticipate. This is what we call ‘careless speech’, or speech that lacks appropriate care for truth. Spreading careless speech causes subtle, immaterial harms that are difficult to measure over time. It leads to the erosion of truth, knowledge and shared history and can have serious consequences for evidence-based policy-making in areas where details and truth matter, such as health care, finance, climate change, media, the legal profession, and education. In our new paper, we aim to address this gap by analysing the feasibility of creating a new legal duty requiring LLM providers to create AI models that, put simply, will ‘tell the truth’.”

Read more on Large Language Models pose a risk to society and need tighter regulation via University of Oxford.

Conflict

“Nearly 500 Quebec doctors have signed an open letter demanding their medical associations denounce the crisis in Gaza and call for an immediate ceasefire and access to humanitarian aid.

‘We, physicians in Quebec, are deeply concerned with the humanitarian catastrophe in Gaza that worsens each day,’ reads the letter, published Thursday morning. ‘One hundred and fifty-eight days of devastation, 31,272 killed and 73,024 injured, 1.5 million refugees. Remaining silent in the face of suffering of this magnitude is contrary to our role as physicians and a forsaking of our shared humanity.’

Included among the signatories are Joanne Liu, former international president of Médecins Sans Frontières/Doctors Without Borders and a professor at McGill University’s School of Population and Global Health, and Amir Khadir, former Québec solidaire MNA for the Mercier riding and a specialist in infectious diseases.

The petition is calling on four provincial medical associations — the Collège des médecins du Québec, the Fédération des médecins omnipraticiens du Québec, the Fédération des médecins spécialistes du Québec, and the Collège québécois des médecins de famille — to issue a statement demanding an immediate ceasefire, immediate access to drinkable water, an end to blockades preventing entry of medical supplies and the release of hostages on both sides of the conflict.

The idea for the open letter originated on Facebook, where some Quebec doctors involved in groups on the social media site voiced the distress they were feeling over the war. Last week, a few started their own Facebook page, titled ‘Quebec doctors against the genocide in Gaza,’ that quickly drew more than 500 members.”

Read more on “Quebec doctors sign open letter demanding ceasefire in Gaza: Remaining silent in the face of suffering of this magnitude is contrary to our role as physicians” via The Gazette.

Photograph Copyright Ahmad Hasaballah/Getty Images