This evening I attended “From Hype to Hospital: How AI is being used in Healthcare and Research,” an event hosted in British Columbia, Canada.
We’re surrounded by data in our healthcare system, but our ability to convert it into timely, trustworthy decisions is still limited by workflow, infrastructure, and governance. Provincial data collection continues to be labour-intensive, often manual, and delayed.
As reported this evening, in trauma care there can be a 12- to 18-month lag between what happens in the Emergency Department and Trauma Service and what ultimately lands in registries, dashboards, and system-level reports. Check out the article “iROBOT: Implementing Real-time Operational dashBOards for Trauma care” to learn more: https://lnkd.in/gvBQKMgs
Other interesting points from presenters include:
+ Structured data is easy to analyze; narrative data holds the nuance that can change risk and interpretation.
+ AI can speed screening and reporting, reduce false positives, and support real-time dashboards, if evaluated honestly.
+ In BC, common use cases are emerging: early warning (sepsis, deterioration), staffing and scheduling, and operational intelligence.
+ The hard part is the pipeline: discovery to pilot to scaled deployment; many projects stall before impact.
+ Implementation risks are real: trust (confabulation, over-reliance), privacy, environmental cost, and workforce disruption.
My takeaway: We have a responsibility to educate and train healthcare practitioners in the use of AI, and to start asking critical questions about how it will affect patient care.
Slides attached are from Graham Payette’s AI BC briefing.
Today I get to say something I have dreamed of saying for years: we did it!
The Primary Compassionate Care Initiative now has a place to call home. Our Primary Compassionate Care Hub is real, it is built, and it is open. It is bright, welcoming, and intentionally designed for what matters most: people, learning, and community.
This Hub has been a long time coming. It grew out of Dr. Aisha Liman’s simple belief that keeps guiding everything we do at the Primary Compassionate Care Initiative: care is not only a clinical act; it is a culture we build together. We wanted a home for that culture, a place where learners, mentors, clinicians, and community members can gather, teach, and grow side by side.
A space designed for learning that feels human
When you walk in, you can feel the purpose immediately. The Hub includes a training and teaching room set up for workshops, small group sessions, and hands-on learning. There are flexible chairs and work tables, a presentation area, and a layout that supports real interaction, not passive sitting.
We also created comfortable tiered seating with bright cushions, a simple detail that quietly changes the energy of a room. It invites discussion. It invites listening. It invites people to stay.
In other words, this is not a room designed only to deliver content. It is a room designed to build confidence, conversation, and competence.
A “community begins with us” moment, made physical
One of my favorite elements is the message at the entrance: “Community begins with us.” It captures what we are trying to do: not just teach skills, but create a place where belonging and responsibility are visible, shared, and practiced.
This Hub is meant to be a meeting point for learners and the communities they serve. It is a space where we can run mentorship sessions, leadership development, health education programming, team-building activities, and practical skill-building workshops grounded in compassion.
Yes, we even built the warmth into the walls
There is another sign in the space that made me smile because it is exactly right: “Compassion is Brewing.” It sits above a warm, welcoming café-style area that feels like a natural gathering place. Because sometimes the most meaningful learning happens before the session starts, during the conversations, the laughter, the quiet questions, and the “Can I ask you something?” moments.
We wanted the Hub to feel energizing and calm at the same time. The lighting, the wood tones, the bright colors, and the clean design all contribute to that balance. It feels professional, but not sterile. It feels modern, but still personal.
What this Hub will be used for
This Hub is a working space, and we have big plans for it, including:
mentorship and academic support for health practitioner learners
workshops on communication, teamwork, and patient-centered care
community health education sessions
faculty and preceptor development
collaborative project meetings and design sessions
leadership and professional identity development
Most importantly, it will be a place where learners can practice building trust, showing up with humility, and delivering care that is both competent and kind.
Gratitude, and what comes next
I am deeply grateful to everyone who helped make this possible. A space like this is never built by one person. It is built by shared values, steady work, and people who refuse to let good ideas stay only in the imagination.
This Hub is a milestone, and it is also a beginning.
Because now we get to do what we came here to do: grow a new generation of compassionate, community-grounded health professionals, and support them with the structure, mentorship, and environment they deserve.
Thank you for believing in this dream with us.
More soon, and if you are nearby and want to visit, collaborate, or support a workshop, I would love to hear from you.
When Disaster Hits, Family Medicine Is Still the Front Door
Disasters and major trauma can feel like “someone else’s job” until the day your clinic, urgent care, or community hospital becomes the first place people arrive. In those moments, what matters most is not just clinical knowledge; it is teamwork, role clarity, and a shared plan.
A useful option is Trauma and Disaster Team Response, a free online course on SURGhub offered through the McGill University Health Centre’s Centre for Global Surgery. It is built around multidisciplinary trauma and disaster response, with lectures and quizzes, and it is designed to strengthen how teams function under pressure.
Why it matters for family medicine
Family physicians are often central to stabilization, triage, transfer decisions, and supporting staff and communities in the aftermath. This training can help build a common language for response, especially in rural and community settings where resources and staffing can shift quickly.
What you can take back to your team
Clearer roles during urgent resuscitation and surge situations
More confidence with transfer readiness and escalation
A framework for thinking about disaster response as a system, not just a single patient
A nudge to turn preparedness into practical clinic improvements (call trees, checklists, short drills)
Learn more here at McGill University’s SURGhub. And a big thank you to Ali for sharing this resource with me. His work in this area is incredibly admirable and much needed.
“How much do we really know about the plants and flowers in our gardens and vases? Beyond their beauty, many have surprising stories of exploration, exchange, and discovery. In Bloom takes visitors from Oxford across the world and back, tracing the journeys that some of Britain’s most familiar blooms travelled to get here. Featuring more than 100 artworks, including beautiful botanical paintings and drawings, historical curiosities and new work by contemporary artists, the exhibition follows the passion and ingenuity of early plant explorers and the networks that influenced science, global trade and consumption. Visitors will learn how plants changed our world and left a legacy that still shapes our environments and back gardens today.” ~ Ashmolean Museum, Bloom Exhibit 2026
“The General Medical Council (GMC) states that doctors ‘are responsible for the decisions they make when using new technologies like AI, and should work only within their competence.’[15] This coincides with the World Medical Association calling for reviewing medical curricula and education for all healthcare stakeholders to improve understanding of the risks and benefits of AI in healthcare.[16] It follows then that in fostering good medical practice, medical schools must prepare students for the clinical environment that awaits them through building competence and familiarity in this evolving domain.
With 2 in 3 physicians using AI in their clinical practice, an increase of 78% from 2023,[17] enthusiasm for the technology is rapidly growing. Yet, despite this uptake, a 2024 international survey of over 4500 students across 192 medical, dental, and veterinary faculties found that over 75% reported no formal AI education in their curriculum, highlighting a critical gap between technological advancement and medical training.[18] This discrepancy underscores the urgency for medical schools to proactively incorporate AI teaching to ensure graduates are ready for the realities of modern clinical practice.”
Read more on Artificial Intelligence in Medical Education: Promise, Pitfalls, and Practical Pathways here.
Succi, Chang, and Rao argue that medical education needs a deliberate redesign for an AI-rich clinical world, not just a bolt-on “AI lecture” or a new tool in the curriculum.
Core argument
AI, especially large language models (LLMs), is already strong at many clinical-adjacent tasks (documentation, communication support, test-style questions), but performance on benchmarks does not equal genuine clinical reasoning.
Medical education’s job is to teach reasoning processes and adaptability, not just factual recall or pattern recognition. LLMs can look convincing while still producing plausible but shallow outputs.
Why current LLM success is not the same as clinical reasoning
The authors emphasize that LLMs often operate via statistical pattern matching, so they can generate confident answers triggered by “buzzwords” or common feature clusters.
Real clinical reasoning is dynamic: new symptoms appear, data conflict, hypotheses evolve, uncertainty persists. Exams with single best answers do not capture that.
What needs to change in assessment and benchmarking
They call for new benchmarks that require models to reason step by step through complex cases, justify decisions, and iteratively refine a diagnosis or plan as information changes.
Validating AI in the education setting, where reasoning can be scrutinized, is presented as a pathway toward trustworthy clinical decision support later.
How AI could reshape teaching and learning
If LLMs become better at transparent reasoning, they could function as case-based learning partners: tutors, critics of student logic, graders, and discussion counterparts.
LLMs could help learners at all stages parse difficult materials, including curricula, textbooks, and biomedical literature, which supports lifelong learning in a fast-moving field.
AI could expand clinical exposure beyond “the patients you happen to see” by generating many varied presentations, including rare diseases and culturally distinct scenarios.
SP-LLMs (standardized patient LLMs)
The article highlights the idea of LLM-powered standardized patient interactions that can be used for practice and evaluation of communication skills, including exposure to rare and diverse presentations.
Equity and access
The authors argue LLMs could democratize medical education by distributing expertise at scale, supporting resource-limited settings and schools with lower patient diversity or volume.
They note that equitable access will require thoughtful licensing models and partnerships between well-resourced and resource-constrained institutions.
What the “AI-enabled physician” must become
As AI takes on routine tasks, physicians should shift toward higher-level responsibilities: strong clinical reasoning, data interpretation, and ethical oversight of algorithmic outputs.
Curricula should include “data systems literacy” so future physicians can critically appraise and safely integrate AI outputs into care.
A non-negotiable: dual competency
The authors stress that technical sophistication must not erode foundational clinical skills. Systems fail, downtime happens, breaches occur, and public health crises arise.
Training should explicitly reinforce operating both with and without AI, through exercises that require history, exam, and differential diagnosis without digital aids.
Bottom line
Medical schools should integrate AI in ways that strengthen, rather than replace, rigorous reasoning, empathy, and moral judgment. This requires honest engagement with AI limits, new forms of assessment, and collaboration between clinicians, educators, and machine learning experts.
Experts from UVic and Island Health discuss safety, evidence, and patient impact of artificial intelligence (AI) in healthcare and research. Learn more and register here.
Purpose of the report
OpenAI’s AI as a Healthcare Ally report explains how ChatGPT and related AI tools are increasingly being used by both patients and healthcare workers to navigate the complex healthcare system, interpret information, and support care decisions. It highlights emerging patterns of use and the potential role of AI as a complement to traditional healthcare rather than a replacement.
Key findings
AI usage for health questions is widespread: over 5% of all ChatGPT interactions globally are health-related; many users ask about symptoms, treatments, medications, insurance, billing, and more.
Tens of millions of people use AI daily: over 40 million users globally consult ChatGPT for health information each day, and roughly one in four users engages with health topics weekly.
Health queries often occur outside usual clinic hours, reflecting demand when providers are less accessible.
AI helps with access barriers and complexity: Users in rural or underserved areas use the tool heavily to interpret medical information and navigate administrative tasks such as insurance coverage.
Clinicians also use AI: Many physicians and nurses report using AI for documentation, admin tasks, and clinical support, suggesting integration into routine practice.
Overall message
The report positions ChatGPT and similar large language models as informal entry points into healthcare, helping users make sense of medical information, plan care, and reduce complexity. It frames AI as supportive and complementary to clinicians, while acknowledging the need for appropriate safeguards and professional involvement.
If AI can explain your lab results and medication instructions at 11:30 pm, what should “good” use look like, and where should the line be (education vs advice, reassurance vs diagnosis)?
If clinicians are already using AI to help with notes and messages, what do you think should be transparent to patients, and what safeguards would make you feel comfortable?
If millions of people are using AI because the healthcare system is hard to access or hard to navigate, is that a smart workaround, a warning sign, or both?