Last week, I had the opportunity to lead an academic session with incoming family medicine residents on one of the most pressing issues in modern healthcare: bias and confabulation in clinical AI tools.
We explored:
+ Real-world cases of AI bias, such as how LLMs alter triage and diagnostic suggestions based solely on patient demographics.
+ Confabulation traps where AI fabricates confident-sounding (but incorrect) medical guidelines.
+ Interactive bias testing: residents input identical chest pain cases into multiple AI tools, tweaking only the patient’s background to examine how different platforms analyze and articulate the patient’s management.
+ Ethical and legal dilemmas: including what happens when a chatbot contributes to chart notes, and whether disclosure is required.
We closed with this question:
+ What safeguard will you commit to using in your own practice to reduce the risk of AI misinformation entering the patient record?
Teaching AI literacy is about clinical discernment, ethical awareness, and training tomorrow’s physicians to engage AI with both curiosity and caution.
Grateful to this next generation of residents for their sharp thinking and thoughtful engagement.
AI Bias