In my recent MSc dissertation at the University of Oxford, I explored how medical students experience and perceive artificial intelligence in their learning environment. One thing struck me in their comments: it’s not just the AI itself, it’s the looming feeling of being watched, and of not always knowing by whom.
During my integrative literature review, I learned that AI may affect students’ psychological safety as they tend to patients and interact with colleagues.
As more AI-infused tools promise “continuous data” on performance, students described how AI in the hospital setting could become part of their performance evaluation. Not just end-of-rotation feedback, but a kind of 24/7 visibility. Just the idea of being continuously monitored was enough to cause them stress and anxiety.
What stood out to me is that students were not opposed to AI. Many are excited and eager to learn more about how the technology will be integrated into medicine. What they were asking for was something more basic: transparency, consent, and clear boundaries.
As we adopt AI into medical education, we need to design and integrate for psychological safety. Otherwise, we risk teaching the next generation of physicians to perform for the system, rather than to think with and for their patients. It reminds me of the Hawthorne effect from Elton Mayo’s studies, and of the potential for AI to be used as a surveillance tool and a form of manipulation to boost productivity.
Over the next while, I’ll be sharing a few short reflections from this research, paired with my own photography, as a way to keep this conversation human, creative, and thoughtful.
In hindsight, 2025
Photographer: Jacqueline P. Ashby
Kelvingrove Art Gallery and Museum