Spring is leaving
Birds cry, and in the fishes’ eyes
are tears.
~ Chiyo-ni
Researcher ◆ Climber ◆ West Coaster
“How much do we really know about the plants and flowers in our gardens and vases? Beyond their beauty, many have surprising stories of exploration, exchange, and discovery. In Bloom takes visitors from Oxford across the world and back, tracing the journeys that some of Britain’s most familiar blooms travelled to get here. Featuring more than 100 artworks, including beautiful botanical paintings and drawings, historical curiosities and new work by contemporary artists, the exhibition follows the passion and ingenuity of early plant explorers and the networks that influenced science, global trade and consumption. Visitors will learn how plants changed our world and left a legacy that still shapes our environments and back gardens today.” ~ Ashmolean Museum, Bloom Exhibit 2026
Art by Claire Desjardins
“The General Medical Council (GMC) states that doctors ‘are responsible for the decisions they make when using new technologies like AI, and should work only within their competence.’15 This coincides with the World Medical Association calling for reviewing medical curricula and education for all healthcare stakeholders to improve understanding of the risks and benefits of AI in healthcare.16 It follows then that in fostering good medical practice, medical schools must prepare students for the clinical environment that awaits them through building competence and familiarity in this evolving domain.
With 2 in 3 physicians using AI in their clinical practice, an increase of 78% from 2023,17 enthusiasm for the technology is rapidly growing. Yet, despite this uptake, a 2024 international survey of over 4500 students across 192 medical, dental, and veterinary faculties found that over 75% reported no formal AI education in their curriculum, highlighting a critical gap between technological advancement and medical training.18 This discrepancy underscores the urgency for medical schools to proactively incorporate AI teaching to ensure graduates are ready for the realities of modern clinical practice.”
Read more on Artificial Intelligence in Medical Education: Promise, Pitfalls, and Practical Pathways here.
Succi, Chang, and Rao argue that medical education needs a deliberate redesign for an AI-rich clinical world, not just a bolt-on “AI lecture” or a new tool in the curriculum.
Core argument
Why current LLM success is not the same as clinical reasoning
What needs to change in assessment and benchmarking
How AI could reshape teaching and learning
SP-LLMs (standardized patient LLMs)
Equity and access
What the “AI-enabled physician” must become
A non-negotiable: dual competency
Bottom line
Medical schools should integrate AI in ways that strengthen, rather than replace, rigorous reasoning, empathy, and moral judgment. This requires honest engagement with AI limits, new forms of assessment, and collaboration between clinicians, educators, and machine learning experts.
Read more on Building the AI-Enabled Medical School of the Future by Succi, Chang, and Rao.
Experts from UVic and Island Health discuss safety, evidence, and patient impact of artificial intelligence (AI) in healthcare and research. Learn more and register here.

Purpose of the report
OpenAI’s AI as a Healthcare Ally report explains how ChatGPT and related AI tools are increasingly being used by both patients and healthcare workers to navigate the complex healthcare system, interpret information, and support care decisions. It highlights emerging patterns of use and the potential role of AI as a complement to traditional healthcare rather than a replacement.
Key findings
Overall message
The report positions ChatGPT and similar large-language models as informal entry points into healthcare, helping users make sense of medical information, plan care, and reduce complexity. It frames AI as supportive and complementary to clinicians, while acknowledging the need for appropriate safeguards and professional involvement.
If AI can explain your lab results and medication instructions at 11:30 pm, what should “good” use look like, and where should the line be (education vs advice, reassurance vs diagnosis)?
If clinicians are already using AI to help with notes and messages, what do you think should be transparent to patients, and what safeguards would make you feel comfortable?
If millions of people are using AI because the healthcare system is hard to access or hard to navigate, is that a smart workaround, a warning sign, or both?
Share your thoughts in the comment section.
Read the most popular JAMA articles in 2025 including coffee and AFib, osteoporosis, platelet transfusion, type 2 diabetes, S aureus bacteremia, septic shock, and more.
If you have ever looked through SEC filings on EDGAR, you know most of them are dry, structured, and painfully consistent. That is why a Form D in which multiple people’s middle names appear to have been changed to “Ryan” jumps off the page. It is not “proof” of anything on its own; I’m sure Gregory Ryan Ansin, co-founder of Revolutionary Clinics, would agree. But it is exactly the kind of anomaly that warrants a closer look. C D Services of America, LLC sat “above” Revolutionary Clinics (at least the entity called Revolutionary Clinics II, Inc.) as an ownership and financing vehicle. When Revolutionary Clinics fell into financial distress and litigation, recent reporting on the receivership and lawsuits described Revolutionary Clinics and affiliates, including Revolutionary Growers and C D Services of America, as being involved together in debt, lender actions, and rent disputes.


Form D is a notice filing, not a full registration statement. Companies use it to notify the SEC that they sold securities without registering the offering, typically relying on an exemption under Regulation D (Rule 504 or Rule 506) or certain statutory exemptions.
Form D filings are made electronically on the SEC’s EDGAR system, and after filing, they become publicly available.

Investor.gov describes Form D as a “brief notice” that typically includes identifying details like the names and addresses of key people associated with the issuer, plus basic information about the offering.
In a normal Form D, names are not a creative writing exercise. They are identifiers. When you see a pattern like many middle names being replaced with the same word, it raises practical questions about who prepared the filing and who reviewed it.
There are innocent explanations (sloppy drafting, formatting bugs, or a misguided “joke” that made it into a legal filing). There are also more concerning explanations (attempts to confuse, obscure, or mislead). A single filing does not let you conclude motive, but it does justify scrutiny.
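The middle-name anomaly described above is easy to screen for mechanically. A minimal sketch, using hypothetical names rather than data from any actual filing:

```python
from collections import Counter

def flag_repeated_middle_names(full_names, threshold=3):
    """Flag the middle names that `threshold` or more listed people
    share -- an unlikely coincidence in an identity field.

    `full_names` holds "First Middle Last" strings; entries without
    a middle name are ignored.
    """
    middles = []
    for name in full_names:
        parts = name.split()
        if len(parts) == 3:
            middles.append(parts[1])
    counts = Counter(middles)
    return [middle for middle, n in counts.items() if n >= threshold]

# Hypothetical example: three unrelated people all listed with the
# middle name "Ryan" get flagged for a closer look.
people = ["Alice Ryan Smith", "Bob Ryan Jones",
          "Carol Ryan Lee", "Dana Marie Park"]
print(flag_repeated_middle_names(people))  # -> ['Ryan']
```

A flag from a check like this is a prompt for verification against prior filings and other public records, not a conclusion in itself.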
A Regulation D offering is exempt from registration requirements, but it is not exempt from antifraud provisions and other federal securities law obligations.
Also, Form D is not a casual upload. Federal rules require that a Form D be filed on EDGAR, and it must be signed by a person duly authorized by the issuer.
That is why obvious inconsistencies in identity fields matter. They are part of the public record and tied to an authorized signature.
If your goal is to document and escalate responsibly (without overstating what the anomaly “means”), the clean approach is to verify the record, compare it against prior filings, and document what you find.
A Form D is supposed to be a straightforward compliance notice. When it contains a bizarre, repeated naming alteration like “everyone’s middle name is Ryan,” it is a signal. Not a verdict, a signal. It tells you the filing deserves verification, comparison, and careful documentation, because public records are only useful if they are accurate.
These Form D filings read like multiple separate offerings over time, with shifting security structures, minimum investment thresholds, and reported uses of proceeds. But the internal inconsistencies, especially in the 2018 and 2019 $13M filings, where “amount sold” and commission-related lines do not track chronologically, warrant reconciliation against the underlying offering documents.
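That chronological check can also be automated: in a sequence of filings for the same offering, the cumulative “amount sold” should never decrease. A minimal sketch with hypothetical dates and amounts, not figures from the actual filings:

```python
def find_regressions(filings):
    """Given (date, amount_sold) tuples sorted by date, return the
    adjacent date pairs where the cumulative 'amount sold' drops --
    the lines to reconcile against the offering documents."""
    issues = []
    for (d1, a1), (d2, a2) in zip(filings, filings[1:]):
        if a2 < a1:
            issues.append((d1, d2))
    return issues

# Hypothetical amounts: the 2019 figure falls below the 2018 one,
# which a cumulative total should not do.
filings = [("2018-03-01", 4_000_000),
           ("2018-11-15", 9_500_000),
           ("2019-06-30", 7_200_000)]
print(find_regressions(filings))  # -> [('2018-11-15', '2019-06-30')]
```

A non-empty result does not prove wrongdoing; it marks exactly which pair of filings needs to be read side by side.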
“The prudent man always studies seriously and earnestly to understand whatever he professes to understand, and not merely to persuade other people that he understands it; and though his talents may not always be very brilliant, they are always perfectly genuine.
He neither endeavours to impose upon you by the cunning devices of an artful impostor, nor by the arrogant airs of an assuming pedant, nor by the confident assertions of a superficial and imprudent pretender.
He is not ostentatious even of the abilities which he really possesses. His conversation is simple and modest, and he is averse to all the quackish arts by which other people so frequently thrust themselves into public notice and reputation.”
― Adam Smith, The Theory of Moral Sentiments
Deconstructing Adam Smith, 2025
Photography: Jacqueline P. Ashby
In my recent MSc dissertation at the University of Oxford, I explored how medical students experience and perceive artificial intelligence in their learning environment. One thing struck me in their comments: it’s not just AI, it’s the looming feeling of being watched, and not always knowing by whom.
During my integrative literature review, I learned that AI may affect students’ psychological safety as they tend to patients and interact with colleagues.
As more AI-infused tools promise “continuous data” on performance, students described how AI in the hospital setting could become part of their performance evaluation: not just end-of-rotation feedback, but a kind of 24/7 visibility. The mere idea of being continuously monitored was enough to cause students stress and anxiety.
What stood out to me is that students were not opposed to AI. Many are excited and eager to learn how the technology will be integrated into medicine. What they were asking for was something more basic: transparency, consent, and clear boundaries.
As we adopt AI into medical education, we need to design for psychological safety. Otherwise, we risk teaching the next generation of physicians to perform for the system rather than to think with and for their patients. It reminds me of Mayo’s Hawthorne effect and the potential for AI to be used as a surveillance tool and a form of manipulation to boost productivity.
Over the next while I’ll be sharing a few short reflections from this research, paired with my own photography, as a way to keep this conversation human, creative, and thoughtful.
In hindsight, 2025
Photographer: Jacqueline P. Ashby
Kelvingrove Art Gallery and Museum