Ryan

If you have ever looked through SEC filings on EDGAR, you know most of it is dry, structured, and painfully consistent. That is why a Form D where multiple people’s middle names appear to have been changed to “Ryan” jumps off the page. It is not “proof” of anything on its own, and I’m sure Gregory Ryan Ansin, co-founder of Revolutionary Clinics, would agree. But it is absolutely the kind of anomaly that warrants a closer look. C D Services of America, LLC sat “above” Revolutionary Clinics (at least the entity called Revolutionary Clinics II, Inc.) as an ownership and financing vehicle. When Revolutionary Clinics went into financial distress and litigation, recent reporting on the receivership and lawsuits described Revolutionary Clinics and affiliates, including Revolutionary Growers and C D Services of America, as being involved together in debt, lender actions, and rent disputes.

First, what is SEC Form D?

Form D is a notice filing, not a full registration statement. Companies use it to notify the SEC that they sold securities without registering the offering, typically relying on an exemption under Regulation D (Rule 504 or Rule 506) or certain statutory exemptions.

Form D filings are made electronically on the SEC’s EDGAR system, and after filing, they become publicly available.

Investor.gov describes Form D as a “brief notice” that typically includes identifying details like the names and addresses of key people associated with the issuer, plus basic information about the offering.
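Because Form D notices are public, an issuer’s filing history can be pulled programmatically as well as through the EDGAR website. The following is a minimal Python sketch against the SEC’s public submissions endpoint (data.sec.gov); the CIK and the contact address in the User-Agent header are placeholders you would replace, and the JSON field names reflect my reading of that public API, so verify them against a live response.

    import json
    import urllib.request

    # Placeholder: replace with the issuer's 10-digit, zero-padded CIK.
    CIK = "0000000000"

    # The SEC asks automated clients to identify themselves via the User-Agent header.
    req = urllib.request.Request(
        f"https://data.sec.gov/submissions/CIK{CIK}.json",
        headers={"User-Agent": "your-name research-contact@example.com"},
    )

    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    recent = data["filings"]["recent"]
    for form, date, accession in zip(
        recent["form"], recent["filingDate"], recent["accessionNumber"]
    ):
        # Form D is the original notice; Form D/A is an amendment to it.
        if form in ("D", "D/A"):
            print(date, form, accession)

The accession number printed for each filing is the same identifier you would record when documenting a filing later on.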

Why “all middle names changed to Ryan” is unusual

In a normal Form D, names are not a creative writing exercise. They are identifiers. When you see a pattern like many middle names being replaced with the same word, it raises practical questions such as:

  • Data integrity: Was the filing prepared carelessly, with auto fill, copy paste, or template errors?
  • Internal governance: Who reviewed the filing before submission, and what controls failed?
  • Identity clarity: If names are altered or inconsistent, it becomes harder for investors, regulators, journalists, or counterparties to accurately match individuals across filings and records.

There are innocent explanations (sloppy drafting, formatting bugs, or a misguided “joke” that made it into a legal filing). There are also more concerning explanations (attempts to confuse, obscure, or mislead). A single filing does not let you conclude motive, but it does justify scrutiny.

Why Form D details matter even though it is “exempt”

A Regulation D offering is exempt from registration requirements, but it is not exempt from antifraud provisions and other federal securities law obligations.

Also, Form D is not a casual upload. Federal rules require that a Form D be filed on EDGAR, and it must be signed by a person duly authorized by the issuer.

That is why obvious inconsistencies in identity fields matter. They are part of the public record and tied to an authorized signature.

If you spot something like this, what can you do?

If your goal is to document and escalate responsibly (without overstating what the anomaly “means”), here is a clean approach:

  1. Pull the filing directly from EDGAR and save a copy.
  2. Check for amendments and compare name fields across versions (sometimes errors get corrected, sometimes they persist); see the sketch after this list for one way to script that comparison.
  3. Cross reference the names against other filings, corporate registries, or court records to see whether it is an isolated mistake or a repeated pattern.
  4. Preserve your evidence trail: note the filing date, accession number, and screenshots of the relevant fields.
  5. Report concerns if appropriate: the SEC provides a portal to submit a tip or complaint (including possible securities law violations).

    If you are submitting as a whistleblower seeking eligibility under the SEC Whistleblower Program, the SEC outlines how to submit through its TCR process.
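For steps 2 and 3, a short script keeps the comparison honest. The sketch below assumes you have already saved the primary XML document for two versions of a Form D per step 1 (the file names here are hypothetical), and the relatedPersonName, firstName, middleName, and lastName element names reflect my reading of the Form D XML as published on EDGAR, so check them against an actual filing before trusting the output.

    import xml.etree.ElementTree as ET

    def related_person_names(path):
        """Return (first, middle, last) tuples for each related person in a Form D XML file."""
        names = []
        for elem in ET.parse(path).iter():
            # Match on the local name so a default XML namespace does not break the lookup.
            if elem.tag.split("}")[-1] == "relatedPersonName":
                parts = {child.tag.split("}")[-1]: (child.text or "").strip()
                         for child in elem}
                names.append((parts.get("firstName", ""),
                              parts.get("middleName", ""),
                              parts.get("lastName", "")))
        return names

    # Hypothetical local copies saved in step 1: the original notice and an amendment.
    original = related_person_names("formD_original.xml")
    amended = related_person_names("formD_amended.xml")

    # Simple positional comparison; real filings may list people in a different order.
    for before, after in zip(original, amended):
        if before != after:
            print("changed:", before, "->", after)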

Bottom line

A Form D is supposed to be a straightforward compliance notice. When it contains a bizarre, repeated naming alteration like “everyone’s middle name is Ryan,” it is a signal. Not a verdict, a signal. It tells you the filing deserves verification, comparison, and careful documentation, because public records are only useful if they are accurate.

Side-by-side comparison across C D Services of America filings

2016-09-01 (Form D)

  • Securities offered: Equity
  • Total offering amount: Indefinite
  • Total sold: $5,600,000
  • Investors: 40
  • Sales commissions: $0
  • Payments to execs, directors, promoters (use of proceeds): $250,000

2016-10-17 (Form D)

  • Securities offered: Equity
  • Total offering amount: Indefinite
  • Total sold: $0
  • Investors: 0
  • Sales commissions: $0
  • Payments to execs, directors, promoters (use of proceeds): $250,000

2017-05-08 (Form D)

  • Securities offered: Equity
  • Total offering amount: $6,000,000
  • Total sold: $575,000
  • Investors: 6
  • Sales commissions: $0
  • Payments to execs, directors, promoters (use of proceeds): $250,000

2018-02-01 (Form D)

  • Securities offered: Equity, Debt, Option/Warrant, plus “Other: Secured Convertible Debt”
  • Total offering amount: $18,000,000
  • Total sold: $925,000
  • Investors: 0
  • Sales commissions: $0
  • Payments to execs, directors, promoters (use of proceeds): $360,000

2018-11-28 (Form D)

  • Securities offered: Equity
  • Total offering amount: $13,000,000
  • Total sold: $11,575,000
  • Investors: 36
  • Sales commissions: $0
  • Payments to execs, directors, promoters (use of proceeds): $0

2019-07-30 (Form D)

  • Securities offered: Equity
  • Total offering amount: $13,000,000
  • Total sold: $8,759,600
  • Investors: 23
  • Sales commissions: $250,000
  • Payments to execs, directors, promoters (use of proceeds): $250,000

These Form D filings read like multiple separate offerings over time, with shifting security structures, minimum investment thresholds, and reported uses of proceeds. But the internal inconsistencies, especially in the 2018 and 2019 $13M filings where the “amount sold” and commission-related lines do not track chronologically, warrant reconciliation against the underlying offering documents.
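Even before reaching those offering documents, the figures above can be tabulated and checked for lines that fail to move in one direction. Here is a minimal sketch, using only the numbers listed above and assuming (which is itself worth questioning) that filings stating the same offering size belong to the same offering:

    # Figures transcribed from the Form D summaries above ("Indefinite" recorded as None).
    filings = [
        {"date": "2016-09-01", "offering": None,        "sold": 5_600_000,  "investors": 40, "commissions": 0},
        {"date": "2016-10-17", "offering": None,        "sold": 0,          "investors": 0,  "commissions": 0},
        {"date": "2017-05-08", "offering": 6_000_000,   "sold": 575_000,    "investors": 6,  "commissions": 0},
        {"date": "2018-02-01", "offering": 18_000_000,  "sold": 925_000,    "investors": 0,  "commissions": 0},
        {"date": "2018-11-28", "offering": 13_000_000,  "sold": 11_575_000, "investors": 36, "commissions": 0},
        {"date": "2019-07-30", "offering": 13_000_000,  "sold": 8_759_600,  "investors": 23, "commissions": 250_000},
    ]

    # A filing that reports money raised but zero investors is internally inconsistent.
    for f in filings:
        if f["sold"] > 0 and f["investors"] == 0:
            print(f"{f['date']}: ${f['sold']:,} reported sold with 0 investors")

    # Within the same stated offering size, the amount sold is normally cumulative;
    # flag any later filing where it drops.
    for earlier, later in zip(filings, filings[1:]):
        same_offering = earlier["offering"] and earlier["offering"] == later["offering"]
        if same_offering and later["sold"] < earlier["sold"]:
            print(f"{later['date']}: amount sold fell from ${earlier['sold']:,} "
                  f"to ${later['sold']:,} against the same ${later['offering']:,} offering")

Run as written, this flags the 2018-02-01 filing ($925,000 sold, zero investors) and the drop from $11,575,000 to $8,759,600 across the two $13M filings, which is exactly what needs reconciling.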

Sentiments

“The prudent man always studies seriously and earnestly to understand whatever he professes to understand, and not merely to persuade other people that he understands it; and though his talents may not always be very brilliant, they are always perfectly genuine.

He neither endeavours to impose upon you by the cunning devices of an artful impostor, nor by the arrogant airs of an assuming pedant, nor by the confident assertions of a superficial and imprudent pretender.

He is not ostentatious even of the abilities which he really possesses. His conversation is simple and modest, and he is averse to all the quackish arts by which other people so frequently thrust themselves into public notice and reputation.”

― Adam Smith, The Theory of Moral Sentiments

Deconstructing Adam Smith, 2025
Photography: Jacqueline P. Ashby

Hindsight

In my recent MSc dissertation at the University of Oxford, I explored how medical students experience and perceive artificial intelligence in their learning environment. One thing struck me in their comments: it’s not just AI, it’s the looming feeling of being watched, and not always knowing by whom.

During my integrative literature review, I learned that AI may affect students’ psychological safety as they tend to patients and interact with colleagues.

As more AI-infused tools promise “continuous data” on performance, students described how AI in the hospital setting could become part of their performance evaluation: not just end-of-rotation feedback, but a kind of 24/7 visibility. The mere idea of being continuously monitored was enough to cause them stress and anxiety.

What stood out to me is that students were not opposed to AI. Many are excited, and eager to learn more about how the technology will be integrated into medicine. What they were asking for was something more basic: transparency, consent, and clear boundaries.

As we adopt AI into medical education, we need to design and integrate for psychological safety. Otherwise, we risk teaching the next generation of physicians to perform for the system, rather than to think with and for their patients. It reminds me of Mayo’s Hawthorne effect and the potential for AI to be used as a surveillance tool and a form of manipulation to boost productivity.

Over the next while I’ll be sharing a few short reflections from this research, paired with my own photography, as a way to keep this conversation human, creative, and thoughtful.

In hindsight, 2025
Photographer: Jacqueline P. Ashby
Kelvingrove Art Gallery and Museum

Roblox

“This is not about a minor lapse in safety, it’s about a company that gives pedophiles powerful tools to prey on innocent and unsuspecting kids. The trauma that results is horrific, from grooming, to exploitation, to actual assault. In this case, a child lost her life. This needs to stop.” ~ Alexandra Walsh, Partner at Anapol Weiss via Anapol Weiss

Roblox looks like digital LEGO, but the risks are now big enough that attorneys general, researchers, and child protection advocates are sounding alarms.

Media segment clip via WLBT.

Investigators using child avatars have repeatedly found sexualised content, grooming behavior, and harassment inside Roblox experiences, even with safety tools turned on (Revealing Reality, reported in The Guardian, 2025). The report also found that the avatar belonging to the 10-year-old’s account could access ‘highly suggestive environments’ and that another “test avatar registered to an adult was able to ask for the five-year-old test avatar’s Snapchat details using barely coded language”.

Screen shows the user’s correspondence pressuring the victim to complete a series of self-harming challenges. Note: “Messages are from ‘anonymous’ in Snapchat.” Media segment clip via WLBT.

Parents and several US states have sued Roblox for safety issues and for making it too easy for predators to contact children (Kentucky Attorney General, 2025; Louisiana Attorney General, 2025; Texas Attorney General, 2025). A single plaintiffs’ firm (Anapol Weiss) reports it has filed 12 wrongful-death suits against Roblox, one explicitly involving a 13-year-old girl’s suicide after alleged extremist grooming; other suits involve different forms of exploitation. NSPCC and other child protection groups now list Roblox alongside social media when they brief parents about online risk (NSPCC, 2022).

So this is no longer a niche concern. For clinicians and parents, Roblox belongs in routine conversations about mood, sleep, and safety.

The warning signs and suggestions below are adapted from WHO and APA criteria for problematic gaming, systematic reviews on cyberbullying and adolescent mental health, and media-use guidance from the American Academy of Pediatrics, Canadian Paediatric Society, and NSPCC.

Children: Changes in Behaviour & Stress

These behavioural changes may appear in a child who is heavily engaged on gaming platforms:

  • More irritable or tearful, especially when asked to log off
  • Staying up late to play, trouble falling asleep, or nightmares
  • Slipping grades, incomplete homework, loss of interest in offline activities
  • Withdrawing from in person friends, relying mainly on gaming “friends”
  • Hiding screens, constant headphones, refusal to play in shared spaces

Learn more about these symptoms via the American Psychiatric Association, the WHO, and the Canadian Centre for Child Protection.

Ideas for Parents & Guardians

  1. Play once, then set rules together. Ask your child to show you their favourite games and who they play with.
  2. Use the safety tools. Turn on parental controls, restrict chat and friend requests, limit spending, and keep devices in shared spaces when possible (Roblox Trust and Safety, 2025; NSPCC, 2022).
  3. Anchor Roblox inside a family media plan. Protect time for sleep, schoolwork, exercise, and offline friends. The American Academy of Pediatrics has simple family media plan tools, which you can access here, that can be adapted to any home.
  4. Make disclosure safe. Explain to your child that if something weird or scary happens on their gaming platform, they should tell you, because that information can help you keep them safe.

Three Questions Every Trainee Can Ask

You can integrate a digital media use conversation into a psychosocial history in under a minute:

  1. “What games or apps do you use most, is Roblox one of them?”
  2. “Who do you usually play with, people you know in real life or mostly people you only know online?”
  3. “Has anyone ever said or done something while playing the game that felt uncomfortable or scary?”

A “yes” to that third question is your signal to slow down, explore, document, and involve safeguarding if needed.

It’s important to understand that these platforms, such as Roblox, are social environments that can shape a child’s mood, sleep, sense of safety, and self-worth. As the NSPCC has highlighted, many parents underestimate what actually happens in these online spaces, while children often struggle to talk about what they see and experience. Our job, as clinicians and caregivers, is to stay curious, ask specific questions about gaming, and notice changes in behaviour, sleep, appetite, or school engagement. When we pair open conversations with early mental health support, we provide children a reliable, attuned adult who is watching out for them.

Blues

“The Oxford University Ice Hockey Club (OUIHC) is home to the Men’s and Women’s Blues ice hockey teams of the University of Oxford, England. The Men’s Blues, also known as Oxford University Blues, is one of the world’s oldest ice hockey teams. Tradition places the origin of the team in 1885, when a match is said to have been played against Cambridge University Ice Hockey Club in St Moritz, Switzerland. This date is recognised by the Hockey Hall of Fame, and prior to the 1985 Ice Hockey Varsity Match, the International Ice Hockey Federation formally recognised the 1885 game as the first ice hockey match played in Europe. However, there is no contemporary evidence that this match took place, and Oxford now claims that this was a bandy match.” ~ Wikipedia

Go Blues!!

In potentia, omnis possibilitas floret (in potential, every possibility flourishes)

Love is not all: it is not meat nor drink
Nor slumber nor a roof against the rain;
Nor yet a floating spar to men that sink
And rise and sink and rise and sink again;
Love can not fill the thickened lung with breath,
Nor clean the blood, nor set the fractured bone;
Yet many a man is making friends with death
Even as I speak, for lack of love alone.
It well may be that in a difficult hour,
Pinned down by pain and moaning for release,
Or nagged by want past resolution’s power,
I might be driven to sell your love for peace,
Or trade the memory of this night for food.
It well may be. I do not think I would.

Edna St. Vincent Millay
1892–1950

Worst movie ever. You are right.
Our Oxford Year. And our place. Always amongst the stillness of flowers.

Unseen

“It is not, on the whole, that natural phenomena and entities themselves are disappearing; rather that there are fewer people able to name them, and that once they go unnamed they go to some degree unseen. Language deficit leads to attention deficit. As we further deplete our ability to name, describe and figure particular aspects of our places, our competence for understanding and imagining possible relationships with non-human nature is correspondingly depleted.

The ethno-linguist K. David Harrison bleakly declares that language death means the loss of ‘long-cultivated knowledge that has guided human–environment interaction for millennia … accumulated wisdom and observations of generations of people about the natural world, plants, animals, weather, soil. The loss [is] incalculable, the knowledge mostly unrecoverable.’ Or as Tim Dee neatly puts it, ‘Without a name made in our mouths, an animal or a place struggles to find purchase in our minds or our hearts.’”

― Robert Macfarlane

Cybersecurity & Family Medicine


When we talk about Artificial Intelligence (AI) in healthcare, our minds often go to diagnostic tools, scribe assistants, or chatbot-based triage systems. But there’s another sector that has been living with AI’s risks and rewards for much longer: cybersecurity. Recently, I attended a lecture by cybersecurity strategist Dr. Craig Jarvis on the growing use of AI in digital defence. His insights translate remarkably well to our clinical context because, at its core, both cybersecurity and healthcare depend on trust, accuracy, and human oversight.

Let’s look at some of the lessons medicine can borrow from AI in cyber defence.


1. AI Can Help but Only If Humans Stay in the Loop

In cybersecurity, automated systems monitor threats, detect anomalies, and even block attacks. But when something unexpected happens, a human expert still needs to interpret, intervene, and decide.

In clinical practice, the same applies. AI tools can summarize patient notes, flag abnormal results, or even draft assessments — but they don’t understand context, patient nuance, or social determinants. We must always ensure the clinician stays in the loop.

“Speed is valuable, but not at the expense of human control.”


2. Reduce Toil, Not Thinking

In IT, AI is praised for reducing “toil”: the repetitive, low-value work that consumes time and mental energy. In medicine, the same promise is appealing: fewer administrative burdens, quicker charting, streamlined information retrieval.

But the key is toil reduction without cognitive erosion. If AI saves time, that time should be redirected toward deeper clinical reasoning, patient connection, or teaching moments, not simply faster throughput.


3. The System Is Only as Safe as Its Weakest Prompt

One cybersecurity slide Dr. Jarvis shared reported that 1 in 80 AI prompts carries a high risk of exposing sensitive enterprise data.

In healthcare, that translates to:

  • Be mindful of what information you input into AI tools.
  • Avoid typing identifiable patient data into any non-approved system.
  • Remember that AI retains patterns; once entered, data may not be fully private.

Clinical AI safety begins with data awareness at the prompt level.
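To make the first two bullets above concrete, here is a minimal, illustrative Python sketch of a check that runs before anything is sent to an external tool. The patterns and the check_prompt helper are hypothetical and deliberately crude; they are not a substitute for an approved de-identification pipeline. The point is only that the check happens at the prompt, before data leaves your hands.

    import re

    # Hypothetical, deliberately crude patterns; a real deployment would rely on an
    # approved de-identification service rather than a regex list.
    IDENTIFIER_PATTERNS = {
        "health card / MRN-like number": re.compile(r"\b\d{9,12}\b"),
        "date of birth": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def check_prompt(prompt):
        """Return the identifier types that appear to be present in the prompt text."""
        return [label for label, pattern in IDENTIFIER_PATTERNS.items() if pattern.search(prompt)]

    draft = "Summarize this note for Mr. X, DOB 1950-01-02, phone 555-123-4567."
    findings = check_prompt(draft)
    if findings:
        print("Do not send; possible identifiers found:", ", ".join(findings))
    else:
        print("No obvious identifiers found; still confirm the tool is approved for clinical data.")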


4. Diversity Matters: No Single AI Does It All

Cybersecurity systems rely on multiple forms of AI:

  • Machine learning to detect patterns
  • Generative AI to summarize or report
  • Agentic AI to automate tasks and responses

In family medicine, this diversity principle also holds. One model may excel at summarizing notes, another at generating patient education materials, and a third at supporting evidence retrieval. Integrating these tools thoughtfully ensures resilience and balance, not dependence on a single system.


5. Guard Against “Inflated Expectations”

Dr. Jarvis highlighted the case of Cylance, a once-hyped AI cybersecurity company valued at over $1 billion and later sold at a massive loss when its promise outpaced its performance.

In healthcare, inflated expectations can be equally dangerous. AI is powerful, but it’s not a replacement for judgment, empathy, or context. Adopting AI responsibly means piloting, evaluating, and refining tools before scaling, much like any new clinical guideline.


Bringing It Back to Practice

If we think like cybersecurity professionals, we can reframe how we approach AI in medicine:

Cybersecurity principle and its clinical analogue:

  • Human-in-the-loop oversight: clinician supervision of AI recommendations
  • Patch management: regularly update clinical AI tools and policies
  • Threat detection: identify AI misuse, bias, or data leakage
  • Governance frameworks: clear clinical and ethical accountability

The Bottom Line

AI can help us practice smarter, not just faster, but only if we approach it with the same discipline, skepticism, and care that cybersecurity experts apply to digital defence.

As family physicians and educators, our role is to ensure that AI augments, not replaces, the human connection at the heart of care.

Trust the technology, but verify the outcome, always with compassion and clinical judgment.