
The other day, I found myself flat on my back in a urologist’s exam room, eyes fixed on the ceiling tiles as a cystoscope made its slow, deliberate circuit.
Dr. Mitchell narrated in sparse, clinical phrases—efficient, not unkind, but clearly on a clock. I listened closely, reporter mode engaged, trying to absorb his key pronouncements without the benefit of pen or notebook. A string of follow-up questions began forming in my head—but not fast enough to catch him before he was on to his next patient.
By the time I crossed the parking lot and shut the door of my car, those questions had crystallized. Instead of starting the engine, I reached for my Samsung S23 and summoned my AI assistant. Over the next ten minutes or so, I retraced the doctor's words and filled in the blanks. With help from ChatGPT and Gemini, I unpacked the terminology, contextualized the diagnosis, and grasped the options he had laid out.
It was a novel experience — but also emblematic of a broader shift. Increasingly, human medicine is delivered in shorthand: quick impressions, sparse explanations, little time for dialogue. Into that gap steps AI — now readily accessible to anyone with a smartphone. These tools don’t just explain what the doctor meant; they equip patients to ask better questions, grasp tradeoffs, and take a more active role in managing their own care.
That shift didn’t happen overnight. For much of the 20th century, medicine was a high-touch profession: doctors made house calls, answered questions, and built long-term relationships. But over time, cost-cutting, specialization, and liability fears reshaped the system. Efficiency replaced familiarity.
Today’s healthcare system is designed for speed. Patients are often left with a partial understanding, expected to piece together the full picture on their own. Sitting in the parking lot of my urologist’s office, I experienced this firsthand: a brief, competent exchange with a busy doctor — followed by a deeper, more complete conversation with an AI. The machine wasn’t replacing my physician, but it was stepping in to do something he no longer had the time to do.
What just happened?
What, exactly, had I just done?
In the space of ten minutes, I used a large language model — first ChatGPT, then Gemini — to reconstruct and clarify a real-world medical encounter. I retraced my urologist's phrases, one by one, then prompted for context. What is a "pressure test," really? Why does the bladder need to be refilled? What outcomes does this procedure rule out? Which risks are routine, and which warrant concern?
Each time, the answers came back in clear, neutral language. They felt both plausible and specific. More important, they lined up — not perfectly, but closely — across multiple sources. That convergence gave me confidence I understood what had taken place inside the exam room.
For me, as a longtime reporter, this was less about medical curiosity than about trust mechanics. Where exactly did this sense of understanding come from? Why did the model’s answers feel so grounded, even though they weren’t drawn from my chart — or any single source?
Why AI responses feel authoritative
The answer lies in how these systems are trained. Models like GPT-4 and Gemini don’t consult a physician or query a database. They pattern-match across the vast corpus they were built on: medical journals, institutional websites, textbooks, and curated online discussions. They detect the linguistic common denominators — how expert voices tend to explain a concept — and they generate new sentences that resemble those patterns.
That resemblance is powerful. It mimics the pacing, caution, and phrasing of genuine medical advice. And it’s that stylistic accuracy — not source citations — that convinces the reader they’re hearing something trustworthy.
Put another way: the model isn’t citing expertise. It’s emulating it.
Behind the scenes, filters do exist. Google and OpenAI deploy safeguards to avoid the worst-case scenarios: harmful recommendations, pseudoscience, misinformation. These filters lean heavily on public health authorities like the CDC, WHO, and Mayo Clinic. The training process itself gives higher weight to content that mimics institutional credibility.
The result is an AI response that lands squarely in the mainstream — not because it knows what’s true, but because that’s where the statistical center of gravity resides.
Translation vs. prescription
This carries both reassurance and risk. Most of the time, staying close to the consensus is a good thing. But it also means that alternative views — preventative strategies, integrative approaches, or culturally distinct medical practices — may be minimized or presented through a lens of skepticism.
That doesn’t make the tool useless. But it requires the human user to fully grasp how far it can be trusted.
At its best, a well-tuned AI model serves as a translator. It helps ordinary people turn professional shorthand into full sentences. It lays out definitions, contextualizes terms, and points to logical next steps. It doesn’t prescribe. It clarifies.
Used that way — as a first-pass interpreter — the tool can be remarkably empowering. Especially for patients navigating time-constrained appointments, unfamiliar procedures, or conditions they’ve only just learned how to spell.
Human vetting a must
I can't emphasize enough that the tool is only as good as the user. And here's where decades of journalism proved their worth.
I didn’t stop with one AI response. I cross-checked answers from two models. I scanned institutional websites. I triangulated tone, not just content. I looked for hedges, caveats, and inconsistencies. And in doing so, I began to build confidence — not just in the tool, but in my own ability to make sense of it.
That practice — testing, comparing, contextualizing — isn’t new. It’s the foundation of good reporting. And it may become a new baseline skill for navigating all kinds of AI-assisted decisions, from medical procedures to financial planning.
What I experienced in that parking lot wasn’t a miracle. It wasn’t a replacement for my doctor. And it wasn’t the start of some dystopian future where machines diagnose and humans obey.
It was something quieter, more useful: a companionable interface that extended the conversation, answered questions the human didn’t have time for, and helped me walk away from a confusing encounter feeling clearer, calmer, and more in control.
That’s not artificial intelligence. That’s just plain help.
So where do we go from here? I’ll keep watch, and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used ChatGPT-4o to accelerate and refine research, assist in distilling complex observations, and serve as a tightly controlled drafting instrument, applied iteratively under my direction. The analysis, conclusions, and the final wordsmithing of the published text are entirely my own.)