Earlier this year, my YouTube feed began filling up with provocative videos, each dressed up to sound authoritative yet somehow off.
A gravelly narrator whispered Shaolin monk longevity secrets over looping monastery footage. Another voice warned, “Tesla’s death was just the beginning,” paired with flickering photos and AI-aged portraits. A jittery explainer dissected a government hearing as if it were a soap opera.
What’s showing up now in all of our feeds isn’t low-effort clickbait but genre-wrapped storytelling that mimics human authority.
The genre sets the mood, the AI does the grinding, and the result is a synthetic confidence that is frictionless to produce, infinite in supply, and hard to look away from. What makes it seductive is the polish: tight scripts, vivid detail, and pacing that lend these exaggerated, conflated narratives the aura of truth, even when they are stitched from selective facts or speculative leaps.
It took me a while to recognize what I was seeing: persona automation, a new type of algorithmically generated content wrapped in the skin of human authority, empathy, and cultural familiarity.
GenAI isn’t just speeding up how content gets made. It’s simulating the presence of a trustworthy narrator — someone who seems to know, to care, to mean what they say. But that presence is synthetic. It’s there not to inform or enlighten, but to trigger engagement, reinforce bias, or feed the feed. And it’s reshaping what we see, believe, and trust — before most of us even realize what’s happening.
Mimicry gone wild
But for those of us working in good faith, there’s another path. The same GenAI tools being used to mimic presence can also be used to deepen it.
Writers, editors, storytellers — we’re learning how to use these tools for something else entirely. Not to fake presence, but to extend it. For those with lived experience and deep domain knowledge, GenAI can surface insights, help shape context, and draw connective lines across time. I’ve found that it lets you pull from memory, from craft, from years of observing how systems behave — and thread that into something timely, vivid, and useful.
It’s also an exceptional line editor — steeped in decades of style guides, genre conventions, and professional norms. Whether you’re writing a screenplay, a legal brief, a news feature, a research abstract, a lyric poem, or a product launch email, GenAI can mimic the instincts of a seasoned editor — adjusting tone, structure, and cadence to suit the form.
Used this way, GenAI isn’t a shortcut — it’s a multiplier. It helps a single journalist cover more ground, a small team tell bigger stories, a niche expert connect dots across domains. It doesn’t replace insight or judgment — it revives and extends them. For those with deep experience, it’s a way to reengage, applying hard-won knowledge to unfolding events in real time. And for those without a platform, it amplifies streetwise intuition, lived context, and underheard perspectives — giving more people a way to speak clearly, credibly, and usefully into the fray.
A veneer of humanity, at scale
We’ve had spammy blogs and clickbait farms before. But this is different. Today’s synthetic media doesn’t just churn — it performs. It wears the face of concern, the voice of wisdom, the posture of authority.
What’s rising now is a flood of emotionally calibrated content that mimics the gestures of knowing and caring. The tone is intimate. The framing is familiar. The production quality is high.
But behind it all are the same operators who built empires on SEO clickbait, content farms, and engagement hacks. They were the first to seize on GenAI — folding it into their old playbooks, swapping out freelancers for machines that can churn out polished scripts, synthetic voices, and stock-footage mashups in minutes.
The mechanics haven’t changed: flood the zone, capture attention, game the algorithms. What’s changed is the speed, the scale, and the eerily convincing human veneer layered on top.
This isn’t AI amplifying human insight. It’s AI simulating the appearance of insight — at scale. The goal isn’t clarity or connection. It’s traction. These systems aren’t helping people say what they mean. They’re optimizing for what performs, whether or not it’s grounded in truth, experience, or intent.
What is real?
The challenge is that the corrosive wave arrived first, and it's the one gaining speed. Persona automation fits neatly into the existing model: maximize engagement, harvest attention, monetize the feed. It's no accident that the same tech giants racing to own the GenAI rails are the ones whose platforms reward performance over presence. The incentive structure favors frictionless output, not thoughtful craft.
That momentum threatens to drown out the more promising path. One where these tools amplify human insight instead of simulating it. Where hard-won expertise and lived experience are carried forward — not buried under a tide of synthetic authority. Where common sense and quiet knowledge get a microphone, not just those who know how to game the feed.
Whether that path gains traction — and who gets to walk it — is the question that matters most. The answer will shape not just what we read or watch, but what we come to trust as real.
I say this as someone who is using GenAI every day. I’m a journalist with decades of boots-on-the-ground reporting under my belt. I’ve interviewed hackers, regulators, and CISOs. I now work alongside an AI assistant to accelerate and deepen my editorial process—but always as a force multiplier for my own voice, my own judgment, and my own values.
There is a real difference between using AI to sharpen human-centered storytelling and outsourcing the humanity altogether.
And right now, that distinction is getting harder to spot—and easier to ignore.
Why it matters
Cultural critics like Ethan Mollick and writers like Ed Zitron and Charlie Warzel have begun warning us. Mollick calls it "the agentic future." Zitron has said plainly: GenAI is cheapening what we see. Others, borrowing Cory Doctorow's coinage, have written about the "enshittification" of the web: the way everything is slowly degrading into content sludge, optimized for clicks, not value.
What unites their concerns is this: we are saturating the public square with synthetic emotion and fabricated wisdom, all delivered with the tone and pace of human earnestness—but devoid of accountability, nuance, or lived perspective.
There’s no kill switch here. Platforms reward the output. The tools are easy. And audiences, overloaded and fractured, increasingly don’t notice—or care—if it’s real, as long as it sounds true.
So yes, we should be concerned. Not just about plagiarism or misinformation or job loss—though those are real. The deeper risk is subtler: we may lose our shared intuition for what is real, what is grounded in lived human experience, and what is simply a good impersonation.
Once that happens, even the best intentions—transparency, disclosure, ethics—may not be enough. We’ll be swimming in a sea of sincerity simulacra, unsure where the signal is.
That’s why this column, and my broader work, continues to stand for something simple: truth-telling with accountability. There is still time to distinguish between content that serves an audience and content that exploits one. But we must act now, while we can still tell the difference.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used ChatGPT-4o to accelerate and refine research, assist in distilling complex observations, and serve as a tightly controlled drafting instrument, applied iteratively under my direction. The analysis, conclusions, and the final wordsmithing of the published text are entirely my own.)
September 20th, 2025