How History Gets Rewritten Without Anyone Noticing
Why Don Lemon's Arrest Matters More Than You Think
I’m sure you’ve been following the hellscape that is America right now and likely feeling a bit overwhelmed. Same. When I woke up and learned that Don Lemon was arrested by federal agents in connection with a protest he was covering at a Minnesota church, I was shocked, but not surprised. As I caught up on the details, it got me thinking… about a lot of things.
The arrest, unconstitutional and a challenge to our First Amendment rights, raised natural questions about authoritarianism, press freedom, and the vulnerability of independent journalists.
Then I started thinking about what he must have been feeling and thinking in those moments. That led me to become curious about his truth: the details of what happened from his point of view, and what he might choose to share on his platforms. Much of it is likely history in the making, an inflection point we are all experiencing in real time.
Which led me to think more deeply about something that I feel like a lot of people might be missing:
What if the biggest threat to truth isn’t censorship as we know it? What if it’s AI?
The Detail Everyone’s Missing
As most of us know, Don Lemon used to work at CNN; now he runs his own independent media company and publishes on platforms he controls.
Most analysis of his arrest focuses on the obvious: the chilling effect on press freedom, the targeting of journalists, the specter of authoritarianism creeping into democratic spaces. All true. All urgent.
But there’s a second layer that almost no one is naming yet—one that fundamentally changes what platform ownership means in an AI-mediated world.
The fact that Lemon owns his distribution channels isn’t just about reach.
It’s not just about avoiding corporate editorial filters or algorithmic suppression.
It’s about whether his version of what happened will even exist six months from now when AI systems are asked to explain this moment.
Two Understandings of Platform Ownership
There’s what most people understand about owning your platform, and then there’s the part that should keep us all up at night.
The conventional wisdom is straightforward: own your platform so you can’t be silenced. So corporations can’t demonetize you. So governments can’t flip a switch and scatter your audience. In countries with authoritarian regimes, this matters acutely—independent platforms can mean the difference between a story getting out or being completely suppressed.
That’s true. That’s real. We’ve watched it happen.
But the deeper risk—the one I don’t think we’re talking about enough—is that AI doesn’t just distribute information anymore. It re-authors it.
How AI Re-Authors Reality
When a story breaks and mainstream outlets delay their coverage, underreport context, or frame events through a particular institutional lens, that partial version becomes the indexed record. That’s what AI systems scrape when someone asks, “What happened with Don Lemon?” or “What’s happening with journalist arrests in the U.S.?”
Then those AI systems generate summaries, timelines, educational explainers, and so on. We’ve all watched how these have become prioritized in our search results. More often than not, it’s actually the first thing we see after we search for something.
This is where it gets a bit dystopian:
These AI summaries are essentially building background context for future stories.
The stories that our grand- and great grand kids will consume.
And each layer of that generative process moves further from what actually occurred.
This isn’t like traditional editorial bias, where you can identify a slant and seek out alternative sources. This is structural omission at scale.
AI systems don’t ask, “Who was there? Who’s telling the truth? Whose account should be trusted?” They ask: “What’s indexed? What appears authoritative? What’s repeated across multiple credible-seeming sources? What’s structured in a way I can retrieve and reference?”
So if your account of events isn’t in the record, if it’s not machine-readable, searchable, persistent, and cross-referenced, it doesn’t just get ignored by people scrolling through feeds.
It gets erased by the systems that explain the world to people.
Not because any single person decided to suppress it.
But because it never registered as statistically significant in the first place.
I’m calling this algorithmic omission.
It’s when truth disappears not because it’s actively suppressed, but because it never registers. I fear we’re evolving into a world where events don’t simply vanish because they’re denied. They vanish because they weren’t recorded in forms that the machines archiving our world can find, retrieve, and reference when constructing their understanding of reality.
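To make the mechanism concrete, here’s a toy sketch of the filtering logic I’m describing. It’s entirely hypothetical, not how any real AI system is built, but it captures the shape of the problem: a retrieval step that only keeps accounts that are indexed, machine-readable, and repeated enough to look authoritative, so everything else simply never enters the summary.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """One published account of an event (a toy model, not a real schema)."""
    source: str
    text: str
    indexed: bool            # did a crawler ever see it?
    machine_readable: bool   # plain text/HTML vs. ephemeral video or story
    corroborations: int      # how many other indexed sources repeat it

def retrievable(accounts, min_corroborations=2):
    """A crude stand-in for retrieval: keep only what is indexed,
    parseable, and repeated enough to look authoritative."""
    return [
        a for a in accounts
        if a.indexed and a.machine_readable and a.corroborations >= min_corroborations
    ]

accounts = [
    Account("wire service", "Official summary of the arrest", True, True, 14),
    Account("eyewitness blog", "First-person account from the scene", True, True, 1),
    Account("instagram story", "Raw footage, gone in 24 hours", False, False, 0),
]

# The "summary" gets built only from what survives retrieval.
# Nobody suppressed the other accounts; they just never registered.
for account in retrievable(accounts):
    print(account.source, "->", account.text)
```

Nothing in that toy model is malicious. The eyewitness account isn’t rejected; it just never clears the thresholds. That is what algorithmic omission looks like.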
This is how ignorance, bias, and suppression scale in the AI age.
This isn’t the censorship of our grandparents’ time. It’s not something you can see and mobilize against instantly.
In authoritarian countries, you can already see how this dynamic plays out with devastating efficiency.
State media dominates the machine-readable record. Independent journalists are arrested, marginalized, or forced into exile. Their accounts (when they exist at all) live in fragmented spaces: encrypted channels, diaspora publications, ephemeral social media posts that disappear or get buried.
AI systems train on what’s statistically present and authoritative-seeming. And what’s statistically present is the state-approved narrative.
The result is that the sanitized version becomes the “neutral” explanation that gets exported globally, not necessarily because AI systems are ideologically biased toward authoritarianism, but because that’s what dominates the indexed data.
When you understand this, you can see why it matters not just to have your “owned” corners of the internet, but also to recognize that what you put out on third-party platforms trains the broader algorithms that decide what gets included in the historical record and what doesn’t.
The absence of alternative accounts becomes functionally equivalent to their non-existence. This is why the idea that you only need your “owned” channels is untrue. You need both.
You don’t need to think about posting on other platforms for “views”; you need to think about posting on other platforms for inclusion in a broader narrative that is currently being written.
This isn’t a hypothetical. It’s already happening. And it’s not limited to authoritarian regimes.
In the age of AI, what’s not machine-readable becomes statistically irrelevant—regardless of whether you’re operating in a democracy or a dictatorship.
What History Teaches Us About Information Consolidation
This pattern isn’t new. We’ve seen it before.
History has always been written by whoever controlled the recording mechanisms. We just don’t usually see it clearly until enough time has passed to reveal what was left out.
Let’s think about the pre-industrial era: before large publishing houses consolidated media, before national newspapers and broadcast networks. You had a fragmented, distributed ecosystem: pamphlets, broadsides, independent printers scattered across cities and towns. Multiple competing accounts of the same events, produced by people with different perspectives and different stakes in how those events were understood. Not too dissimilar from social media and blogging.
That fragmentation was messy. Chaotic, even. But it also meant no single entity could completely control the narrative. Different perspectives survived because they existed in parallel, in independently controlled channels.
Then consolidation happened.
Large publishing houses emerged.
National newspapers became the authoritative record.
Broadcast networks centralized information distribution.
And the historical record started to narrow because fewer entities controlled what got widely recorded, archived, and treated as authoritative.
We can look back now and see entire movements, entire communities, entire perspectives that were systematically underreported or deliberately misframed because they didn’t align with the editorial priorities or ideological frameworks of whoever controlled the major distribution channels.
The early labor movement. Civil rights organizing. Feminist activism. Indigenous resistance. We have fragments of those stories because people kept independent records: diaries, community newspapers, pamphlets, oral histories passed down through generations.
But the dominant historical narrative? That was written by whoever had the biggest megaphone and the most institutional legitimacy.
The Speed Problem
What’s different now is the speed and scale at which this consolidation happens.
Historical narrative consolidation used to unfold over decades. You’d have competing accounts for years before one version became dominant, entered textbooks, or shaped collective memory.
AI compresses that timeline dramatically.
It doesn’t just consolidate narratives over generations; it does it in real time, automatically, at the moment of query. And it doesn’t just shape what gets amplified in the present. It shapes what future systems will even know to look for when trying to understand what happened.
The absence becomes the outcome.
The outcome becomes the recorded reality.
And that recorded reality becomes what people believe is true.
What Platform Ownership Actually Protects Now
So when we talk about platform ownership in the AI era, we’re not just talking about distribution.
We’re talking about ensuring your version of reality survives inside the systems that explain the world.
We’re talking about making sure that when future AI systems (or future humans querying those systems) try to understand what happened in this moment, your truth, your context, your eyewitness account, your framing is actually there in the record.
This is also why who owns AI companies becomes increasingly critical.
The truth is the systems doing this re-authoring aren’t neutral. They’re shaped by whoever controls them, whoever sets their training parameters, whoever decides what counts as an authoritative source.
And if history teaches us anything, it’s that consolidated control over information systems always leads to partial records being treated as complete ones.
The Practical Question
So what does this mean for anyone trying to document what’s actually happening in the world, or even for anyone who might be working on something as simple as their personal brand?
The question is no longer simply: “Do you have a blog? Are you posting on social media?”
The question is: Is your truth on the record?
Is it machine-readable? Is it durable enough to survive platform changes and algorithm updates? Is it structured in ways that AI systems can actually find it and reference it when they’re building their models of what occurred?
Because if the answer is no—if your truth only exists in ephemeral Instagram stories, in posts that disappear after 24 hours, in platforms you don’t control and that could change their terms of service tomorrow—it might matter to the people who just so happen to see it in the moment.
But it won’t exist for the systems that increasingly explain what happened to millions of people who weren’t there.
Over time, what those systems can’t find becomes functionally equivalent to what didn’t happen.
Not philosophically. Functionally.
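For what it’s worth, “machine-readable” doesn’t have to mean anything exotic. Here’s one minimal, illustrative sketch (the names, URL, and field values are placeholders, not a prescription): publishing your account on a page you control with structured metadata, such as schema.org-style JSON-LD, so crawlers and archiving systems can at least identify who said what, and when.

```python
import json
from datetime import date

# A minimal, illustrative schema.org NewsArticle record (JSON-LD).
# The author name and URL below are placeholders, not real pages.
record = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "What I saw at the protest: a first-person account",
    "author": {"@type": "Person", "name": "Example Witness"},
    "datePublished": date.today().isoformat(),
    "url": "https://example.com/first-person-account",
    "articleBody": "The full text of the account lives here, in plain text.",
}

# Embed this in a <script type="application/ld+json"> tag on the page,
# alongside the human-readable version of the story.
print(json.dumps(record, indent=2))
```

None of that guarantees inclusion, of course. It just raises the odds that the systems doing the archiving can find, parse, and attribute the account at all.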
Maybe the lesson from history isn’t just that information consolidation has always produced partial records.
Maybe it’s that the periods when we had the most accurate, multifaceted historical records were the periods when information systems were most distributed and independently controlled.
When no single entity could dominate the narrative completely. When multiple versions could survive in parallel. When the record was messy but comprehensive rather than clean but incomplete.
We’re at a strange inflection point now. AI is creating unprecedented consolidation pressure—but the technology itself could theoretically enable unprecedented distribution if we choose to build it that way.
The question is whether we’ll recognize what’s at stake before the window closes.
Still Thinking This Through
I don’t have all the answers here. This is genuinely unfinished thinking, which is why I’m working through it in public.
What does this mean for journalism? For activism? For anyone who witnesses something important and wants to make sure that truth survives beyond their immediate circle?
What does it mean when the systems we’re building to explain the world to us are fundamentally incapable of accounting for what they can’t statistically verify—and when verification depends on having been recorded in specific, machine-legible formats by institutions those systems have been trained to recognize as authoritative?
How do we build information ecosystems that preserve truth without requiring everyone to become their own publisher, archivist, and SEO expert?
These are living questions. I’m genuinely uncertain about some of the implications.
But I know this: we’re living through a fundamental shift in how truth gets constructed, preserved, and transmitted. And most of us are still operating as if the old rules apply.
As if being right is enough. As if witnessing is enough. As if truth has some inherent property that makes it survive independent of the structures designed to record and retrieve it.
It doesn’t.
In the age of AI, absence becomes outcome. And outcome becomes reality.
Let’s keep thinking about what that means—and what we might do about it—together.
This is part of Thinking in Public, where I work through structural shifts in technology, power, and culture as they’re happening, treating them as thought experiments we can explore together. If this resonates, please consider subscribing to my YouTube or dropping a comment below. I’d love to hear how you’re thinking about these questions.
