Is This AI?
February 7, 2026
Trump-l'oeil and the Liar's Dividend
TLDR:
The creators of the Parthenon manipulated their marble media to please, to give viewers a feeling that properly reflected their world view of awe-inspiring symmetry, fair and balanced.
The facts are: the columns bulge in the middle, curve upwards at the base, and tilt inward.
But for over two thousand years, the feeling has been what mattered more, and that is the feeling of straight, classic beauty, up and down the lines.
Uncertainty is real. It has a long history, entangled with perception.
And yet our current slop-fueled disinformation crisis feels like it has fundamentally broken our baseline sense of reality in a new way.
The solution to this crisis, though, shouldn't be to reject AI technology outright, in the same way it would have been a shame to tear down the Parthenon after learning there was a gap between the feeling it evoked and the facts of its construction. Similarly it would have been a shame to ban the use of marble.
Being "against AI" is like being "against" a medium, a material, or a technology: like being against, on a simple-machine level, the screw.
But, in the same way we can be "pro-screw" and yet "anti-getting-screwed," we can be "pro-AI" while at the same time "anti-getting-screwed-by-AI."
We can enjoy the Parthenon despite knowing it's been manipulated.
What?
Something remarkable happened in South Georgia a few weeks ago: it snowed.
From my hotel window at 6AM, I took a slo-mo video of the first flakes falling on Americus. I posted it to the Georgia subreddit, thinking it would be a nice timeline cleanse from the horrific ICE videos and less horrific but still exhausting arguments about NIL deals in college sports normally found there.
But the response wasn't, "How lovely!" or "Stay safe out there!"
Nope.
It was a cascade of comments asking "Is this AI?", saying "Looks like AI," analyzing the frame rate, citing the video's "tells," and then declaring definitively that it was AI and, more troubling, that I was a bot.
Stunned, I posted a "verification" pic of me with a snowman I had made out of the fresh powder and, as an ultra-authentic touch, Buc-ees beef jerky arms.
See? Could a clanker do this?
But the moderators were not assuaged. They removed that post as being AI, and, jokesters that they are, left the original post up.
So What?
After getting over my initial shock, I felt weirdly heartened about the whole thing.
Detective Reddit had rightfully pointed out that my account often posts Daily InstrumIntel stories in various subs, many of which focus on AI.
And Detective Reddit was justified, since no one has time to waste on slop, in using that doubt to declare me a clanker and suggest the sub move on to more trusted sources for its snow vids.
Why should Detective Reddit trust me, anyway? I didn't post on that sub often, and, honestly, Frosty the Buc-ees Beef Jerky Snow Man vouching for me probably didn't help things.
"Good for him," I murmured to Frosty the Buc-ees Beef Jerky Snow Man as he slowly disintegrated. "Detective Reddit shouldn't have let me get away with the Liar's Dividend."
Frosty did not respond, but I sensed he wanted to know more about the Liar's Dividend.
"If everything is potentially a lie," I said, "then a person caught in a lie no longer has to prove they are innocent; they only have to point to the evidence and ask, for example, 'Is that AI?' Because the public is already primed for skepticism, that claim is often enough to create a 'dividend' of doubt that protects the liar. Make sense?"
Alas, I found that I was speaking only to a puddle.
BUT! The point was, the MAGA movement's manipulation of media, the Democratic Party's bad-faith Biden campaign, and the broader flood of AI "slop" have all trained us to doubt everything we see.
Because, it seems, people in power, regardless of party, lie about everything. And they have since 440 B.C.E.
So doubting me and Frosty the Buc-ees Beef Jerky Snow Man is, for now, actually a good sign. Distrust but verify!
But, as we go forward, we need to protect against a public that views everything with not just the jaundiced eye of criticism, but the pink-eye of the MAGA nihilist.
For the pink-eyed MAGA nihilist, nothing is true and everything is possible on one side; everything is true and nothing is possible on the other.
When President Trump posts manipulated images of leaders like Maduro being extradited, or of activists like Nekima Levy Armstrong being arrested, it creates a spillover effect where the public treats an innocuous video of something like snow falling with the same skepticism they treat a video of Biden speaking coherently shared by the DNC.
When we can no longer agree that it is snowing outside, we can at least agree that we no longer have a basic level of trust in traditional information brokers. And the abundance of "fake" content provides a strategic benefit to those who want to dismiss "real" evidence of their own misconduct.
That, Frosty the Buc-ees Beef Jerky Snow Man, is the Liar's Dividend.
The MAGA slop posters, as well as the bad faith posters on the left, aren't just lying; they are training the public to view all video evidence as more of a Rorschach test than a sufficiency-of-evidence test.
For example, as Parker Molloy points out, the "Verified video of Alex Pretti [was] dismissed as fake by people who think they're fighting misinformation. Meanwhile, AI-generated content spreads as proof."
Which, as Ben Smith says, quoting Nassim Nicholas Taleb on the terminal state of this doubt, "not only can we be confident that there is a conspiracy but, worse, we can also be confident that those who claim that there is no conspiracy are part of the conspiracy."
Nothing is true, everything is possible; everything is true, nothing is possible.
But this doesn't have to be a death spiral. It might be an opportunity to reset our perception politics in a good way. How?
By rewarding and lifting up responsible and innovative AI users and, at the same time, calling out abusers. If we do that, we might just build a more responsive, and responsible, media ecosystem.
Now What?
Jesse Singal argues that ubiquitous video isn't making us smarter, because the fear of manipulation has turned every piece of evidence into a tribal weapon, rather than a shared fact.
Feelings matter. Facts don't. If it feels true, then it's true.
The solution to this vortex, though, isn't to ban AI or banish forever anyone who uses AI. That puts us all in the "anti-screw" camp of disempowered backwater tinkerers.
Instead, we could (maybe?!) use this godforsaken inflection point to actually build a distributed, accountable media ecosystem.
We have the foundation already. We have trusted interlocutors who have skin in the game, like the researchers at Bellingcat, Wired, ProPublica, Notus, and 404 Media, as well as writers like Parker Molloy, Ryan Broderick, and Chris Mills Rodrigo.
They are all staking their professional lives on their ability to sort signal from noise in this disinformation hellscape. And their readers are helping build this federated system, too, by holding them accountable, on community forums like Discord and, yes, Reddit.
By building these distributed, varied networks of verification, where trust is based on a history of responsible behavior and "Implementation Experience" rather than just a legacy reputation of expertise, we can rebuild the media ecosystem around mutual accountability, so we no longer have to "Stand with CNN" or turn Jimmy Kimmel, charming as he is, into Rosa Parks.
We could actually trust what we see in front of our faces again.
*
The explosion of subscription-model media has its downsides, for sure, but it has contributed to this opportunity to hold media and creators accountable.
Outlets like the New York Times now rely as much on subscribers as they do on advertisers for their revenue. This has already shifted power away from corporate heavies toward loyal readers. So now threatening to pull your subscription, especially if you're one of many such threatening subscribers, can carry as much weight as Montgomery Ward threatening to pull its Sunday ads once did, back in the Powers That Be days.
The same is true for Substack audiences and content creator followers. These newsfluencers (sorry!) are beholden to their audiences, generally, which includes, specifically, you.
So your job in this ecosystem is to call them out when they screw up.
Accountable media requires an active audience, and it only works if "unsubscribe" is a credible threat. But use it wisely!
It's important to keep in mind that the road through the slop is going to be messy, and banishing anyone who gets it wrong once from the information ecosystem isn't going to help in the long run. Reserve the tar and feathers for bad-faith repeat offenders, like Trump and the sons of Project Veritas.
In a world of infinite state-sponsored slop, we need a community of human filters. We need . . . Media Bivalves!!
(Hmmm. I'll workshop that one.)
Anyways, we need trusted interlocutors who have built equity within a community. These are the people who can say, "I know this person. I saw this snow. You can trust me. It's real."
And, perhaps, in time, if I build up enough equity within the sub, the moderators can also admit that Frosty the Buc-ees Beef Jerky Snow Man (RIP) was as real as he could be.
*NB: All typos are because my frontal lobe has been burnt out.
