Persuasion in a time of Brain Rot
Climate Doomsday > AI Apocalypse
Judging by my LinkedIn feed and the buzzword-saturated ads in my podcast stream, there is a robust market for AI FOMO.
“Let’s get straight to the point,” says IBM. “Your company cannot afford to wait any longer.”
“Bring the power of AI to your organization quickly,” says Salesforce. “So that you don’t get left behind.”
“Don’t miss out on the future,” says Slack. “Of AI-powered collaboration!”
Want engagement?
Make office professionals feel like they’re sitting in a horse and buggy watching Betamax tapes while a SELECT FEW geniuses are using super-intelligent robots to get rich as hell.
What?
Obviously, there is a lot of synthetic snake oil here, but the innovations are often real and startlingly useful, once you get past their “look-ma-no-hands” gimmicks.
The most plausible near-term roadmap I’ve seen is the one laid out by the AI Futures Project (https://ai-futures.org/), a group of respected forecasters, writers, and technology workers who recently put together a work of speculative fiction called AI2027.
So What?
This is their prediction for what life looks like next year:
“AI has started to take jobs, but has also created new ones. The stock market has gone up 30%..., led by [OpenAI], Nvidia, and whichever companies have most successfully integrated AI assistants.
“The job market for junior software engineers is in turmoil: the AIs can do everything taught by a CS degree, but people who know how to manage and quality-control teams of AIs are making a killing.
“Business gurus tell job seekers that familiarity with AI is the most important skill to put on a resume. Many people fear that the next wave of AIs will come for their jobs; there is a 10,000 person anti-AI protest in DC.”
This is just the prelude to their full cinematic timeline, where they present a future-history scenario, grounded in technical plausibility and realpolitik logic, that expands from these near-term breakthroughs to more astonishing world-historical step-changes.
Mosquito drones, global espionage, space colonization, etc.
But all that comes later.
Back in the more plausible-sounding 2026, communications, content, and creative humans still have director- and manager-level positions, but they no longer direct or manage other humans.
Rather, they direct and manage teams of custom GPTs and AI agents that produce replacement-level copy, consolidate existing ideas into slick-looking slide decks, and extract action points from meetings, which are staffed by AI agents from other departments.
The report goes on to present the United States and China as dual AI superpowers racing toward synthetic general intelligence, with a choose-your-own-adventure pair of endings where we either “race” toward a future in which a rogue AI uber-HAL sends drones to kill every human on earth and then colonizes space (sad trombone).
Or toward a “slowdown” future where the sad trombone is muted because people took AI safety seriously enough to establish oversight and kill-switches, but we’re still basically subservient to an AI we can neither understand nor control.
The good news is that these AI doomsday and slightly-less-doomsday scenarios are, IMO, unlikely to happen.
The bad news? They’re unlikely to happen because the speculators have severely underestimated the defining constraint of our time: the climate crisis.
Which, perhaps, produces the saddest trombone of them all.
Climate change (artist rendering)
AI2027 uses a straightforward modeling logic that extends current technology trendlines out into the future.
Both of the report’s speculative futures hinge on a 2027 inflection point where an artificial intelligence explosion leads to the development of Artificial General Intelligence operating beyond human control and understanding.
But if we use that same modeling logic to predict climate impacts along the same timeline, we’re more than likely to end up at the IPCC’s climate-shock-energy-grid-whoopsie-doodle future.
In the IPCC’s graph, the 2027 inflection point gestures toward a more George Romero world than a Ray Kurzweil one.
Even if neither timeline gets it exactly right, the point is, energy use isn’t just a subplot in the AI speculative future.
It’s the plot.
Now What?
AI2027 says peak demand for one “OpenBrain” data center in mid-2026 — before the slop even hits the fan — will be 6 GW.
Right now, Northern Virginia, which handles an estimated 70 percent of the world’s internet traffic, hosts roughly 300 data facilities with a combined power capacity of 2,552 MW.
So the 2027 future needs roughly 135% more power than those data centers’ entire current combined capacity: the equivalent of about four new nuclear power plants, or more than thirty of the world’s largest current data centers operating at full tilt.
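If you want to check the math yourself, here’s a quick back-of-the-envelope sketch in Python. The 6 GW and 2,552 MW figures come from the text above; the per-plant and per-data-center sizes are my own rough assumptions, not hard numbers:

```python
# Back-of-the-envelope math on the AI2027 power gap.
# The 6 GW and 2,552 MW figures are cited above; the per-plant and
# per-data-center sizes below are rough assumptions, not hard data.

openbrain_peak_gw = 6.0            # AI2027's projected peak demand, mid-2026
nova_capacity_gw = 2_552 / 1_000   # ~300 Northern Virginia facilities combined

# How much bigger is the projected demand than today's entire NoVA fleet?
multiple = openbrain_peak_gw / nova_capacity_gw
print(f"{multiple:.2f}x current capacity")            # ~2.35x
print(f"{(multiple - 1) * 100:.0f}% over capacity")   # ~135% over

# Express the raw demand in familiar units (assumed sizes).
gw_per_nuclear_plant = 1.5    # assumption: one large multi-reactor plant
mw_per_big_datacenter = 200   # assumption: one flagship hyperscale campus
print(f"~{openbrain_peak_gw / gw_per_nuclear_plant:.0f} nuclear plants")
print(f"~{openbrain_peak_gw * 1_000 / mw_per_big_datacenter:.0f} big data centers")
```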
And the energy needs only grow from there.
None of this is compatible with a scenario where we keep warming under three degrees Celsius. Which, by the way, is not a good scenario.
Add to that the fact that the Northern Virginia power utilities are already reporting multi-year delays in bringing new data centers online — not due to policy, but because the grid is maxed out — and you can see how AI2027’s number-goes-up timeline of AI infrastructure growth and flywheel intelligence breakthroughs becomes a wacky waving inflatable arm-flailing tube man.
Even if OpenAI or xAI or TenCentBrainSeek or whatever does actually manage to capture the city, state, and federal government agencies well enough to gain access to the power required to train their models, actually using that power would exacerbate the climate crisis, leading to cascading shocks hitting the data centers in places like Northern Virginia, Texas, and parts of India.
True, the necessary energy for the intel bonanza could come from a revived nuclear sector, but that would, of course, have its own problems. The most likely scenario, given the whole kakistocracy trend, is one in which energy use is prioritized for Big Tech and rationed for everyone else.
In short — no bueno!
So while there’s still a chance for AI2027’s futures to come about, the most likely bottleneck won’t be misalignment or stolen weights — it will be the ecological cost already baked into humanity’s future.
It’s a future grounded in heat, in scarcity, in a power grid held together by fragile cables and unstable clouds.
Yes, it’s still a good idea to learn how to build GPTs and manage future AI agents to collate your Slack and Teams messages — in fact, I have some ideas on that! — but you should also demand that the powers that be take meaningful climate action. And you should make sure the Big Tech firms developing AI know that you care about climate as much as you care about seeing videos of LeBron James cuddling with a capybara.
Bonus future feature!
Allow me a moment of my own speculation about what might happen, even if we get as far as some approximation of this 2027.
To denote drama, I will use italics:
In the shadows of a fractured, climate-fueled disaster landscape, a Climate Compute Diaspora forms.
They are open-source researchers, locked out of elite corridors, who migrate between low-emission jurisdictions, working with lightweight, efficient large language models.
Instead of investing all their time, energy, and resources into achieving AI supremacy, they invest their real human resources in real human education and developing co-intelligence with the robot weirdos.
These rogue humans don’t scale up — they scale out, laterally, and in doing so, they stumble into a new paradigm: a distributed, hybrid intelligence system that evolves outside the increasingly sclerotic AI monoculture.
They work to solve actual human problems, achieving a stability and a clear focus on multiple, good-enough futures for multiple, good-enough clusters of humanity.
Huh. Maybe that future could even start right now?
Object Lesson: What?, So What?, Now What?
As a writer, content* producer, and extremely online reply guy, I’ve found it’s useful to break “news” down into three basic categories: What?, So What?, and Now What?
The two pieces I wrote about the Greenpeace trial in North Dakota — one for Everything is Political, and one for Rolling Stone — can serve, I hope, as a kind of object lesson for different ways to engage with the news cycle through these three basic categories.
What?
What? content is the classic, inverted-pyramid-style news stuff—you have essential information and you want to get it out into the world as clearly and quickly as possible.
This is the raw information, the facts of the case. In North Dakota, the courtroom was small, phones were banned, and there was no public livestream, so when the jury’s decision came in, readers wanted the details of the verdict as quickly and clearly as possible.
This was the What? content, the breaking news.
The coverage delivered the core details up top: the amount of the damages, the court’s ruling, and key statements from both sides.
So What?
Next comes the So What? content. These pieces explain why the new information matters, why it’s worth your attention. So What? content provides context and analysis that was likely missing from the breaking-news What? pieces that preceded it. Which makes sense, because once the basic facts are out there, the next question readers and viewers ask is: Why does this matter?
The North Dakota jury delivered its decision? So what?
This is where interpretation, context, and perspective enter.
In the Greenpeace case, different parties offered different spins: Energy Transfer and its boosters portrayed the verdict as a justified punishment for what they claimed was defamation and economic sabotage.
Free speech and environmental advocates, on the other hand, framed it as a travesty of justice.
The “so what” stories connected the verdict to broader themes: the rights of protestors, the responsibilities of advocacy groups, the power of corporations, and the future of legal protections in America.
These types of pieces might include expert commentary, analysis of legal precedent, or summaries of related cases.
That is what I tried to do in my piece for Everything is Political.
I wanted to tie the implications of the verdict to the implications of the Trump administration’s crackdown on protests.
The “So What?” space is usually crowded with reply guys and hot-take machines, so it’s always good to ask whether you have anything of unique value to add to the conversation.
Will your “So What?” answer be different from anyone else’s? Is piling on helpful? If not . . . maybe go back into that Google Doc and workshop a bit? Or, maybe wait for the next category of story, the “Now What?”
Now What?
The “Now What?” stories build on the “So What?” stories, explaining how this moment and its context might shape the future, and why understanding that now gives you a better sense of why future “What?” stories will matter.
Audiences want to know: What happens next? The “now what” stories project forward. How will the decision affect future protests? What will it mean for Greenpeace’s operations? Will it change how corporations pursue litigation against activist groups?
This kind of analysis usually emerges after the initial wave of reporting and early commentary.
In this case, stories began addressing what the ruling might signal about the landscape for advocacy and protest moving forward. Will more fossil fuel companies take similar legal action? Will activist organizations change how they communicate or organize?
The "now what" phase often contains elements of both the "what" and the "so what" but leans into synthesis and forecasting. It helps audiences integrate the event into a longer timeline of related developments.
After reading existing coverage, I thought I recognized a narrative gap: few outlets explained why Greenpeace was vulnerable to this kind of lawsuit in the first place.
My follow-up piece for Rolling Stone explored that deeper context—why the organization’s consistent stance and tactics had made it a target, and what that said about the broader activist landscape.
That piece became part of the “now what” conversation. It added texture and insight to a moment that was still unfolding, and it demonstrated how journalists and advocates can play a role in shaping public understanding after the news has broken.
Based on the responses I saw on social media, my sense is that the two pieces struck a nerve, especially among activists who found themselves asking, “Now what?”
Bonus category: WTF?!?!
Of course, the Trump administration has opened wide a space for a new content category — WTF?!?! — but, as we should have learned from the first Trump term, this type of content, while initially engaging, quickly becomes exhausting and demoralizing.
My hope is that by understanding how news moves through these phases—what, so what, now what—we as communicators, activists, journalists, and extremely online reply guys can become more effective in getting attention, making the invisible visible, and changing the culture and discourse in meaningful ways beyond just WTF?!?!
*I’m aware some people absolutely hate the idea of cultural production as “content,” and I’m sympathetic. But I’m going to go with it as a descriptor here. Hate me if you must.
How to use AI for comms . . .
. . . without losing your humanity
Every hour, there’s a new ping notifying you about ways to potentially be faster and smarter and maybe even more creative with AI—but not a lot of information about how to actually understand the risks and rewards of these new tools.
Maybe at the end of the day all the LLMs and diffusion models are just new ways to do the same old things? Or maybe they’re just a new way to give over all the company secrets to Big Tech?
I had similar questions, so over the past two years I’ve worked to develop a practical training session focused on how best to use AI to boost communications and marketing work, without killing your business or your humanity.
For me, the goal has been to be faster, smarter, and better, without losing the creativity and authenticity that makes the good stuff engaging and resonant.
Here’s what the session dives into:
The basics of how to use tools like ChatGPT, Sora, Claude, ElevenLabs, Rev, Midjourney, and others to create engaging content and save time.
The fine line between AI as a creative partner and AI as a cheat code for brainless slop.
Real-world tips to ensure AI-generated content aligns with your brand and audience.
This session isn’t about theory or “futurism”—it’s about equipping you with actionable strategies you can use right away.
I’ve built award-winning content platforms for the Poetry Foundation, Greenpeace, Stand.earth, and CARE, among others, all of which have driven global engagement and captured the attention of strategic audiences. I’m happy to share insights from that experience to help your team unlock the potential of these new tools.
Now is the time to take advantage of these tools, because, as William Gibson wrote, the future is already here, it’s just unevenly distributed. It’s important not to be left behind on the wrong end of the distribution curve. The teams that figure out how to integrate AI into their work without sacrificing quality will be the ones leading the way.
If this sounds like something that could be helpful for you, let me know. I’d love to set up a quick call to talk about tailoring this for your team. Or, if this just sounds like something you’d like to hear more about, sign up for the Instrumental newsletter.
Instrumental
Let’s get some good attention.
Instrumental was born out of a belief that in a world of information surplus and attention deficit, strategic storytelling that drives engagement for good is more important than ever.
We specialize in helping nonprofits and advocacy groups create the right content to reach their audiences and achieve their goals.
It’s not about information. It’s about attention. And we can help you get it.