In the Valley of AI Underdeliverance


TLDR: Top-down AI overhauls aren’t working; peer-to-peer tool sharing and exploration is good; the machines train us as much as we train the machines, so be very wary of “sycophantic” AI gassing up people who have a lot of org power but not a lot of implementation experience.

AI can seem magical when it produces a plausible plan or draft. But “plausible” doesn’t mean “usable,” and when AI tools are handed down by people removed from the work, they tend to overpromise and underdeliver.


To put it even more bluntly, if your job is to approve plausible-sounding proposals and then hand the implementation work off to other teams, you might be on the express train to the Valley of Underdeliverance.


On the other hand, if your job is to receive proposals from higher-ups that sound “stretch-goal” plausible, but you know from experience that these plans won’t survive first contact with reality, then your experience of AI is likely different from the top-down policy people’s. You are probably one of the quazillion people who find LLMs super helpful for offloading bureaucratic tasks and replacement-level cognition.


You are an “implementer.”*



(Congrats!)


LLMs are great at the advanced autocomplete work of bureaucracy, but people don’t like to admit this is what they’re assigning, or what their org might be doing.


Tools like ChatGPT, Claude, and NotebookLM can increase efficiency, but only when used by people well-versed in doing the actual work of writing, designing, planning, coding, responding, and so on in real time, aka the people who know what’s supposed to be good and what’s supposed to just check boxes.


Confusing the two types of work just embiggens the bullshit, and it’s one of the main reasons people are finding some of the implementation to be underwhelming.

LLMs are born of bureaucracy, and so they’re very good at perpetuating bureaucracy. Which, in many jobs, is great. (It was a revelation for me when I first realized that, sometimes, people don’t actually want things to be *good,* they just want them to be *done.* And, in fact, *good* work might cause headaches because it doesn’t slide through the system as easily.)


Right now, integrating AI is more like implementing a new project management program than quantum leaping to the superintelligent future. Not being aware of this can lead to even more inefficiency, bureaucracy, and vendor lock-in.

Organizations pushed into top-down integrations with products like Salesforce AI or Microsoft Copilot may find themselves even more dependent on closed systems and external “experts,” submitting more tickets and waiting on the high priests of technology to waft some incense and enter mysterious keystrokes to unlock new layers of confusion.

The solution isn’t more tools. It’s better alignment between strategy and implementation.

From my conversations and workshops over the summer, it feels like the best way to avoid the Theory/Practice trap is to gather together the implementers on your teams and share how everyone is actually using AI in their work and home lives. What works, what doesn’t, what you’d like to try.

This is basic skill-and-tool sharing, but it has been incredibly helpful for the people I’ve talked to as a way to keep the focus on everyday tasks, rather than transhumanism or the race to the bottom of the slop barrel. And anyways, there are too many AI innovations for any one person to keep up with all of them (believe me! I’m trying!), so having a group of people to bounce ideas off of is super helpful. A regularly scheduled “Tool Time” isn’t the worst idea.


In my own little workshop-of-one, I’ve been building and testing tools that try to incorporate the feedback I’ve heard from teams.


The most basic tool is the Instrumental PR Generator, a general-purpose assistant tailored for nonprofit messaging, content development, and storytelling. Originally I had it just generating PRs, which is useful, but now it can also look at existing PRs and suggest revisions, which is a great, easy check on work before it goes to the final stage of approval.

Next we have InstrumIntel, the custom comms assistant that helps synthesize research, organize content, and streamline communications workflows. So far, it has performed really well at media monitoring summaries and at making things shareable, but it still needs guidance to stay focused (it’s also gotten pretty good at writing New York Post-style headlines and #HaikuTheNews posts).


Lastly, there is the Trump Logic Generator: a tool to identify and decode rhetorical tactics and bad-faith argument structures. In some ways, I was just seeing if I could give it a formula to execute (similar, I think, to what the NYT was doing with Jaws), but then I realized it actually does seem to work, and Trump becomes less of a chaos agent once you know the patterns.


I designed these alongside the broader AI workshop I’ve been leading for communications and marketing teams, focused on integrating AI into daily work without losing the human elements that make stories matter.


They’ve been helpful in getting teams to experiment with AI in safe, useful, and “brand-aligned” ways, but also in showing how a person can fence in the general-purpose models to be more bespoke, which I think is the biggest innovation since the OG ChatGPT launch.


I’m interested to hear how and what you’re doing with AI in your campaign/comms work, too, so please get in touch. Amidst all the party tricks, it’s important to remember that we need to innovate to WIN, so I’d love help staying grounded. I tend to get lost in the “fun,” so… feedback welcome!

*Who are The Implementers? They are, broadly speaking, the people who manage the information through the last keystroke, pressing whatever button makes a thing public. That could be posting it on social, publishing it on the website, sending it over email, or putting the final numbers in the slide deck.


