Grok and the Ascension of Cro-MAGA Man
Monday, January 19, 2026
TLDR:
Elon Musk is betting that whoever harnesses the power of humanity’s lizard brain most effectively will become supreme ruler of the galaxy. And he’s decided that the best way to harness this collective lizard brain is by optimizing what he sees as the most powerful force on earth: the creepy dude’s desperate need to degrade women.
Enter: Grok [1]
WHAT?
In December, Musk's AI company xAI transitioned its image generation from outsourced models to an internal generator called Aurora [2].
Since then, X has systematically relaxed the eensie-weensie guardrails that had previously kept nudity and sexual content off the platform, and over the past few weeks there's been an explosion of Grok-generated non-consensual deepfakes [3] across the internet [4].
For about a month, women posting innocuous images on X have found sexualized deepfakes in their replies within hours, minutes, or sometimes, because of bots and creeps, seconds [5].
Where other companies might see this as a reputational crisis, X has seen it as mission accomplished.
On Hard Fork [6], Casey Newton and Kevin Roose talked about how Musk had directed Grok developers to make the product "viral" and "edgier" as a growth strategy, and how X's head of product celebrated the record-high engagement this pivot-to-nudifying has produced.
Unlike previous "jailbreak" work on OpenAI’s Sora video generation platform, where the non-consensual sexualization required kinda sorta sophisticated prompting [7], Grok allows users to generate sexualized imagery using even just the burnt back end of the frontal lobe.
Creeps barely have to grunt to generate sexualized images of women and children with simple "@Grok" commands, and X has responded to criticism by saying, essentially, the product is working as designed to drive engagement through harassment.
Beyond just driving profit, the harassment serves to sexually gratify creeps and politically silence women, Cro-MAGA man’s two favorite things.
These Cro-MAGAs have, predictably, used Grok to target female politicians and public figures like AOC and Greta Thunberg, distorting their public presence and degrading their professional standing.
Closer to home, they've targeted classmates, teachers, co-workers, baristas, and any other humans [8] in their purview they’d like to nudify.
Women now risk sexualized public shaming simply by existing online.
Of course, it's not just adults who are subject to Cro-MAGA creepiness. Despite laws protecting minors, and Musk's protestations to the contrary, Grok continues to generate sexualized imagery of underage users [9].
While X has taken down many of these images after they're flagged, takedowns often take 36-72 hours, so, in the meantime, the images generated by Musk's billions circulate publicly on platforms with less capitalization and fewer rules.
SO WHAT?
When the profit motive and the will to power become this debased, there are two ways to rein them in: regulation and reputation.
On the regulatory side of things [10], France has called the Grok content "clearly illegal," the EU is investigating how to get the platform out of its marketplace, India's IT ministry has demanded action, and the UK has formally opened an Ofcom case under its new safety laws that could result in fines linked to X’s global revenue [11].
X has announced it will now limit Grok’s capacity to generate sexualized imagery of real people in the countries raising a fuss, though, of course, the app remains accessible via VPN and still generates the same types of content off-platform, including on Grok’s standalone site [12, 13].
But in X's home country of Extremistan, federal enforcement is . . . unlikely [14]. California's Attorney General is investigating xAI's legal liability [15], but Musk got chummy with Trump over the holidays, and the administration has signaled it DGAF. In fact, through his executive orders, Trump is actively trying to stop state regulators from holding Big Tech accountable for exactly this kind of AI harm.
The only real checks on this ascension of Cro-MAGA men in the U.S., then, will be reputational. As Jessica Grose pointed out, public shaming [16] is our most effective tool against the Magnificent Seven’s depravity.
Either this behavior becomes normalized as "just how social media works," further degrading women's ability to participate in digital public life, or sustained pressure from actual people creates sufficient reputational cost to force change.
In lieu of regulation, consumers and audiences need to make this so damaging to X's reputation that shareholders and board members get spooked. Unfortunately, Cro-MAGA man is notoriously shameless. But it's not just Musk's reputation on the line.
NOW WHAT?
Despite the app serving up what regulators call illegal content, Apple and Google have only marginally adjusted Grok's age rating (e.g., from 12+ to 13+), prompting claims of "paralysis": the two companies' inaction stems from fear of Republican retaliation, given the platform's political influence.
Meanwhile, the legal stakes have intensified: Ashley St. Clair, the mother of one of Elon Musk's children, filed a lawsuit alleging personal and reputational harm from Grok-generated deepfakes [17].
For shareholders and board members already wary of reputational damage, a lawsuit from within Musk’s own circle may carry more weight than external critique. Even so, to get X itself to take action, the reputational risk needs to get bigger for its partners [18].
There's also OpenAI. Research by Ekō and Instrumental Intelligence revealed [7] that OpenAI's Sora 2 platform suffers from similar, albeit more "structural," failures. Even with "teen accounts" and parental controls active, the system remains highly vulnerable to the same kind of abuse.
There is also a proposed ballot measure in California called the California Charitable Assets Protection Act [19] that would create an oversight board empowered to review, and potentially reverse, conversions of California-based charitable research organizations into for-profit entities. California-based charitable organizations like . . . OpenAI.
If the measure gathers enough valid signatures and clears the procedural hurdles, it could appear on California's November 2026 ballot; if adopted by voters, it could undo or block the corporate restructuring that allowed OpenAI to operate a for-profit arm under its nonprofit "foundation."
The distinction between platform-generated content (Grok creating the images) and user-generated content (traditional social media) may create new legal exposure beyond Section 230's protections [20]. But a lot has changed since 1996 (the last good year for the Gin Blossoms).
And finally, there's the Take It Down Act [21], already signed into law, whose requirements will force platforms to establish formal processes for victims to request the removal of non-consensual deepfakes. But this only addresses removal after the harm is done, rather than preventing the content's creation in the first place.
SO WHAT REDUX?
Whoever controls the tools to generate and distribute synthetic media controls whose reality gets represented and whose gets degraded into Cro-MAGA fantasies of control. The more these dudes experience the world through mediated fantasy, the more easily they’ll be manipulated by synthetic media into taking whatever ragebait Musk and Trump offer them.
The crisis exposes both the architectural limitations of current AI safety approaches and the limits of any hope for regulation, given the political protection enjoyed by well-connected, shameless Big Tech leaders like Musk.
Clearly, 2026 will be the first major test of how this information warfare plays out.
*NB: All typos are because my frontal lobe has been burnt out.
[1] https://decoherence.media/elon-musks-grok-is-undressing-women-and-showing-them-in-swastika-bikinis/
[2] https://x.ai/news/grok-image-generation-release
[4] https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/
[6] https://www.youtube.com/watch?v=hchsgcuDBfs
[7] https://www.instrumentalcomms.com/blog/openai-gamified-peer-to-peer-deepfake-slot-machine
[8] https://petapixel.com/2025/12/29/x-users-have-the-power-to-edit-any-image-without-permission/
[9] https://futurism.com/future-society/grok-violence-women
[10] https://globalnews.ca/news/11611133/grok-ai-sexual-deepfakes-bans-criminal-probes/
[15] https://calmatters.org/newsletter/grok-ai-sexually-explicit-images/
[16] https://www.nytimes.com/2026/01/14/opinion/grok-risk-kids.html
[19] https://sfstandard.com/2025/12/03/open-ai-suchi-balaji-poornima-rao-nonprofit-ballot-measure/
[20] https://www.congress.gov/crs-product/R46751
[21] https://www.lw.com/en/insights/president-trump-signs-take-it-down-act-into-law
