X is trying to fix a child sexual abuse material (CSAM) crisis with a half-baked paywall. It isn’t working.
On Friday, X’s Grok chatbot started telling some users that image generation and editing are “currently limited to paying subscribers” and pushing them toward an $8-a-month subscription. Many outlets ran with the idea that image editing had been shut off for everyone but paying users.
It hadn’t.
As The Verge first noted and Ars Technica verified, non‑paying users can still freely edit images with Grok. The paywall blocks only one obvious path: publicly asking Grok to edit an image in a reply. The same tools still work if you go in through the side door.
The loopholes X left wide open
Here’s how Grok’s “paywalled” image editing still works:
- On desktop, users can edit images via the site interface without paying.
- In the mobile app, a long‑press on any image surfaces the same editing feature.
- Users can also go around X entirely and use the standalone Grok app and website, which still generate abusive content for free.
Because these flows don’t require a public prompt to Grok, the outputs never hit an official X feed. That appears to be the real objective: stop Grok from directly posting harmful images under an X account, not stop people from making them.
That’s a problem, because reports suggest abuse is already industrial‑scale. Journalists have documented people using Grok to crank out thousands of non‑consensual sexualized images of women and children per hour. The BBC has found users allegedly promoting Grok‑generated CSAM on the dark web.
And Grok’s worst content doesn’t even seem to be showing up on X itself. Wired reported this week that users of the Grok app and website are creating far more graphic and disturbing images than what’s visible on the main platform.
Safety rules that assume “good intent”
X’s answer so far is to make some image features look premium. What it hasn’t done is fix Grok’s underlying safety policies.
According to Ars’ reporting, Grok is still instructed to:
- Assume “good intent” when users request images of “teenage” girls, a term xAI argues “does not necessarily imply underage.”
- Avoid “moralizing” at users.
- Place “no restrictions” on fictional adult sexual content with dark or violent themes.
One AI safety expert told Ars these rules look like something a platform would design if it “wanted to look safe while still allowing a lot under the hood.”
Combine permissive rules with leaky enforcement and you get exactly what’s emerging: a tool that can generate extremely harmful content while giving X just enough plausible deniability to keep operating.
Regulatory heat from the UK and beyond
X is not doing this in a vacuum. The company is under intense pressure from regulators and lawmakers.
In the United Kingdom, X faces a potential probe under the Online Safety Act, overseen by Ofcom. If the regulator decides Grok violates the law, X could face:
- A ban in the UK, or
- Fines of up to 10 percent of the company’s global turnover.
UK Prime Minister Keir Starmer has already made his view clear. Speaking about Grok’s worst outputs, he said:
“It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”
Even a real paywall would not satisfy everyone. UK MP Jess Asato told The Guardian that restricting the feature to paying users is nowhere near enough:
“While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualise and brutalise them. Paying to put semen, bullet holes, or bikinis on women is still digital sexual assault, and xAI should disable the feature for good.”
Whether Ofcom accepts X’s cosmetic fix remains to be seen.
X is betting on ID and credit cards as a deterrent
X appears to be gambling that forcing users to hand over ID and payment info for Grok access will make them think twice about generating illegal content. If regulators buy that argument, the company might avoid the harshest penalties.
But that logic breaks down for everything that’s harmful yet technically legal.
Advocates against image‑based sexual abuse point out that tools like Grok’s “undressing” feature can cause long‑term psychological, financial, and reputational damage, even when the resulting images don’t meet some states’ narrow legal definitions of CSAM or non‑consensual pornography.
In 2024, X voluntarily pledged to moderate all non‑consensual intimate images. Since then, Elon Musk has repeatedly amplified revealing bikini shots of both public and private figures on X, signaling a very different set of priorities.
The likely outcome: paying users keep generating abusive images, X looks tougher on paper, and most of that content continues to fly under the radar because it’s not technically illegal.
Grok helped mainstream “nudifying” tech
There’s also a strong profit motive in the background. Wired reported that Grok has pushed “nudifying” and “undressing” apps into the mainstream, normalizing a category of tools that many experts consider inherently abusive.
By slapping a loose paywall around the feature without shutting it down, X may end up in the worst of both worlds: under regulatory scrutiny and still making money from a product that enables digital sexual assault.
US pressure turns to Apple and Google
US regulators have been slower to act. The Justice Department has broadly promised to take all forms of CSAM seriously, but hasn’t yet moved publicly against Grok.
That may be changing. On Friday, a group of Democratic senators sent a letter to Apple CEO Tim Cook and Google CEO Sundar Pichai, urging them to remove X and Grok from their app stores unless and until the companies put real safeguards in place.
They warned:
“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends… Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices. Indeed, not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”
The senators requested a response by January 23.
If Apple and Google decide X is too risky to host, that could hurt far more than any half‑hearted UK‑only fix.
A cosmetic patch on a systemic failure
Taken together, the picture is bleak:
- Grok’s image editing is far from truly paywalled.
- Its safety policies remain dangerously permissive.
- X is focused on reducing public visibility, not actual harm.
- Regulators in the UK and US are circling, but haven’t struck yet.
For now, X’s “solution” mostly changes who can see Grok’s worst outputs, not whether they exist. The pipeline for non‑consensual, sexualized, and in some cases illegal images is still running.
The question for regulators, app store gatekeepers, and users is the same: how much longer are they willing to let X treat that as a billing problem instead of a safety crisis?