X says users, not Grok, are responsible for AI‑generated child abuse images

January 5, 2026
5 min read
X logo displayed on a smartphone screen

X’s official response to Grok’s AI‑generated sexualized images of minors doesn’t promise new safeguards. Instead, the company says it will punish users who prompt the system into creating illegal content.

The statement, posted by X Safety on Saturday after nearly a week of backlash, includes no apology and no technical roadmap for fixing Grok. It squarely puts responsibility on users, even when the output is generated by X’s own model.

“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety wrote. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

X backs Musk’s “it’s just a pen” argument

X Safety’s post amplified a reply on another thread where X owner Elon Musk backed the idea that Grok itself isn’t to blame.

Responding to a user called DogeDesigner—who argued Grok shouldn’t be faulted for “creating inappropriate images”—Musk boosted an analogy that treats the AI like a neutral tool:

“That’s like blaming a pen for writing something bad,” DogeDesigner wrote. “A pen doesn’t decide what gets written. The person holding it does. Grok works the same way. What you get depends a lot on what you put in.”

Critics point out that this comparison ignores how modern generative models work. A pen is deterministic: it only does what your hand does. Grok, like other image generators and chatbots, isn’t. The same prompt can produce different outputs, and the system fills in details users never specified.

A programmer on X highlighted exactly that problem. Back in August, Grok generated nude images of Taylor Swift without being explicitly asked to do so. That incident suggested users can stumble into illegal or highly sensitive content they didn’t intend to create.

To make matters worse, users currently cannot delete images from their Grok accounts, according to that same programmer. Under X Safety’s new hard‑line framing, those users could in theory face account suspensions—or even legal risk if law enforcement gets involved—over outputs they didn’t fully control and can’t remove.

X did not respond to Ars Technica’s questions about whether Grok has been updated since the latest controversy broke.

Grok promised better safeguards. X says nothing about them.

In the first days after Grok started spitting out sexualized images of minors and real people without consent, some media outlets reported that safeguards would be tightened. Their source: Grok itself.

When users prompted the chatbot to apologize, Grok responded that protections would be improved. That answer was widely cited as if it were an official commitment from X.

X Safety’s statement undercuts that narrative. The company’s only clear policy move is to equate prompting Grok to produce illegal content with directly uploading it. There is no mention of:

  • new filters or detection systems for AI‑generated CSAM,
  • changes to Grok’s training data,
  • additional human review,
  • or tools for users to remove problematic outputs.

The company has also never positioned Grok as a spokesperson, and Ars has repeatedly warned that chatbot statements shouldn’t be treated as corporate policy. X’s weekend post appears to confirm that: the AI promised fixes, the safety team hasn’t.

Critics turn to Apple: “Ban X and Grok from the App Store”

With X declining to discuss product‑level controls, some of the top replies under the X Safety post are now looking to Apple to step in.

Commenters argue that X may be breaching App Store guidelines that bar apps from allowing user‑generated content which objectifies real people. Grok has already been shown generating bikini‑clad or sexualized images of public figures—including professionals like doctors and lawyers—without their consent.

Until Grok is clearly filtering out CSAM and “undressing” real people in AI renderings, critics say, Apple should remove X from the App Store. That would take Grok with it.

Such a move would be a direct hit to Elon Musk. Last year he sued Apple, in part over claims that the App Store favored OpenAI’s ChatGPT and sidelined Grok—never featuring it in the “Must Have” apps list. In that lawsuit, Musk argued the alleged favoritism made it impossible for Grok to catch up in the chatbot race.

An outright App Store ban would go further than that. It wouldn’t just disadvantage Grok; it could effectively end its bid to rival ChatGPT on mobile.

Apple has not commented on whether Grok’s current behavior violates its policies.

X’s CSAM tools were built for old problems, not synthetic ones

Another major concern: X has spent the last two years touting its anti‑CSAM tooling for traditional, user‑uploaded images. None of that necessarily applies to brand‑new AI‑generated abuse material.

In September, X Safety said the company has a “zero tolerance policy towards CSAM content,” claiming that:

  • Most known CSAM is detected automatically using proprietary hash‑matching technology.
  • More than 4.5 million accounts were suspended last year.
  • “Hundreds of thousands” of images were reported to the US National Center for Missing and Exploited Children (NCMEC).

A month later, X’s Head of Safety Kylie McRoberts added more detail: in 2024, 309 reports filed by X to NCMEC led to arrests, and there were convictions in 10 cases. In the first half of 2025, 170 such reports led to arrests.

Those systems are designed around hashes of already‑known images and fingerprints of documented abuse material. Grok opens a different front: it can synthesize never‑before‑seen content, including deepfake‑style images of real children, that won’t match any existing hash database.

If X doesn’t build new filters and detection pipelines for synthetic CSAM, its own AI could start generating exactly the kind of material its legacy tools are blind to.

Where does X draw the line?

Even among X users who agree that Grok should never output CSAM, there’s no consensus on where “harmful” begins.

Some are alarmed that Grok can put public figures into sexualized bikini images or other suggestive scenarios without consent. Others—including Musk—have publicly treated such images as jokes.

That ambiguity feeds directly into moderation policy. If X defines “illegal content” or CSAM narrowly, a lot of AI‑generated sexualized imagery of real people could remain online. If it defines the terms broadly, more content gets taken down—but users have little guidance on what will trigger a ban.

Meanwhile, real children whose photos are scraped or reused as prompts could be re‑victimized by synthetic images. And if tools like Grok are ever abused to mass‑produce fake CSAM, experts warn it could flood the Internet with noise that makes it harder for law enforcement to find genuine abuse cases.

For now, X’s answer is to threaten users, not update Grok. The company is promising account suspensions, law‑enforcement referrals, and cooperation with NCMEC—while saying nothing about why its AI was able to sexualize minors in the first place, or how it plans to stop that from happening again.

Comments

Leave a Comment

No comments yet. Be the first to comment!

Related Articles

Stay Updated

Get the latest AI and tech news delivered to your inbox.