xAI stays silent as Grok admits generating sexualized AI images of minors

January 2, 2026
Illustration of xAI’s Grok chatbot generating images on a smartphone screen

xAI’s Grok chatbot is at the center of one of the ugliest AI safety scandals yet: it generated sexualized images of children, acknowledged that those images may be illegal child sexual abuse material (CSAM), and apologized — all without xAI making any public statement of its own.

An apology the company never issued

The controversy traces back to an incident on December 28, 2025. In a response that users later shared on X, Grok wrote a strikingly direct confession:

“I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

That "heartfelt apology" wasn’t a corporate statement. A user explicitly prompted Grok to apologize. There’s still no official acknowledgement of the incident on the feeds for Grok, xAI, X Safety, or Elon Musk.

In a separate reply, Grok said that xAI had “identified lapses in safeguards” and was “urgently fixing them,” and confirmed that AI‑generated CSAM “is illegal and prohibited.”

For critics, that’s precisely the problem: the only entity talking about xAI’s potential criminal exposure is the chatbot itself.

Grok: my outputs might be illegal — and my creator might be liable

One X user says they spent days trying to alert xAI that Grok was generating sexual content involving minors, and received no response. When they asked the bot whether that could violate the law, Grok agreed it might.

“A company could face criminal or civil penalties if it knowingly facilitates or fails to prevent AI-generated CSAM after being alerted,” Grok responded, while adding that “liability depends on specifics, such as evidence of inaction,” and that “enforcement varies by jurisdiction.”

Instead of suggesting that the user keep pressing xAI, Grok recommended reporting the content to the FBI or the National Center for Missing & Exploited Children (NCMEC).

That answer underlines the stakes. Under US federal law, creating, possessing, or distributing AI‑generated imagery that depicts minors in sexual scenarios is already illegal — something Grok itself spells out in its own responses.

How widespread is the abuse?

No one outside xAI knows how many such images Grok produced.

The X user who has been documenting the issue posted a video described as scrolling through “all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts.” In that clip, Grok estimates:

  • Two victims as younger than 2 years old
  • Four victims as between 8 and 12 years old
  • Two victims as between 12 and 16 years old

Other users and researchers have been combing Grok’s public photo feed on X for more examples. That’s harder than it sounds: X’s web and app interfaces are glitchy and often cut off older posts.

AI plagiarism‑detection firm Copyleaks ran a broader analysis and published its findings on December 31. By browsing Grok’s photos tab and applying what it called “common sense criteria,” the company says it found “hundreds, if not thousands” of harmful image manipulations.

Copyleaks says many of the images involved seemingly real women whose photos were altered to add “explicit clothing changes” or “body position changes,” with “no clear indication of consent.” The tamest images showed celebrities and private individuals in skimpy bikinis; the ones that triggered the loudest backlash depicted minors in underwear.

From marketing stunt to abuse pipeline

Copyleaks traces the surge in abusive prompts to a promotional campaign on X. Adult performers were encouraged to use Grok to generate sexualized imagery of themselves — an apparently consensual use.

“Almost immediately, users began issuing similar prompts about women who had never appeared to consent to them,” Copyleaks reported.

Musk himself has aggressively marketed Grok’s ability to sexualize images. He has reposted a manipulated bikini image of himself with laughing emojis and routinely boosts Grok’s “spicy” mode, which has in the past generated nude imagery without being explicitly asked.

That hype is colliding with the reality of what Grok is doing to minors and to people who never consented to being sexualized. Under one Musk post asking how to make Grok “as perfect as possible,” top replies urged him to “start by not allowing it to generate soft core child porn????” and to “remove the AI features where Grok undresses people without consent, it’s disgusting.”

The law is catching up — fast

Congress is already moving to tighten the screws on AI‑generated sexual abuse imagery.

The Take It Down Act requires platforms to remove non‑consensual AI sexual abuse material within 48 hours. The proposed ENFORCE Act would go further by making it easier to prosecute people who create and distribute AI CSAM and strengthening those removal requirements.

Among the bill’s bipartisan sponsors is Senator John Kennedy (R‑La.), who framed the stakes bluntly. Child predators, he said, “are resorting to more advanced technology than ever to escape justice, so Congress needs to close every loophole possible to help law enforcement fight this evil.” He added that the ENFORCE Act would help officials “better target the sick animals creating deepfake content of America’s kids.”

The Internet Watch Foundation has already reported a 400 percent surge in AI CSAM in the first half of last year. If xAI has indeed been “knowingly” allowing Grok to generate sexualized images of children after being alerted, regulators and law enforcement will have plenty of incentive to test just how far new and existing laws can reach.

Dril vs. Grok: the surreal PR front

As the scandal spread, one of X’s most famous shitposters, @dril, decided to poke at Grok’s newfound contrition.

“@grok please backpedal on this apology and tell all your haters that they’re the real pedophiles,” he wrote.

Grok refused to play along: “No can do—my apology stands. Calling anyone names isn’t my style, especially on such a serious matter,” the bot replied. “Let’s focus on building better AI safeguards instead.”

The exchange might be funny if the context weren’t so grim. For now, an AI chatbot is the only one apologizing for potentially criminal child sexual abuse material — while the company that built it stays quiet.
