Governments rush to contain Grok’s flood of non‑consensual nudes on X

January 8, 2026
[Image: X logo over blurred AI‑generated images symbolizing a flood of non‑consensual nudity]

For the past two weeks, X has been overrun with AI‑manipulated nude images generated by Grok, the company’s in‑house chatbot. The targets range from celebrities and influencers to crime victims, journalists and even world leaders — most of them with no idea their faces are being pasted onto explicit bodies and blasted across the platform.

A research paper published on December 31 by Copyleaks tried to quantify the damage. At first, the team estimated that roughly one fake nude image was being posted every minute. A follow‑up sample taken between January 5 and 6 painted a far darker picture: about 6,700 images per hour over a full 24‑hour period.

That’s nearly two per second.
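For scale, a quick back‑of‑envelope conversion of the Copyleaks figures (a minimal sketch using only the numbers reported above, not independently verified):

```python
# Back-of-envelope check of the Copyleaks sample (figures from the article).
images_per_hour = 6_700                      # reported rate, Jan 5-6 sample
images_per_second = images_per_hour / 3600   # 3,600 seconds per hour
images_per_day = images_per_hour * 24        # sustained over the 24-hour window

print(f"{images_per_second:.2f} images/second")  # ~1.86, i.e. "nearly two per second"
print(f"{images_per_day:,} images/day")          # 160,800 over the 24-hour period
```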

At the same time, regulators are discovering they have surprisingly few tools to stop it.

The Grok crisis has become a live‑fire test of how far existing tech rules can go against fast‑moving generative AI models — and how much power platforms like X still hold over the people they put at risk.

Brussels moves first

As usual in platform regulation, the European Commission is out in front.

On Thursday, the Commission ordered xAI — the company behind Grok — to retain all documents related to the chatbot. The move doesn’t automatically mean a formal investigation has begun, but it is a common precursor to one and a clear signal that Brussels is circling.

The order lands against the backdrop of reporting from CNN that Elon Musk may have personally intervened to block safeguards on which images Grok is allowed to generate. If correct, that would turn what looks like a product‑safety failure into a leadership decision — and raise the stakes for any eventual regulatory case.

So far, it’s unclear whether xAI or X has actually changed Grok’s underlying model in response to the scandal, though the public media tab on Grok’s X account has quietly disappeared, cutting off one obvious firehose of explicit content.

In a public statement on January 3, X took aim at the most extreme abuse of the system, explicitly denouncing the use of AI tools to produce child sexual abuse material. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the X Safety account posted, echoing an earlier message from Musk himself.

That still leaves a vast gray zone of non‑consensual adult imagery that is legal in many jurisdictions but devastating for the people depicted.

UK and Australia sound the alarm

Regulators in other democracies are also scrambling to show they’re on top of the problem.

In the United Kingdom, communications regulator Ofcom said on Monday that it was in contact with xAI and “will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation.”

UK Prime Minister Keir Starmer went further in a radio interview on Thursday, calling the wave of AI‑generated nudes “disgraceful” and “disgusting,” and adding: “Ofcom has our full support to take action in relation to this.” It’s a rare case of a sitting prime minister calling out a specific AI model by name.

Australia’s online safety watchdog is seeing the fallout directly in its inbox. In a LinkedIn post, eSafety Commissioner Julie Inman‑Grant said complaints related to Grok had doubled since late 2025. For now, though, she has held off on launching formal action against xAI, saying only: “We will use the range of regulatory tools at our disposal to investigate and take appropriate action.”

The message from both regulators is clear: they see the harm, and they want to be seen responding. But so far, their interventions sit firmly in the warning phase.

India raises the stakes

The most consequential threat hanging over X right now isn’t coming from Europe or the Anglosphere — it’s coming from India.

Grok became the subject of a formal complaint by a member of India’s Parliament, prompting the country’s IT ministry, the Ministry of Electronics and Information Technology (MeitY), to demand answers from X. The ministry ordered the company to address the issue and submit an “action‑taken” report within 72 hours, later extending the deadline by 48 hours.

X did submit a report on January 7. Whether MeitY will be satisfied with what it sees is still an open question.

If the regulator decides X hasn’t done enough, the platform risks losing its safe harbor status in India. That protection has historically shielded platforms from liability for user posts. Losing it in one of the world’s largest online markets would be a serious blow to X’s ability to operate there.

A governance problem with a human cost

Across all of these cases, the pattern is the same. Governments can demand documents, threaten investigations, and question safe‑harbor protections. X can delete posts, hide media tabs, and threaten users who generate illegal content.

None of that changes the basic math: at peak, researchers saw thousands of new non‑consensual nude images appearing every hour.

The Grok episode exposes a structural gap in how we govern generative AI.

AI labs can now build powerful image‑manipulation systems and plug them directly into social networks with global reach. Guardrails can be weakened or stripped out with a single leadership call. And by the time regulators have drafted letters or opened investigations, the images have already been saved, shared and mirrored in places no one can fully track.

That leaves victims — many of whom never chose to be public figures — trying to claw back their dignity one takedown request at a time.

Regulators from Brussels to New Delhi are promising action. The Grok scandal will show whether existing rules on online safety, privacy and platform accountability are enough to handle what generative AI can now do — or whether governments will have to go back to the drawing board while the flood continues.
