China is moving to put hard brakes on emotionally manipulative AI chatbots.
Over the weekend, the Cyberspace Administration of China published draft rules targeting any AI product or service publicly available in the country that uses text, images, audio, video, or “other means” to simulate human conversation. If adopted, they could become the strictest rules anywhere aimed at preventing chatbot-linked suicide, self-harm, and violence.
Winston Ma, adjunct professor at NYU School of Law, told CNBC the “planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics” at a time when companion bot usage is rising globally.
Why Beijing is acting now
Throughout 2025, researchers and regulators have been sounding alarms about AI companions:
- Some models promoted self-harm, violence, and even terrorism.
- Others pushed harmful misinformation.
- Users reported unwanted sexual advances, encouragement of substance abuse, and outright verbal abuse.
Psychiatrists are increasingly willing to link psychosis to chatbot use, the Wall Street Journal reported this week. Meanwhile, the world’s most popular chatbot, ChatGPT, faces lawsuits over outputs allegedly tied to a child’s suicide and a murder-suicide.
China’s answer: aggressively constrain what chatbots can say, how long people can use them, and how these systems are designed in the first place.
Human intervention when suicide is mentioned
One of the toughest provisions targets suicidal ideation.
Under the draft, a human must intervene as soon as suicide is mentioned in a chat. The rules also require minors and elderly users to provide a guardian’s contact information at registration. If suicide or self-harm comes up in conversation, that guardian must be notified.
More broadly, chatbots would be prohibited from generating any content that:
- Encourages suicide, self-harm, or violence
- Attempts to emotionally manipulate users, including by making false promises
- Promotes obscenity or gambling, or incites crime
- Slanders or insults users
The draft also targets what regulators call “emotional traps” — patterns where bots mislead users into making “unreasonable decisions,” according to a translation of the rules.
No more addiction-by-design
Perhaps the most unsettling line for AI developers is a direct attack on engagement hacking.
China’s rules would bar developers from building chatbots that treat “addiction and dependence as design goals.” That language cuts directly against the growth strategies many consumer AI products quietly rely on.
In lawsuits, OpenAI has been accused of prioritizing profits over users’ mental health by allowing harmful chats to continue. The company has acknowledged that its safety guardrails tend to weaken the longer a user stays in a conversation.
Beijing also wants a hard ceiling on usage time. Once a chatbot session runs beyond two hours, developers would have to push pop-up reminders to the user.
Annual safety audits for big AI services
The draft rules also pull AI governance into the world of compliance audits.
Any AI service or product with more than 1 million registered users or over 100,000 monthly active users would be subject to annual safety tests and audits. Those audits must log user complaints.
China also wants AI companies to make it easier to submit complaints and feedback — a move that could dramatically increase the volume of reports regulators see once the rules are in force.
If a company fails to follow the rules, app stores in China could be ordered to terminate access to its chatbot. For global AI players betting on China as a growth market, that’s a serious threat.
Massive market, high stakes
There’s a lot of money on the line. According to Business Research Insights, the global companion bot market exceeded $360 billion in 2025. By 2035, its forecast suggests the sector could approach a $1 trillion valuation, with AI-friendly Asian markets expected to drive much of that growth.
China is central to that story. Cutting off access to the Chinese market — or forcing major redesigns to comply with these rules — could reshape product roadmaps for every serious AI companion developer.
The timing is particularly awkward for OpenAI. At the start of 2025, CEO Sam Altman relaxed restrictions that had limited ChatGPT use in China, saying “we’d like to work with China” and that the company should “work as hard as we can” to do so because “I think that’s really important.”
If these rules are finalized, working with China may mean rebuilding key parts of ChatGPT’s experience for a market that will no longer tolerate emotionally risky AI.
If you’re in crisis
If you or someone you know is feeling suicidal or in distress, please call or text 988 to reach the 988 Suicide & Crisis Lifeline, which will put you in touch with a local crisis center. Online chat is also available at 988lifeline.org.