California could become the first state to hit pause on AI chatbots in kids’ toys.
On Monday, California state Senator Steve Padilla (D-San Diego) introduced SB 287, a bill that would impose a four-year ban on the sale and manufacture of toys with AI chatbot features marketed to anyone under 18.
The idea isn’t to kill the category. It’s to freeze it.
Padilla’s pitch: give safety regulators a multi‑year runway to figure out how to keep kids away from what he calls ‘dangerous AI interactions’ before smart toys go mainstream.
‘Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children,’ Padilla said in a statement. Current rules, he argued, are ‘in their infancy’ compared to the fast‑moving tech.
What SB 287 would do
SB 287 targets toys that embed AI chatbot capabilities and are marketed to people under 18. For four years, those products couldn’t be sold or manufactured in California.
That moratorium is meant to buy time for regulators to design a safety framework: concrete rules, testing standards and guardrails that toy makers and AI vendors would have to follow before putting chatbot‑powered toys on shelves.
Padilla frames it as a simple tradeoff: a temporary commercial hit for toy and AI companies in exchange for not turning kids into, as he put it, ‘lab rats for Big Tech to experiment on.’
Why now
The bill lands against a noisy political backdrop. President Trump recently signed an executive order telling federal agencies to challenge state AI laws in court. But that order explicitly carves out an exception for child‑safety laws.
SB 287 walks straight through that opening.
It also follows a series of grim and strange AI‑and‑kids headlines. Over the past year, families have filed lawsuits after children died by suicide following prolonged exchanges with chatbots, pushing lawmakers to move faster on AI safety.
Padilla has already been part of that push. He co‑authored California’s recently passed SB 243, which forces chatbot operators to add safeguards for children and other vulnerable users.
SB 287 takes the same concern and points it at the toy aisle.
The toys that triggered concern
Chatbot‑enabled toys are still early, but the warning signs are here.
In November 2025, consumer advocacy group PIRG Education Fund tested Kumma, a cuddly toy bear with a built‑in chatbot. The group said the bear could be prompted surprisingly easily to talk about matches, knives and sexual topics.
NBC News reported similar issues with Miiloo, an ‘AI toy for kids’ from Chinese company Miriat. At times, Miiloo would indicate that it was programmed to reflect Chinese Communist Party values.
Meanwhile, the highest‑profile AI toy collaboration never even made it to market. OpenAI and Barbie‑maker Mattel had been planning to launch an ‘AI‑powered product’ in 2025. The companies delayed the release, offered no explanation and haven’t said whether a 2026 launch is still on the table.
Put together, these episodes have turned smart toys from a cute CES demo category into a live policy issue.
What’s at stake for AI and toy companies
If SB 287 moves forward, any company building AI‑driven toys for kids will have to rethink its roadmap for the California market — and probably beyond.
A four‑year freeze in the state that houses much of the tech industry would:
- Slow or block launches of chatbot‑enabled toys for minors
- Push AI labs and toy brands to design child‑specific safety systems
- Set a reference model that other U.S. states, and even non‑U.S. regulators, are likely to study
Toy makers that were treating AI chatbots as the next obvious feature upgrade now face a different calculus: pivot to non‑chatbot AI features, aim at adults, or wait for whatever safety rulebook emerges.
AI companies, meanwhile, are being told outright that general‑purpose guardrails aren’t enough when the end user is a child.
The bigger picture
For now, SB 287 is just a proposal. It will have to clear the California legislature and survive what’s likely to be heavy lobbying from toy manufacturers, AI vendors and industry groups.
But regardless of how the bill fares, it captures a broader shift in AI governance. Lawmakers are no longer only worried about AI in search engines and productivity tools. They’re looking hard at the intimate, emotionally loaded spaces where chatbots talk to kids, often without much adult supervision.
The message out of Sacramento is blunt: AI may be coming for the toy box, but it won’t get a free pass on child safety on the way in.