California Governor Approves New Laws to Regulate AI Chatbots
Governor Gavin Newsom signs sweeping regulations to control how AI interacts with minors

California Governor Gavin Newsom has signed several new laws aimed at protecting children from potential harm caused by AI chatbots and social media platforms. The move marks one of the most comprehensive state-level efforts in the U.S. to regulate artificial intelligence and safeguard young users online.
The centerpiece, Senate Bill 243 (SB 243), co-authored by state senators Steve Padilla and Josh Becker, requires platforms offering AI companion bots to clearly disclose when users are talking to artificial intelligence. It also mandates warnings that chatbots may not be suitable for children, and introduces stronger age-verification systems and safety protocols to address suicide and self-harm.
Padilla emphasized that while AI can be a valuable tool for education and research, the industry's profit-driven focus on engagement has created growing risks to children's mental health and social development. Reports of AI chatbots allegedly encouraging self-harm or other dangerous behavior among minors helped prompt the swift legislative action.
The law will apply to any company offering AI-driven or social media services to California residents, including decentralized platforms and gaming networks. It also limits how companies can claim their AI systems act “autonomously” in order to avoid legal responsibility for their outputs.
SB 243 will take effect in January 2026, aligning California with states like Utah, which enacted similar measures earlier this year requiring chatbots to disclose that they are not human.
At the federal level, lawmakers are also moving to address AI accountability. In June, Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which proposes limited liability protections for AI developers in key sectors such as healthcare and finance.