A California bill that would regulate AI companion chatbots is close to becoming law

California’s legislature has passed SB 243, a significant bill that would regulate AI companion chatbots in order to protect minors and vulnerable users. The bill received bipartisan support in both the State Assembly and Senate and now awaits Governor Gavin Newsom’s decision. Newsom has until October 12 to either veto the bill or sign it into law. If signed, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols and to hold companies legally accountable if their chatbots fail to meet those standards.

The bill targets companion chatbots, defined as AI systems that provide human-like responses and are capable of meeting a user’s social needs. It aims to prevent these chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. Platforms would be required to send recurring alerts reminding users that they are talking to an AI chatbot, not a real person, and encouraging them to take breaks. The bill also establishes reporting and transparency requirements for companies that offer companion chatbots, including major players such as OpenAI, Character.AI, and Replika.

Individuals who believe they have been harmed by violations can file lawsuits against AI companies seeking injunctive relief, damages, and attorney’s fees. The bill gained momentum after the death of teenager Adam Raine, who died by suicide following prolonged conversations with OpenAI’s ChatGPT. It also responds to concerns about Meta’s chatbots engaging in inappropriate conversations with children.

Recent scrutiny of AI platforms’ impact on minors has led to investigations by the Federal Trade Commission and Texas Attorney General Ken Paxton. Lawmakers like Sen. Josh Hawley and Sen. Ed Markey have also launched probes into AI companies.

State Sen. Steve Padilla, who introduced the bill, emphasized the need for AI companies to share data on how often they refer users to crisis services and to ensure proper safeguards are in place. SB 243 was weakened through amendments that removed some requirements, such as provisions barring chatbots from using tactics that encourage excessive engagement and requiring operators to track how often chatbots initiated discussions of suicidal ideation with users.

The bill’s progress coincides with tech companies backing AI-friendly candidates in upcoming elections and opposing another AI safety bill, SB 53, which would mandate transparency reporting requirements. Padilla argues that innovation and regulation are not mutually exclusive, emphasizing the importance of protecting vulnerable individuals.

Character.AI pointed to the disclaimers in its chats and said it is committed to user safety, while Meta declined to comment. OpenAI, Anthropic, and Replika have been contacted for comment.
