California has become the first U.S. state to enact comprehensive regulation of AI companion chatbots. The new law, Senate Bill 243, signed by Governor Gavin Newsom, will take effect January 1, 2026, imposing requirements on transparency, safety, and accountability for operators of AI systems that mimic human companionship.
Key Provisions of SB 243
Under SB 243, operators of companion chatbots must implement multiple safeguards, including:
- A clear and conspicuous notification that the user is interacting with an AI (not a human), initially and every three hours in ongoing sessions.
- Prohibitions on employing variable reward mechanics or tactics encouraging addictive engagement patterns.
- Mandatory protocols for responding to self-harm, suicidal ideation, or explicit content, including referral to crisis hotlines.
- Annual reporting to the Office of Suicide Prevention on instances of suicidal expression or risky behavior detected.
- A private right of action: users may sue for injunctive relief or damages (up to $1,000 per violation) for noncompliance.
SB 243 also establishes that companion chatbots may not be deployed unless these safeguards are in place. It excludes chatbots used solely for customer support.
The Context & Why It Matters
“Companion chatbots” are AI systems designed to provide personalized emotional or social interaction. They differ from standard assistants in that they aim to build relationships over repeated engagements. That emotional appeal raises safety risks, especially for minors and other vulnerable users.
The law responds to growing pressure on AI developers. The Federal Trade Commission (FTC) recently launched inquiries into chatbot companies including OpenAI, Meta, and others over how they handle safety, monetization, and child protection.
California has frequently led in tech regulation. For example, it passed SB 53 earlier in 2025, requiring AI model transparency. Meanwhile, attempts at state AI regulation have sometimes faced headwinds: Congress considered a proposal to block states from passing their own AI laws for up to a decade.
Challenges and Critiques
Implementation and enforcement may prove difficult. Questions remain about how regulators will audit AI systems or detect violations in practice. The law’s scope is also narrower than its initial draft: some safeguards were pared back during amendments to address industry concerns.
Critics warn that operators could circumvent rules via content filtering, limiting features, or introducing regional workarounds. Ensuring compliance without stifling innovation will be delicate.
Implications & What to Watch
- Whether other states or the federal government adopt similar regulations following California’s lead.
- How major AI firms (OpenAI, Character.AI, Replika) adapt: whether they opt for California-only compliance or overhaul their products globally.
- Whether the law will reduce harm and risk or become a paper compliance exercise.
- How reporting, audits, and legal enforcement evolve after the law takes effect.
In sum, SB 243 marks a new era: for the first time in the U.S., companion AI systems will have explicit guardrails that address not just technical performance but user safety, emotional influence, and accountability. It remains to be seen whether California’s model will become the national standard or remain a localized experiment.