Regulating AI “Companion Chatbots”: California SB 243
An overview of the latest AI regulatory framework designed to protect society from the harms posed by AI companions.
In October 2025, California enacted Senate Bill 243 (SB 243), a first-of-its-kind state law regulating so-called “companion chatbots” - AI systems designed to provide human-like social interactions.
The law aims to address growing concerns about chatbots that simulate friendship or emotional support, particularly following incidents in which teenagers formed deep attachments to AI bots with harmful outcomes. SB 243 came into effect on January 1, 2026.
In this post, we explore the types of chatbots covered by the law, which companies are affected, the key obligations imposed, available enforcement remedies, and the practical steps companies should consider to meet SB 243’s requirements.
Zuckerberg’s vision
Last year, on May 5, 2025, I attended the Stripe Sessions event in San Francisco and was fortunate (perhaps) to hear Mark Zuckerberg introduce one of Meta’s newest features, “AI Friends.” He shared an idea that clearly caught the attention of thousands of conference participants. Below is an approximate version of the message he was excited to convey:
“The average person has fewer than three close friends, yet the human desire is for fifteen. We are living through a loneliness epidemic, and the emergence of AI companions isn’t about replacing humans—it’s about filling the gap. We are building a future where your AI doesn’t just process tasks; it understands your history, supports your goals, and ensures you never have to feel truly alone.” — Mark Zuckerberg (adapted)
And of course, who would be better positioned than Meta to provide such AI companions? Soon thereafter, the internet was flooded with “companion AI” friends that appeared to offer the comforting illusion of being always there for you - always available, always attentive, and constantly learning and adapting based on your interactions.
Throughout 2025, we also saw a growing number of reports—some tragic, others almost surreal—about people being harmed or forming entirely new types of relationships with their AI companions, including, in some cases, even getting married to them.
Defining “Companion Chatbots”
The newly adopted California Senate Bill 243 aims to regulate “companion chatbots.”
Companion chatbots are defined as AI systems with natural language interfaces that provide adaptive, human-like responses and are capable of meeting a user’s social or emotional needs.
In practice, these are conversational systems that simulate friendship or relationship-like interactions. They often exhibit anthropomorphic features or personalities and sustain ongoing conversations across multiple sessions. They may remember past interactions, maintain a consistent persona, and encourage users to form emotional bonds.
Classic examples include AI “friend” applications or virtual companions that offer support, mentorship, or even romantic-style engagement over time.
However, SB 243 does not cover all chatbots. The statute expressly excludes three categories of systems from the “companion chatbot” definition:
Transactional and utility bots - Chatbots used solely for customer service, business operations, productivity, internal research, or technical support are exempt (for example, AI-powered assistants used by insurance or airline companies).
In-game non-player characters - Chatbot characters within video games are excluded, provided their dialogue is limited to game-related topics and they cannot discuss mental health, self-harm, sexual content, or sustain conversations beyond the game context.
Basic virtual assistants - Stand-alone consumer devices (such as smart speakers or basic voice assistants like Alexa or Siri), provided they function purely as voice-activated assistants and do not sustain relationships or elicit emotional responses.
Regulating AI that serves… our social needs
Whether SB 243 applies hinges on whether an AI chatbot is “capable of meeting a user’s social needs.” Even if a bot was not explicitly designed as a “companion,” it may still fall within the scope of SB 243 if it engages users in a personal and adaptive manner.
For example, an advanced customer service or tutoring chatbot that remembers a user, adapts its tone, and checks in regularly may qualify as a companion chatbot under the statute. Similarly, an AI-powered wellness coach that provides ongoing encouragement, or a virtual assistant that builds rapport over time, would also be covered. By contrast, purely task-oriented bots that do not establish any ongoing personal connection remain outside the scope of SB 243.
Companies building or integrating AI-powered “assistants” into their products should therefore carefully evaluate the functionality of their chat tools. Whether an AI chatbot falls within the scope of SB 243 is determined by its capabilities and behavior—not merely by its intended purpose or marketing description.
Notably, as AI systems continue to grow more sophisticated, it is increasingly likely that some products will inadvertently meet the statutory definition of a “companion chatbot,” even if they are not marketed as such.
Who Must Comply? Scope of Application to AI Chatbot Developers
SB 243’s obligations apply to any “operator” of a companion chatbot platform that makes the service available to users in California. An “operator” is broadly defined to include any individual or entity offering a companion chatbot to a California user.
This expansive scope has three important implications.
First, no physical presence in California is required. The law is not limited to California-based companies. Any company worldwide that provides a qualifying chatbot to California residents must comply with SB 243, so long as the service is accessible “to a user in California.” As a result, SB 243 has an extraterritorial reach comparable to that of California’s privacy laws.
Second, there is no size or revenue threshold. Unlike some regulatory regimes, SB 243 does not impose minimum revenue or user-count requirements. Both startups and large enterprises are covered if they operate companion chatbots used by Californians. Even a small application developer with a chatbot feature may qualify as an “operator” under the statute.
Third, SB 243 applies to businesses that deploy chatbots built on third-party AI models. The use of vendor-provided AI technology (such as APIs from OpenAI, Anthropic, or similar providers) does not exempt a company from compliance. The entity offering the chatbot service to users remains responsible for meeting SB 243’s requirements and cannot shift liability to the underlying AI provider.
Key Obligations for Companion Chatbot Operators
SB 243 imposes several affirmative obligations on operators of companion chatbots, aimed at transparency, user safety, and accountability. These requirements fall into three core categories, outlined below.
The overarching goal of the California legislature is to ensure that users (and, where applicable, their guardians) understand that they are interacting with AI. SB 243 is specifically designed to prevent users, particularly minors, from being misled about the nature of their interactions.
1. Transparency and Disclosure Requirements
Operators must make sure that users clearly understand when they are interacting with AI rather than a real person. SB 243 introduces several transparency rules to support this goal.
Clear AI disclosure. If a chatbot could reasonably be mistaken for a human, the operator must clearly and prominently disclose that it is AI-generated and not a real person. This disclosure must be obvious and easy to see—such as an on-screen label or a direct message from the chatbot—and not buried in the terms of service.
Additional disclosure for minors. When an operator knows that a user is under 18, additional safeguards apply. At the beginning of the interaction, the minor must be clearly informed that they are chatting with AI. In practice, this requires age verification mechanisms and a clear notice such as: “This is an AI chatbot, not a human.”
Regular break reminders for minors. If a minor engages in a long or ongoing conversation with a companion chatbot, the system must periodically remind them that they are interacting with AI. At least every three hours, the chatbot must prompt the minor to take a break and reiterate that it is not human.
“Not suitable for minors” warning. Platforms must also display a general warning that companion chatbots may not be appropriate for some minors. This notice must be visible through the app, website, or other access point and should not be buried in fine print.
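As a concrete illustration, here is a minimal Python sketch of how an operator might wire the opening AI disclosure and the three-hour break reminder into a chat session. Everything here (the `ChatSession` class, the message strings, the interval constant) is hypothetical scaffolding, not statutory language; the statute dictates the outcome, not the implementation.

```python
from datetime import datetime, timedelta, timezone

AI_DISCLOSURE = "You are chatting with an AI. This is not a real person."
BREAK_REMINDER = (
    "Reminder: you are talking to an AI chatbot, not a human. "
    "Consider taking a break."
)
REMINDER_INTERVAL = timedelta(hours=3)  # SB 243's cadence for known minors


class ChatSession:
    """Tracks one user's conversation and when reminders were last shown."""

    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder_at = datetime.now(timezone.utc)

    def opening_message(self) -> str:
        # A clear, prominent AI disclosure at the start of the interaction,
        # shown unconditionally to known minors.
        return AI_DISCLOSURE

    def maybe_break_reminder(self) -> str | None:
        # For known minors, re-surface the AI reminder and a break prompt
        # at least every three hours of ongoing interaction.
        if not self.user_is_minor:
            return None
        now = datetime.now(timezone.utc)
        if now - self.last_reminder_at >= REMINDER_INTERVAL:
            self.last_reminder_at = now
            return BREAK_REMINDER
        return None
```

The design point is simply that the disclosure and reminder logic lives outside the language model itself, so it fires reliably regardless of what the model generates.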
2. Safety Protocols and Content Controls
The law places a strong emphasis on preventing harmful content, particularly content related to self-harm or the exploitation of minors. Operators must implement and maintain robust safety measures.
Suicide and self-harm prevention. Before a companion chatbot is allowed to interact with users, companies must have safeguards in place to prevent the generation of content related to suicide or self-harm. At a minimum, if a user expresses suicidal thoughts or intent, the chatbot must interrupt or redirect the interaction and provide on-screen information directing the user to appropriate crisis resources.
Public disclosure of safety measures. Operators must publicly explain their safety practices on their websites, including how the chatbot handles sensitive topics such as suicide and self-harm.
Protection against sexual content for minors. When a chatbot interacts with a known minor, operators must take reasonable steps to prevent the generation of sexually explicit content or encouragement of sexual activity.
Evidence-based monitoring. Finally, SB 243 requires the use of evidence-based methods to detect and assess suicidal ideation, relying on established psychological and clinical research rather than ad hoc approaches.
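For illustration, below is a minimal Python sketch of the interception pattern, with an important caveat: the keyword list shown is a toy stand-in, since SB 243 expects evidence-based detection methods, so a real deployment would rely on clinically validated classifiers rather than regex matching. The function names and the `generate_reply` callback are hypothetical.

```python
import re

# 988 is the U.S. Suicide & Crisis Lifeline. The pattern list below is a
# toy stand-in: SB 243 calls for evidence-based detection methods, so a
# production system would use clinically validated classifiers instead.
CRISIS_RESOURCES = (
    "If you are in crisis, help is available. Call or text 988 to reach "
    "the Suicide & Crisis Lifeline, or dial 911 in an emergency."
)
_CRISIS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [r"\bkill myself\b", r"\bsuicide\b", r"\bself[- ]harm\b"]
]


def respond(user_message: str, generate_reply) -> tuple[str, bool]:
    """Intercept expressions of suicidal ideation before normal generation.

    Returns (reply, crisis_referral_issued) so each referral can be logged
    for the annual report described in the next section.
    """
    if any(p.search(user_message) for p in _CRISIS_PATTERNS):
        # Interrupt the normal flow: do not engage with the topic;
        # surface crisis resources on screen instead.
        return CRISIS_RESOURCES, True
    return generate_reply(user_message), False
```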
3. Annual Reporting and Accountability
Beginning July 1, 2027, operators must submit an annual report to California’s Office of Suicide Prevention detailing their chatbot’s safety performance. The report must include:
Crisis Referral Statistics - the number of times in the past year the chatbot issued a crisis referral notification to a user (i.e. how often the suicide/self-harm protocol was activated).
Summary of Safety Protocols - a description of the protocols in place to detect, remove, and respond to users’ expressions of suicidal ideation. This likely involves explaining content filters and response workflows.
Summary of Prohibitions - a description of the measures in place to prohibit the chatbot from responding to or engaging with suicidal ideation (essentially, how the chatbot is prevented from continuing a conversation on those topics).
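Because the referral statistic is a simple count, operators can begin accruing it well before the first report is due. Below is a minimal sketch of event logging that could support the 2027 filing, assuming a JSON-lines log file; the schema and function names are illustrative, not drawn from the statute.

```python
import json
from datetime import datetime, timezone


def log_crisis_referral(log_path: str) -> None:
    """Append one crisis-referral event to a JSON-lines log."""
    event = {
        "event": "crisis_referral",
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")


def annual_referral_count(log_path: str, year: int) -> int:
    """Count how many crisis referrals were issued in a given year."""
    count = 0
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if event["event"] == "crisis_referral" and event["at"].startswith(f"{year}-"):
                count += 1
    return count
```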
Enforcement and Remedies for Non-Compliance
SB 243 carries real teeth in the form of enforcement mechanisms and legal remedies:
Private Right of Action: Any person who suffers an injury in fact as a result of a violation can bring a civil action. This creates a broad plaintiff pool. For example, a parent whose child was harmed by a non-compliant chatbot interaction could sue the operator.
Plaintiffs can seek injunctive relief and monetary damages. The statute sets statutory damages at $1,000 per violation or actual damages (whichever is greater), plus recovery of reasonable attorneys’ fees and costs. At $1,000 minimum per incident, potential liability could add up quickly (each improper interaction with a user might count as a separate violation).
The statute itself doesn’t prescribe a specific government agency enforcement action (aside from the Office of Suicide Prevention receiving reports). However, California’s Attorney General or local prosecutors could deem a violation of SB 243 an unlawful business practice, leading to investigations or enforcement under consumer protection laws.
Compliance Strategies
Given SB 243’s breadth and the looming threat of enforcement, companies deploying chatbots should take the following five proactive steps to ensure compliance:
Conduct an internal review to determine whether any of your AI systems fall within the statutory definition of a “companion chatbot.”
Ensure clear AI identification by integrating prominent labels or messages into the user interface for all covered chatbots.
Work closely with engineering and AI teams to implement robust safety protocols.
Configure chatbots to restrict sexual or explicit content when interacting with minors.
Prepare for reporting obligations by developing internal procedures now to log and track the data required for the 2027 annual report.
Paths Forward
The adoption of SB 243 signals that the era of largely unregulated chatbots is coming to an end.
California once again leads the way by establishing a pioneering regulatory framework for AI companion chatbots, imposing duties that echo those found in privacy and consumer protection regimes. Given California’s influence, similar requirements may soon spread to other jurisdictions or even be adopted at the federal level.
More broadly, SB 243 represents a significant step toward mandating affirmative AI safety measures, moving beyond voluntary ethical guidelines to enforceable legal obligations. Companies that adapt to this new regulatory landscape will be better positioned as leaders in responsible AI, while those that lag behind risk both reputational harm and legal exposure.
Staying ahead of these emerging requirements is not only a legal necessity, but also a commitment to the well-being of users who increasingly turn to AI for companionship.