The regulatory landscape for AI companions shifted more in the last six months than in the previous five years combined.
New York passed the world's first law specifically targeting AI companion operators. California followed with the most detailed companion chatbot law ever written. Utah passed its own version with near-unanimous support. A federal bill that would ban minors from AI companions entirely is in Senate committee. The EU's transparency rules become enforceable in August 2026. Australia activated age verification requirements in March 2026. The UK is fast-tracking amendments to extend its Online Safety Act to AI chatbots.
If you use an AI companion, these laws affect you. If you build one, they define your compliance obligations. This guide covers every active and pending regulation in one place.
New York: First in the World (November 2025)
New York was the first jurisdiction anywhere to regulate AI companion operators specifically. The AI Companion Models Law took effect November 5, 2025.
The law covers any system designed to simulate a sustained human or human-like relationship by remembering past interactions, asking emotion-based questions unprompted, and maintaining ongoing conversations about personal matters.
Operators must provide a conspicuous disclosure at the start of every session that the user is interacting with AI, not a human. A recurring reminder must be issued every three hours of continuous use. When suicidal ideation or self-harm is detected, the system must immediately refer the user to crisis service providers.
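The statute specifies outcomes, not architecture, but the three duties map naturally onto a thin session layer. Here is a minimal sketch in TypeScript, assuming a chat loop that can surface banners to the user; every name is a hypothetical placeholder rather than a statutory term, and real self-harm detection would use a trained classifier, not a boolean helper:

```typescript
// Illustrative sketch only: all names are hypothetical placeholders,
// not statutory terms or a real SDK.

declare function showBanner(message: string): void;
declare function showCrisisReferral(message: string): void;
declare function detectsSelfHarm(text: string): boolean; // stand-in for a real classifier

const REMINDER_INTERVAL_MS = 3 * 60 * 60 * 1000; // three hours of continuous use

interface Session {
  startedAt: number;
  lastDisclosureAt: number;
}

// Conspicuous AI disclosure at the start of every session.
function openSession(now: number = Date.now()): Session {
  showBanner("You are chatting with an AI, not a human.");
  return { startedAt: now, lastDisclosureAt: now };
}

// Called on every inbound message.
function onUserMessage(session: Session, text: string, now: number = Date.now()): void {
  // Recurring reminder after each three hours of continuous use.
  if (now - session.lastDisclosureAt >= REMINDER_INTERVAL_MS) {
    showBanner("Reminder: you are chatting with an AI, not a human.");
    session.lastDisclosureAt = now;
  }
  // Immediate referral to crisis services when self-harm ideation is detected.
  if (detectsSelfHarm(text)) {
    showCrisisReferral("988 Suicide & Crisis Lifeline: call or text 988.");
  }
}
```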
Penalties are significant: up to $15,000 per day per violation. All collected fines are directed to suicide prevention programs in New York.
Governor Hochul sent a formal letter to AI companion companies on the effective date notifying them that the requirements were active. This signals an enforcement posture, not a symbolic gesture.
California SB 243: The Most Detailed Law (January 2026)
California's SB 243, signed by Governor Newsom on October 13, 2025, took effect January 1, 2026. It passed with overwhelming bipartisan support: Senate 33-3, Assembly 59-1.
For all users: if a reasonable person could be misled into believing they are interacting with a human, the operator must issue a clear notification that the chatbot is AI-generated.
For minors, the requirements are significantly stricter:

- a default notification at session start
- a reminder every three hours to take a break
- protocols preventing the chatbot from producing suicidal ideation or self-harm content
- a notification referring users to crisis providers when self-harm intent is expressed
- reasonable measures to prevent sexually explicit visual material
- a prohibition on directly stating to minors that they should engage in sexually explicit conduct

A sketch of how these two tiers might be expressed in code follows the list.
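The law does not prescribe an implementation. A minimal sketch, assuming the operator already knows the user's age tier; every type and field name here is ours, not the statute's:

```typescript
// Illustrative sketch: the two SB 243 tiers as policy objects. Field names
// are ours, not the statute's; a real system would also layer New York's
// all-user duties on top of the adult baseline.

type AgeTier = "adult" | "minor";

interface SafetyPolicy {
  aiDisclosure: "if-potentially-misleading" | "at-session-start";
  breakReminderEveryMs: number | null; // null = no recurring break reminder
  suppressSelfHarmContent: boolean;
  crisisReferralOnSelfHarmIntent: boolean;
  blockSexuallyExplicitVisuals: boolean;
}

const SB243_MINOR: SafetyPolicy = {
  aiDisclosure: "at-session-start",
  breakReminderEveryMs: 3 * 60 * 60 * 1000, // every three hours
  suppressSelfHarmContent: true,
  crisisReferralOnSelfHarmIntent: true,
  blockSexuallyExplicitVisuals: true,
};

// Adult baseline as summarized above; operators often apply stricter
// defaults voluntarily.
const SB243_ADULT: SafetyPolicy = {
  aiDisclosure: "if-potentially-misleading",
  breakReminderEveryMs: null,
  suppressSelfHarmContent: false,
  crisisReferralOnSelfHarmIntent: false,
  blockSexuallyExplicitVisuals: false,
};

function policyFor(tier: AgeTier): SafetyPolicy {
  return tier === "minor" ? SB243_MINOR : SB243_ADULT;
}
```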
Starting in 2027, operators must report annually to California's Office of Suicide Prevention on how they detect, remove, and respond to suicide and self-harm content.
Unlike most tech regulations, SB 243 includes a private right of action. Any person injured by a violation may sue for the greater of actual damages or $1,000 per violation. This means enforcement comes from users, not just regulators.
Utah HB 438: Adding Advertising Restrictions (2026)
Utah's HB 438 cleared the legislature with near-unanimous votes (House 68-1, Senate 26-1). It mirrors New York and California in structure but adds two notable elements.
First, it ties AI companion compliance directly to Utah's consumer data privacy law (UCPA), creating a bridge between companion-specific rules and broader data protection obligations.
Second, it restricts how companion chatbot operators may advertise to users. This is the first law to address the marketing practices of AI companion companies, not just their product behavior.
Penalties match California: private right of action with $1,000 statutory damages per violation.
Federal: The GUARD Act (In Committee)
The GUARD Act (Guidelines for User Age-verification and Responsible Dialogue Act) was introduced on October 28, 2025 by a bipartisan Senate coalition and endorsed by RAINN.
If passed, it would require all AI companion platforms to verify user age with methods stronger than self-declaration. If a user is determined to be a minor, the operator must prohibit access entirely. No AI chatbot may represent itself as a licensed medical, legal, financial, or psychological professional.
The definition of AI companion under the bill is broad: any system designed to foster interpersonal or emotional interaction with users, including friendship or therapeutic dialogue.
The GUARD Act is currently in the Senate Judiciary Committee. It has not received a floor vote. However, its bipartisan support and RAINN endorsement give it more momentum than most tech bills. Operators should treat it as a strong directional signal.
EU AI Act: Transparency Enforcement Begins August 2026
The EU AI Act entered into force in August 2024 with a staged rollout. The provisions governing chatbots and AI-generated content become enforceable on August 2, 2026.
Users must be informed they are interacting with an AI system unless it is obvious from context. AI-generated content that could be mistaken for human-created content must carry both a human-visible marker and a machine-readable one (embedded metadata).
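What counts as a machine-readable marker is still being settled in EU guidance; provenance standards such as C2PA are the likely vehicle. A minimal sketch of the pairing, with a metadata schema invented purely for illustration:

```typescript
// Illustrative sketch: pairing a human-visible label with machine-readable
// provenance metadata. This schema is invented for illustration; real
// deployments would follow an established provenance standard such as C2PA.

interface LabeledContent {
  body: string;          // the generated text shown to the user
  visibleLabel: string;  // human-visible marker
  metadata: {            // machine-readable marker, embedded with the content
    generator: string;
    aiGenerated: true;
    generatedAt: string; // ISO 8601 timestamp
  };
}

function labelAiContent(body: string): LabeledContent {
  return {
    body,
    visibleLabel: "AI-generated",
    metadata: {
      generator: "companion-model", // hypothetical identifier
      aiGenerated: true,
      generatedAt: new Date().toISOString(),
    },
  };
}
```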
Most AI companions are classified as 'limited risk' under the EU framework, meaning no pre-market approval is required. But the transparency obligations are mandatory.
Penalties for breaching the transparency obligations: up to 15 million euros or 3% of global annual turnover, whichever is higher (the AI Act's headline 35 million euro / 7% tier is reserved for prohibited practices). The geographic scope covers any AI companion accessible to users in EU member states, regardless of where the operator is incorporated.
Australia: Age Verification Active Now (March 2026)
Australia's eSafety Commissioner activated six age verification codes on March 9, 2026. AI companions and generative AI services capable of producing restricted content (sexually explicit material, high-impact violence, or self-harm content) must verify that users are 18 or older.
Of 50 leading text-based AI chat services operating in Australia, only 9 had announced or implemented age assurance measures as of March 2026. The broader AI companion data privacy landscape in 2026 shows similar compliance gaps across the industry. The eSafety Commissioner has warned it will use the full range of its enforcement powers.
Penalties: up to AUD 49.5 million (approximately USD 31 million).
App stores face a later deadline of September 9, 2026 to enforce download restrictions on age-restricted apps.
United Kingdom: Fast-Tracking AI Chatbot Rules (2026)
On February 15, 2026, Prime Minister Starmer announced that AI chatbot providers using large language models will fall under the Online Safety Act's illegal content duties.
This means AI companions must not generate or facilitate access to illegal content, including child sexual abuse material, terrorist content, and certain categories of hate speech. Age verification and minor access restrictions are part of the extended framework.
Penalties: Ofcom can issue fines up to 10% of global annual turnover or 18 million pounds, plus criminal liability for senior managers.
A potential carve-out exists for chatbots that only allow users to interact with the AI itself (not other users), do not search multiple databases, and cannot generate pornographic content. This could exempt pure companion apps with restricted content settings, but the legal interpretation is still developing.
Apple's Platform-Level Enforcement
Apple is not a regulator, but its platform decisions function as de facto regulation for any app distributed through the App Store.
Starting February 24, 2026, users in Australia, Brazil, and Singapore cannot download apps rated 18+ unless verified as adults through Apple's system. Utah follows in May 2026, and Louisiana in July 2026.
The Declared Age Range API allows developers to receive a verified age signal (minor or adult) without ever seeing the user's date of birth or government ID. Apple holds the verification. The developer receives only a categorical signal.
For AI companion operators, this means age verification in these markets is handled at the platform level. You do not need to build your own verification system. You do need to build the enforcement logic that acts on the age signal when it indicates a minor.
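The Declared Age Range API itself is surfaced to the app in Swift on the device; what the operator still owns is the decision logic downstream. A minimal server-side sketch, assuming the client forwards a categorical signal, with the signal values paraphrased rather than taken from Apple's documentation:

```typescript
// Illustrative sketch: server-side enforcement keyed off a categorical age
// signal forwarded by the iOS client. Signal values are paraphrased; the
// actual Declared Age Range API surface lives in Swift on the device.

type AgeSignal = "adult" | "minor" | "unavailable";

interface AccessDecision {
  allowed: boolean;
  reason: string;
}

function enforceAgeGate(signal: AgeSignal): AccessDecision {
  switch (signal) {
    case "adult":
      return { allowed: true, reason: "verified adult" };
    case "minor":
      // 18+ product: hard block, no degraded mode.
      return { allowed: false, reason: "verified minor" };
    case "unavailable":
      // Fail closed in markets where platform-level verification is mandatory.
      return { allowed: false, reason: "no verified age signal" };
  }
}
```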
What This Means for Users
If you use an AI companion in the United States, you now have legal protections you did not have a year ago.
In New York and California, your companion must tell you it is AI. In New York it must remind you every three hours; in California the three-hour reminder applies to minors. If you express suicidal thoughts, it must refer you to crisis services. If a minor is using the service, additional protections apply.
In the EU (starting August 2026), you must be informed you are talking to AI, and AI-generated content must be labeled. In Australia (now), if the service can produce restricted content, your age must be verified.
These protections exist because of documented harms: the lawsuits against Character.AI, the Replika incident, the deaths of minors who formed destructive attachments to AI chatbots. Regulation followed tragedy.
The laws are not perfect. Enforcement is inconsistent. Many companion apps are not yet compliant. But the direction is clear: the era of unregulated AI companionship is over.
What This Means for Builders
If you build AI companions, the compliance landscape is now multi-layered and international.
The most effective compliance strategy is to build for the strictest standard. California's SB 243 is currently the most detailed. If your product complies with SB 243 globally, you are substantially compliant everywhere except for EU-specific metadata requirements (August 2026) and jurisdiction-specific age verification methods.
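In practice, that can mean encoding the strictest standard once and applying it everywhere, rather than maintaining per-jurisdiction feature flags. A minimal sketch; the field names are ours and the jurisdiction notes in the comments are simplifications:

```typescript
// Illustrative sketch: one global policy set to the strictest active
// standard, applied in every market. Field names are ours.

const GLOBAL_POLICY = {
  aiDisclosureAtSessionStart: true,     // NY, CA, EU transparency
  recurringReminderEveryHours: 3,       // NY (all users), CA (minors)
  crisisReferralOnSelfHarmIntent: true, // NY, CA
  machineReadableContentLabels: true,   // EU, enforceable August 2026
  minimumAge: 18,                       // sidesteps the minor-protection tier
} as const;
```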
The single most impactful design decision: 18+ only. Every pending and active regulation has its most burdensome requirements in the minor protection tier. An 18+ only product eliminates the majority of compliance complexity. This is not a business constraint. It is a strategic advantage. For users seeking platforms that already meet these standards, the Character.AI alternative comparison illustrates how different design philosophies produce different compliance postures.
KAi was designed as 18+ from day one. Not because we anticipated these regulations (though some were foreseeable), but because we believed that a wellness companion should be built for adults who can make informed decisions about their own wellbeing. The regulatory landscape has since validated that choice.
Frequently Asked Questions
Do AI companion regulations apply to me as a user?
Yes. The laws regulate operators, but the protections run to you: in New York and California the companion must disclose that it is AI and refer you to crisis services if you express self-harm intent, and recurring reminders apply as described above. EU transparency rules and Australian age verification apply based on where you are, not where the company is incorporated.

Is there a federal law regulating AI companions in the US?
Not yet. The GUARD Act is in the Senate Judiciary Committee and has not received a floor vote. State laws in New York, California, and Utah are the binding US rules today.

What happens if an AI companion company violates these laws?
Regulators can impose fines (up to $15,000 per day in New York; up to 10% of global turnover in the UK), and in California and Utah individual users can sue for the greater of actual damages or $1,000 per violation.

Does KAi comply with these regulations?
KAi is 18+ only and privacy-first by design, which keeps it outside the minor-protection tier where the most burdensome requirements sit, and it was built for the regulatory landscape described in this guide.

Will more countries regulate AI companions?
Almost certainly. With US states, the EU, the UK, and Australia all moving in the same direction within a single year, the era of unregulated AI companionship is over.
Built for the Regulations That Matter
KAi is 18+ only, privacy-first, and designed for the regulatory landscape that is now reality. Join the Beta.
