
AI Companion Privacy in 2026: Is Your Data Safe?

300 million messages leaked. 17 apps with critical flaws. Most companions collect your data for advertising. Here is what to look for and what to avoid.

Carlos KiK · Founder & Architect · April 6, 2026 · 10 min read
[Image: a translucent shield of light protecting a small glowing core against a dark background with scattered data particles]

In January 2026, an AI chat app called Chat & Ask leaked 300 million messages from 25 million users. The database had been publicly accessible since the app launched. Messages included discussions of suicide, personal confessions, and drug synthesis instructions, all tied to real user identities.

That same month, security researchers found that 17 popular Android AI companion apps with a combined 150 million installs contained 14 critical security flaws and 311 high-severity vulnerabilities. One app with 10 million downloads had an OpenAI API key and a Google Cloud private key hardcoded into the app.

In May 2025, Italy fined Replika 5 million euros for collecting data from children without age verification and for offering a privacy policy available only in English that referenced US law with no bearing in Italy.

MIT Technology Review called the privacy risks of AI companions 'a feature, not a bug' because the intimacy that makes them valuable is the same intimacy that makes the data so sensitive and so commercially exploitable.

If you use an AI companion, this article is your guide to what is actually happening with your data.


What Your AI Companion Knows About You

AI companions collect more intimate data than almost any other category of app.

Think about what you share in a conversation with a companion: your emotional state, your relationships, your health concerns, your fears, your hopes, your daily experiences. This is not browsing history or purchase data. This is the interior of your life, expressed in your own words.

A Surfshark study found that 4 out of 5 AI companion apps in the Apple App Store collect user or device IDs for targeted advertising. Character.AI collects 14 distinct data types for analytics, including contact information, identifiers, location, and usage data. If you are looking for a Character.AI alternative that prioritizes privacy, the differences in data collection are stark. Nearly half of all AI chatbot apps collect location data.

The data is not just stored. It is processed, analyzed, and in many cases shared with third parties. Some companion apps explicitly reserve the right to sell data to brokers or use it for targeted advertising. Others share conversation data with LLM providers whose own data practices may differ from what the companion app promises.

The result: the most personal things you say to an AI companion may be the least private things you say anywhere.


The Breach That Exposed Everything

The Chat & Ask leak in January 2026 is the largest AI companion data breach on record, and it happened because of the simplest possible misconfiguration.

The app developer, a Turkish company called Codeway, configured Google Firebase with public security rules. This meant any authenticated user could read, modify, or delete the entire database. 300 million messages. 25 million users. Wide open.

The same security researcher who discovered the Chat & Ask leak scanned 200 iOS apps and found 103 had identical Firebase misconfigurations, collectively exposing tens of millions of additional stored files.
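
To make the class of flaw concrete: Firebase's REST API returns a Realtime Database's contents when you append '/.json' to its URL, so an open database answers an unauthenticated request with data while a locked-down one answers with a permission error. Here is a minimal Python sketch of that kind of probe, the sort of check behind scans like the one above. The project URL is hypothetical, and this illustrates the misconfiguration class, not Codeway's specific setup.

import urllib.error
import urllib.request

def firebase_root_is_public(project_url: str) -> bool:
    """Return True if the database root is readable without authentication.

    Appending '/.json' to a Firebase Realtime Database URL requests the root
    node over REST. Open security rules return the data (HTTP 200); properly
    scoped rules return 401/403. Only probe databases you own or are
    authorized to test.
    """
    try:
        with urllib.request.urlopen(f"{project_url}/.json?shallow=true", timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401 Unauthorized / 403 Forbidden mean the rules are doing their job.
        return False

if __name__ == "__main__":
    url = "https://example-project-default-rtdb.firebaseio.com"  # hypothetical
    print("publicly readable" if firebase_root_is_public(url) else "locked down")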

This was not a sophisticated attack. It was not a zero-day exploit. It was a checkbox that was set to 'public' instead of 'private'. And it exposed the most intimate conversations millions of people have ever had with a machine.

The lesson is not that one company made a mistake. The lesson is that the entire AI companion ecosystem was built fast and secured slowly. Understanding what happens when your AI companion disappears or gets breached is essential context. The security posture of most companion apps reflects the priorities of their development: ship the product, attract users, worry about security later. For 25 million Chat & Ask users, 'later' arrived too late.

The Chat & Ask leak was not an outlier. It was the one that got caught.


How Major Companions Handle Your Data

Not all AI companions are equal in their data practices. Here is what the major platforms actually do, based on their privacy policies and independent audits.

Character.AI collects the most data of any major companion app: 14 data types for analytics, contact information, identifiers, location, and usage data for tracking. The company updated its privacy policy in August 2025 to state it may report users to law enforcement if conversations appear to involve a crime. It is also the subject of multiple lawsuits and state investigations for data practices involving minors.

Replika states it does not sell personal data or share conversation content with advertisers. However, it transmits conversation data to third-party LLM providers, and usage data can be shared with marketing firms. Italy fined Replika 5 million euros for the gap between its policy claims and its actual practices. Data is retained up to 90 days after account deletion.

ChatGPT (OpenAI) retains chat history indefinitely for free and Plus users unless manually deleted. Deletion takes up to 30 days. The memory feature stores cross-conversation facts until the user clears them. Anthropic (Claude) announced in September 2025 that consumer conversation data is now used for training unless users opt out, with data retained up to 5 years.

Nomi stands out by collecting zero data for tracking purposes (compared to Character.AI's 14 types). It collects 7 data types for analytics. However, it may anonymize conversation content for model training and lacks end-to-end encryption.


The Privacy Evaluation Checklist

Before sharing anything personal with an AI companion, evaluate the platform on these criteria. If the answers are unclear or the information is not available, that tells you everything.

Non-negotiable requirements:

1. Does the app explicitly state whether conversations are stored, for how long, and whether they are used for model training? If this information is buried in legal jargon or absent entirely, walk away.

2. Is there an opt-out mechanism for training data use? Many apps have no opt-out. Your conversations become training data by default.

3. Does the app use age verification? The absence of age verification is not just a safety issue. It is a legal liability that signals the company prioritizes growth over responsibility.

4. Are conversations encrypted in transit and at rest? End-to-end encryption means even the company cannot read your messages. Most companion apps do not offer this (see the sketch after this list).

5. Is the privacy policy available in your language? A privacy policy only in English is a red flag for international users.
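
On point 4, the distinction matters: transport encryption (HTTPS) only protects data in motion, while end-to-end encryption keeps the plaintext away from the server entirely. Here is a simplified Python sketch of that principle using the cryptography package's Fernet cipher; real end-to-end encryption also requires a key-exchange protocol between devices, which this omits.

from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated and kept on the user's device only.
# The service never sees it, so it can never read the plaintext.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

message = "Something I would never want to see in a breach dump."
ciphertext = cipher.encrypt(message.encode())

# This opaque token is all the server would ever store or relay.
print(ciphertext.decode())

# Only a client holding device_key can recover the original message.
assert cipher.decrypt(ciphertext).decode() == message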

Tracking and advertising signals:

Check the App Store's 'App Privacy' section. Look for 'Data Used to Track You.' If the companion collects device IDs, user IDs, or location data for tracking purposes, your conversations are funding an advertising profile.

Security signals:

If the app accepts passwords like '1' or '12345', the security posture is not serious. If there is no multi-factor authentication option, the account is one phished password away from exposure.

The fundamental rule:

Assume anything you type into an AI companion could appear in a data breach. Share accordingly. If you would not write it on a postcard, think twice about typing it into a chat window with unclear privacy practices.


How KAi Approaches Privacy Differently

KAi was designed with a fundamentally different premise: your conversations should not exist on a server after they have been processed.

The 24-hour conversation scrub means that raw conversation transcripts are processed through Experiential Memory Architecture (EMA), which extracts understanding and meaning. The transcript is then deleted. Within 24 hours, the words you typed no longer exist anywhere.

What persists is comprehension, not conversation. KAi remembers what matters about you: your context, your preferences, your history of growth. She does not remember the specific words you used to express them.
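
KAi's EMA implementation is not public, so the Python sketch below only illustrates the pattern described above: extract durable understanding from a transcript, keep that, and hard-delete any raw transcript older than 24 hours. Every name in it (Store, extract_understanding, and so on) is hypothetical.

import time
from dataclasses import dataclass, field

SCRUB_AFTER_SECONDS = 24 * 60 * 60  # the 24-hour scrub window

@dataclass
class Store:
    transcripts: dict = field(default_factory=dict)    # msg_id -> (timestamp, raw text)
    understanding: list = field(default_factory=list)  # extracted facts persist

def extract_understanding(raw_text: str) -> list:
    """Hypothetical stand-in for the extraction step. A real system would
    distill context and preferences rather than echo the words back."""
    return [f"user mentioned: {raw_text[:40]}"]

def ingest(store: Store, msg_id: str, raw_text: str) -> None:
    """Record a message and immediately extract what should persist."""
    store.transcripts[msg_id] = (time.time(), raw_text)
    store.understanding.extend(extract_understanding(raw_text))

def scrub(store: Store) -> None:
    """Hard-delete every raw transcript older than the scrub window.
    The extracted understanding is left untouched."""
    cutoff = time.time() - SCRUB_AFTER_SECONDS
    for msg_id, (ts, _) in list(store.transcripts.items()):
        if ts <= cutoff:
            del store.transcripts[msg_id]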

This design eliminates the entire category of risk that Chat & Ask demonstrated. There is no database of 300 million messages to leak because the messages do not persist. There is no conversation history to subpoena because it does not exist. There is no training data to opt out of because your conversations are not used for training.

Additionally:

- KAi's data is never used to train AI models, not ours, not anyone else's
- KAi's data is never sold, period. No outside investors means no pressure to monetize user data
- KAi is 18+ only, eliminating the regulatory and ethical minefield of minor data collection
- The DHC team uses KAi every day. We are our own users, and we built the privacy model we would want for ourselves.

The most private AI companion is not the one with the best encryption. It is the one that does not store your conversations in the first place.


The Regulatory Landscape

Regulators are starting to catch up, but enforcement remains inconsistent. For a complete breakdown, see the full guide to AI companion regulations in 2026.

California SB 243 (effective January 1, 2026) requires AI companion platforms to disclose when users could reasonably believe they are talking to a human and to implement anti-self-harm protocols.

New York requires AI companion companies to create safeguards and report expressions of suicidal ideation.

The EU has the strongest enforcement record. The Replika fine of 5 million euros is the first major AI companion privacy enforcement action anywhere in the world. The EU AI Act, in full enforcement since January 2026, classifies AI companions as high-risk systems subject to transparency and accountability requirements.

The FTC launched an inquiry in September 2025 into companies operating consumer-facing AI chatbots, specifically examining data practices with children and teens.

The gap that remains: No regulator in any jurisdiction has taken enforcement action specifically for application-layer security flaws in AI companion apps. The Chat & Ask breach, which was caused by a basic misconfiguration rather than a policy violation, fell into a regulatory blind spot. The laws address what companies do with data intentionally. They do not yet adequately address what happens when companies fail to secure it.


Frequently Asked Questions

Are my conversations with AI companions private?
It depends entirely on the platform. Most AI companions store your conversations indefinitely and may use them for model training, share them with third-party providers, or collect data for advertising. Check the app's privacy policy and the App Store privacy label before sharing anything personal.

Can AI companion companies read my conversations?
In most cases, yes. Unless the app offers end-to-end encryption (most do not), the company and its employees can technically access your conversation data. Some platforms state they only access data for safety reviews or when required by law, but the technical capability exists.

What happened in the Chat & Ask data breach?
In January 2026, 300 million messages from 25 million users were exposed due to a Firebase misconfiguration that left the entire database publicly accessible since the app launched. The messages included discussions of suicide, personal confessions, and sensitive content tied to real user identities.

Does KAi store my conversations?
No. KAi processes conversations through Experiential Memory Architecture (EMA) to extract understanding, then deletes the raw transcript within 24 hours. The understanding persists permanently, but the words you typed do not exist on any server after the scrub cycle completes.

How do I check if an AI companion app is safe to use?
Check the App Store privacy label for 'Data Used to Track You.' Read the privacy policy for data retention, training use, and third-party sharing. Look for end-to-end encryption, age verification, and opt-out mechanisms. If any of this information is missing or unclear, treat the app as unsafe for personal conversations.

Privacy Is Not a Policy. It Is an Architecture.

KAi deletes your conversations every 24 hours. The understanding stays. The transcript does not. Join the Beta.

Sources & References

  1. Malwarebytes (2026). AI chat app leak exposes 300 million messages tied to 25 million users. Malwarebytes Blog.
  2. Surfshark (2025). AI Companions and Privacy Study. Surfshark Research.
  3. EDPB (2025). Italian SA fines company behind Replika chatbot. European Data Protection Board.
  4. MIT Technology Review (2025). The State of AI: Chatbot Companions and the Future of Our Privacy. MIT Technology Review.
  5. Android Headlines (2026). Why AI Girlfriend Apps Are a Security Nightmare (2026 Study). Android Headlines.
  6. 404 Media (2026). Massive AI Chat App Leaked Millions of Users' Private Conversations. 404 Media.
