Your Employees Think the AI Therapist Is Confidential. It Isn't.


Your company's HR team has good intentions. They bought a subscription to an AI mental health platform, deployed it to 500 employees, sent a benefit announcement with "Your Wellbeing Matters" in the subject line, and checked a box on the annual wellness initiative. Somewhere in the onboarding deck, someone said the company takes privacy seriously.

They probably do. But that's not the same as HIPAA protection. And most of the employees who opened the chatbot and started typing about their anxiety, their sleep problems, and their conflicts with their manager don't know the difference.

What HIPAA Actually Covers

HIPAA — the Health Insurance Portability and Accountability Act — applies to covered entities: healthcare providers, health plans, and healthcare clearinghouses, plus the business associates that handle health information on their behalf. A clinical therapy practice using an electronic health records system is covered. The EHR vendor is covered as a business associate.

A general-purpose chatbot with an empathetic persona, offered through an HR portal, is not.

The regulatory line isn't about sensitivity. It's about who is transmitting health information in what institutional context. Consumer wellness apps — including AI chatbots marketed as mental health tools — typically fall outside HIPAA's reach. They're governed primarily by their own privacy policies and, increasingly, a patchwork of state consumer protection laws that vary widely in strength.

The Federal Trade Commission issued guidance in 2023 on digital health data practices, but the FTC framework is not a clinical protection. It penalizes deceptive practices; it doesn't impose HIPAA's breach notification requirements, data use restrictions, or the minimum-necessary standard for health information access.

The 48.7% Problem

Spring Health's 2026 Workplace Mental Health Trends report found that 48.7 percent of U.S. adults used a general-purpose large language model for mental health support over the previous year. Only 18.5 percent used a purpose-built clinical mental health application.

That gap matters because the users in the 48.7 percent group are often interacting with tools that feel clinical — structured conversations, empathetic tone, follow-up prompts, sessions that feel coherent and contained — without clinical protections. The products are designed to feel like therapy. The data governance doesn't match.

Users routinely disclose specific, sensitive information in these sessions: diagnoses they've never shared with their employer, medication details, relationship breakdowns, safety concerns. They do it because the design invites it and because the implicit expectation — that talking to an AI about your mental health is private — feels intuitive, especially in a platform their employer provided. Employer provision reads as a stamp of legitimacy.

Trusting these tools isn't always a mistake. But the protection employees assume is rarely guaranteed. And the difference between "the company seems to take this seriously" and "this conversation has HIPAA-level protection" is a gap that employees have no way to see from the inside.

California SB 243 and What It Can't Do

On January 1, 2026, California's SB 243 took effect, imposing new requirements on AI mental health applications operating in the state. The law requires clinical-grade AI tools to implement crisis detection protocols, disclose AI involvement to users, and maintain data governance standards closer to clinical norms than those of typical consumer apps.

It's a meaningful development. It also has structural limits.

SB 243 applies to one state. It applies to tools specifically marketed as mental health applications — which excludes many HR-deployed general chatbots that include a "wellness check-in" feature alongside productivity tracking. A tool described as an "employee experience platform" with a mental health module may not trigger the law's applicability even if employees use that module to disclose serious distress.

As of early 2026, the broader legislative picture includes more than 240 AI mental health bills introduced across 43 states, most of them still in committee. The regulatory framework is actively forming, which means employers who deployed tools in 2024 and 2025 may be operating under standards that state legislatures are in the process of deciding were insufficient. Companies that treated the pre-SB-243 environment as settled are watching the ground shift.

The Liability Employers Don't Know They Own

When an employee discloses suicidal ideation to an AI chatbot deployed by their employer and the chatbot responds inadequately — fails to escalate, provides generic reassurance, or simply continues the conversation — the legal picture is murky and actively contested. Some employment attorneys have begun arguing that employer-provided AI wellness tools create a duty of care. Others are examining whether inadequate crisis detection constitutes negligence under occupational safety frameworks.

This is not speculative risk management. It's the early shape of litigation that will become more defined over the next several years as disclosures accumulate and gaps become visible in court filings.

The data liability is more immediate. If employee mental health disclosures are stored in a vendor's infrastructure without clinical-grade protections, a data breach exposes information that employees believed was confidential. The employer's contractual exposure depends on the language of the vendor agreement. The reputational exposure doesn't depend on anything except the headline.

HR teams that signed vendor agreements in 2023 should be reviewing the data retention clauses with counsel this quarter — not in response to an incident, but in anticipation of regulatory frameworks that are arriving.

What a Clinical-Grade Tool Actually Looks Like

The distinction between a consumer chatbot with therapeutic tone and a clinical AI application isn't obscure once you know what to check. Clinical-grade platforms — Spring Health, Lyra Health, and similar companies — operate as covered entities or business associates under HIPAA and execute Business Associate Agreements with the employers they serve. They have crisis detection protocols with documented escalation paths: specific triggers, specific response procedures, specific handoff to licensed clinicians. Data retention and access policies are written to clinical data governance standards, not consumer app terms of service.

The UX can look similar across clinical and non-clinical tools. The backend is different. And the difference matters most precisely when employees are most vulnerable — during disclosures that involve safety, during periods of acute distress, during the conversations that employees believed were protected.

Three questions HR teams should require vendors to answer in writing before deploying any AI mental health tool: Is there a Business Associate Agreement? What is the documented crisis response protocol, and who is responsible for executing it? Who has access to conversation data, and under what retention schedule is it stored? If the vendor can't answer those questions with specificity, the tool is not clinical-grade — regardless of how the sales deck describes it.

"Quiet Burnout Has a Pattern" described how high-performing employees mask distress until it's invisible. The AI mental health blindspot is the same failure mode at the systemic level: organizations believe they've addressed employee mental health because they deployed a tool. What they've actually done is create a channel where employees may disclose vulnerabilities that will later need protection, while assuming the protection is there without checking whether it is.

The chatbot feels like a door. HR hasn't looked at what's on the other side.


Photo by Anna Shvets via Pexels.