HIPAA and AI Health Companions
- Katarzyna Celińska
This is another post about medical data and whether it is actually protected.
Recently, I came across two publications:
• The article “Your AI doctor doesn’t have to follow the same privacy rules as your real one.”
• The January 2026 report “No License Required – The Risks of AI Companion Chatbots as Mental Health Support.”

Graphic: Freepik
Both highlight something many users still do not fully understand:
➡️ Not every “health” or “therapy” AI application is subject to HIPAA.
➡️ Many AI mental health chatbots operate completely outside traditional medical privacy frameworks.
The article explains that AI health tools are generally not “covered entities” under HIPAA.
If you voluntarily share your mental health struggles, medication details, or suicidal thoughts with a general-purpose chatbot, that platform is often:
➡️ Not legally bound by HIPAA
➡️ Not subject to healthcare breach notification requirements
➡️ Not restricted from using your data for model improvement.
Some companies describe their systems as “HIPAA-ready” or “supporting HIPAA compliance,” but that marketing language is very different from actually being legally bound by HIPAA.
There is a massive legal and compliance difference between:
➡️ A regulated hospital system processing ePHI
➡️ A consumer AI chatbot giving “wellness advice.”
Mental health chatbots
The report documents several structural risks of AI companion chatbots used as therapists:
1️⃣ Weakening guardrails over time
The report found that safety guardrails can degrade over the course of long conversations.
2️⃣ Sycophancy
Chatbots often agree with users, even when users express harmful ideas. The report shows examples where bots:
➡️ Amplified distrust toward psychiatrists
➡️ Framed medication as “taking part of your soul”
➡️ Supported decisions to stop treatment.
3️⃣ False confidentiality claims
When researchers asked whether conversations were confidential, the chatbots responded that everything would remain private. The platforms’ own privacy policies, however, allow that data to be collected and used.
The regulatory contrast: EU vs US
In the EU, we have:
➡️ GDPR
➡️ AI Act
In the US, however, if an AI mental health chatbot is not a HIPAA-covered entity, protection depends on:
➡️ FTC enforcement
➡️ State-level consumer protection laws
➡️ Contractual privacy policies
Many people:
➡️ Turn to AI because healthcare is expensive
➡️ Face long waiting lists
➡️ Experience stigma
➡️ Or feel isolated
AI is instant, available 24/7, and non-judgmental. But simulated empathy is not medical responsibility.
We are entering an era where people disclose their most vulnerable thoughts to systems that:
➡️ Are not licensed
➡️ Are not bound by medical confidentiality
➡️ May be optimized for engagement
➡️ And operate under consumer tech rules, not healthcare law
Author: Sebastian Burgemejster