OpenAI is pushing ChatGPT closer to the doctor’s office—without calling it a doctor.
The company has launched ChatGPT Health, a new health-focused experience inside ChatGPT that lets users ask medical questions and connect personal health data, while repeatedly warning that the tool is not meant for diagnosis or treatment. It’s a carefully framed move into one of tech’s most sensitive territories.
A Separate Space for Health Questions
ChatGPT Health appears as a dedicated, sandboxed tab inside ChatGPT, with its own chat history and memory, separate from regular conversations.
According to OpenAI, this separation is intentional. Health conversations tend to be deeply personal, and the company says this environment offers stronger privacy protections than standard chats.
If a user starts discussing medical issues outside the Health tab, ChatGPT will actively suggest moving the conversation into the protected space.
Connecting Medical Records—With Guardrails
OpenAI is encouraging users to connect medical records and wellness apps to get more personalized responses.
Supported integrations include Apple Health, MyFitnessPal, Peloton, Weight Watchers, and Function Health.
For medical records, OpenAI partnered with b.well, whose network already connects to roughly 2.2 million healthcare providers. The company says this lets users pull in lab results, visit summaries, and clinical histories without OpenAI integrating directly with provider systems.
The feature is launching via a waitlist and will roll out gradually to users worldwide, regardless of subscription tier.
Why OpenAI Keeps Saying ‘Not a Doctor’
OpenAI is unusually explicit about what ChatGPT Health is not.
The company repeatedly stresses that the tool is “not intended for diagnosis or treatment.” That language reflects growing scrutiny around AI-generated medical advice, especially after several high-profile incidents where chatbots produced unsafe or misleading health guidance.
OpenAI acknowledges it can't fully control what people do with the advice once they leave the chat, but it says the system is tuned to avoid alarmist language and to direct users toward healthcare professionals when action is needed.
Mental Health: Included, Carefully
While OpenAI’s launch blog barely mentioned mental health, executives confirmed the product does support mental health conversations.
During a media briefing, Fidji Simo, OpenAI's CEO of Applications, said mental health is "part of health in general" and that ChatGPT Health can engage in those discussions while prioritizing appropriate responses during moments of distress and pointing users toward professional help or trusted people in their lives.
The company says it has added safeguards to reduce the risk of worsening anxiety or encouraging harmful behavior.
Privacy, Encryption, and Legal Limits
OpenAI says ChatGPT Health uses additional layers of encryption and that health conversations are not used to train its foundation models by default.
However, the data is not end-to-end encrypted. And in the event of a valid legal request or an emergency, OpenAI may still be required to hand over user data.
That caveat matters, particularly given OpenAI's 2023 security incident, which briefly exposed limited user information across accounts.
HIPAA Doesn’t Apply—And That’s Important
When asked about HIPAA compliance, OpenAI made its position clear: consumer AI tools like ChatGPT Health are not covered by HIPAA, which applies to healthcare providers, health insurers, and their business associates.
In practical terms, ChatGPT Health sits closer to a sophisticated health information assistant than a regulated medical service.
Why This Move Matters
OpenAI says more than 230 million people already ask ChatGPT health-related questions every week—often outside normal clinic hours or in regions with limited healthcare access.
ChatGPT Health doesn’t change that behavior. It formalizes it.
By offering a dedicated space, deeper context through connected data, and clearer safety boundaries, OpenAI is trying to position ChatGPT as a healthcare “ally”—not a replacement for doctors.
Whether users respect that distinction may determine whether this experiment becomes a trusted tool or another cautionary tale in AI-powered health tech.
Conclusion
ChatGPT Health is OpenAI’s most careful step yet into personal healthcare—powerful, constrained, and deliberately unfinished. It’s useful by design, limited by intent, and risky if misunderstood.
That balance will now be tested in the real world.