OpenAI is formalizing one of ChatGPT's most common use cases: asking about your health.
On January 7, the company announced ChatGPT Health, a dedicated space inside ChatGPT for health and wellness conversations. OpenAI says users are already doing this at massive scale: more than 230 million people ask health and wellness questions on ChatGPT each week.
A separate lane for health chats
Until now, a question about a rash or a nutrition plan lived in the same thread as your coding help and recipe ideas. ChatGPT Health changes that.
According to OpenAI's announcement, the new product:
- Silos health conversations away from your other chats, so health context doesn't bleed into general prompts.
- Keeps health context out of standard chats by default. If you start discussing symptoms or treatment plans outside the Health section, the AI will nudge you to switch over.
- Can still reference what you've done in the regular ChatGPT experience. If you previously asked for a marathon training plan, ChatGPT Health can treat you as a runner when you later talk about fitness goals.
That last point shows how OpenAI is trying to walk a line: health data is kept in its own lane, but the system still uses broader context when it thinks it helps.
Plugging into Apple Health and other wellness apps
OpenAI is also pushing deeper into personal data, if you let it.
ChatGPT Health will be able to pull in personal health information and medical records from wellness and fitness apps, including:
- Apple Health
- Function
- MyFitnessPal
The company says these integrations can help ChatGPT Health tailor guidance to what you actually do: how much you move, what you log, how you sleep.
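For a sense of what that data looks like under the hood, here is a minimal Swift sketch of the kind of read Apple Health exposes through HealthKit. This is an illustration of the data source, not OpenAI's integration code, and it assumes an iOS app with the HealthKit capability and the usual Info.plist usage description.

```swift
import HealthKit

let store = HKHealthStore()

// Step count is one of the activity signals a wellness integration
// could read, with explicit user permission.
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

store.requestAuthorization(toShare: nil, read: [stepType]) { granted, _ in
    guard granted else { return }  // the user declined access

    // Sum every step sample recorded since midnight today.
    let startOfDay = Calendar.current.startOfDay(for: Date())
    let predicate = HKQuery.predicateForSamples(withStart: startOfDay,
                                                end: Date(),
                                                options: .strictStartDate)
    let query = HKStatisticsQuery(quantityType: stepType,
                                  quantitySamplePredicate: predicate,
                                  options: .cumulativeSum) { _, stats, _ in
        let steps = stats?.sumQuantity()?.doubleValue(for: .count()) ?? 0
        print("Steps today: \(Int(steps))")
    }
    store.execute(query)
}
```

Notably, HealthKit gates every read behind that permission prompt, so whatever ChatGPT Health sees is whatever the user explicitly agrees to share.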
OpenAI also stresses a key privacy promise: conversations inside ChatGPT Health will not be used to train its models.
OpenAI's pitch: fix what healthcare can't
Fidji Simo, OpenAI's CEO of Applications, frames ChatGPT Health as a response to long-standing problems in healthcare. In a blog post, she points to:
- Cost and access barriers
- Overbooked doctors
- Lack of continuity in care
The pitch is clear: if you can't get timely, consistent answers from a human clinician, maybe an AI assistant can fill some of that gap.
The tension: powerful advice from a system that hallucinates
But using large language models for medical advice comes with obvious risks.
OpenAI itself notes that models like ChatGPT don't reason about what is true. They generate the most likely next words, not the medically correct ones, and they are prone to hallucinations.
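To make that concrete, here is a toy Swift sketch of greedy decoding. It is not OpenAI's code, and the candidate tokens and probabilities are invented for illustration; the point is that the selection step optimizes likelihood and nothing else.

```swift
// Invented scores: how a model might rank the next token after a
// prompt like "For this pain, you should take ..."
let nextTokenScores: [String: Double] = [
    "ibuprofen": 0.41,
    "rest": 0.38,
    "antibiotics": 0.21,  // fluent but possibly wrong; fluency is all that is scored
]

// Greedy decoding: argmax over likelihood, not over medical correctness.
if let best = nextTokenScores.max(by: { $0.value < $1.value }) {
    print("Model continues with: \(best.key) (p = \(best.value))")
}
```

Real models rank tens of thousands of candidate tokens instead of three, but the objective is the same: the most probable continuation, not the true one.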
Those limits are written directly into OpenAI's own rules. In its terms of service, the company says ChatGPT is "not intended for use in the diagnosis or treatment of any health condition."
That creates a built-in tension: ChatGPT Health is clearly designed to talk about health (and has hooks into fitness and wellness data) while the company simultaneously warns users not to treat it as a diagnostic tool.
Rolling out soon
OpenAI says ChatGPT Health will roll out in the coming weeks.
What's still unclear is how clinicians, regulators, and users will respond to a product that lives precisely in medicine's gray area: highly personal conversations about health, mediated by a system that can be incredibly helpful, right up until it's confidently wrong.