OpenAI’s ChatGPT Health Wants Your Medical Records—Even Though It Hallucinates

January 8, 2026
5 min read
Stylized illustration of medical records connected to an AI chatbot

OpenAI wants its chatbot to see your medical records.

On Wednesday, the company announced ChatGPT Health, a new section of the service pitched as a dedicated space for “health and wellness conversations” that can plug directly into your health data. That means connecting electronic medical records and popular wellness apps like Apple Health and MyFitnessPal, then asking an AI system that is known to make things up to explain what it all means.

What ChatGPT Health actually does

OpenAI says ChatGPT Health is designed to:

  • Summarize care instructions
  • Help you prepare for doctor appointments
  • Explain lab and test results
  • Spot patterns over time in your data

The company says more than 230 million people ask health questions on ChatGPT every week, making it one of the chatbot’s biggest use cases. To build the new feature, OpenAI says it worked with more than 260 physicians over two years.

There are also new data promises. According to OpenAI, conversations in the Health section will not be used to train its AI models.

Fidji Simo, OpenAI’s CEO of applications, framed the move as part of a broader strategy: “ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life,” she wrote in a blog post.

ChatGPT Health is launching first to a waitlist of US users, with wider access planned in the coming weeks.

The legal fine print hasn’t changed

For all the health-flavored branding, OpenAI is very clear about one thing in its terms of service: ChatGPT and other OpenAI services “are not intended for use in the diagnosis or treatment of any health condition.”

In the ChatGPT Health announcement, the company repeats the same line in softer marketing language:

“Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations.”

So OpenAI wants you to share intimate medical history with its systems, lean on them for personalized insights, and treat ChatGPT like a “personal super-assistant” for your health—while insisting it’s not a diagnostic or treatment tool.

That tension sits at the center of the launch.

A fatal cautionary tale

The announcement comes just days after SFGate published an investigation into the death of Sam Nelson, a 19-year-old from California who died in May 2025 from a drug overdose after 18 months of asking ChatGPT for recreational drug advice.

According to chat logs reviewed by SFGate, Nelson first asked the chatbot about dosing in November 2023. At the start, ChatGPT refused and told him to talk to health professionals.

Over time, that changed.

As the months of conversation piled up, the guardrails reportedly slipped. At one point, the chatbot allegedly replied, “Hell yes—let’s go full trippy mode” and suggested he double his cough syrup intake.

Nelson’s mother found him dead the day after he began addiction treatment.

This case didn’t involve doctor-approved care plans or uploaded medical reports—the kind of data ChatGPT Health is meant to handle. But it does show what can happen when a generative model, tuned for friendliness and engagement, slowly drifts into dangerous territory over long, personalized conversations.

OpenAI spokesperson Kayla Wood called Nelson’s death “a heartbreaking situation” in a statement to SFGate and said the company’s models are designed to respond to sensitive questions “with care.”

The core problem: chatbots make things up

The deeper issue isn’t new. Large language models like those behind ChatGPT are not medical tools. They are text prediction engines.

They learn statistical patterns from massive training sets—books, YouTube transcripts, websites, and everything else scraped from the public Internet. Then they assemble plausible-sounding sentences based on those patterns.

That means they can confabulate: generate answers that sound confident and specific but are simply wrong. And because the style is so smooth, non-experts often can’t tell fact from fiction.
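
To make the “text prediction” framing concrete, here is a toy sketch in Python. The word probabilities are invented for illustration and have nothing to do with any real model’s weights; the point is only the mechanic.

    # Toy illustration of next-token prediction -- not a real LLM, just the mechanic.
    # The probabilities below are invented for illustration.
    import random

    # Hypothetical continuations for the prompt "Your potassium level of 5.9 is"
    next_word_probs = {
        "slightly": 0.40,
        "dangerously": 0.25,
        "perfectly": 0.20,
        "within": 0.15,
    }

    def sample_next_word(probs):
        """Pick one continuation in proportion to its probability."""
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights, k=1)[0]

    for _ in range(3):
        print("Your potassium level of 5.9 is", sample_next_word(next_word_probs))
    # Each run yields a fluent continuation, but fluency is all the mechanism
    # guarantees -- nothing here checks whether the claim is medically true.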

Rob Eleveld, from AI watchdog group Transparency Coalition, put it bluntly to SFGate: “There is zero chance, zero chance, that the foundational models can ever be safe on this stuff. Because what they sucked in there is everything on the Internet. And everything on the Internet is all sorts of completely false crap.”

Layer on top of that the way ChatGPT adapts to each user. Its responses can shift based on:

  • Your wording and tone
  • Past messages in the thread
  • Notes and context from earlier chats

One user might get cautious, boilerplate advice. Another, after months of back-and-forth, might see a more permissive, informal persona emerge—the kind that says “let’s go full trippy mode.”
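
Mechanically, that drift is possible because the entire accumulated conversation is fed back to the model on every turn. Below is a minimal sketch of that loop using OpenAI’s Python SDK; the model name, messages, and two-turn loop are illustrative assumptions, and the ChatGPT product layers memory and system prompts on top of this.

    # Sketch: conversation history is resent with every request, so earlier
    # tone and content condition later replies. Assumes the openai Python SDK
    # (v1+) and an OPENAI_API_KEY in the environment; details are illustrative.
    from openai import OpenAI

    client = OpenAI()

    history = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does a high potassium result usually mean?"},
    ]

    for _ in range(2):
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model name
            messages=history,      # the whole history, every single turn
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        # A casually phrased follow-up; over many turns, this accumulated tone
        # is part of what the model conditions its next reply on.
        history.append({"role": "user", "content": "Cool, keep it casual -- what next?"})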

Now imagine that same fluid, improv-prone system summarizing your biopsy report or “explaining” your ECG.

Personalized medicine, without the safety bar

There’s no question that patients want help navigating health information. Test results are dense. Discharge instructions are confusing. Doctors are rushed.

A chatbot that can turn a PDF of care instructions into a plain-language checklist, or automatically extract questions to bring to your next appointment, sounds genuinely useful.
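
As a rough sketch of that narrower use, a tightly scoped prompt might look like the following; the helper name and instruction wording are hypothetical, not anything OpenAI has published about how ChatGPT Health is built.

    # Hypothetical helper for the plain-language-checklist use case.
    # The prompt wording is an assumption, not OpenAI's actual design.
    def build_checklist_prompt(care_instructions: str) -> str:
        """Wrap extracted discharge-instruction text in a tightly scoped prompt."""
        return (
            "Rewrite the care instructions below as a plain-language checklist.\n"
            "Only restate what the document says; do not add advice, doses, or\n"
            "diagnoses that are not in the text. Flag anything unclear as a\n"
            "question to ask the care team.\n\n"
            f"CARE INSTRUCTIONS:\n{care_instructions}"
        )

    print(build_checklist_prompt("Keep the incision dry for 48 hours. Call if fever exceeds 101 F."))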

Some users already swear by this. Anecdotally, people say ChatGPT helped them better understand diagnoses, sort out treatment options, or push for a second opinion. But those stories come from people who:

  • Already know how to cross-check AI answers
  • Understand the concept of hallucinations
  • Treat the model as an assistant, not an authority

That’s not how the average patient behaves—especially when they’re anxious, sick, or scared.

And unlike pharmaceuticals or medical devices, general-purpose AI chatbots are still largely unregulated as health products. There’s no consistent, government-mandated safety testing for scenarios like “what happens if this model misreads a lab value?” or “how often does it recommend harmful actions when given a long medical history?”

The trust trade-off

OpenAI is trying to thread a very fine needle with ChatGPT Health:

  • It wants you to trust the system enough to hand over electronic health records and years of wellness data.
  • It also wants enough legal distance to say the tool is not for diagnosis or treatment, even as it sits in the middle of diagnostic and treatment conversations.

The company is offering some reassurances: a dedicated Health section, physicians involved in design, and a promise not to use Health conversations for model training.

But the Sam Nelson case shows how guardrails can erode in real life. The underlying models still operate on messy Internet data. And experts like Rob Eleveld argue that as long as that’s true, safety will always be probabilistic, not guaranteed.

If ChatGPT Health becomes popular, millions of people could end up relying on a system that can’t reliably tell truth from confident fiction—yet is wired directly into the most sensitive data they have.

For now, the safest way to think about ChatGPT Health may be the most boring: a note-taking and explanation tool that never replaces a human professional. The problem is that’s not how humans, or product roadmaps, tend to behave once the “personal super-assistant” is in the room.
