Hospitals Are Turning Patient Portals into ChatGPT. That’s a Risky Shortcut.

April 14, 2026
5 min read
[Image: Doctor using a laptop while an AI chatbot interface appears beside an electronic health record screen]


Americans are quietly doing something regulators feared: they’re already using generic AI chatbots as first‑line doctors. US hospitals have noticed—and instead of pulling people back into traditional care, many are racing to deploy their own branded bots inside patient portals. On paper, this looks like responsible damage control. In practice, it risks baking unproven technology into one of the world’s most fragile and expensive health‑care systems.

This piece looks at what’s actually being rolled out, why health systems are so eager, what could go wrong, and what Europe should learn before it copies the US experiment.


The news in brief

According to Ars Technica, several US health systems are launching or piloting AI chatbots that live directly inside patient portals and hospital apps.

Hartford HealthCare in Connecticut is rolling out “PatientGPT”, built with clinical AI startup K Health, to tens of thousands of existing patients. The system has two modes: a general Q&A assistant that may use some patient data, and a more structured “intake” mode that walks patients through symptom trees and suggests next steps, including urgent or emergency care. Internal stress‑testing reportedly reduced high‑risk failure rates from around 30 percent to 8.5 percent.
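
To make the "intake" idea concrete, here is a minimal, purely illustrative Python sketch of a symptom-tree triage step. It is not K Health's or Hartford HealthCare's implementation; the questions, branches, and dispositions are invented for this sketch, and a real system would layer an LLM, far more branches, and clinical review on top.

    # Illustrative only: a toy symptom-tree intake step, not the PatientGPT implementation.
    # Question wording, branches, and dispositions are invented for this sketch.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Node:
        question: str
        yes: Union["Node", str]   # next question, or a final disposition string
        no: Union["Node", str]

    # Tiny tree for a single complaint; real intake flows cover far more branches.
    CHEST_PAIN_TREE = Node(
        question="Is the pain severe, or spreading to your arm, jaw, or back?",
        yes="Call emergency services now.",
        no=Node(
            question="Did the pain start within the last 24 hours?",
            yes="Seek urgent care today.",
            no="Book a primary-care appointment and monitor your symptoms.",
        ),
    )

    def run_intake(node: Union[Node, str], answers: list[bool]) -> str:
        """Walk the tree with yes/no answers and return a disposition."""
        for ans in answers:
            node = node.yes if ans else node.no
            if isinstance(node, str):   # reached a recommendation leaf
                return node
        return "More information needed."

    print(run_intake(CHEST_PAIN_TREE, [False, True]))   # -> Seek urgent care today.

The contrast with the general Q&A mode matters: a structured flow like this is auditable, because every path through the tree can be reviewed by clinicians in advance, whereas free-form LLM answers cannot be enumerated ahead of time.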

In parallel, electronic health‑record giant Epic is introducing “Emmie”, a more cautious assistant integrated into its MyChart portal. Emmie can summarise records, explain test results and answer general questions, but is explicitly not supposed to give personalised diagnoses or treatment recommendations. Early deployments are limited to a subset of patients at systems like Sutter Health and Reid Health.

These moves come as polls show roughly one‑third of US adults have already used public AI chatbots for health information, often sharing sensitive medical data and frequently not consulting a doctor afterwards.


Why this matters

The core problem these chatbots target is real: US health care is expensive, understaffed, and structurally hard to access. Tens of millions of Americans lack a primary‑care provider. Long waits, fragmented insurance networks, and high co‑pays make “Ask the internet” an appealing first step—even for serious symptoms.

From a hospital’s perspective, meeting patients “where they already are” in AI chat tools offers several advantages:

  • Patient retention and funnel control: If patients are going to talk to chatbots anyway, hospitals would prefer those conversations happen inside their own portals, tied to their own appointment systems and billing.
  • Cost pressure: An AI triage bot that safely deflects low‑complexity questions from call centres or emergency departments is financially attractive in a system built around fee‑for‑service and thin margins.
  • Data lock‑in: Keeping more of the patient’s digital journey inside the provider’s ecosystem reinforces dependence on that provider and its tech stack (especially when Epic is involved).

The losers, at least in the short term, are patients who mistake these tools for care itself. The Nature Medicine study Ars Technica cites is chilling: when researchers wrote precise, structured prompts, large language models correctly identified conditions most of the time. When ordinary people asked in their own words, accuracy collapsed.
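
To see why phrasing matters, compare, as a hypothetical illustration (not the study's actual prompts), how a structured clinical query and a typical patient message might be sent to the same model:

    # Hypothetical prompts for illustration; not those used in the Nature Medicine study.
    structured_prompt = (
        "Patient: 58-year-old male smoker with hypertension. "
        "Chief complaint: 30 minutes of substernal chest pressure radiating to the left arm. "
        "Task: list the most likely diagnoses and the recommended level of care."
    )

    layperson_prompt = (
        "my chest feels kind of tight and my arm is tingly, "
        "should i worry or is it probably just stress?"
    )

    def ask_model(prompt: str) -> str:
        """Placeholder for whichever chat-completion API the portal actually calls."""
        raise NotImplementedError

    # The study's finding, roughly: the first style tends to elicit an accurate answer,
    # while the second, which is how real patients write at 2 a.m., often does not.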

That gap matters more in health care than almost any other domain. A 5–10 percent error rate in recommending movies is tolerable; an 8.5 percent failure rate in “high‑risk” medical scenarios is not obviously acceptable, especially when no one has defined what counts as a failure or how severe the harms might be.

The immediate risk is that hospitals use chatbots to patch over deep structural failures—lack of primary care, inequitable access, billing complexity—rather than address them.


The bigger picture

This wave of hospital chatbots sits at the intersection of three longer‑term trends.

1. The industrialisation of clinical time
For years, US providers have tried to industrialise medicine: standardised pathways, documentation templates, and decision‑support tools aimed at squeezing more patient visits into the same clinician schedule. LLM‑powered intake bots are the next step: they promise to pre‑structure the visit before the human ever enters the room.

Done well, this could reduce cognitive load and free clinicians for genuinely complex decisions. Done badly, it becomes another layer of “clickwork” and templated text, where doctors spend their time correcting AI‑generated summaries while still being held liable for any mistakes.

2. The failure of earlier digital‑health promises
We have been here before. Symptom checkers, telemedicine hotlines, and app‑based triage were all sold as ways to reduce emergency‑department crowding and lower costs. Evidence has been mixed at best. Some tools made access easier but also increased utilisation; some simply shifted burden from hospitals to patients without improving outcomes.

AI chatbots could repeat this pattern: raising expectations for 24/7 guidance while quietly pushing more unpaid self‑management onto patients.

3. Platformisation of health records
Epic’s Emmie is important not because of what it does today—a cautious helper—but because of what it could evolve into. Once the assistant sits between patients and their health records, it becomes the default interface. Over time, that interface can steer patients toward certain services, nudge their behaviour, and even favour insurers’ preferred care pathways.

In that sense, Emmie is less like a clever FAQ and more like the early days of smartphone app stores: a new control layer that others will have to build on—or around.

Competitively, this puts smaller health systems and independent practices at a disadvantage. They will either adopt the dominant platforms’ AI tools, accept whatever risk models and incentives are baked in, or struggle to afford their own alternatives.


The European angle

For European readers, it is tempting to dismiss all this as a uniquely American problem—another symptom of a fragmented, insurance‑driven system. That would be a mistake.

First, the behavioural driver is universal: when access is slow or confusing, people turn to the easiest, most responsive channel. In countries with universal coverage but GP shortages—think parts of the UK, Germany, or rural Spain—the temptation to lean on AI triage will be strong.

Second, Europe’s regulatory stance is very different. Under GDPR, much of what these US bots do—processing sensitive health data with opaque models, potentially hosted by third‑country cloud providers—would trigger strict consent and data‑minimisation requirements. The EU AI Act, whose high‑risk obligations are now phasing in, treats many health‑related AI uses as “high‑risk”, demanding rigorous risk management, transparency, and human oversight. A lightly monitored chatbot that handles emergency‑care decisions would be hard to justify under that regime.

Third, Europe already has digital front doors into health systems: NHS 111 online in the UK, national portals in the Nordics, Germany’s electronic patient record, or France’s Mon espace santé. It is easy to imagine LLM‑based assistants grafted onto these platforms in the next few years.

The question for European policymakers is not whether to use AI at the front line, but under what governance model: public, open, and independently audited—or vendor‑driven, closed, and owned by the same few US tech suppliers that already dominate cloud and productivity software.


Looking ahead

The next 24–36 months will be decisive for health‑care chatbots.

In the US, three outcomes are worth watching:

  1. Real‑world safety data: Will systems like PatientGPT publish peer‑reviewed evidence on outcomes—missed diagnoses, unnecessary ER visits avoided, patient satisfaction—rather than internal red‑teaming metrics? Without this, regulators and insurers are flying blind.
  2. Liability cases: The first malpractice suit involving an AI triage recommendation will shape behaviour quickly. If courts treat the chatbot as part of the provider’s duty of care, hospitals will either invest heavily in oversight or retreat to “information only” assistants like Emmie.
  3. Workforce impact: Will clinicians feel that AI helpers reduce message overload and repetitive questions, or that they introduce new kinds of noise and second‑guessing? Provider burnout is already high; poorly integrated AI will make it worse.

In Europe, the timeline is tied to regulation and procurement cycles. National health systems move slowly, but once they standardise on a vendor or model, that decision can last a decade. Early pilots in university hospitals or regional e‑health programs will set patterns that others copy.

For patients and citizens, the practical questions are simpler:

  • Who built this chatbot, and who pays for it?
  • What data is it using—and can I opt out?
  • Is it explicitly labelled as not a doctor, or is it quietly expected to stand in for one at 2 a.m. on a Sunday?

The opportunity is real: better preparation for visits, clearer explanations of lab results, and quicker signposting to the right level of care. But without transparency, strong human oversight, and honest communication about limitations, that opportunity turns into a high‑tech illusion of access.


The bottom line

Hospital‑branded AI chatbots are not inherently bad; they are probably safer than asking a random public model about chest pain. But they are also not a substitute for fixing the underlying access and equity failures in health systems—especially the US system that is driving this trend.

Europe still has time to design something better: AI assistance that is evidence‑based, regulated as high‑risk, and deployed to strengthen—not replace—the human relationships that make care work. The real question is whether policymakers and providers will use that time, or simply copy America’s latest quick fix.
