The AI Promise in Healthcare: What the Hospitals Won't Tell You



By a concerned patient advocate  |  March 2026

We have been told a very exciting story. Artificial intelligence is going to revolutionise healthcare. It will catch diseases earlier, help doctors make better decisions, reduce errors, and ultimately save lives. Hospitals have been rushing to adopt AI tools, and the technology companies selling these systems have made billions doing so.

 But there is another story — one that rarely makes the headlines. It is a story of algorithms that missed two out of every three sepsis cases. Of elderly patients being kicked out of nursing homes because a computer said so. Of insurance companies using AI to deny care before a single doctor ever read the claim. Of a system that consistently underestimated how sick Black patients were compared to white patients.


Case 1: The Sepsis AI That Missed Most Sepsis Cases

Sepsis — when an infection spirals out of control and starts attacking the body's own organs — kills hundreds of thousands of people every year. Catching it early is everything. So when Epic Systems, the company that manages health records for around 180 million Americans, said its AI could predict sepsis before it became life-threatening, hospitals listened.

 They should have asked more questions.

When researchers at Michigan Medicine independently tested the Epic Sepsis Model on nearly 40,000 real patients, the results were alarming. The algorithm missed 67% of actual sepsis cases entirely. And of the alerts it did fire — the warnings telling nurses and doctors to act — 88% turned out to be false alarms.

Think about what that means in practice. Nurses are flooded with warnings that turn out to be nothing, so they start tuning them out — a phenomenon called "alert fatigue." Meanwhile, when a real sepsis case walks through the door, the odds are that the system never raises an alarm at all, and any alarm that does fire is, statistically, almost certainly a false one. That is not a safety net. It is a distraction.
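For readers who want to see the arithmetic, here is a rough back-of-the-envelope sketch in Python. The 33% sensitivity and 12% alert accuracy match the figures described above; the hospitalisation and sepsis counts are round numbers assumed purely for illustration, not the study's exact cohort.

    # Back-of-the-envelope numbers for an alert system with the performance
    # described above. Patient counts are ASSUMED round figures, for illustration only.
    hospitalisations = 38_000
    sepsis_cases = 2_500            # assumed number of true sepsis cases

    sensitivity = 0.33              # share of real cases the model flagged (missed 67%)
    alert_accuracy = 0.12           # share of alerts that were real (88% false alarms)

    caught = sepsis_cases * sensitivity
    missed = sepsis_cases - caught
    total_alerts = caught / alert_accuracy
    false_alerts = total_alerts - caught

    print(f"Real sepsis cases caught by the AI: {caught:,.0f}")
    print(f"Real sepsis cases missed entirely:  {missed:,.0f}")
    print(f"Total alerts fired:                 {total_alerts:,.0f}")
    print(f"Alerts that were false alarms:      {false_alerts:,.0f}")

On those assumed numbers, the system would fire roughly 6,900 alerts to catch about 825 real cases, while letting nearly 1,700 slip past unnoticed.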

There was also a near-miss that shows just how dangerous blind trust in AI can be. At a hospital in Nevada, a nurse received an AI-generated instruction to flood an elderly dialysis patient with intravenous fluids. For a dialysis patient — someone whose kidneys cannot process fluid properly — this could have caused fluid to build up in the patient's lungs. The nurse refused. A doctor stepped in and prescribed a different treatment. The patient was likely saved by a human being ignoring what the computer said.

An investigation by STAT News also found that Epic was paying hospitals up to one million dollars to adopt its algorithms — creating a serious financial incentive to deploy a tool that had not been properly tested.

 

Case 2: The Algorithm That Denied Care to the Elderly

UnitedHealth Group is the largest health insurer in the United States. A few years ago, it started using an AI tool called nH Predict — developed by its subsidiary naviHealth — to decide how long elderly patients could stay in nursing homes and rehabilitation centres after a hospital stay.

 The results, according to a U.S. Senate investigation, were stark. UnitedHealthcare's denial rate for post-hospital care more than doubled between 2020 and 2022 — the period after nH Predict was introduced. The AI was overruling doctors who had examined the patients and said they needed more time to recover.

A class-action lawsuit, which a federal judge allowed to proceed in February 2025, makes an extraordinary claim: that the company knew the algorithm had a 90% error rate, meaning that nine out of ten of its denials, when appealed, were overturned. But only 2 to 3% of patients ever appealed. They were elderly, often unwell, and did not know how to fight the system. The insurer, allegedly, knew this too.

Put simply: a computer was kicking vulnerable people out of care they needed, the company may have known the computer was usually wrong, and it counted on most people not pushing back.
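The scale of the problem becomes clearer with a little arithmetic. The sketch below, in Python, uses the figures reported in this article; the extension of the 90% error rate from appealed denials to all denials is the plaintiffs' own inference, assumed here purely for illustration.

    # Rough arithmetic behind the allegation, per 1,000 denials (an assumed
    # round number). Rates are the figures quoted above; applying the 90%
    # error rate to unappealed denials is the lawsuit's inference.
    denials = 1_000
    appeal_rate = 0.025             # "only 2 to 3% of patients ever appealed"
    overturn_rate = 0.90            # "nine out of ten ... were overturned"

    appealed = denials * appeal_rate
    overturned = appealed * overturn_rate
    never_challenged = denials - appealed

    print(f"Denials appealed:         {appealed:,.0f}")
    print(f"Denials overturned:       {overturned:,.0f}")
    print(f"Denials never challenged: {never_challenged:,.0f}")

On those figures, out of every 1,000 denials only about 25 people appeal and roughly 22 win. If the alleged 90% error rate held across the board, around 900 of those 1,000 denials were wrong, and almost none of them were ever corrected.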

 

Case 3: Cigna's 1.2-Second Medical Review

Cigna, one of America's largest health insurers, used an AI system called PxDx to review insurance claims. The system would analyse claims and reject them automatically — in bulk, before any human physician had laid eyes on them.

 When physicians did review cases — often after patients had already been denied — they were reportedly spending an average of 1.2 seconds per case. Not minutes. Seconds. Barely enough time to read a patient's name, let alone their medical history.

 A lawsuit has been filed. The core allegation is that this was never really a medical review at all — it was an automated denial machine with a rubber stamp on top.

 

Case 4: Humana Did the Same Thing

Humana, another major U.S. insurer, also faces lawsuits for using the same nH Predict algorithm as UnitedHealth. The allegations are similar: elderly patients being discharged from rehabilitation too early because the AI said it was time to go, regardless of what their doctors recommended. People who needed more time to recover from surgery, strokes, or serious illness were sent home before they were ready.


Case 5: The AI That Treated Black Patients Differently

Perhaps the most troubling finding of all is one that did not produce a single dramatic incident — because it was happening quietly and constantly across the entire healthcare system.

 Multiple studies have found that AI triage tools — the systems hospitals use to decide how urgent a patient's condition is — consistently underestimated how sick Black patients were compared to white patients presenting with similar symptoms. The algorithms were trained on historical data that already reflected decades of racial bias in healthcare. They learned that bias. They reproduced it. And they quietly made Black patients wait longer, receive fewer resources, and face greater risk.

 This was not an accident or a one-off failure. It was a systemic flaw baked into widely used tools across U.S. hospitals.
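The mechanism is worth spelling out, because nobody has to program bias in deliberately. The sketch below is a simplified illustration in Python, using made-up synthetic data and an assumed one-point historical scoring gap rather than any real hospital's tool: a model trained to reproduce historical triage decisions also reproduces the bias embedded in them.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Made-up patients: two groups with identical distributions of true severity.
    group = rng.integers(0, 2, n)                # 0 = group A, 1 = group B
    severity = rng.normal(5.0, 1.5, n)           # "true" clinical severity

    # Historical urgency scores reflect past bias: group B patients were rated
    # about one point lower at the same severity (an assumption for illustration).
    historical_score = severity - 1.0 * group + rng.normal(0.0, 0.5, n)

    # Fit a simple linear model to reproduce the historical scores.
    X = np.column_stack([severity, group, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, historical_score, rcond=None)

    # Two new patients, identical severity, different groups.
    patient_a = np.array([6.0, 0.0, 1.0])
    patient_b = np.array([6.0, 1.0, 1.0])
    print("Predicted urgency, group A patient:", round(float(patient_a @ coef), 2))
    print("Predicted urgency, group B patient:", round(float(patient_b @ coef), 2))
    # The trained model gives the group B patient a score about one point lower,
    # even though the two patients are equally sick. No one told it to do that;
    # it simply learned the pattern in the historical data.

In practice the group label is rarely an explicit input; proxies such as postcode, prior spending, or past use of services can carry the same signal, which is why simply removing one data field does not fix the problem.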

 

Case 6: Another Sepsis AI, Another Problem

A separate sepsis prediction model — different from Epic's — had the opposite problem. Instead of missing real cases, it flagged far too many. Patients who did not have sepsis were treated as if they did. That means unnecessary antibiotics, invasive procedures, longer hospital stays, higher bills, and genuine physical risk. Too many false positives are not harmless. They cause real harm of a different kind.

 

Case 7: The Company That Lied About Its AI's Accuracy

In Texas, the state Attorney General reached a settlement with an AI healthcare technology company that had made false and misleading claims about how accurate and safe its products were. The investigation found that the company's own performance metrics were likely inaccurate — and that hospitals had been buying tools based on promises the company could not back up.

This is not a trivial matter. Hospitals make decisions about which AI tools to adopt based on the accuracy figures vendors provide. If those figures are false, patients are the ones who pay the price.

 

So Why Is This Happening?

The answer is not that AI is inherently bad or that everyone involved is dishonest. The answer is that a powerful new technology is being rushed into one of the most complex and high-stakes environments imaginable, without adequate safeguards.

Consider these facts:

 

• Only 16% of hospital executives said in 2023 that their institution has a systemwide policy governing how AI is used and who can access its data.

• ECRI — one of the most respected patient safety organisations in the world — ranked AI as the single biggest health technology hazard for 2025. Its concerns included hallucinations (where AI confidently produces wrong answers), racial bias, and the danger of clinicians placing too much trust in AI outputs.

• There is currently no robust, independent system for testing healthcare AI before it goes into hospitals. Vendors often provide their own accuracy figures. Hospitals often accept them.

• The financial incentives are not aligned with patient outcomes. Insurers benefit directly from denial algorithms that cut costs. Vendors benefit from selling tools at scale. Hospitals receive payments to adopt systems. Patients benefit from none of this.

 

What Should Patients Know?

None of this means you should distrust every doctor or refuse every treatment. The vast majority of healthcare workers are doing their best. But there are things worth knowing:

 

• If an insurance company denies your claim, an AI may have made that decision — or strongly influenced it. You have the right to appeal, and appeals are won far more often than insurers would like you to believe.

• If you are being told to leave a hospital or care facility sooner than you feel ready, ask specifically whether that decision has been influenced by an algorithm. You are entitled to know.

• If you are concerned about your treatment, ask questions. A decision made by a computer is not the same as a decision made by a doctor who has examined you.

• If you experience something that feels wrong — a prescription that seemed dangerous, a discharge that felt premature, a denial that seemed unjustified — report it. To the hospital, to your insurer's regulator, and if necessary, to your government representative.

 

The Bottom Line

AI has genuine potential in healthcare. No reasonable person disputes that. But potential is not the same as performance. And right now, in too many cases, AI in healthcare is performing poorly — missing diagnoses, enabling denials, reinforcing bias — while generating enormous profits for the companies deploying it.

The patients who were harmed by these systems did not sign up to be beta testers. They went to hospital expecting care. They deserved better accountability, better regulation, and better transparency about the tools being used to make decisions about their lives.

 That is not an argument against technology. It is an argument for honesty about what the technology is actually doing.

Sources: Michigan Medicine external validation of the Epic Sepsis Model (JAMA Internal Medicine, 2021) · STAT News reporting on the Epic Sepsis Model and Epic's hospital incentive payments · U.S. Senate investigation into UnitedHealth Group · ProPublica reporting on Cigna PxDx · ECRI Institute Health Technology Hazard Report 2025 · Federal court filings (UnitedHealth class-action, Feb 2025) · Texas Attorney General settlement · Published peer-reviewed studies on racial bias in triage algorithms
