The AI Promise in Healthcare: What the Hospitals Won't Tell You
By a concerned patient advocate | March 2026
We have been told a very exciting story. Artificial intelligence is going to revolutionise healthcare. It will catch diseases earlier, help doctors make better decisions, reduce errors, and ultimately save lives. Hospitals have been rushing to adopt AI tools, and the technology companies selling these systems have made billions in the process. But behind that story sits a growing list of documented failures. Here are seven of them.
Case 1: The Sepsis AI That Missed Most Sepsis Cases
Sepsis — when an infection spirals out of control and starts
attacking the body's own organs — kills hundreds of thousands of people every
year. Catching it early is everything. So when Epic Systems, the company that
manages health records for around 180 million Americans, said its AI could
predict sepsis before it became life-threatening, hospitals listened.
When independent researchers later tested the tool on real hospital patients, they found it missed the majority of actual sepsis cases while generating a steady stream of alerts for patients who were not septic. Think about what that means in practice. Nurses are flooded with warnings that turn out to be nothing, so they start tuning them out — a phenomenon called "alert fatigue." And when an alert does fire, it is statistically far more likely to be a false alarm than a real case of sepsis. That is not a safety net. It is a distraction.
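To see why a tool like this can both miss real cases and bury nurses in false alarms, it helps to run the arithmetic. The sketch below is a minimal illustration of the base-rate problem; the prevalence, detection rate, and false-alarm rate are invented for the example and are not figures from Epic's model or any published study.

```python
# Minimal sketch of the base-rate arithmetic behind alert fatigue.
# All three numbers below are hypothetical, chosen only to illustrate
# the effect; they are not measurements of any real sepsis model.

prevalence = 0.05        # assume 5% of admitted patients develop sepsis
sensitivity = 0.60       # assume the tool catches 60% of those real cases
false_alarm_rate = 0.20  # assume it also flags 20% of patients who are fine

patients = 10_000
real_cases = patients * prevalence                         # 500 patients
caught = real_cases * sensitivity                          # 300 true alerts
false_alerts = (patients - real_cases) * false_alarm_rate  # 1,900 false alerts

precision = caught / (caught + false_alerts)
print(f"Share of alerts that are real sepsis: {precision:.0%}")   # about 14%
print(f"Real cases the tool misses entirely: {real_cases - caught:.0f}")  # 200
```

Even with these fairly generous assumptions, roughly six out of seven alerts in the toy example are false, and two out of every five real cases are never flagged at all. That is the pattern hospital staff end up living with.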
There was also a near-miss that shows just how dangerous blind trust in AI can be. At a hospital in Nevada, a nurse received an AI-generated instruction to flood an elderly dialysis patient with intravenous fluids. For a dialysis patient — someone whose kidneys cannot process fluid properly — this could have filled the patient's lungs with fluid. The nurse refused. A doctor stepped in and prescribed a different treatment. The patient was likely saved by a human being ignoring what the computer said.
An investigation by STAT News also found that Epic was paying hospitals up to one million dollars to adopt its algorithms — creating a serious financial incentive to deploy a tool that had not been properly tested.
Case 2: The Algorithm That Denied Care to the Elderly
UnitedHealth Group is the largest health insurer in the
United States. A few years ago, it started using an AI tool called nH Predict —
developed by its subsidiary naviHealth — to decide how long elderly patients
could stay in nursing homes and rehabilitation centres after a hospital stay.
The picture that emerged from the lawsuit that followed was stark: a computer was kicking vulnerable people out of care they needed, the company may have known the computer was usually wrong, and it counted on most people not pushing back.
Case 3: Cigna's 1.2-Second Medical Review
Cigna, one of America's largest health insurers, used an AI system called PxDx to review insurance claims. The system would analyse claims and reject them automatically — in bulk, before any human physician had laid eyes on them. According to reporting on the practice, company doctors then signed off on the denials in batches, spending an average of about 1.2 seconds per claim, which is where the figure in this case's title comes from.
Case 4: Humana Did the Same Thing
Humana, another major U.S. insurer, also faces lawsuits for
using the same nH Predict algorithm as UnitedHealth. The allegations are
similar: elderly patients being discharged from rehabilitation too early
because the AI said it was time to go, regardless of what their doctors
recommended. People who needed more time to recover from surgery, strokes, or
serious illness were sent home before they were ready.
Case 5: The AI That Treated Black Patients Differently
Perhaps the most troubling finding of all is one that did not produce a single dramatic incident — because it was happening quietly and constantly across the entire healthcare system. Researchers found that a widely used algorithm, built to identify patients who should be offered extra care, used past healthcare spending as a stand-in for medical need. Because the system has historically spent less on the care of Black patients, the algorithm concluded they were healthier than they really were and steered additional support away from them, even when they were just as sick as the white patients who received it.
Case 6: Another Sepsis AI, Another Problem
A separate sepsis prediction model — different from Epic's —
had the opposite problem. Instead of missing real cases, it flagged far too
many. Patients who did not have sepsis were treated as if they did. That means
unnecessary antibiotics, invasive procedures, longer hospital stays, higher
bills, and genuine physical risk. Too many false positives are not harmless.
They cause real harm of a different kind.
Case 7: The Company That Lied About Its AI's Accuracy
In Texas, the state Attorney General reached a settlement
with an AI healthcare technology company that had made false and misleading
claims about how accurate and safe its products were. The investigation found
that the company's own performance metrics were likely inaccurate — and that
hospitals had been buying tools based on promises the company could not back
up.
This is not a trivial matter. Hospitals make decisions about which AI tools to adopt based on the accuracy figures vendors provide. If those figures are false, patients are the ones who pay the price.
So Why Is This Happening?
The answer is not that AI is inherently bad or that everyone
involved is dishonest. The answer is that a powerful new technology is being
rushed into one of the most complex and high-stakes environments imaginable,
without adequate safeguards.
Consider these facts:
• Only 16% of hospital executives said in 2023 that their institution had a systemwide policy governing how AI is used and who can access its data.
• ECRI — one of the most respected patient safety organisations in the world — ranked AI as the single biggest health technology hazard for 2025. Its concerns included hallucinations (where AI confidently produces wrong answers), racial bias, and the danger of clinicians placing too much trust in AI outputs.
• There is currently no robust, independent system for testing healthcare AI before it goes into hospitals. Vendors often provide their own accuracy figures. Hospitals often accept them.
• The financial incentives are not aligned with patient outcomes. Insurers benefit directly from denial algorithms that cut costs. Vendors benefit from selling tools at scale. Hospitals receive payments to adopt systems. Patients benefit from none of this.
What Should Patients Know?
None of this means you should distrust every doctor or
refuse every treatment. The vast majority of healthcare workers are doing their
best. But there are things worth knowing:
• If an insurance company denies your claim, an AI may have made that decision — or strongly influenced it. You have the right to appeal, and appeals are won far more often than insurers would like you to believe.
• If you are being told to leave a hospital or care facility sooner than you feel ready, ask specifically whether that decision has been influenced by an algorithm. You are entitled to know.
• If you are concerned about your treatment, ask questions. A decision made by a computer is not the same as a decision made by a doctor who has examined you.
• If you experience something that feels wrong — a prescription that seemed dangerous, a discharge that felt premature, a denial that seemed unjustified — report it: to the hospital, to the regulator that oversees your insurer, and, if necessary, to your government representative.
The Bottom Line
AI has genuine potential in healthcare. No reasonable person
disputes that. But potential is not the same as performance. And right now, in
too many cases, AI in healthcare is performing poorly — missing diagnoses,
enabling denials, reinforcing bias — while generating enormous profits for the
companies deploying it.
The patients who were harmed by these systems did not sign up to be beta testers. They went to hospital expecting care. They deserved better accountability, better regulation, and better transparency about the tools being used to make decisions about their lives.