MSRC Conference 2026 · April 10, 2026 · Kalispell, MT

AI in Healthcare: Operational Value, Limitations, and Oversight

A framework for evaluating any AI tool in healthcare — built for respiratory therapists and allied health professionals who need to think critically about the AI that is already in their workflow, whether they know it or not.

David Eitel
RRT · MHA · MSRT · RRT-ACCS
Downloads — Free to share
Reference card and slide deck from the April 2026 MSRC Conference presentation
The Core Argument

Operations AI vs. Clinical AI — Ask Which Column First

When someone says “AI in healthcare,” the most important question is not whether it works. The question is: which layer of the system does it touch? Operations AI and Clinical AI are fundamentally different problems with different risk profiles, different validation requirements, and different failure modes.

Operations AI (scheduling, billing, prior authorization processing, ambient documentation) is already deployed at scale in major health systems. It handles high-volume, structured, repeatable tasks where failure is recoverable: a scheduling error and a vent weaning error are not the same category of harm. It has real ROI, and it is already running.

Clinical AI touches diagnosis, treatment recommendations, deterioration prediction, and dosing decisions. It requires prospective validation, population-specific data, and a governance structure that most institutions do not yet have. The scrutiny is categorically different — and the oversight requirement does not transfer to the algorithm.

The Evaluation Framework

Three Questions for Any AI Tool

These three questions apply in a vendor demo, a department pilot, and a governance committee review. Put them in your back pocket before the next AI pitch; the sketch after the list shows one way to encode them.

Evaluate Any AI Tool
Q1
Which layer does this touch?
Operations, Clinical, Financial, or Regulatory? The layer determines risk level and the oversight requirements that follow from it.
Q2
What happens when it is wrong?
A scheduling error and a vent weaning error are not the same failure mode. Consequence shapes the oversight needed. If the answer is vague, that is a red flag.
Q3
Who owns the outcome?
The vendor contract says it is not them. Your RT credential carries the responsibility — it does not transfer to the algorithm, regardless of what the product literature implies.
Insurance AI — Already Deployed

What Is Reviewing Your Prior Auths

This is not a future concern. Algorithmic prior authorization review is operating at scale now. Algorithms screen requests against coverage criteria before any human sees them. Denials are auto-generated. The review window is measured in seconds.

300K* · claims denied by Cigna's algorithm in a two-month period in 2022
1.2s* · average time a reviewing physician spent per algorithm-flagged claim before denial
90%** · share of nH Predict denials reportedly reversed on appeal in Medicare Advantage cases

The clinical implication is direct: knowing this changes how you write prior authorizations. Include explicit SpO₂ values. Use objective thresholds. Mirror the criteria language. A review measured in seconds means keywords matter; that is a clinical documentation skill in 2026, not an administrative one. The sketch below the source notes shows why.

* ProPublica investigation, 2023: propublica.org
** Senate Permanent Subcommittee on Investigations, 2024 report on Medicare Advantage AI prior authorization
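
The payer algorithms themselves are proprietary, so no one outside can show their actual logic. But the reason mirrored criteria language matters can be illustrated with a toy screener. Everything here is invented for illustration: the criteria phrases, the 88% cutoff, and the pattern matching are a sketch of the genre, not any payer's real rules.

import re

CRITERIA_PHRASES = ("hypoxemia", "room-air spo2", "failed conventional oxygen")  # invented
SPO2_CUTOFF = 88  # hypothetical qualifying threshold, percent

def screen(narrative: str) -> str:
    """Toy prior-auth screener: pass only if every criteria phrase appears
    and an explicit qualifying SpO2 value is present."""
    text = narrative.lower()
    phrases_hit = all(p in text for p in CRITERIA_PHRASES)
    spo2_values = [int(v) for v in re.findall(r"spo2\D{0,10}(\d{2})", text)]
    qualifies = any(v <= SPO2_CUTOFF for v in spo2_values)
    return "pass" if phrases_hit and qualifies else "auto-deny"

print(screen("Patient is short of breath and would benefit from home oxygen."))
# auto-deny: no criteria phrases, no objective value
print(screen("Chronic hypoxemia; room-air SpO2 of 86% at rest; failed conventional oxygen."))
# pass: same patient, documentation written to the criteria

The difference between the two narratives is not the patient. It is whether the documentation gives a literal-minded screener something to match.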
Warning Signs

Red Flags in Any AI Pitch

Clinical outcomes + 90-day timeline
Clinical outcomes promised with operational ease and a 90-day go-live. That gap between ambition and validation timeline is the risk.
“Human in the loop” without workflow design
Every vendor says there is a human in the loop. Ask how that works at volume; no answer means the loop is theoretical. The arithmetic sketch after this list makes the question concrete.
No answer on FDA SaMD clearance
For any clinical AI touching diagnosis or treatment: is this FDA-cleared as Software as a Medical Device? A vague answer is an answer.
Validated on a different population
Training data from a different institution or demographic does not transfer automatically. Local validation is not optional for clinical AI.
Vendor disclaims all clinical liability
Read the contract. If they disclaim all responsibility for clinical outcomes, the full liability lands on the institution and the clinician.
Alert volume designed to impress
If the demo metric is how many alerts fire rather than how many are actionable, the product has automated alert fatigue, not solved it.
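
Two of these flags, the theoretical loop and the alert-volume demo, come down to arithmetic you can do in the meeting. A rough sketch with invented numbers; substitute your own volumes:

# All numbers invented for illustration.
alerts_per_day = 4000        # hypothetical alert volume across the unit
review_seconds_each = 45     # hypothetical time for a meaningful human review
actionable_per_day = 120     # hypothetical alerts that actually changed care

review_hours = alerts_per_day * review_seconds_each / 3600
ftes_needed = review_hours / 8                      # 8-hour shifts
actionable_rate = actionable_per_day / alerts_per_day

print(f"{review_hours:.0f} review hours/day = {ftes_needed:.1f} dedicated FTEs")
print(f"Actionable alert rate: {actionable_rate:.1%}")
# 50 review hours/day = 6.2 dedicated FTEs
# Actionable alert rate: 3.0%

If the vendor cannot say who staffs those hours, the loop is theoretical. If the actionable rate is in the low single digits, the demo was measuring fatigue.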
RT Practice Implications

Three Priorities for Respiratory Therapy

RT-specific ambient documentation tools are 18 to 24 months out from broad deployment. The question to ask your institution now is whether a pilot is underway and whether RT workflows are in scope. Being in the room before implementation beats being trained on a system built without RT input.

Revenue cycle AI is disrupting the prior authorization layer right now. This directly affects RT authorization workflows for ventilators, high-flow oxygen, and pulmonary rehabilitation. The documentation skill described above is not theoretical — it is already the difference between approvals and denials.

Predictive deterioration AI is directly RT-relevant. When your institution evaluates it, an RT should be in the room as a clinical evaluator, not just a training-session attendee. Demand population-specific validation data; the sketch below shows what that means in numbers. Your skepticism is professional judgment, not resistance to innovation.
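
What "population-specific validation data" looks like in practice is a confusion matrix from your own cohort, not the vendor's. The counts below are invented to show why the distinction matters:

def summary(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    # Confusion-matrix summary for a deterioration alert.
    return {
        "sensitivity": tp / (tp + fn),        # true deteriorations caught
        "ppv": tp / (tp + fp),                # alerts that were real
        "alerts_per_catch": (tp + fp) / tp,   # workload per true event
    }

vendor_population = summary(tp=90, fp=60, fn=10, tn=840)  # invented vendor-study counts
local_pilot = summary(tp=40, fp=160, fn=25, tn=775)       # hypothetical local cohort
print(vendor_population)  # sensitivity 0.90, PPV 0.60, 1.7 alerts per catch
print(local_pilot)        # sensitivity 0.62, PPV 0.20, 5.0 alerts per catch

Same tool, different population, different product. That gap is what an RT in the evaluation room is there to find.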