The AI Legal Trap in Medicine

By Ryan Patrick Radecki, MD, MS | August 14, 2025
Pearls From the Medical Literature

The legal landscape surrounding the use of artificial intelligence (AI) within the medical setting remains in a state of flux. A multitude of liability issues have yet to be tested, balancing the responsibility for errors among the clinician, the vendor of an AI product, and the hospital deploying it. Each of these players has the potential to be included in the wide net typically cast during early malpractice filings.


That said, the ultimate responsibility is far more likely to come to rest on the clinician involved. Medical AI, despite its occasionally superhuman performance on various demonstrations of diagnostic skill, is still simply a software product. Shifting responsibility onto a software product requires a different legal test, part of which involves demonstrating a specific, unaddressed defect resulting in harm. A review of cases published last year in the New England Journal of Medicine offers synopses of several potentially illustrative examples.1

Duty of Care Relationship

One of these examples closely mirrors the way AI tools are deployed in contemporary practice. In Sampson v. HeartWise Health Systems Corporation, the software in question was a suite of tests and algorithmic interpretation used to screen for cardiovascular disease. In the case presented, a patient with a family history of early cardiac death visited a clinic using the HeartWise algorithm. After collection of various clinical data, including an ECG and ECG images, these data points were evaluated by the proprietary HeartWise software. The software then returned a recommendation, in this case one of normal baseline cardiovascular risk for the patient.

The physicians working at the clinic reviewed this report and provided recommendations to the patient concordant with the normal result reported by HeartWise. The patient subsequently suffered sudden cardiac death from left ventricular hypertrophy, leading the survivors to file suit against several parties, including HeartWise.

As with all legal cases, there were nuances and limitations of scope specific to the individual arguments filed. The courts ruled against many of the claims brought against HeartWise by the plaintiffs, including that of medical negligence. In dismissing the negligence claim, however, the court’s primary line of reasoning turned on whether the HeartWise developer had ever directly entered into a “duty of care” relationship with the deceased.

The court ruled such a relationship existed only between the deceased and the clinicians interpreting the HeartWise recommendation, not with the developer itself. Therefore, the claim for negligence rested solely against the clinic and the clinicians interpreting and conveying the recommendations from the HeartWise product. Of note, the case did not specifically address whether the HeartWise product itself had been negligently designed. Per the circuit court decision, this may have been a valid line of argument, but the court abstained from commenting on its case-specific merits.

Flawed Software

Contrasting with this is the case of Lowe v. Cerner Corporation. In this instance, a patient had been hospitalized for gallbladder surgery. After surgery, the surgeon placed an order in the electronic health record for continuous pulse oximetry because of the patient’s baseline chronic respiratory conditions. Because of a known defect in the computerized physician order entry system, the order for pulse oximetry was forward-dated to the next morning and was not conveyed to the postoperative care team. During the intervening unmonitored period, the patient suffered a respiratory arrest and a hypoxic brain injury.

Differing from the HeartWise case, the courts ultimately allowed this case against the software vendor to proceed. Crucially, in this instance, the argument was made regarding a specific negligent design within the software product itself. This provided some cover for the surgeon, whose legal team argued that the problem lay in a software flaw that fell short of the “industry standard.” Further, the flaw was known to the company, allowing the case to move forward under a “failure to warn” line of reasoning.

It should be noted that both cases began in district court but had judgments reversed at the circuit and supreme court levels, illustrating the complexity of interpreting the underlying issues. The underlying point, however, is how narrow the circumstances are in which liability can be shifted from the clinician to the software product itself.

Perceived Liability

Unfortunately, the clinician’s burden in this modern age is further complicated by how a jury at trial may perceive any software algorithm or AI. The illustrative cases above involve the distribution of potential liability, but this only raises a subsequent question regarding the effect of AI on liability decisions. An interesting research study, published in NEJM AI, examines these issues in the context of radiology, one of the specialties at the forefront of AI augmentation.2

In this study, surveyed laypersons were asked to provide their opinion regarding legal liability for two hypothetical cases. In the first case, a hypothetical radiologist was the defendant in a stroke case in which an intracranial hemorrhage was missed. As part of the hypothetical patient’s care, intravenous thrombolytics were administered. Because of the missed hemorrhage, the patient suffered hematoma expansion and subsequent brain injury. In the second case, a radiologist failed to detect an abnormality suspicious for lung cancer. As a result of the delayed diagnosis, the patient suffered premature death because of lost opportunity for treatment.

In each of these cases, the laypersons were provided with testimony from hypothetical human experts for both the plaintiff and the defense and asked to provide a “baseline” judgment on liability for the radiologist. The laypersons were then shown the recommendations provided to the radiologist by an AI system and surveyed for any subsequent effect on their liability judgments. The permutations included scenarios in which the AI found the diagnosis but was overruled by the radiologist, as well as scenarios in which the AI also missed the diagnosis. There were also permutations in which the accuracy level of the AI was provided.

The hypothetical radiologist was judged to be at fault by the surveyed laypersons at a rate of about 60 percent in both cases. Having an AI that also missed the diagnosis was protective. Additionally, the level of protection was further increased if the AI was described as very sensitive, missing only one in 100 cases in real-world use. The flip side, however, was that disagreement with the AI increased the perceived liability of the radiologist. This was somewhat ameliorated by adding information pointing out that the AI was not very specific, with a high false-positive rate of 50 percent. Overall, the message is clear, within the limitations of the survey and sampled population: Disagreeing with the AI is riskier than agreeing with it.

It’s unclear whether perceptions of AI will change as the public becomes more familiar with its abilities and limitations, or as AI accuracy advances. Regardless, however strong AI diagnostic capabilities become, there is little indication the ultimate responsibility will lie anywhere other than with the treating clinician. It will require an extra level of care to navigate the recommendations provided by AI and software algorithms, particularly when overruling their conclusions.


Dr. Radecki (@EMLITOFNOTE) is an emergency physician and informatician with Christchurch Hospital in Christchurch, New Zealand. He is the Annals of Emergency Medicine podcast co-host and Journal Club editor.


References

  1. Mello MM, Guha N. Understanding liability risk from using healthcare artificial intelligence tools. N Engl J Med. 2024;390(3):271-278.
  2. Bernstein MH, Sheppard B, Bruno MA, et al. Randomized study of the impact of AI on perceived legal liability for radiologists. NEJM AI. 2025;2(6).


Topics: AI, Clinical Decision Tools, Legal, Liability, Malpractice, Patient Safety, Risk
