The legal landscape surrounding the use of artificial intelligence (AI) within the medical setting remains in a state of flux. There are a multitude of issues regarding liability yet to be tested, balancing the responsibility for errors between the clinician, the vendor of an AI product, and the hospital deploying an AI product. Each of these players has the potential to be included in the wide net typically cast during early malpractice filings.
ACEP Now: September 2025

That said, the ultimate responsibility is far more likely to rest on the clinician involved. Medical AI, despite its occasional superhuman performance on various demonstrations of diagnostic skill, is still simply a software product. Shifting responsibility onto a software product requires a different legal test, part of which involves demonstrating a specific, unaddressed defect resulting in harm. A review of cases published last year in the New England Journal of Medicine includes synopses of several potentially illustrative examples.1
Duty of Care Relationship
One of these examples closely mirrors how AI tools are deployed in contemporary practice. In Sampson v. HeartWise Health Systems Corporation, the software in question was a suite of tests and algorithmic interpretation used to screen for cardiovascular disease. A patient with a family history of early cardiac death visited a clinic using the HeartWise algorithm. Various clinical data, including an ECG and ECG images, were collected and evaluated by the proprietary HeartWise software, which returned a recommendation, in this case that the patient was at normal baseline cardiovascular risk.
The physicians working at the clinic reviewed this report and provided recommendations to the patient concordant with the normal result from HeartWise. Subsequently, the patient suffered sudden cardiac death from left ventricular hypertrophy, and the survivors filed suit against parties including HeartWise.
As with all legal cases, there were nuances and limitations of scope specific to the individual arguments filed. The courts ruled against many of the claims brought against HeartWise by the plaintiffs, including that of medical negligence. However, in dismissing the negligence claim, the primary line of reasoning involved whether the HeartWise developer ever directly entered into a “duty of care” relationship with the deceased.
The court ruled such a relationship existed only between the deceased and the clinicians interpreting the HeartWise recommendation, not with the developer itself. The negligence claim therefore rests solely against the clinic and the clinicians interpreting and conveying the recommendations from the HeartWise product. Of note, the case did not specifically address whether the HeartWise product itself had been negligently designed; the Circuit Court noted this may have been a valid line of argument but abstained from commenting on its case-specific merits.



