In each of these cases, the laypersons were given testimony from hypothetical human experts for both the plaintiff and the defense and asked to render a “baseline” judgment on the radiologist's liability. They were then shown the recommendations an AI system had provided to the radiologist and surveyed for any subsequent effect on their liability judgment. The permutations included scenarios in which the AI detected the missed diagnosis but was overruled by the radiologist, scenarios in which the AI also missed the diagnosis, and scenarios in which the AI's accuracy level was disclosed.
In both cases, the surveyed laypersons judged the hypothetical radiologist to be at fault about 60 percent of the time. Having an AI that also missed the diagnosis was protective, and the level of protection increased further when the AI was described as very sensitive, missing only one in 100 cases in real-world use. The flip side, however, was that disagreement with the AI increased the perceived liability of the radiologist. This was somewhat ameliorated by adding information pointing out that the AI was not very specific, with a high false-positive rate of 50 percent. Overall, the message is clear, within the limitations of the survey and sampled population: Disagreeing with the AI is definitely riskier than agreeing with it.
It’s unclear whether perceptions of AI will change as the public becomes more familiar with its abilities and limitations, or as AI accuracy advances. Regardless, however strong AI diagnostic capabilities become, there is little indication the ultimate responsibility will lie anywhere other than with the treating clinician. Navigating the recommendations of AI and software algorithms will require an extra level of care, particularly when overruling their conclusions.
Dr. Radecki (@EMLITOFNOTE) is an emergency physician and informatician with Christchurch Hospital in Christchurch, New Zealand. He is the Annals of Emergency Medicine podcast co-host and Journal Club editor.
References
- Mello MM, Guha N. Understanding liability risk from using healthcare artificial intelligence tools. N Engl J Med. 2024;390(3):271-278.
- Bernstein MH, Sheppard B, Bruno MA, et al. Randomized study of the impact of AI on perceived legal liability for radiologists. NEJM AI. 2025;2(6).