An official publication of the American College of Emergency Physicians
Artificial Intelligence in the ED: Ethical Issues

By Kenneth V. Iserson, MD, MBA; Eileen F. Baker, MD, PhD; Paul L. Bissmeyer, Jr., DO; Arthur R. Derse, MD, JD; Haley Sauder, MD; and Bradford L. Walters, MD | on January 3, 2024 | 1 Comment

Artificial intelligence (AI) may radically alter the provision of emergency medicine (EM) over the coming decades. Before it does, we must consider this game-changing technology’s effect on emergency physicians and their patients. As we become increasingly dependent on AI, emergency physicians may lose their professional autonomy, decision-making abilities, and technical skills. Complex AI programs may become so embedded in the emergency-medicine decision-making process that emergency physicians may not be able to explain or understand them, and patients may not be able to refuse their use or refute their findings, even when ethical dilemmas are present.1

ACEP Now: Vol 43 – No 01 – January 2024

Overreliance on AI

Overreliance on AI clinical decision aids may lead to a decline in emergency physicians’ diagnostic and decision-making skills, potentially compromising patient care if the emergency physician does not recognize an erroneous AI response. AI programs’ ability to interpret medical imaging and cytopathology often exceeds human capacity, because these programs can perform repetitive, complex, or intricate tasks without fatigue. Advocates assert that “losing certain skills to AI, much like the advent of calculators and the internet, is not only inevitable but also beneficial to human progress.”1

However, the erosion of human expertise is problematic if the decline in physicians’ diagnostic and decision-making skills results in clinical errors if (or when) the technology fails.2 Educators find it challenging to instill essential critical-thinking skills when students rely on AI tools to solve problems for them.3 Emergency physicians may become human mechanics, performing procedures, giving medications, and admitting patients only when instructed to do so by the AI program. Many fear that a declining emphasis on critical thought and the basic skills that comprise the art of medicine not only will compromise patient care, but also will contribute to an anti-intellectual “dumbing down” of medical practice.4

AI has the potential to reduce the time emergency physicians spend on many repetitive tasks. AI can analyze patterns of patient care and recommend more efficient patient throughput. The emergency physician might spend the time saved with patients, resulting in better patient satisfaction. However, that same gain in efficiency might prompt the health care system to demand increased patient throughput, forcing emergency physicians to see more patients rather than spend more time with them.5

AI may suggest diagnoses for complex constellations of symptoms and findings, a potential boon to emergency physicians. The danger is that, instead of consulting the literature for complex cases, the emergency physician will regard AI as an authoritative source. Emergency physicians may feel they are ceding autonomy, both in how they choose to practice and in the complex, challenging analyses they hand over to AI. Rather than easing burnout, AI may instead become a demanding taskmaster and an enigmatic diagnostic and treatment standard, leading to understandable resistance among emergency physicians.


Topics: Artificial Intelligence, Ethics


One Response to “Artificial Intelligence in the ED: Ethical Issues”

  1. Todd B. Taylor, MD, FACEP | January 8, 2024

    The analysis presented in this article is germane to health care for the half of the world’s population that has access to it. For the other 4.5 billion people, AI may become their sole source for the full spectrum of health care services, including preventative, diagnostic, therapeutic, and behavioral health care. To withhold this emerging technology from those who might otherwise have none seems narrow-minded and unwarranted. Letting the perfect be the enemy of the good, a common consequence of American health care, need not be perpetuated across all populations.
    To that end, one can imagine “proceduralists” (doing the necessary hands-on work) aided by AI bringing health care to underserved and unserved individuals. Perhaps sooner rather than later, health care kiosks will have the ability to perform even sophisticated diagnostics and then deliver therapy (e.g., medication) even without the benefit of a human practitioner. Certainly some will suffer from incorrect diagnosis or prescribed therapy. But that also happens on a regular basis in all sorts of health care settings today.
    Technology continues to eliminate entire swaths of the services industry. Trucking & transportation will soon no longer require a human. AI will also give you that perfect haircut. Traditional grocery stores will soon be replaced by machine picked items, delivered to your door by autonomous vehicles in less than an hour.
    As usual, healthcare will lag behind other industries, but technology will slowly chip away at services where AI advancements provide superior results. Diagnostic radiology & pathology are ripe for the picking.
    Emergency Medicine may be further down that list, but automated triage & other parts of the ED process will make what we do now look like the typewriter . . . “type-what” said “Gen Alpha”. And, who knows what Gen Beta (2025-2039) will have never heard of. As a “Boomer” myself I’ll probably be dead by then, but hopefully not a victim of MedAI.


Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies. ISSN 2333-2603