An official publication of: American College of Emergency Physicians
Artificial Intelligence in the ED: Ethical Issues

By Kenneth V. Iserson, MD, MBA; Eileen F. Baker, MD, PhD; Paul L. Bissmeyer, Jr., DO; Arthur R. Derse, MD, JD; Haley Sauder, MD; and Bradford L. Walters, MD | on January 3, 2024 | 1 Comment

As AI becomes involved in clinical EM decisions, patient autonomy and shared decision making might suffer. If emergency physicians rely solely on AI-generated suggestions based only on objective data, they are likely to recommend treatments or interventions that are not consistent with the patient’s values and preferences.

ACEP Now: Vol 43 – No 01 – January 2024

An example is shared decision-making about hospitalization for chest pain with a moderate-risk HEART score. Currently, the emergency physician may calculate the HEART score and then bring the result to the patient for a discussion in which the patient can heavily influence the follow-up plan. Such shared decision-making succeeds because the physician understands, and can explain to the patient, how the data were applied and how the statistics and risks were generated. As AI models become more complex, clinicians may not be able to clearly explain why recommendations are being made, and patients may no longer be able to rely on the basis for their emergency physicians’ recommendations to make an informed decision.
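The transparency the passage describes is possible precisely because the HEART score is a simple, auditable sum: each of five components (History, ECG, Age, Risk factors, Troponin) contributes 0 to 2 points, and the total maps to a risk tier. The sketch below is illustrative only, not a clinical tool; the component thresholds follow the published HEART rule, but the function name and inputs are our own simplification.

```python
# Illustrative sketch of the HEART score sum described above.
# Each component scores 0-2 per the published rule; this is a
# teaching example, not a validated clinical calculator.

def heart_score(history, ecg, age, risk_factors, troponin_ratio):
    """Return (total, risk tier).

    history, ecg: 0-2 points, assigned clinically per the rule.
    age: years. risk_factors: count of cardiac risk factors
    (known atherosclerotic disease scores 2 regardless of count).
    troponin_ratio: measured troponin / upper limit of normal.
    """
    age_pts = 0 if age < 45 else (1 if age < 65 else 2)
    rf_pts = 0 if risk_factors == 0 else (1 if risk_factors <= 2 else 2)
    trop_pts = 0 if troponin_ratio <= 1 else (1 if troponin_ratio <= 3 else 2)
    total = history + ecg + age_pts + rf_pts + trop_pts
    tier = "low" if total <= 3 else ("moderate" if total <= 6 else "high")
    return total, tier

# Example: moderately suspicious history (1), nonspecific ECG changes (1),
# a 58-year-old (1), two risk factors (1), normal troponin (0)
print(heart_score(1, 1, 58, 2, 1.0))  # -> (4, 'moderate')
```

Because every point in the total can be traced to a named input, the physician can walk the patient through exactly why the score landed in the moderate tier; a deep-learning risk model offers no comparable line-by-line accounting.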

Implementation of new technology in medicine forces consideration of how patients and physicians will interact with it. A rarely discussed but vital ethical issue is that some patients prefer interacting with a human rather than an algorithm, and they should retain the right to refuse AI's application in their care. Emergency physicians must provide patients with sufficient information (e.g., how AI is involved, its consequences, and its significance) so that they can decide whether to allow AI to be part of their care.6 Such consent necessarily requires that AI not be so embedded in the EM process that its use cannot be refused; patients must be able to challenge or refuse an AI-generated recommendation. This helps ensure that the humanistic nature of medicine prevails and that EM care is tailored to patient preferences and values.

AI’s role in patient-care decisions involving ethical dilemmas, including those about the end of life, is unclear and problematic. In the early stages of AI development, and for decades to come, trained professionals, usually emergency physicians, will need to provide counseling to patients and families. AI cannot replace physician input in the nuanced and complex ethical decisions that need to be made. However, AI may be able to help frame questions that can guide physicians in determining therapies and predicting mortality. For example, in patients at a high risk of death within six months, AI helped to reduce the use of chemotherapy by three percent.7 A study of AI-triggered palliative-care decisions found a higher use of palliative-care consultations and a reduced hospital readmission rate.8 AI will undoubtedly be useful in providing emergency physicians with ethical guidance, but it cannot make ethical decisions itself.


Topics: Artificial Intelligence, Ethics



One Response to “Artificial Intelligence in the ED: Ethical Issues”

  1. Todd B. Taylor, MD, FACEP (January 8, 2024)

    Analysis presented in this article is germane to healthcare for the half of the world’s population that has access to it. For the other 4.5 billion people, AI may become their sole source for the full spectrum of healthcare services, including preventative, diagnostic, therapeutic, and behavioral healthcare. To withhold this emerging technology from those who might otherwise have none seems narrow-minded & unwarranted. Letting the “perfect be the enemy of the good,” a common consequence of American healthcare, need not be perpetuated to all populations.
    To that end, one can imagine “proceduralists” (doing the necessary hands-on work) aided by AI bringing healthcare to underserved & unserved individuals. Perhaps sooner rather than later, healthcare kiosks will have the ability to perform even sophisticated diagnostics & then deliver therapy (e.g., medication) even without the benefit of a human practitioner. Certainly some will suffer from an incorrect diagnosis or prescribed therapy. But that also happens on a regular basis in all sorts of healthcare settings today.
    Technology continues to eliminate entire swaths of the services industry. Trucking & transportation will soon no longer require a human. AI will also give you that perfect haircut. Traditional grocery stores will soon be replaced by machine picked items, delivered to your door by autonomous vehicles in less than an hour.
    As usual, healthcare will lag behind other industries, but technology will slowly chip away at services where AI advancements provide superior results. Diagnostic radiology & pathology are ripe for the picking.
    Emergency Medicine may be further down that list, but automated triage & other parts of the ED process will make what we do now look like the typewriter... “type-what?” says Gen Alpha. And who knows what Gen Beta (2025-2039) will have never heard of. As a “Boomer” myself, I’ll probably be dead by then, but hopefully not a victim of MedAI.

Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial technologies or similar technologies. ISSN 2333-2603