An official publication of the American College of Emergency Physicians

Doctors Beat Online Symptom Checkers in Diagnosis Contest

By Kathryn Doyle | October 31, 2016

(Reuters Health) – Doctors are much better than symptom-checker programs at reaching a correct diagnosis, though they are not perfect and might benefit from using algorithms to supplement their skills, a small study suggests.


In a head-to-head comparison, doctors given the same medical history and symptom information that was entered into the symptom checkers got the diagnosis right 72 percent of the time, compared with 34 percent for the apps.

The 23 online symptom checkers, some accessed via websites and others available as apps, included those offered by WebMD and the Mayo Clinic in the U.S. and the Isabel Symptom Checker in the U.K.

“The current symptom checkers, I was not surprised do not outperform doctors,” said senior author Dr. Ateev Mehrotra of Harvard Medical School in Boston.

But in reality, computers and human doctors may both be involved in a diagnosis rather than pitted against each other, Dr. Mehrotra told Reuters Health.

The researchers used a web platform called Human Dx to distribute 45 clinical vignettes—sets of medical history and symptom information—to 234 physicians. Doctors could not do a physical examination on the hypothetical patient or run tests; they had only the information provided.

Fifteen vignettes described acute conditions, 15 were moderately serious, and 15 required low levels of care. Most described commonly diagnosed conditions, while 19 described uncommon conditions. Doctors submitted their answers as free-text responses with potential diagnoses ranked in order of likelihood.

Compared with the symptom checkers given the same information, physicians ranked the correct diagnosis first more often for every case.

Doctors also got it right more often for the more serious conditions and the more uncommon diagnoses, while computer algorithms were better at spotting less serious conditions and more common diagnoses, according to the results published October 10 in a research letter in JAMA Internal Medicine.

“In medical school, we are taught to consider broad differential diagnoses that include rare conditions, and to consider life-threatening diagnoses,” said Dr. Andrew M. Fine of Boston Children’s Hospital, who was not part of the new study. “National board exams also assess our abilities to recognize rare and ‘can’t miss’ diagnoses, so perhaps the clinicians have been conditioned to look for these diagnoses,” he said.

“Physicians do get it wrong 10 to 15 percent of the time, so maybe if computers were augmenting them the outcome would be better,” Dr. Mehrotra said.

“In a real-world setting, I could envision MD plus algorithm vs MD alone,” Dr. Fine told Reuters Health by email. “The algorithms will rely on a clinician to input physical exam findings in a real-world setting, and so the computer algorithm alone could not go head to head with a clinician.”


Topics: Conditions, Diagnosis, Human Dx, Patient Care, Practice Management, Symptoms, Technology
