So now that I’m an academic attending, how do I feel about the “change from below” paradigm that I suggested four years ago as an upstart medical student with a questionable grip on reality? Pretty great, actually. Am I really that excited to have my medical students try to teach me something during my hectic shifts? The answer is heck yes!
ACEP Now: Vol 35 – No 09 – September 2016
However, there is a caveat. (Of course, there must be a caveat. Otherwise, your protagonist learned nothing, and we wouldn’t want that, would we?)
The other day, a mere month or so after graduating residency, my personal arc with HINTS and FOAM came full circle in heroic fashion. Now, my own free podcast (FOAMcast) has become required listening for the incoming interns at the very residency program where I trained. My cohost, Lauren Westafer
(@LWestafer), recently recorded an episode in which we briefly discussed the HINTS exam. Neither of us, it turns out, relies on that exam the way we once might have. Why? Because a closer look at the literature reveals that the astoundingly good test characteristics of the HINTS exam that everyone likes to herald have never been shown to hold for emergency medicine providers. The studies we all cite when extolling the virtues of the HINTS exam were actually done by and on neurologists with particular expertise in the subfield of cerebellar strokes and vertigo. Therefore, while we both use the HINTS exam as one part of our cerebellar testing, neither of us feels comfortable relying on that exam alone. That’s just not our read of the literature.
However, that’s not what a newly minted intern who listens to our show heard recently. The current chief resident texted me after he gave the new interns a lecture on the HINTS exam. One intern had raised his hand and said that FOAMcast had said that emergency medicine providers should “never” do a HINTS exam. Hey now, that’s not what we said, but it illustrated a great point: when junior clinicians read a paper, or hear a podcast or read a blog post, they frequently gloss over important details, including the setting and inclusion criteria of the studies being discussed. As a result, they often fail to properly consider whether the research in question actually applies to the particular (real) patient in front of them.
A great example of this is the Pulmonary Embolism Rule-Out Criteria (PERC). Most people remember that the PERC rule applies to low-pretest-probability patients younger than 50 years old. However, can a 16-year-old PERC out? The answer, one most people don’t realize, is no. The youngest patients in the dataset Kline et al. used were 17 years old. This is just one example among countless others.