Nearly every strategy addressing the diagnosis of pulmonary embolism (PE) revolves around the D-dimer test. These cross-linked fibrin degradation fragments, produced when plasmin cleaves the fibrin mesh, have doomed many an unsuspecting soul to computed tomography pulmonary angiograms (CTPAs). The oft-lamented primary challenge of dependence upon D-dimer is its lack of specificity. The list of underlying conditions that elevate circulating D-dimer is extensive and includes:
ACEP Now: Vol 36 – No 08 – August 2017
- Increasing age
- African-American ethnicity
- Postoperative states
- Autoimmune and connective tissue disorders
- Smoking and illicit drug use
The first item on the list, increasing age, has recently been addressed by the reasonable approach of age-adjusting the D-dimer threshold (age × 10 ng/mL) for patients over the age of 50.1 However, the other items on the list are simply part of the smorgasbord of collateral damage in our quixotic quest to identify every last PE.
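The age-adjustment arithmetic is simple enough to sketch. A minimal illustration, assuming a conventional assay cutoff of 500 ng/mL and the age × 10 ng/mL adjustment for patients over 50 (assay units and cutoffs vary by laboratory, so this is not a clinical tool):

```python
def d_dimer_cutoff(age_years, base_cutoff=500):
    """Return the D-dimer cutoff in ng/mL.

    Age adjustment: for patients over 50, the cutoff becomes
    age x 10 ng/mL; otherwise the standard base cutoff applies.
    """
    if age_years > 50:
        return age_years * 10
    return base_cutoff

# A 78-year-old with a D-dimer of 700 ng/mL exceeds the standard
# 500 ng/mL cutoff but sits below the age-adjusted cutoff of 780,
# sparing that patient a CTPA under the age-adjusted strategy.
print(d_dimer_cutoff(78))  # 780
print(d_dimer_cutoff(40))  # 500
```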
What if there were a better way? What if we could make D-dimer great again? The answer is as unremarkable as it is obvious: Just like age adjustment, simply increase the commonly used dichotomous cutoff. Fewer D-dimer results above the testing threshold will result in fewer CTPAs and, by association, reduced harms and costs from unnecessary imaging. This is hardly a new concept—it’s more that the gradually enlarging body of evidence supporting such a strategy is novel. Clinician researchers from both sides of the Atlantic have called for a doubling of the D-dimer threshold over the last five years, with caveats.
The core concept underpinning this strategy stems from observations relating to the use of D-dimer as a continuous variable rather than a dichotomous cutoff. A recent study evaluated the use of interval likelihood ratios (the probability of a result in that interval for a disease-positive patient divided by the probability of a result in that same interval for a disease-negative patient) for D-dimer and found that even values in the interval between 750 and 1,000 ng/mL decreased the likelihood that a PE was present.2 This implies the testing threshold carries more nuance than the typical dichotomous 500 ng/mL cutoff used to obviate further investigation.
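The mechanics of that update can be made concrete. A brief sketch of the interval likelihood ratio definition given above, combined with the odds form of Bayes' theorem; the numeric inputs here are illustrative assumptions, not figures from the cited study:

```python
def interval_lr(p_interval_given_pe, p_interval_given_no_pe):
    """Interval likelihood ratio:
    P(result in interval | PE) / P(result in interval | no PE)."""
    return p_interval_given_pe / p_interval_given_no_pe

def posttest_probability(pretest_p, lr):
    """Update a pretest probability with a likelihood ratio:
    posttest odds = pretest odds x LR."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical example: a 10 percent pretest probability combined
# with an interval LR of 0.5 for a D-dimer in the 750-1,000 ng/mL
# band yields a posttest probability of about 5.3 percent -- the
# "positive" result still lowered the likelihood of PE.
print(round(posttest_probability(0.10, 0.5), 3))  # 0.053
```

Any interval whose LR falls below 1.0 reduces the posttest probability, which is why results modestly above 500 ng/mL can still argue against PE.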
The most important cognitive consideration is to remember the goal of emergency department evaluation is not to rule out PE. Even the imprecisely named Pulmonary Embolism Rule-Out Criteria (PERC) do not strictly rule out PE. The various algorithms and decision instruments are built around the idea of reducing the posttest likelihood of PE to a point at which the harms of diagnosis and overdiagnosis outweigh the mortality benefit. Models for PERC were developed around a test threshold of approximately 1.8 percent, meaning the acceptable miss rate for PE based on their assumptions is nearly 1 in 50. The difficulty with such models, however, is their dependence on estimates for prevention of morbidity and mortality. Unfortunately, the foundational evidence for these estimates can be traced back to a 1960 parallel-group analysis of 35 hospitalized patients with acute right heart failure and pulmonary infarction.3 It is a fantastic leap to generalize from submassive PE to the segmental and subsegmental disease commonplace in modern practice, but these and other antiquated observational data are our source for estimates of harm from missed diagnoses.
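The test-threshold logic described above reduces to a single comparison. A toy sketch, assuming the roughly 1.8 percent threshold from the PERC derivation (illustrative only, not a decision instrument):

```python
# Below the test threshold, the expected harms of further testing and
# overdiagnosis outweigh the expected benefit of treatment, so the
# workup stops; above it, testing continues.
TEST_THRESHOLD = 0.018  # ~1.8 percent, i.e., a miss rate near 1 in 50

def further_testing_indicated(posttest_p, threshold=TEST_THRESHOLD):
    """True if residual PE probability still exceeds the threshold."""
    return posttest_p > threshold

print(further_testing_indicated(0.017))  # False: stop the workup
print(further_testing_indicated(0.050))  # True: continue testing
```

The comparison is trivial; the hard part, as the paragraph above notes, is that the threshold itself rests on half-century-old estimates of harm.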