The Most Important Question in Medicine

“What else could this be?”

Today I am going to tell you the single best question you can ask any doctor – the one that has saved my butt countless times throughout my career – the one that every attending physician should be asking every intern and resident when they present a new case. That question: “What else could this be?”

I know, I know – when you hear hoofbeats think horses, not zebras. I get it. But sometimes we get so good at our jobs, so good at recognizing horses, that we stop asking ourselves about zebras at all. You see this in a phenomenon known as “anchoring bias” – where physicians, when presented with a diagnosis, tend to latch on to that diagnosis – paying attention to data that supports it, ignoring data that points in other directions.

That special question “what else could this be” breaks through that barrier.

It forces you, the medical team, everyone, to go through the exercise of real, old-fashioned differential diagnosis. And I promise, if you do this enough, at some point it will save someone’s life.

Though the concept of anchoring bias in medicine is broadly understood, it hasn’t been broadly studied – until now, with a study appearing in JAMA Internal Medicine.

Here’s the setup.

The authors hypothesized that there would be substantial anchoring bias when patients with congestive heart failure (CHF) presented to the emergency department with shortness of breath if the triage “visit reason” section mentioned CHF. We’re talking about the subtle difference between a visit reason that says only “shortness of breath” and one that also mentions CHF.

People with CHF can be short of breath for lots of reasons – CHF exacerbation comes immediately to mind and it should. But there are obviously lots of answers to that “what else could this be” question. Pneumonia, pneumothorax, heart attack, COPD, and of course pulmonary embolism.

The authors leveraged the nationwide VA database, allowing them to examine data from over 100,000 patients presenting to various VA EDs with shortness of breath. They then looked for particular tests that would suggest the doctor was thinking about pulmonary embolism. The question, then: would mentioning CHF in that little “visit reason” section influence the likelihood of testing for PE?

I know what you’re thinking – not everyone who is short of breath needs an evaluation for PE – and the authors did a nice job accounting for a variety of factors that might predict a PE workup: malignancy, recent surgery, elevated heart rate, low O2 saturation, and so on. Of course, some of those same factors might also predict whether that triage nurse writes “CHF” in the visit reason section. All of these things need to be accounted for statistically, and were, but, as the unofficial Impact Factor motto reminds us, “there are always more confounders.”

But let’s dig into the results. I’m going to give you the raw numbers first. There were 4392 people with CHF whose visit reason section, in addition to noting shortness of breath, explicitly mentioned CHF. Of those, 360 had PE testing and two had a PE diagnosed during that ED visit. So that’s around an 8% testing rate and a roughly 0.5% hit rate for testing. But 43 people, presumably not tested in the ED, had a PE diagnosed within the next 30 days. Assuming those PEs were present at the ED visit, that means the ED missed 95% of the PEs in the group with that “CHF” label attached to them.

Let’s do the same thing for those whose visit reason just said “shortness of breath”.

Of the 103,627 people in that category, 13,886 were tested for PE, and 231 of those tested positive. So that is an overall testing rate of around 13% and a hit rate of 1.7%. A total of 1,081 of these people had a PE diagnosed within 30 days. Assuming those PEs were actually present at the ED visit, the docs missed 79% of them.

One other thing to notice from these data: the overall PE rate (diagnosed by 30 days) was basically the same in both groups. That “CHF” label does not really flag a group at lower risk of PE.
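The arithmetic above is easy to reproduce. Here is a minimal sketch using only the raw counts quoted in this commentary, under the assumption that the 30-day PE count represents the total number of PEs in each group (ED diagnoses included), so the “missed” fraction is the share of PEs found only after the ED visit:

```python
# Sketch: recompute the unadjusted rates from the raw counts quoted above.
# Assumption (not stated explicitly in the study text): the 30-day PE count
# is the total number of PEs, including those caught during the ED visit.

def pe_workup_summary(n_patients, n_tested, n_pe_ed, n_pe_30d):
    """Return (testing rate, hit rate among tested, fraction of PEs missed in the ED)."""
    testing_rate = n_tested / n_patients
    hit_rate = n_pe_ed / n_tested
    missed_fraction = (n_pe_30d - n_pe_ed) / n_pe_30d
    return testing_rate, hit_rate, missed_fraction

# Visit reason mentioned CHF alongside shortness of breath
chf = pe_workup_summary(4392, 360, 2, 43)
# Visit reason said only "shortness of breath"
sob = pe_workup_summary(103_627, 13_886, 231, 1_081)

print(f"CHF label: tested {chf[0]:.0%}, hit rate {chf[1]:.1%}, missed {chf[2]:.0%}")
print(f"SOB only:  tested {sob[0]:.0%}, hit rate {sob[1]:.1%}, missed {sob[2]:.0%}")

# The overall 30-day PE rate is nearly identical in the two groups,
# which is the point of the comparison: the label, not the risk, differed.
print(f"30-day PE rate: {43/4392:.1%} (CHF label) vs {1_081/103_627:.1%} (SOB only)")
```

Run it and the rounded figures in the text fall out: roughly 8% versus 13% testing, and about 95% versus 79% of PEs missed in the ED, despite near-identical 30-day PE rates of about 1% in both groups.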

Yes, there are a LOT of assumptions here – including that all PEs actually present at the ED visit were caught within 30 days – but the numbers do paint a picture. In this unadjusted analysis, it seems that the “CHF” label leads to less testing and more missed PEs. Classic anchoring bias.

The adjusted analysis, accounting for all those PE risk factors, really didn’t change these results. You get nearly the same numbers, and thus nearly the same conclusions.

Now, the main missing piece of this puzzle is in the mind of the clinician. We don’t know whether they didn’t consider PE, or whether they considered PE but thought it unlikely. And in the end, it’s clear that the vast majority of people in this study did NOT have PE (though I suspect not all had a simple CHF exacerbation). But this type of analysis is useful not only for the empirical evidence it provides of the clinical impact of anchoring bias, but also because it reminds us all to ask that all-important question: What else could this be?

A version of this commentary first appeared on Medscape.com