A blood test to avoid a spinal tap in an infant? Yes please.


For the video version of this post, click here. Here’s a secret for any non-clinicians watching this. Docs are terrified of fevers in very young infants - not because they usually turn out horribly (most are benign viral infections) but because fever in an infant less than 4 weeks old requires a spinal tap to rule out meningitis. It would be great to have a test that could reliably and quickly tell us whether a baby’s fever is due to a bacterial infection that needs prompt and aggressive treatment, or a virus that can be managed at home.

Enter procalcitonin. It’s pretty much undetectable in the blood of healthy kids, but goes up in the setting of inflammation.  Interestingly, it seems to go up much more in the setting of bacterial infection than viral infection.  Could we use procalcitonin level to triage infants with fevers? Could we avoid blood cultures and spinal taps?

An article appearing in JAMA Pediatrics suggests that we could, but I’m not sure the evidence is so clear. Here are the basics: researchers in France enrolled around 2000 babies from 7 to 91 days of life who presented to the emergency room with fever. The workup was left to the discretion of the treating physician, but a blood sample for procalcitonin was taken and blindly measured later.

As you might expect, only about 10% of the babies had bacterial infections (which included urinary tract infections). Bacteremia or meningitis was found in less than 1%. I want to focus on that last group, as UTIs can be pretty easily diagnosed by urine dipstick. Could procalcitonin rule out blood or cerebrospinal fluid infection? Well, using a cutoff of 0.3 ng/ml, babies with procalcitonin above that level were 31 times more likely to have bacterial meningitis or bacteremia than babies below it. That sounds pretty good.

But to really evaluate this test, we have to look at the sensitivity, which was reported at 90%. That means a 0.3 ng/ml cutoff captures 90% of those terrible infections. That’s good, but not great - and as the parent of a one-month-old myself, not a risk I’d be willing to take. Lowering the threshold to 0.12 ng/ml apparently would capture all the bad infections, but at that point the false-positive rate would be really high.
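To make that trade-off concrete, here’s a minimal sketch of how sensitivity and the false-positive rate move in opposite directions as the cutoff drops. The procalcitonin values are invented for illustration - they are not the study’s data.

```python
# A minimal sketch of the sensitivity / false-positive trade-off at two
# procalcitonin cutoffs. These values are invented for illustration --
# they are NOT the study's patient-level data.

def test_performance(levels_ng_ml, has_infection, cutoff):
    """Return (sensitivity, false-positive rate) at a given cutoff."""
    tp = sum(1 for x, sick in zip(levels_ng_ml, has_infection) if sick and x >= cutoff)
    fn = sum(1 for x, sick in zip(levels_ng_ml, has_infection) if sick and x < cutoff)
    fp = sum(1 for x, sick in zip(levels_ng_ml, has_infection) if not sick and x >= cutoff)
    tn = sum(1 for x, sick in zip(levels_ng_ml, has_infection) if not sick and x < cutoff)
    return tp / (tp + fn), fp / (fp + tn)

# Hypothetical procalcitonin levels (ng/ml) and true infection status
levels = [0.05, 0.10, 0.15, 0.25, 0.40, 2.10, 0.08, 0.35, 5.00, 0.20]
sick   = [False, False, True, False, False, True, False, False, True, False]

for cutoff in (0.3, 0.12):
    sens, fpr = test_performance(levels, sick, cutoff)
    print(f"cutoff {cutoff} ng/ml: sensitivity {sens:.0%}, false-positive rate {fpr:.0%}")
```

Run it and the lower cutoff wins on sensitivity but loses on false positives - exactly the bind the authors face.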

The manuscript did demonstrate that procalcitonin is better than some alternatives like C-reactive protein and white blood cell count. But one thing they don’t report? Clinician suspicion. They mention in the methods section that the treating physician classified each infant as well, minimally ill, moderately ill, or very ill. How did those assessments perform? We aren’t told.

It basically boils down to how many spinal taps we can avoid without missing any cases of meningitis. It may be reasonable to feel comfortable sending the baby home when the procalcitonin level is extremely low, but I suspect in those cases the physicians were pretty comfortable anyway. For me to push for the broad adoption of this test, I want to see that it tells us something physicians don’t already know - and this manuscript came up short on that front. But if that data does finally arrive, well, we’ll all be thankful.

 

Pre-exposure prophylaxis for HIV: panacea or Pandora's box?


For the video version of this post, click here. Pre-exposure prophylaxis for HIV - PrEP - basically entails taking a medication, typically a combination pill of tenofovir and emtricitabine, to reduce the risk of acquiring HIV. PrEP is highly efficacious, with several randomized trials demonstrating a sharp reduction in transmission rates when PrEP is used in high-risk populations. In fact, among men who have sex with men, you need to provide PrEP to about 12 people to prevent one HIV infection. That’s a very low number needed to treat for such a costly disease.
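As an aside, the arithmetic behind a number needed to treat is simple: it’s the reciprocal of the absolute risk reduction. A minimal sketch, using illustrative incidence figures rather than the trials’ exact numbers:

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
# The incidence figures below are illustrative, not the trials' data.

control_risk = 0.090  # hypothetical HIV incidence without PrEP
treated_risk = 0.005  # hypothetical incidence with PrEP

arr = control_risk - treated_risk  # absolute risk reduction
nnt = 1 / arr                      # people treated to prevent one infection

print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")  # NNT comes out to about 12
```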

But efficacy isn’t the same as effectiveness. Efficacy is an ideal. Clinical trials follow their patients extremely closely, ensure they are taking their medication, and select their participants very carefully.  Effectiveness is the real-world performance of a drug, and, until now, we haven’t had great data to see how PrEP would work in practice.

And there have been concerns. PrEP should be used with a condom; it doesn’t replace one. It can’t be taken immediately prior to risky sexual behavior - it’s a daily medication. There is a low, but real, risk of kidney dysfunction with the drug. But the real controversy surrounds a small but vocal group of physicians and AIDS activists who suggest that PrEP will ruin so-called “condom culture”, opening the door to less safe sex, increased sexually transmitted infections, and even an increase in HIV transmission rates.

That’s why this article, appearing in JAMA Internal Medicine, is so important. The study followed 557 men who have sex with men and transgender women at three clinics across the US for about a year. All were HIV negative but at increased risk of HIV infection, and all were provided PrEP free of charge.

Adherence was high - over 80% of individuals had therapeutic tenofovir levels when checked. Encouragingly, adherence was highest among those who engaged in the highest risk sexual behaviors. That’s right - our patients at risk understand they are at risk.

Over the course of the study, there were 2 new HIV infections, both in men with subtherapeutic levels of the drug. Based on baseline rates, we would have expected around 11. But that impressive result is not what really matters in this study.

Rates of receptive anal sex without a condom didn’t change at all over the course of the study.  Sexually transmitted infection rates didn’t change. In other words, the availability of a drug that can really prevent HIV transmission didn’t open some time portal to 1983. PrEP did not destroy condom culture, not that condom culture is all that pervasive. This is one of those situations where we have to respect the intelligence of our patients.  Educate them clearly on how this medication is to be used, and trust that they, as consenting adults, will use the drug the right way.

BMI is Dead. Long Live BMI!


For the video version of this post, click here. Body mass index. Since the term entered the medical consciousness in 1972, it has served admirably as a proxy measure for body fat, since body fat itself is sort of tough to measure. But it is demonstrably imperfect. Because it relates weight to height, it has no ability to distinguish between fat mass and muscle mass, leading to so-called “obesity paradoxes” that are really no such thing. We need something better than BMI.

That something better, according to an article appearing in the Annals of Internal Medicine, may be the ratio of waist to hip circumference.
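Both measures, for what it’s worth, are trivial to compute - the fight is over what they capture, not how hard they are to calculate. A quick sketch; the example person is hypothetical, and the “high” label assumes the commonly used cutoff of roughly 0.90 for men:

```python
# Two competing measures of adiposity. BMI collapses weight and height
# into one number; waist-to-hip ratio targets central obesity directly.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def waist_to_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist circumference divided by hip circumference (same units)."""
    return waist_cm / hip_cm

# A hypothetical man with a 'normal' BMI who is still centrally obese
print(f"BMI: {bmi(70, 1.75):.1f}")               # ~22.9, normal range
print(f"WHR: {waist_to_hip_ratio(95, 98):.2f}")  # ~0.97, high by common cutoffs
```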

The study has some dramatic results.  Using data from the National Health and Nutrition Examination Survey, researchers examined around 15,000 participants. Each participant had a BMI and a waist-to-hip ratio.  It turned out that waist-to-hip ratio was a much better predictor of mortality and cardiovascular disease than BMI.

In fact, once you accounted for waist-to-hip ratio, BMI didn’t predict mortality at all.  What this suggests is that all those studies linking BMI to bad outcomes were secretly studies linking central obesity to bad outcomes (because BMI and central obesity are correlated).  But when you introduce a better measure of central obesity, the utility of BMI goes out the window. It’s a proxy measure without a home.

It turned out that, among men with normal BMIs, those with a high waist-to-hip ratio had an 87% higher risk of death. Women with normal BMIs and central obesity had a 50% higher risk of death. Perhaps more interesting, men with normal BMIs and central obesity had around twice the risk of death of men who were overweight or obese by BMI but who didn’t have central obesity. Women’s results went in the same direction, though the magnitude wasn’t as great.

So, do we give up on BMIs altogether?  Not necessarily.  Waist-to-hip ratio does seem to be the superior risk marker, but it’s not as easy to measure. These data were collected by individuals trained to do these measurements the same way, every time - it may not be possible to do that in the doctor’s office and get reliable results. Though maybe we could start employing tailors.

Also, remember that BMI still captures a lot of this data. The finding that individuals with normal BMI but high waist-to-hip ratio have increased mortality is compelling, but only 11% of men and 3% of women fit in this category. In other words, chances are if you have a normal BMI you’re fine. That said, it seems clear now that we need to find something better than BMI, something that helps distinguish between fat mass and muscle in a way that BMI cannot. Whether a technological solution, such as bioimpedance analysis, or an anthropometric solution like the one in this study takes the baton, my intuition is that BMI now has a shelf life.

 

Must love dogs? The link between canines and childhood asthma.


For the video version of this post, click here. Childhood asthma is a major concern, affecting around 8% of children in the US, and rates are on the rise. The “hygiene hypothesis” suggests that our increasingly clean lifestyles are altering the way our immune systems develop. Without constant, low-level exposure to microbes, we shift toward a more allergic phenotype. Having recently adopted a puppy, I can personally tell you that dogs are a constant source of microbe exposure. But to date, data on early childhood exposure to dogs have been mixed. Does the dander promote allergy and thus asthma, or do their lovable, bacteria-filled mouths offer some form of protection?

It’s a tough question to answer. You don’t want to rely on self-report of prior dog ownership - that can be inaccurate.  But where in the world can you find a registry of every dog owner?

Well, Sweden, it turns out. In addition to having a national health care system and data registry, they also require registration of all pet dogs. Apparently, something like 80% of all dogs in the country are part of the registry, so finally we are in a position to determine whether dog ownership increases or reduces the risk of childhood asthma.

The study, appearing in JAMA Pediatrics, examined the roughly 1 million children born in Sweden between 2001 and 2010. In their seventh year of life, 4.2% of them had an asthma attack. Overall, around 8% of kids had a dog in the home during their first year of life.  So how did these percentages relate? Well, the kids with the dogs were about 8% less likely to develop asthma.

But wait a minute - there are a bunch of confounders at play here. What if parents with asthma avoid getting dogs and are more likely to have kids with asthma? What if an older sibling with asthma prompts the family to get rid of the dog? What if people in lower socioeconomic strata are less likely to own a dog and more likely to develop asthma for other reasons? The authors did a commendable job of controlling for these factors, actually, and if anything the protective effect of dog ownership grew.
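For the curious, “controlling for these factors” typically means a regression that includes the confounders alongside the exposure. Here’s a minimal sketch of a confounder-adjusted logistic model; the file name, column names, and data are hypothetical stand-ins, not the Swedish registries:

```python
# A minimal sketch of confounder adjustment via logistic regression.
# 'children.csv' and all column names are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("children.csv")  # one row per child

# Asthma as a function of dog exposure, adjusted for measured confounders
model = smf.logit(
    "asthma ~ dog_first_year + parental_asthma + older_sibling_asthma + ses_quintile",
    data=df,
).fit()

print(model.params["dog_first_year"])  # log-odds; exponentiate for an odds ratio
```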

But one big issue remains. While 80% dog registration is amazing (by American standards), it means that 20% of dogs are unregistered. If the kids in those households - counted here as dog-free - differ systematically in their asthma risk, the misclassification could undermine the whole effect we are seeing.

It turns out that dogs are not the most protective animal, though. Exposure to farm animals was far more protective, reducing the rates of asthma by about 50%.

The bottom line here is that if you’re debating getting a dog, don’t let fear of childhood asthma stop you. But if you remain concerned, perhaps consider adopting a family cow.

Will antibiotics make our kids fat? Nope.


For the video version of this post, click here. The ubiquitous and often inappropriate use of antibiotics is a serious public health problem. So is obesity.  That these two factors could be linked is the conclusion suggested by a paper, appearing in the International Journal of Obesity, which leveraged the huge Geisinger health system database to examine the BMIs of children who had different exposures to antibiotics.

The researchers examined the records of just under 150,000 children, ranging in age from 2 to 18, who had BMI measurements in the system. Now, the relationship between antibiotic use and BMI is complex, so they tried to characterize a couple of metrics. They examined the immediate effect of antibiotics - how much BMI increase could be expected after exposure in the past year - but they also measured the persistent effect - how much BMI increase would be associated with lifetime exposure to antibiotics.

In both cases, antibiotics were associated with higher BMIs, but numbers matter. Let’s start with the basics. 59% of the children in the cohort had at least one antibiotic prescription. Shockingly (to me at least), of the kids that had contact with Geisinger in their first year of life, 49% had an antibiotic prescription in that year. We prescribe a LOT of antibiotics.

As to the effects, the short-term effect was relatively modest. Kids who got antibiotics in the past year had a BMI about 0.05 points higher than those who didn’t. The cumulative effect was even smaller - about 0.01 BMI points for one prior antibiotic at any point in life, but more courses led to more BMI gain. Those who had seven or more courses of antibiotics had a BMI about 0.1 points higher than those who never got antibiotics. All of these associations were statistically significant, but this was a huge study - these changes in BMI don’t strike me as clinically meaningful.

Moreover, these children were not randomized to get antibiotics. The researchers adjusted for age, sex, race, and medical assistance, but that’s it. Socioeconomic status could play a major role here, and medical assistance is not a close enough proxy for that. I also wonder about secondhand smoke exposure.

Putting it all together, the great obesity epidemic can’t be tied to antibiotic use. In fact, these small effect sizes make me less worried about the effects antibiotics have on our children's weight.

That said, this is one case where I’m glad there is some media hype around the study. While the headlines warning that antibiotics are making your children fat are completely overblown, perhaps the negative press will reduce the over-prescribing of antibiotics. And that’s good, not because it will cure the obesity epidemic, but because it will impact the emerging epidemic of microbial resistance.

 

Once a year? Once every other year? How often should we be doing mammograms?


For the video version of this post, click here. The idea of screening mammography makes a lot of sense. Detect cancers early, treat them early, improve outcomes. In practice, though, screening mammography gets much more complicated. When should screening begin? When should it end? How frequent should it be? Each of these questions has its own fully-developed controversy. Full disclosure: I am married to a breast cancer surgeon. Now a study, appearing in JAMA Oncology, tries to crack the frequency question.

The study, by Diana Miglioretti and colleagues, used data from the Breast Cancer Surveillance Consortium, a national group that records data from regional radiology facilities. They identified all women in the dataset aged 40 - 85 who were diagnosed with a new breast cancer - around 15,000 women altogether.

Of those, around 12,000 had gotten annual mammograms, and 3000 or so had gotten biennial mammograms. The question was whether the tumors in the less-frequently-screened women would be bigger or more advanced than those in the women screened more frequently. In other words, does more frequent screening catch cancers when they are smaller?

The answer is a definitive kind of. Overall, there was no difference in tumor characteristics among those screened yearly or every other year. Among premenopausal women, though, annual screening did seem to find tumors with less advanced characteristics, an effect that was statistically significant, provided you don’t account for the multiple hypotheses being tested. But if the association is real, it’s interesting to note that there was no such effect when the cohort was stratified by age - so it seems that biological age, at least in terms of menopause, might be more relevant than chronological age here.
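That multiplicity caveat deserves a beat. Test enough subgroups and a few “significant” p-values will turn up by chance alone; the bluntest guard is a Bonferroni correction, which divides the significance threshold by the number of tests. A toy sketch with invented p-values, not the paper’s:

```python
# Toy Bonferroni correction. The p-values are invented for illustration.

p_values = [0.04, 0.21, 0.47, 0.08, 0.66]  # one per subgroup comparison
alpha = 0.05
corrected = alpha / len(p_values)  # Bonferroni-adjusted threshold (0.01 here)

for p in p_values:
    verdict = "significant" if p < corrected else "not significant"
    print(f"p = {p:.2f}: {verdict} at corrected threshold {corrected:.2f}")
```

Notice that a p of 0.04, comfortably under 0.05, no longer clears the corrected bar.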

The study is shackled by several big limitations, though. The first is that every woman in this study was diagnosed with cancer. We have no idea how many screenings were done to identify these 15,000 women with cancer, and no way to tell whether women undergoing every-other-year screening are treated differently. Perhaps mildly abnormal results get biopsied more often in the biennial group, since waiting to see how things look next year is not an option. The second big problem is that there is no link to any outcomes. Even if we buy that more frequent screening detects cancers earlier, we have no data to tell us whether that matters - that is, whether treatment is more effective at that point.

In the end, the authors, like the guideline organizations, say that the frequency of screening should be decided between a patient and her doctor. But at some point, the decision to switch to biennial screening may be forced by insurance companies or Medicare, and, at least according to this study, that might not be a bad thing.

 

The secret in the elephant genome


For the video version of this post, click here. In 150 Seconds, I try to be relevant, to discuss studies that have an immediate impact on patient care. But sometimes a study comes along that is just so cool that I can’t help but tell you all about it. Let me start with a question I’m sure you’ve asked yourself hundreds of times: “Why don’t all the elephants have cancer?”

Cancer is, broadly, a stochastic phenomenon. For cancer to occur, a cell in the body has to suffer just the right kind of DNA damage and subsequently escape detection. The chance of this happening in any given cell is extraordinarily low, but, well, there are a lot of cells. Time is a factor here too: the longer a cell is around, the more chances it has to accumulate mutations that might cause cancer. Elephants have way more cells than we do, by a factor of about 100. And their life spans are quite similar to ours. Which brings us back to our main question. Why don’t all the elephants have cancer?
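Here’s a back-of-the-envelope sketch of why the question bites. If each cell independently carried some tiny lifetime chance of turning cancerous, organism-level risk should climb steeply with cell count. The per-cell probability below is pure invention, chosen only to make the arithmetic visible:

```python
# Toy model of per-cell cancer risk scaling with body size.
# The per-cell probability is invented for illustration.

p_cell = 1e-14                      # hypothetical lifetime cancer risk per cell
human_cells = 3e13                  # roughly 30 trillion cells
elephant_cells = 100 * human_cells  # ~100x more, per the post

def organism_risk(n_cells: float) -> float:
    """Probability that at least one cell goes bad."""
    return 1 - (1 - p_cell) ** n_cells

print(f"human:    {organism_risk(human_cells):.0%}")     # ~26%
print(f"elephant: {organism_risk(elephant_cells):.0%}")  # ~100%
```

By that naive arithmetic, essentially every elephant should get cancer. They don’t - that’s the famous Peto’s paradox this study set out to probe.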

To figure out what’s going on, researchers from the world over collaborated on a study, appearing in the Journal of the American Medical Association, that investigated the cancer incidence in over 36 mammalian species of varying sizes and life spans from the striped grass mouse to the fennec fox, from the marmoset to the tiger, the capybara and, of course, the elephant.

Overall, the rate of cancer didn’t vary much with body size or lifespan. Tasmanian devils, prairie dogs, and cheetahs had fairly high rates of cancer, but there wasn’t a single moose with the disease. Elephants, the biggest and longest-lived of the lot, had a lifetime cancer incidence of around 4%. You can compare that with the risk in humans, which is something like 10 - 20%.

Genomic analysis then revealed the elephant’s secret. Instead of two copies of the tumor-suppressor gene TP53, elephants have 20 - over their evolutionary history, 19 extra copies of the gene crept in. And TP53 is a critical tumor suppressor: humans lacking just one functional copy have over a 90% lifetime risk of cancer. What’s more, those TP53 genes appear to be more active in elephants. When exposed to DNA-damaging radiation, elephant lymphocytes kill themselves at a much higher rate than human lymphocytes.

So the great mystery of the cancer-free elephant is solved, but a greater question remains.  Why do humans only have 2 copies of TP53? One answer may be that, up until modern times, we simply didn’t live long enough to put any selective pressure on genes that would prevent cancer. Also, as we don’t reproduce into our 80s (like elephants do), natural selection doesn’t get a chance to provide new protection. In any case, this isn’t a study that is likely to change practice in our nation’s hospitals and doctor’s offices, but it is certainly one I will never forget.  

Give beta-blockers pre-op. Wait... don't. Well, maybe.


For the video version of this post, click here. When I was training as a medical resident, we used to do all these consults for perioperative clearance or, more diplomatically, perioperative risk stratification. The idea is that, before elective surgery, we’d go in, take a detailed history, and decide if the patient needed any additional workup before going under the knife. Not infrequently, we’d suggest perioperative beta-blockade. The idea was that the stress of surgery put a strain on the heart, and beta-blockers would prevent post-op cardiac complications.

The rationale for this behavior was bolstered by something called the DECREASE IV trial, which randomized pre-operative patients to get a beta-blocker or placebo, and showed that those who got the beta-blocker were significantly less likely to experience a cardiac event. But a subsequent trial, called POISE, found that those who got beta-blockers had an increased risk of all-cause mortality.  It also emerged that the DECREASE trial had serious ethical flaws, and that some of the data may have been fabricated.  

Still, the biologic rationale is there, so what do we do? Well, a paper appearing in JAMA Internal Medicine attempts to answer this question, albeit in a very roundabout way. Let’s see if we can get there.

Danish researchers used that country’s impressive electronic health record database to identify 55,320 individuals who underwent non-cardiac surgery between 2005 and 2011. This number excluded anyone with any sort of heart disease, though all of the patients had hypertension. In fact, they were all taking at least two anti-hypertensives, which makes this a rather unusual cohort to begin with. It turned out that the 30-day mortality risk was 1.93% in those taking beta-blockers compared to 1.32% in those taking other anti-hypertensives. This relationship persisted even after adjustment for things like age, sex, surgical risk, and comorbidities. So… case closed? We shouldn’t give people beta-blockers pre-operatively?

Well, not so fast. This study only looked at those who were already taking beta-blockers - it doesn’t really tell us anything about what happens if you start a beta-blocker pre-op. Moreover, people get beta-blockers for a reason. That reason often involves some practitioner deciding the patient is at risk of some type of cardiac event, meaning the deck might have been stacked against beta-blockade from the beginning.  Also, restricting the cohort to those on two anti-hypertensives really limits what we can apply to most patients.

The American College of Cardiology currently recommends keeping patients on beta-blockers perioperatively if they were already on beta-blockers. I’d be very surprised if this study leads to any sort of change of heart.  

Should we reduce the dose of nicotine in cigarettes?


For the video version of this post, click here. This is oversimplifying a bit, but cigarettes basically contain two things: tar, the products of combustion that give you cancer and heart disease, and nicotine, the drug that is the reason you smoke cigarettes. In 2009, the Tobacco Control Act empowered the FDA to reduce, but not eliminate, the nicotine in cigarettes if it would benefit the public health. They have yet to exercise this power. But a new randomized trial, appearing in the New England Journal of Medicine, gives us some insight into what we might expect should they take that path.

The nuts and bolts of the trial are as follows. Researchers, primarily at the University of Pittsburgh, randomized 839 individuals into one of 7 groups: your usual brand of cigarettes, a study cigarette containing the normal 16 mg of nicotine, or 5 other study cigarettes containing reduced doses of nicotine, down as low as 0.4 mg - one fortieth of a standard dose.

The study participants, who were all smokers with no desire to quit, received free cigarettes and some money for their participation in the six-week study. The question is what would happen to those randomized to the lower-dose groups. Would they smoke more cigarettes to compensate? Would they just buy regular cigarettes down the street? Would they give up on smoking altogether?

The results were somewhat surprising. Those randomized to their usual brand actually smoked a bit more over the six-week period, increasing their consumption on average from 15 to 20 cigarettes per day. Remember, the cigarettes were free. Those in the lowest-dose group kept their intake at pretty much 15 cigarettes a day for the whole study. Urine nicotine metabolite levels were lower in the low-dose group, suggesting they weren’t cheating too much with store-bought cigarettes. Importantly, 34% of those in the low-dose group reported attempts at quitting, compared to just 17% in the regular-strength group.

So, you’re the FDA - do you exercise your power to lower the nicotine content of cigarettes? People kept smoking their usual amount, after all, getting all that tar, despite the lower nicotine content. Maybe it would be easier for them to quit, maybe new smokers wouldn’t get addicted, but that’s not what this study was designed to test.

In the end, separating tar and nicotine is a great idea, but maybe this study gets it backward. Instead of making cigarettes less appealing by reducing the nicotine content, perhaps we should find ways to deliver nicotine without all the tar? Such methods, it turns out, exist, and are increasingly popular if my hipster neighbor is any indication. Keeping people away from addictive substances is just really hard to do. Giving access to such substances in a safer way may seem like giving up, but perhaps it is simply giving deference to human nature.

If you have pneumonia, you may be better off in an ICU.


For the video version of this post, click here. For the cost of a single night in a typical American intensive care unit, about $7500, you could stay for 10 days at the all-inclusive Overwater hotel in Bora Bora. For this reason, many health economists have looked at ICUs as something of a necessary evil. They are a requirement for the advanced care the US health system can deliver, and at the same time the embodiment of a system that directs way too many resources to care occurring at the end of life.

The problem when you study the effect of ICU admission is called confounding by indication. Sicker patients are more likely to get admitted to the ICU, and more likely to die, so observational studies tend to make ICU care look bad.

That’s what makes this study of pneumonia admissions, appearing in JAMA, by researchers from the University of Michigan so interesting.

They wanted to know if ICU admission improved outcomes for Medicare patients with pneumonia. Instead of relying on a traditional multivariable adjustment approach, they used a technique called instrumental variable analysis.

An instrumental variable is one that is associated with your exposure of interest, in this case, ICU admission, but not associated with the outcome of interest (in this case, death at 30 days) except via that exposure. The idea, then, is that the instrumental variable acts like a randomizer to one or another treatment, allowing us to fairly compare them. In this analysis, the researchers chose distance from a hospital that used their ICU a lot as the instrument.

To believe the results, you have to buy that the closer you live to a hospital with high ICU utilization, the more likely you are to be admitted to an ICU. Not exactly a stretch, and indeed, the data shows that people with pneumonia who live within about 3 miles of such a hospital get admitted to the ICU 36% of the time compared to 23% of the time among those living further away. You also have to buy that distance to such a hospital is not associated with death - this is quite a bit trickier to prove.
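For those who want the machinery, the classic implementation of this idea is two-stage least squares: first predict the exposure from the instrument, then regress the outcome on the predicted exposure. A bare-bones sketch - the file and column names are hypothetical placeholders, not the Medicare data, and a real analysis would use a dedicated IV estimator so the standard errors come out right:

```python
# Bare-bones two-stage least squares. File and column names are
# hypothetical placeholders; a real analysis should use a dedicated
# IV estimator so the standard errors are computed correctly.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pneumonia.csv")  # one row per pneumonia admission

# Stage 1: predict ICU admission from the instrument
# (living near a high-ICU-utilization hospital)
X1 = sm.add_constant(df["near_high_icu_use_hospital"])
stage1 = sm.OLS(df["icu_admit"], X1).fit()
df["icu_hat"] = stage1.predict(X1)

# Stage 2: regress 30-day mortality on the *predicted* ICU admission
X2 = sm.add_constant(df["icu_hat"])
stage2 = sm.OLS(df["died_30d"], X2).fit()

print(stage2.params["icu_hat"])  # the instrumental-variable effect estimate
```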

If you trust the instrumental variable, what you find is that ICU admission was associated with lower mortality than ward admission, 15% versus 21%, and lower Medicare costs - by about $1300.

Now, the results of an instrumental variable analysis should be interpreted as applying to the marginal patient. In other words, the benefit the researchers saw applies to those that could reasonably be admitted to the ward or the ICU.

While I have some concerns over the quality of the instrument, I tend to believe these results. The authors, prudently, call for a randomized trial to evaluate the effects of ICU admission on older patients with pneumonia, but for now, I hope their voice sticks in your head when you’re seeing a patient with pneumonia in the ER and thinking, “Should I send them to the unit?” If you’re asking the question, the answer seems to be yes.