Spray this up your nose and you'll take more risks in social situations. No - it's not that.


For the video version of this post, click here. You and a stranger are sitting, unable to see each other, in small cubicles, and $200 is at stake. It's called the stag hunt game. You each can choose to hunt stag or rabbit. Hunting a stag requires two people, but gives you that big $200 payoff. Choosing rabbit gets you $90 if you hunt alone or $160 if your partner chooses rabbit too. Get it? The risky choice requires cooperation.
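
If you like seeing payoff structures laid out explicitly, here's a minimal sketch in Python. One caveat: the post doesn't say what you win if you choose stag while your partner picks rabbit, so the $0 there is my assumption (a stag hunt needs two hunters).

```python
# Payoff matrix for (my_choice, partner_choice). The $0 payoff for hunting
# stag alone is an assumption -- the post doesn't state it -- but a lone
# stag hunter going home empty-handed is the standard setup of the game.
PAYOFF = {
    ("stag", "stag"): 200,
    ("stag", "rabbit"): 0,
    ("rabbit", "stag"): 90,     # you hunt rabbit alone
    ("rabbit", "rabbit"): 160,  # you both hunt rabbit
}

def expected_payoff(my_choice, p_partner_stag):
    """Expected winnings given my belief that the partner chooses stag."""
    return (PAYOFF[(my_choice, "stag")] * p_partner_stag
            + PAYOFF[(my_choice, "rabbit")] * (1 - p_partner_stag))

# Stag beats rabbit only when 200p > 90p + 160(1-p), i.e. p > 160/270 (~0.59):
for p in (0.3, 0.6, 0.9):
    print(f"p={p}: stag={expected_payoff('stag', p):.0f}, "
          f"rabbit={expected_payoff('rabbit', p):.0f}")
```

Under these assumed payoffs, stag only wins in expectation once you're roughly 60% sure your partner will cooperate – which is exactly what makes it the risky cooperative choice.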

That a single intranasal shot of vasopressin could affect this decision seems crazy, but nevertheless that's what researchers publishing in the Proceedings of the National Academy of Sciences found.

Now, I'm a kidney doctor. To me, vasopressin is the hormone that makes you concentrate your urine. But neuroscientists have found that vasopressin exerts a diverse set of effects in the brain – stimulating social affiliation, aggression, monogamy, and, among men, paternal behaviors.

The experiment took 59 healthy men and randomized them to receive 40 units of vasopressin or placebo intranasally. They then played the stag hunt game. Those who got the vasopressin were significantly more likely to go all in and choose the stag option than those who got placebo.

[Figure: Vasopressin's effects on the stag hunt game]

The elegant part of the experiment was the way the researchers tried to pin down exactly why this was happening. It wasn't simply that vasopressin made you more tolerant of risk. The researchers showed this by having the men choose between a high-risk, high-reward and a low-risk, low-reward lottery game. Vasopressin had no effect. Nor did vasopressin make you feel euphoric, wakeful, or calm – self-reported measures of those factors didn't change.

Vasopressin didn't make you more trusting, either. When the men were asked whether they thought their silent partner would choose "stag" over "rabbit", vasopressin didn't change the answer at all. No, the perception of risk didn't really change – just the willingness to engage in this very specific type of risky behavior, which the authors refer to as "risky cooperative behavior".

Risky cooperative behaviors are basically anything you do for mutual benefit that requires trusting other people to do their part. In short – vasopressin may be the hormone that gave rise to modern society.

How does it work? Well, a simultaneous fMRI study demonstrated decreased activity in the dorsolateral prefrontal cortex among those who got vasopressin. This part of the brain has roles in risk inhibition, high-level planning, and behavioral inhibition, so vasopressin downregulating this territory makes some sense given the outcome.

But the truth is that a full understanding of the myriad neuro-electro-chemico-hormonal influences on the choice to hunt stag or rabbit is beyond the scope of this study. Still, for a believer in free will such as myself, studies like this are a stark reminder that it isn't necessarily clear who is in the driver's seat when we make those risky decisions.

Marijuana use and brain function in middle age


For the video version of this post, click here. The public attitude towards marijuana is changing. Though some continue to view the agent as a dangerous gateway to harder drugs like cocaine and heroin, increasing use of the drug for medical purposes, and outright legalization in a few states, will increase the number of recreational pot users. It's high time we had some solid data on the long-term effects of pot smoking, and a piece of the puzzle was published today in JAMA Internal Medicine.

Researchers leveraged an existing study (which was designed to examine risk factors for cardiac disease in young people) to determine if cumulative exposure to marijuana was associated with impaired cognitive function after 25 years. Note that I said "impaired cognitive function" and not "cognitive decline". The study didn't really assess the change, within an individual, over the 25-year period. It looked to see if smokers of the ganj had lower cognition scores than non-smokers.

That minor point aside, some signal was detected. After 25 years of follow-up, individuals with higher cumulative use had lower scores on a verbal memory test, a processing speed test, and a test of executive function.

But wait – those numbers are unadjusted. People with longer exposure time to weed were fairly different from non-users. They were less likely to have a college education, more likely to smoke cigarettes, and, importantly, much more likely to have puffed the magic dragon in the past 30 days.

Accounting for these factors, and removing from the study anyone with recent exposure to the reefer, showed that longer cumulative exposure was associated only with differences in the verbal learning test. Processing speed and executive function were unaffected.

Now, the authors make the point that there was a dose-dependent effect with "no evidence of non-linearity". What that is code for is that there isn't a "threshold effect". According to their model, any pot would lead to lower verbal scores. Take a look at this graph:

[Figure: Verbal memory scores based on cumulative pot exposure]

What you see is a flexible model looking at marijuana-years (by the way, one marijuana-year means smoking one doobie a day for 365 days). The authors' point is that there isn't a kink in this line – the relationship is pretty linear. But look at the confidence intervals. The upper bound doesn't actually cross zero until five marijuana-years. In short, the absence of an obvious threshold doesn't mean that no threshold exists. It is likely that the study was simply underpowered to detect threshold effects.
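
For reference, the exposure metric on that x-axis works out to simple multiplication – a quick sketch, with usage patterns invented purely to show the scale:

```python
def marijuana_years(joints_per_day, years_of_use):
    """Cumulative exposure: 1.0 marijuana-year = one joint a day for a year."""
    return joints_per_day * years_of_use

print(marijuana_years(1.0, 5))     # a daily joint for 5 years      -> 5.0
print(marijuana_years(2 / 7, 10))  # two joints a week for 10 years -> ~2.9
```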

The most important limitation, though, was that the authors didn't account for age-of-use in the cognitive outcomes. With emerging evidence that pot use at younger ages may have worse effects on still-developing brains, this was a critical factor to look at. Five years of pot exposure may be much different in a 25-year-old than in an 18-year-old. This data was available – I'm not sure why the interaction wasn't evaluated.

In the final analysis, I think we can confirm what common sense has told us for a long time. Pot certainly isn't magical. It is a drug.  It's just not that bad a drug. For the time being, the data we have to work with is still half-baked.

Chantix goes to bat against the nicotine patch for quitting smoking. The winner? No one.


For the video version of this post, click here. Quitting smoking is really hard. It's frustrating for smokers and for their doctors. And I need to come clean and admit that when varenicline (Chantix) came out I was excited to have one more weapon in my anti-smoking armamentarium. After all, the gums, lozenges, and patches didn't seem to work very well, but here was this new drug – a pill – that initially boasted quit rates as high as 40%. Compare that to 8% with placebo.

Even compared to the nicotine patch, varenicline seemed better, with one study showing quit rates of 26% versus 20% at one year. Of course, patients had some interesting side-effects, but smoking must be worse than vivid nightmares, right?

Recent studies have suggested that combination nicotine replacement therapy, with a patch to give some basal nicotine and lozenges to curb cravings, might be the better strategy.

Now a study appearing in the Journal of the American Medical Association finally pits the drug against a fair competitor. Researchers randomized around 1,000 smokers to 12 weeks of treatment with a nicotine patch alone, a patch plus lozenges, or varenicline. The big question was what percentage of smokers would stay quit at 14 weeks after all interventions stopped.

And I'll skip right to the punchline. Patch: 23%. Varenicline: 24%. Patch plus lozenge: 27%. There were no statistically significant differences between any of these numbers. Out at 1 year? 20% stayed quit across the board. The winner appears to be… nobody.

To me, more interesting than the intervention results was the analysis of factors that would predispose to quitting. Some were no surprise – if someone else was smoking in the home, your chance of quitting was 22% instead of 27% in a smoke-free home.  But there was a pretty large discrepancy in quit rates based on whether or not you smoked menthols. 30% of standard cigarette smokers stayed quit, compared to just 19% of minty cigarette smokers.

Now, we may be tempted to tell our patients – "use whatever you like". But maybe the guidepost should be their tolerance of adverse events – as these were significantly different between the treatment strategies.  Hate itching, hives, and hiccups? Avoid the patch. Hate nausea, vomiting, and vivid dreams? Stay away from varenicline.

Actually, you know what's even easier? Stay away from cigarettes in the first place.

The US spends an appropriate amount on end-of-life care, if you massage the numbers a bit.


For the video version of this post, click here. I think it's fair to say that there is a certain narrative regarding costs of health care in the United States. It goes like this:  "The US spends more on healthcare than any other nation, and gets less for it".

Is that really true?

Moreover, how do we even compare costs between nations? Well, given that around 25% of Medicare expenditures are accrued in the last year of life, researchers from the University of Pennsylvania examined how 7 different countries – all large, Western democracies, including the US – treated individuals who died with cancer. The research appears in the Journal of the American Medical Association. Using national registries in each of the countries, Zeke Emanuel and colleagues were able to look at questions like what percentage of individuals died in the hospital and, importantly, how much money each country spent on them.

These types of studies can be difficult to interpret, so I'll give you the party line first, and then some criticisms. First off, the good news: the US had lower rates of death in the hospital than any of the 6 other countries, at 22%. Compare that to 52% in Canada. That 22% figure is WAY down from rates in the 1970s, when more than 70% of individuals with cancer in the US died in the hospital.

What about costs? Well, the standard narrative didn't hold up that well. In the last 6 months of life, the average American with cancer accrued around $27,000 worth of hospital costs. That's a lot more than those in The Netherlands ($13,000), but pretty similar to those in Canada and Germany.

I wouldn't be surprised if we see certain press outlets, or, perish the thought, politicians crowing about how American health care costs seem pretty manageable. But here are some things to consider. First, this study only examined cancer patients. What's more, it only examined cancer patients who died. This says nothing about the myriad other costs our highly medicalized society accrues day to day. Second, the study looked only at inpatient hospital costs. Americans spend less time in the hospital at the end of life thanks to a fairly robust nursing facility and hospice system. None of those costs were included. Third, in the US physician fees are billed separately from hospital fees. Not so in the other six countries, and physician fees were NOT included in the US calculus.

Finally, a bit of a technical issue. How do you convert from, say, euros to dollars in a study like this? The intuitive answer would be to use some average exchange rate over the time period studied. The authors actually used the health-specific purchasing power parity conversion rate. That's a mouthful, but basically it's a number that reflects the relative cost of purchasing a market-basket of health-related goods in each country and adjusts for that. In other words, countries where healthcare is cheaper (relative to the true exchange rate) would have their end-of-life costs adjusted upwards, making them look more expensive. I suspect this could move the final numbers by as much as 20% in either direction.
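
To see why the conversion choice matters, here's a toy example with invented numbers: €11,000 of Dutch hospital costs, a market exchange rate of $1.30 per euro, and a health-specific index saying that what costs $1.00 in US health care costs €0.60 in the Netherlands.

```python
cost_eur = 11_000    # hypothetical Dutch end-of-life hospital costs
fx_rate = 1.30       # hypothetical market exchange rate, USD per EUR
health_ppp = 0.60    # hypothetical: EUR that buys what $1 buys in US health care

market_usd = cost_eur * fx_rate   # ~$14,300 at the market exchange rate
ppp_usd = cost_eur / health_ppp   # ~$18,333 after the health-PPP adjustment

print(f"market rate: ${market_usd:,.0f}, health-PPP: ${ppp_usd:,.0f}")
```

Because Dutch health care is cheaper relative to the market exchange rate in this toy example, the PPP conversion inflates the Dutch figure by nearly 30% – the upward adjustment described above.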

So there you go. We're doing OK here in the US, at least when it comes to caring for patients with cancer. But remember that complacency can be costly.


Being a woman versus being womanly: the implications after heart attack


For the video version of this post, click here. There are two elements you can expect to see in almost any study: the first is some effect size - a measure of association between an exposure and outcome. The second is a subgroup analysis - a report of how that effect size differs among different groups. Sex is an extremely common subgroup to analyze - lots of things differ between men and women. But a really unique study appearing in the Journal of the American College of Cardiology suggests that sex might not matter when it comes to coronary disease. What really matters is gender.

The study, with the cumbersome acronym GENESIS-PRAXY, examined 273 women and 636 men under age 55 who were hospitalized with acute coronary syndrome (ACS). Sex was based on self-report, and was binary (man or woman). But gender isn't sex. Gender is a social construct that represents self-identity, societal roles, expectations, and personality traits, and can be a continuum - think masculine and feminine.

The authors created a questionnaire that attempted to assign a value to gender. Basically, it asked questions like "How much of the child-rearing do you perform?" or "Are you the primary breadwinner for your household?" - questions based on traditional gender norms, but that's as good a place to start as any. A score of 100 on the gender scale was "all feminine", and a score of 0 "all masculine". Most of the males in the study clustered on the masculine end of the spectrum, while the females were more diverse across the gender continuum.
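
The actual GENESIS-PRAXY instrument is more elaborate, but a minimal sketch of the scoring idea – items and weights entirely hypothetical – might look like this:

```python
def gender_score(item_responses, max_per_item=4):
    """Average norm-based items onto a 0 (all masculine) to 100 (all feminine) scale.

    Hypothetical scoring: each answer runs 0 (stereotypically masculine)
    to 4 (stereotypically feminine); this is not the paper's instrument.
    """
    return 100 * sum(item_responses) / (max_per_item * len(item_responses))

# e.g. answers to "share of child-rearing?", "primary breadwinner?"
# (reverse-scored), and two similar items:
print(gender_score([4, 3, 2, 4]))  # -> 81.25, toward the feminine end
```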

What was striking is that the primary outcome - recurrence of acute coronary syndrome within a year - was the same regardless of sex: 3% in both men and women. But a greater degree of "femininity" was significantly associated with a higher recurrence rate. Feminine people (be they male or female) had around a 5% recurrence rate, compared to 2% among masculine people. This was true even after adjustment for sex, so we're not simply looking at sex in a different way - gender is its own beast.

What does it all mean?  Well, it shows us that our binary classification of sex may be too limited in the biomedical field. Of course, there will always be hard and quantifiable physiologic differences between men and women. But what is so cool is that it’s the more difficult to quantify gender-related differences that may matter most when it comes to health and disease.

Of course, this conclusion is way too big to be supported by one small study with a 3% event rate. But given the surprising and really interesting nature of the results, I’m sure we’ll have many more studies of this sort following close behind. 

 

The diagnosis: Cancer. Should you blame your genes?


For the video version of this post, click here. The prevailing wisdom about almost all types of cancer is that the disease occurs due to a combination of genetic susceptibility and environmental exposures. For different types of cancers, the relative weight of each of these components may differ. But teasing out how much contribution to cancer incidence can be attributed to genetics versus environment is tricky. Unless, that is, you have access to a register of over 100,000 pairs of twins.

In an article appearing in the Journal of the American Medical Association, researchers from four Nordic countries combined national twin registries to create a very detailed database of cancer incidence. The idea here is that identical twins share 100% of the genetic risk factors for cancer (whatever those may be), while fraternal twins share only 50%. This knowledge in hand, you can deconstruct just how much genetics is to blame for cancer.

First some numbers.

32% of the cohort would develop at least one cancer in their lifetime - a number which pretty closely matches what we see in the US.

Now, if your fraternal twin developed cancer, your lifetime risk bumped from 32% to 37%. If your identical twin developed cancer, that lifetime risk went from 32% to 46%. Clearly, genetics are at play here. But how much, exactly? Well, overall, the researchers estimate that about 33% of the variance in cancer incidence is due to genetic factors, with 0% due to shared environmental factors.
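
Where does a number like "33% genetic, 0% shared environment" come from? The study's actual modeling is more sophisticated, but the classic Falconer decomposition captures the logic: identical twins share all their genes and fraternal twins half, so the gap in similarity between the two estimates the genetic share. A sketch, with input correlations reverse-engineered to reproduce the headline numbers rather than taken from the paper:

```python
def ace_decomposition(r_mz, r_dz):
    """Falconer's formulas: partition variance into Additive genetics,
    Common (shared) environment, and unique Environment."""
    a2 = 2 * (r_mz - r_dz)  # heritability
    c2 = r_mz - a2          # shared environment
    e2 = 1 - r_mz           # everything else, incl. measurement error
    return {"genetic": a2, "shared_env": c2, "unique_env": e2}

# Liability-scale twin correlations consistent with the paper's estimates:
print(ace_decomposition(r_mz=0.33, r_dz=0.165))
# -> roughly {'genetic': 0.33, 'shared_env': 0.0, 'unique_env': 0.67}
```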

Let's parse that a bit though.

First, the researchers are NOT saying that the environment has nothing to do with cancer. They are saying that shared environmental factors, those things that two siblings would experience together, don't account for much risk. Once you leave the nest, in other words, the environment can still play a role. In fact, just doing the math suggests that around 67% of the variance in cancer incidence is due to environmental factors – just factors that don't happen to be shared by two siblings in their youth.

But as I mentioned, these contributions vary by type of cancer. For lung cancer, the shared environmental exposures accounted for more of the variance than genetics – probably because twins tend to share smoking habits even at a young age.

The important thing about this study is to realize that the genetic factor percentage puts a cap on what we can hope to learn from genetic studies of cancer. In other words, even if we perfectly sequenced everyone's genome, we'd only explain a third of the reasons why people get cancer. The smart money remains on evaluating environmental exposures, with the exception of some types of cancer that appeared to have very high genetic risks such as leukemia.

I'd be remiss if I didn't mention that this study was done in four Nordic countries and so the results probably don't give a complete picture of the risks faced in a more multi-ethnic society. In addition, the study can't answer the intriguing question of whether certain environmental exposures interact with certain genes to promote cancer. For now, we simply know that some of your destiny lies in your genes, but more of it in your actions.

Antidepressants, pregnancy, and autism: the real story


For the video version of this post, click here.

If you're a researcher trying to grab some headlines, pick any two of the following concepts and do a study that links them: depression, autism, pregnancy, Mediterranean diet, coffee-drinking, or vaccines. While I have yet to see a study tying all of the big 6 together, waves were made when a study appearing in JAMA Pediatrics linked antidepressant use during pregnancy to autism in children.

To say the study, which trumpets an 87% increased risk of autism associated with antidepressant use, made a splash would be an understatement:

The Huffington Post wrote it up. The Daily Telegraph, rounding up, followed suit. Newsweek piled on.

But if you're like me you want the details. And trust me, those details do not make a compelling case to go flushing all your fluoxetine if you catch my drift.

Researchers used administrative data from Quebec, Canada to identify around 145,000 singleton births between 1998 and 2009. In around 3% of the births, the moms had been taking antidepressants during at least a bit of the pregnancy. Of all those kids, just over 1,000 would be diagnosed with autism spectrum disorder in the first 6 years of life. But if you break it down by whether or not their mothers took antidepressants, you find that the rate of diagnosis was 1% in the antidepressant group compared to 0.7% in the non-antidepressant group. This unadjusted difference was just under the threshold of statistical significance by my calculation, at a p-value of 0.04.
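
If you want to check that back-of-envelope p-value yourself, a two-proportion z-test on counts reconstructed from the rounded figures above (3% of ~145,000 births exposed; rates of 1% vs 0.7%) is one way to do it. Fair warning: with these rounded inputs the test lands nearer p ≈ 0.02; matching the 0.04 would require the exact counts, which aren't quoted here.

```python
from math import erf, sqrt

def two_prop_z_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # 2 * P(Z > |z|)

# Counts reconstructed from the rounded rates quoted in the post:
n_exposed = round(0.03 * 145_000)    # ~4,350 exposed pregnancies
n_unexposed = 145_000 - n_exposed
print(two_prop_z_pvalue(round(0.010 * n_exposed), n_exposed,
                        round(0.007 * n_unexposed), n_unexposed))
```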

These numbers aren't particularly overwhelming. How do the researchers get to that 87% increased risk? Well, they focus on those kids who were only exposed in the second and third trimesters, where the rate of autism climbs to 1.2%. It's not clear to me that this analysis was pre-specified. In fact, a prior study found that the risk of autism increases only when antidepressants are taken in the first trimester.

And I should point out that, again by my math, the 1.2% rate seen in those exposed during the 2nd and 3rd trimesters is not statistically different from the 1% rate seen in kids exposed in the first trimester. So focusing on the 2nd and 3rd trimester feels a bit like cherry picking.

And, as others have pointed out, that 87% is a relative increase in risk. The absolute change in risk remains quite small. If we believe the relationship as advertised, you'd need to treat about 200 women with antidepressants before you saw one extra case of autism.
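
The "about 200 women" figure comes straight from the absolute risk difference. The arithmetic, using the rates quoted above:

```python
def number_needed_to_harm(rate_exposed, rate_unexposed):
    """NNH = 1 / absolute risk increase."""
    return 1 / (rate_exposed - rate_unexposed)

# 1.2% autism rate with 2nd/3rd-trimester exposure vs 0.7% without:
print(number_needed_to_harm(0.012, 0.007))  # -> ~200
```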

But I'm not sure we should believe the relationship as advertised. Multiple factors may lead to antidepressant use and an increased risk of autism. Genetic factors, for example, were not controlled for, and some studies suggest that genes involved in depression may also be associated with autism. Other factors that weren't controlled for: smoking, BMI, paternal age, access to doctors. That last one is a biggie, in fact. Women who are taking any chronic medication likely have more interaction with the health care system. It seems fairly clear that your chances of getting an autism diagnosis increase with the more doctors you see. In fact, in a subanalysis which only looked at autism diagnoses that were confirmed by a neuropsychologist, the association with antidepressant use was no longer significant.

But there's a bigger issue, folks – when you take care of a pregnant woman, you have two patients. Trumpeting an 87% increased risk of autism based on un-compelling data will lead women to stop taking their antidepressants during pregnancy. And that may mean these women don't take as good care of themselves or their baby. In other words, more harm than good.

Could antidepressants increase the risk of autism? It's not impossible. But this study doesn't show us that. And because of the highly charged subject matter, responsible scientists, journalists, and physicians should be very clear: if you are taking antidepressants during pregnancy, do not stop until, at the very least, you have had a long discussion about the risks with your doctor.


Miserable? Happy? You'll live just as long either way.


For the video version of this post, click here. We've been doing these 150-second analyses for about 6 months now, so I feel I can ask you this: Are you happy? Really happy?

Well it turns out it doesn't matter.

Plenty of observational data has suggested that higher levels of happiness are associated with greater longevity. This feels right, in some sense, but this data doesn't come from randomized trials. I'm not really sure how you'd randomize someone to be happier anyway – maybe something with puppies.

[Figure: You've been randomized to happiness!]

The issue is that sicker people probably don't feel as happy, so unless you account for that, how can you really say that happiness leads to longer life?

Researchers, writing in The Lancet, attempted to tackle this issue by going big. Really big. They examined around 700,000 women who participated in the Million Women Study in the United Kingdom.  These women, who were all aged 50-69, were asked simply how often they felt happy: never, rarely, sometimes, usually, or most of the time. They were then followed for around 15 years to examine cause-specific mortality.

Happier women were generally older, got more exercise, and avoided smoking.  They were also more likely to be Scottish and tended to drink more alcohol, so keep that in mind the next time you visit Loch Lomond.

As you might expect, lower levels of self-reported happiness were associated with higher mortality. Women who were happy most of the time had a mortality rate of around 4% in follow-up, compared to 5% among women who were generally unhappy.

This difference disappeared, though, when the authors adjusted for self-rated health. The conclusion? Happiness doesn’t matter.

But here's the thing: self-rated health is subjective, just like happiness is. When the researchers adjusted only for objective health issues – depression, anxiety, hypertension, diabetes – being unhappy still led to an increased risk of death.

Let's also remember that being unhappy may lead to certain unhealthy behaviors – like smoking cigarettes. Adjusting for factors that lie along the causal pathway from exposure to outcome is an epidemiologic no-no. Finally, it strikes me as odd that we'd try to capture something as ineffable as happiness with a single survey question.

What I take from this study is the following: Feeling unhappy is a real risk factor for mortality. So is feeling sick. But I'm not ready to conclude that happiness is just a bystander, exerting no real effect on outcomes. We won't know for sure without that puppy-based clinical trial. But until then I'll leave you with the words of a wise woman who died before her time: "Such is the force of happiness, the least can lift a ton, assisted by its stimulus."

Med Students: The best 4.6 minutes in your day.


For the video version of this post, click here. Whenever I have a new med student on a rotation with me, I tell them the same thing.  Your med student rotations will be the worst experience in your medical training. OK, so I might not be the best preceptor.

But I mean it. No one on the team is as scrutinized as the med student. No one gets asked as many medical trivia questions.  The interns and residents, overwhelmed with the workload, get a pass.  But the med students, following 1 to 2 patients, man, they better know every last detail. And of course, they’re being graded on this performance, and that grade will dictate whether they become a highly paid orthopedic surgeon, or, I don’t know, a nephrologist with a blog.

But the worst part, I think, is their feeling of impotence.  They know what they want to do for the patient, but they can’t really do it.  If they write an order, it has to be cosigned.  If they write a note, it has to be reviewed.  This is, of course, a necessary part of training, but I feel for them. And I remember personally how liberated I felt when I could finally be an intern, and really care for patients on my own.

Med students routinely stress about the impact they are having on the team.  Are they helping things, or just slowing them down?  Well, a study appearing in the Journal of the American Medical Association gives us some hard numbers, at least in one big urban emergency room. At UPenn, my old stomping grounds, the ED is staffed by med students 3 out of 4 weeks a month, providing a nice little pseudo-randomized trial.  What differences occur in med student weeks compared to med-student-free weeks?

The researchers looked at about 15 years of data comprising almost 1.5 million ER visits. The patient characteristics were all pretty similar regardless of what week of the month it was – about 20% would end up getting admitted, 70% discharged, and 4% left without being seen.

The primary outcome the researchers were interested in was length of stay.  Turns out, having a med student around increased the length of stay by about 4 and a half minutes. With a median length of stay of 3 and a half hours, this statistically significant result likely doesn’t matter that much clinically.

Of course, length of stay is just one factor of importance in an ER stay, and the authors don’t tell us things like the number of times a zebra diagnosis like Whipple’s disease gets caught or missed.

In the end, this study should give med students some reassurance. No, your presence on the wards doesn't really slow us down. And when it does, we're happy to oblige. Because even a bad preceptor like me loves medical students for what they bring to us in those four extra minutes: enthusiasm, heart, and a reminder of why we started doing this crazy job in the first place.

Well, I just took 150 seconds of your time.  I hope you can forgive me. And remember, the next time you see a med student, give them a kind word and a pat on the back. They’re going through a lot.


Should the worldwide c-section rate be 15%?


For the video version of this post, click here.

Every pregnant woman should get a c-section.

Wait, no. No pregnant women should get c-sections.

Hmm, that doesn’t seem right either.

There are a lot of black and white things in medicine. Should we treat scurvy with vitamin C? Hard to argue against it. But there are many areas where hard and fast rules just don’t apply, and the c-section rate is a big one.

OK – we’ve decided that a 100% c-section rate is too high, and a 0% c-section rate is too low. What’s the right number? If your answer is “well, it’s too complicated to really assign a number”, you are in stark disagreement with the World Health Organization, which suggests that the appropriate rate is somewhere between 10 and 15% of live births.

Now a study appearing in the Journal of the American Medical Association suggests that that target might be too low. I'm going to argue that the whole idea of a target is misplaced anyway. But first, the details:

This is what’s called an ecological study. Instead of looking at the experience of individual patients, the study uses aggregate data, in this case, national c-section rates from all 194 WHO member states, and links them to that country’s maternal and neonatal mortality rates. The lowest c-section rate was 0.6% in South Sudan, likely attributable to a lack of appropriate facilities. The highest rate was 56%, in Brazil, for reasons that we can all speculate about offline.

Overall, the finding was that countries with higher c-section rates had lower maternal and infant mortality. C-section rates of less than 5% were associated with high rates of maternal mortality (about 0.4%) and infant mortality (about 1.5%). These figures got better as the rate of c-section went up, until the 20% c-section mark, where things sort of flattened out. In other words, there was no evidence of higher mortality as the c-section rate increased further.

The trouble with an ecological study is that you only have national-level data to work with. The authors adjusted for what they could – GDP, for instance – but we lack information on a critical factor: complications during delivery.

The reason the 15% c-section target set out by the WHO doesn't make sense is that it averages two different populations – women who need c-sections due to medical necessity, and women who don't. There is evidence in the US that unnecessary c-sections increase maternal morbidity. Simultaneously, there is ample evidence that lack of access to c-sections, when they are necessary, is a major problem in the developing world. While there are few absolutes in medicine, one could argue that the c-section rate should be 100% for women who need them, and pretty low for those who don't. Rather than arguing whether 15% or 20% is the "right" national c-section rate, let's turn our efforts toward getting c-sections to the right people, regardless of the nation they call home.