Rosacea linked to Parkinson disease: Is this remotely plausible?

parkinsonsdisease.jpg

The title of the manuscript is:

[image: the manuscript's title]

Titles like that remind me of the time I was in South Africa and ran into an evangelical Christian basketball team. I know what both of those things are, but I’d never thought of putting them together. Nevertheless that’s the study that appears in JAMA Neurology. So are we playing epidemiology roulette, or is this a real finding? Should your patients with rosacea be concerned?

For the video version of this post, click here.

First things first, this is a study out of Denmark, a country with a nationalized and central health care system. Researchers examined… well… everyone in the country over age 18 from 1997 to 2011, so over 5.5 million people. They identified around 68,000 individuals with rosacea based upon either administrative codes or having received two prescriptions of topical metronidazole. They identified individuals with Parkinson disease based, again, on administrative codes. A sensitivity analysis looked at all those people who got medications associated with Parkinson disease, like levodopa.

Overall, those with rosacea were twice as likely to subsequently receive a diagnosis of Parkinson disease. After adjustment for a fair number of covariates, the risk was lessened but still significant. Of course, that's a relative risk. In absolute terms, we're talking about rosacea increasing the risk of Parkinson disease from 3.5 cases per 10,000 person-years to 7.5 cases per 10,000 person-years.
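To see how little daylight there is between those two rates in absolute terms, the arithmetic fits in a few lines of Python (a quick sketch using only the rates quoted above):

```python
# Rates quoted in the post: Parkinson diagnoses per 10,000 person-years.
baseline = 3.5 / 10_000   # among people without rosacea
rosacea = 7.5 / 10_000    # among people with rosacea

relative_risk = rosacea / baseline                 # the scary-sounding number
rate_difference = rosacea - baseline               # the reassuring one
person_years_per_extra_case = 1 / rate_difference

print(f"Relative risk: {relative_risk:.1f}")
print(f"Extra cases per 10,000 person-years: {rate_difference * 10_000:.0f}")
print(f"Person-years of rosacea per extra case: {person_years_per_extra_case:,.0f}")
```

A doubled relative risk works out to one extra Parkinson diagnosis per 2,500 person-years of rosacea, which is why most patients needn't lose sleep.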

So… why?  Well, the authors note that rosacea is associated with upregulation of a group of proteins called matrix metalloproteinases (MMPs) which have a role in tissue breakdown and repair. There is also a mouse model of Parkinson disease that shows upregulation of MMPs. A prior observational study of 70 patients with Parkinson disease showed a higher than expected rate of rosacea. But basically, that’s it. This is the first large, epidemiologic study to even examine the association between these two conditions.

Now I could mention that administrative codes are poor at capturing conditions like this, that people with rosacea may have more contact with the healthcare system and thus be more likely to receive a diagnosis of Parkinson disease, and that the study lacked a negative control condition – say Alzheimer’s dementia – to lend support to the biologic plausibility argument. But I don’t want to be cynical. This paper clearly represents the first foray into an area that probably warrants a deeper look.

The important thing to remember though, is that even if this is a real finding, the impact for your patients with rosacea is quite minimal. I took the liberty of making a proportional Venn diagram here which I think is illustrative:

Rosacea and Parkinson Disease

 

The big blue rectangle represents the total population of Denmark, and the circles represent the two disease conditions. There is some overlap between rosacea and Parkinson disease, but I think it should be very clear that most patients with rosacea needn’t worry. It is for this reason that I take issue with this statement:

[image: the statement in question]

 

Are IUDs dangerous for teenage girls?

iud.jpg

In the past 5 years, both the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists have come out in support of long-acting reversible contraceptives (LARCs) as highly effective options for adolescent girls. LARCs, which include IUDs and implants, are undoubtedly superior to oral contraceptives and condoms when it comes to preventing pregnancy. Of course, IUDs don't prevent sexually transmitted infections. The question, then, is whether use of IUDs will commensurately decrease condom usage, and, according to an article appearing in JAMA Pediatrics, the answer is yes. But as usual, the devil is in the details.

For the video version of this post, click here.

The study used the well-established Youth Risk Behavior Survey. This is a biennial survey administered to high-schoolers across the US, asking all sorts of probing questions about sex, drugs, and rock and roll.

The researchers identified 2,288 sexually active girls and asked what method of contraception they used during their last sexual encounter. That defined their "base" form of contraception, and only one answer was allowed. Around 40% reported condoms, 22% oral contraceptives, and 30% withdrawal or no contraception at all. Just 1.8% used IUDs, so we're not talking about a huge population here.

That you could only give one answer to this question is really the study's Achilles' heel: individuals who used an IUD AND condoms at their last sexual encounter, but answered "condoms" to this question, would not be classified as IUD users. This could bias the results, making IUD users appear less likely to also use condoms.

Which is what was seen. 37% of oral contraceptive users also used condoms at their last sexual encounter, compared to 16% of IUD users, a highly statistically significant difference. So do IUDs discourage condom usage?

What this boils down to is why teenagers use condoms. I think if you were to survey them, you’d find that they use them to prevent pregnancy, not to prevent STIs. Knowing that IUDs are incredibly effective at preventing pregnancy may make teenagers less likely to think about using condoms.

But the direction of causality here may be opposite to what is being suggested. It is possible that docs encourage girls to get IUDs precisely because they are not reliable when it comes to using condoms, and that might be a very good thing.

But to me, the take home from this study isn’t that IUD users have low condom-use rates, it’s that EVERYONE does. A 37% condom use rate among oral contraceptive users should be frightening. This study didn’t convince me that IUDs discourage condom use, but it sure convinced me that we need to make dual-protection socially normative.

How about this: “dual-protection: it’s what the cool kids do when the cool kids do it.”

Or if you can think of a better slogan, let me know in the comment section.

 

Handling Missing Data like a Politician

GTY_donald_trump_hillary_clinton_sk_150619_16x9_992-1.jpg

That certain individuals have referred to the current presidential election drama as reminiscent of a middle school student council election does a great disservice to student council elections. But politics is about power, and knowledge is power (or so I’m told) so perhaps that’s why this current crop of wannabe potentates is so obsessed with polls, surveys, and other data. And that’s the last time I’ll refer to the current political circus during this post. Because all politics is local, and, as a former middle-school class president myself, I have decided to focus this post on a middle-school presidential election.

This article originally appeared on MedPage Today and can be accessed here.

Full disclosure: This election does not exist. It is a figment of my imagination. Any semblance to real individuals, living or dead, is mostly coincidental.

*

The place is Boise, Idaho, and at the Pierre Perito school (a public school of some significant repute) two presidential candidates are duking it out for the votes of the Tweenagers. In one corner, Ronald Frump, a newcomer to this sort of thing who has garnered quite a following with blistering attacks on all his rivals, his promise to build a wall around the playground (to keep those rival West Boise kids out), and general swagger. Opposing him is Hilda Canton, who has been serving in the student council since Kindergarten, diligently working within the system to effect change, and only occasionally using her town-wide notoriety to land her plum speaking gigs at the local candy store.

It has been a heated race. Insults have been flung, jimmies have been rustled, and special interests (particularly the pizza-every-day lobby) have been shamelessly pandered to.

As the big day approaches, polling shows a tight race. While Ronald holds a strong lead in Ms. Jensen’s 5th grade homeroom, Hilda is nearly unimpeachable among women ages 13 to 13 and a half. As usual, the election will come down to turnout. Will the recent pink eye outbreak keep 6th-graders home on Election Day?

Well, as the campaigns ramp up to full-blown blustery capacity, the big day arrives. Voting proceeds from morning til dismissal (as you can only vote during your free period), and exit polls confirm what many suspected: it’s going to be close.

The votes are counted. Recounted. Recounted again. The victor? Hilda, by 241 to 220. Ah, the power of the people.

But there’s a problem. Every student who voted signed in. And there are 490 signatures. Pierre Perito Middle School is missing 29 votes.

*

You can imagine the scene. Shouts of fraud, corruption, conspiracy! The Frump team disavows the entire election, stating a totally new election must be held. The Canton team, of course, states that the results are acceptable… we can't expect everything to add up perfectly. And besides, Frump would need 26 of those 29 votes to be for him if he were going to win this election. What are the chances?! If we redo the whole election every time we miss a few votes, well, the whole system will collapse. In the end, the people have spoken. Haven't they?

Before we delve into the statistics behind this, I want to ask you to think for a second about what you would do in this situation. Do you redo the election? How would you decide? What information would help you figure it out?

Well, it turns out that missingness is not so simple an idea. In fact, there are kinds of missingness. Varieties of missingness. And missingness, and the way missingness is handled, can often tip the scales in biomedical research studies.

Missing Completely at Random (MCAR)

Missing completely at random (MCAR) means that the data that is missing is missing for purely random reasons, which is to say, no reason at all. Put more formally, the missingness is unrelated to any other covariate, measured or not. This is the type of missingness that most (but not all) statistical tests assume, the most difficult to verify, and the one, in my opinion, least relevant to biomedical research.

Back at our middle school, an example of MCAR would be if someone took all the votes, shuffled them up, and accidentally dropped 29 on the ground while carrying them to the counting place. Those dropped votes are missing completely at random. No vote was chosen to be dropped; no group was selected to end up on the floor.

As you can imagine, under the MCAR assumption, Frump’s path to victory is an uphill climb.  Let’s do the math:

Looking at the votes we actually counted, 52.3% of the electorate voted for Hilda. Because the dropped votes were dropped at random, chances are about 52.3% of those would have been Hilda votes. Now, in reality, the chance nature of how votes were dropped could skew that a bit, but is it possible that it could have been skewed enough to give Frump the win?  Statistics to the rescue!

Frump needs 26 of the 29 votes, around 90%. The chance of getting 90% for Frump when the underlying base probability for Frump is 47.7% is given to us by the binomial distribution. Long story short, the chance is around 1 in 400,000. Is it possible that the dropped votes swung the election? Sure. But it's not very likely.
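If you want to check that math yourself, the binomial tail takes only a few lines of Python. Frump trails 241 to 220, so he needs at least 26 of the 29 missing ballots just to pull ahead (the function here is my own sketch):

```python
from math import comb

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under MCAR, each missing ballot is a Frump vote with the observed
# base probability: 220 of the 461 counted votes, about 47.7%.
p_flip = tail_prob(29, 26, 220 / 461)
print(f"Chance the missing ballots flip the election: {p_flip:.2e}")
```

Swap in a different p to explore other assumptions about who the missing voters were; that is exactly what the missing-at-random scenario comes down to.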

This is, frequently, how clinical trials treat their patients who are lost to follow-up. They essentially assume that their outcome would be the same as whatever the outcome is in people who weren't lost to follow-up. That their missing data is, essentially, missing completely at random. I suggest to you that that assumption is rarely valid.

This came to a head at an FDA panel recently where the agency was evaluating apixaban (Eliquis), a novel oral anticoagulant. The clinical trial under review had shown not only that the new drug carried with it a significantly lower rate of stroke than warfarin, but also an improved overall mortality (p=0.047). But the margin for that mortality claim was quite thin – in fact, if one extra patient in the apixaban group had died and one extra patient in the warfarin group had survived, the difference would no longer be statistically significant. Now, ordinarily, I'd say who cares? We picked a p-value of 0.05 to be significant, we have to abide by it. But FDA reviewer Thomas Marciniak noted in his comments that data problems "destroy our confidence" that the drug reduces death. Ahh – so now we have to believe that the death rates among those missing patients were exactly the same as among non-missing patients. I agree, that's a stretch. (The FDA, on the other hand, didn't.)

Missing completely at random is a very high bar to reach.  One notch lower is MCAR's little brother, missing at random.

Missing at Random

This is… an unfortunate term. But it’s how this is always described, so you should know it has a technical meaning. Missing at random differs from MCAR in that the missingness is related to a measured covariate.

Back to our election. Let’s say that, because colors help children learn (citation needed), ballots were printed on pink or blue paper, and (because this is Idaho, I guess) boys get blue ballots and girls get pink.  Now assume that some overzealous vice-principal trashes 29 BLUE ballots.

The ballots he chose to trash are random (at least with respect to the votes on them), but their missingness is entirely tied to color. Now what are Frump's chances?

As you can probably guess, it has something to do with how boys vote. If boys vote overwhelmingly for Frump, the chance that these discarded chits will swing the election might be substantial. For example, if 90% of boys voted for Frump, the chance that the missing ballots would turn the election is about two in three. With a risk that high, you might be forced to do the whole election again. Conversely, if sex has nothing to do with who you vote for, well, then we're back to the MCAR calculation and we can let the results stand.

Missing Not at Random

This is the worst type of missingness. It's the kind where the missingness cannot be accounted for by any variable you have on hand. This would be a situation where someone deliberately removed 29 Frump votes from the pile. No statistical test can detect it (unless you find the votes), which makes it very hard to prove.

In biomedical research, this may be the most common cause of missingness. We can almost never explain completely why certain individuals were lost to follow-up (so missing at random is out). We are left, then, debating whether they dropped out due to completely random reasons (lightning strikes, alien abductions, etc), or that they dropped out for reasons that may matter. In the latter, more realistic case, there is no statistical test that can fix the results.

So we are left either closing our eyes and pretending the data is MCAR, or (and I prefer this route) doing two sensitivity analyses where you assume that everyone who you lost track of experienced the outcome or that everyone you lost track of didn’t experience the outcome. If you get the same results either way, we can be pretty confident that the study conclusions are reliable.
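Those two bounding analyses are easy to run. Here's a toy Python example with entirely invented numbers – a two-arm trial with some patients lost to follow-up in each arm:

```python
# Hypothetical trial (all numbers invented): 500 patients per arm,
# 40 vs. 60 observed events, 25 patients lost to follow-up per arm.
n_per_arm, lost = 500, 25
treat_events, control_events = 40, 60

# Best case for treatment: no lost treatment patient had the event,
# every lost control patient did.
best_case_rr = (treat_events / n_per_arm) / ((control_events + lost) / n_per_arm)

# Worst case for treatment: flip the assumption.
worst_case_rr = ((treat_events + lost) / n_per_arm) / (control_events / n_per_arm)

print(f"Best-case risk ratio:  {best_case_rr:.2f}")
print(f"Worst-case risk ratio: {worst_case_rr:.2f}")
```

Here the bounds straddle 1.0, so the apparent benefit isn't robust to how the missing patients are handled; if both bounds had stayed below 1.0, we could relax.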

Unfortunately, I rarely see these analyses performed. When I do, they pretty much always show that the conclusions are robust to those sensitivity analyses. Should we be suspicious of that? I shall not judge.

So the next time you’re reading that article, play this game.  The authors get 1 point if they even mention missing data. They get 5 points if they try to analyze why data might be missing. They get 50 points if they do the aforementioned sensitivity analyses. And if those analyses lead them to conclude that their primary results may not be valid, they get 1 million points, and the official Methods Man "You're a researcher with integrity" prize.

In the meantime, get out and vote.

Pregnancy, Multiple Sclerosis, and Vitamin D: The Latest Hype

sun.jpg

A study appearing in JAMA Neurology links better vitamin D levels in pregnant women to a lower risk of multiple sclerosis in their offspring. There are some really impressive features of this study, but there are some equally impressive logical leaps that seem to defy the force of epidemiologic gravity. Let's give the study some sunlight.

For the video version of this post, click here.

The study was run out of Finland, which is a country that figured it might be a good idea to keep track of the health of its citizens. In fact, since 1983, nearly every pregnant woman in Finland has been registered, and a blood sample sent to a deep freezer in a national biobank. The researchers identified 193 individuals with MS, and went back into that biobank to measure their moms' vitamin D levels during pregnancy. They did the same thing with 326 controls who were matched on their date of birth, mother's age, and region of Finland.

This is from the first line of their discussion:

[image: the first line of the paper's discussion]

Wow. 90%. That sounds scary. And the news outlets seem to think it is scary too.  But that impressive result hides a lot of statistical skullduggery.

Here's the thing, Vitamin D level is what we call a continuous variable. Your level can be 5, 10, 17, 42, whatever – any number within a typical range. When you study a continuous variable, you have to make some decisions. Should you chop up the variable into categories that others have defined (like deficient, insufficient, normal), or should you chop it up into even-sized groups? Or should you not chop it up at all?

As a general rule, you have the most power to see an effect when you don't chop at all. Breaking a continuous variable into groups loses information.
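A quick simulation makes the power loss concrete. This is a sketch with invented numbers, not the paper's data: the same modest true effect is tested once with the exposure left continuous and once after a median split:

```python
import math
import random

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def significant(xs, ys):
    # Fisher z-transform approximation for testing r != 0 at alpha = 0.05
    z = math.atanh(pearson_r(xs, ys)) * math.sqrt(len(xs) - 3)
    return abs(z) > 1.96

random.seed(42)
n, sims, effect = 200, 300, 0.2
hits_continuous = hits_split = 0
for _ in range(sims):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [effect * x + random.gauss(0, 1) for x in xs]
    median = sorted(xs)[n // 2]
    lowhigh = [1.0 if x > median else 0.0 for x in xs]  # the chopped version
    hits_continuous += significant(xs, ys)
    hits_split += significant(lowhigh, ys)

print(f"Power, continuous exposure:   {hits_continuous / sims:.2f}")
print(f"Power, median-split exposure: {hits_split / sims:.2f}")
```

Same data, same true effect – the chopped-up version finds it noticeably less often, and chopping into more groups invites multiple comparisons on top of the power loss.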

When vitamin D level was treated as the continuous variable it is, there was no significant relationship between vitamin D level in mom and MS in the child. When the researchers chopped it into 5 groups, no group showed a significantly higher risk of MS compared to the group with the highest level. Only when they chopped the data into 3 groups did they find that moms who were vitamin D deficient had 1.9 times the risk of those who were merely insufficient. That's the 90% figure, but the confidence interval ran from 20% to 300%.

And did I mention there was no accounting for mothers' BMI, smoking, activity level, genetic factors, sun exposure, or income in any of these models? Despite that, the paper's conclusion states:

[image: the paper's concluding statement]

That statement should go right on the jump to conclusions mat.

jump-to-conclusions-mat

Look, I'm not hating on Vitamin D. I actually think it's good for you. But research that adds more to the hype and less to the knowledge is most definitely not.

 

Inducing labor at 39 weeks - is picking your kid's birthday worth it?

inducing-labor.jpg

Few aspects of modern medicine engender as much controversy as our labor and delivery practices. Rates of early induction of labor vary widely from country to country – even from hospital to hospital. And while some randomized trials have demonstrated that induction of labor prior to 40 weeks gestation might have favorable effects for infants with certain conditions like large-for-gestational age, we really don’t have much data on the effects of induction of labor during a normal pregnancy. But a study appearing in the New England Journal of Medicine attempts to shed light on that issue.

For the video version of this post, click here.

Run out of 39 hospitals in England, this study randomized 619 women, all age 35 or older and in their first pregnancy, to labor induction at 39 weeks or usual care.

Why do this? Well, for one thing, induction of labor prior to the official due date is pretty common. There is, perhaps, a quality-of-life argument to be made about having the ability to more or less choose when to deliver a baby. There is also some observational data that suggests that the sweet spot for delivery is around 38-39 weeks. Prior to that, complications associated with pre-term infants go up, and much beyond that and you start to see other birth complications.

Now, this study was clearly too small to detect differences in rare outcomes like neonatal or maternal mortality, but there has been some concern that induction of labor might increase the rate of c-section.

This study saw no such increase. The rate of c-section was 32% in the induction group and 33% in the usual care group – not statistically different. There were also no differences in rates of assisted vaginal delivery or NICU admissions, and every child in the study survived to hospital discharge. One fact caught my eye, though, and I think it gives us insight into the main limitation of this trial.

There was no significant difference in birth weight between the arms of the study. You’d think that the early induction arm would at least have slightly smaller babies. But in reality, the arms just weren’t that different in terms of any measured variables. Why? Well, there were women in the usual care arm who went into labor at 38 weeks. In fact, only 222 of the 305 women in the induction group got induction of labor prior to 40 weeks of gestation, as the protocol specified.

This bias, which the authors half-jokingly describe as “non-adherence”, is due to the fact that randomization into the study could occur at any time from 36-40 weeks. If you wanted to really answer the question that the authors pose, you’d randomize everyone at 39 weeks, and if they were put in the early induction arm, induce them at or near the time of randomization.

So we need to interpret this study not as saying that early induction is safe, but that a plan for early induction is safe. This is a subtle difference, for sure, but an important one if you are discussing inducing a woman who has already hit the 39 week mark. Still, in my book, a small victory for patient autonomy is a victory nonetheless.

 

Statins to prevent acute kidney injury after cardiac surgery

Can-kidney-stones-lead-to-heart-problems.jpg

Statins, is there anything they can’t do? These agents, granted blockbuster status more than 20 years ago, are potent weapons in our fight against cardiovascular disease. But beyond their cholesterol-lowering effects, we’ve seen studies where statins act as anti-inflammatories, improve immune function and even stave off dementia.  Could the wonder-drugs prevent acute kidney injury, a dreaded complication after cardiac surgery, associated with more than a three-fold increase in mortality?

For the video version of this post, click here.

Full disclosure: I research acute kidney injury, or AKI. So welcome to my world, statins – a world where positive clinical trials are as rare as an epidemiologist at a Mr. Personality contest.

The trial, out of Vanderbilt University hospital, appeared in JAMA and enrolled 617 individuals undergoing cardiac surgery. The treatment group got 80 mg of atorvastatin prior to surgery, and 40 mg a day after that. The placebo group got, well, placebo. As you might imagine, the majority of patients (400) were already taking a statin. In that case, if you were randomized to placebo, you only got placebo for the day of surgery and the day after. After that, you were back on your home statin dose.

And to boil the results down to a word: nothing. The rate of AKI was 21% in the statin arm and 20% in the placebo arm. Looking at secondary outcomes, there were no differences in rates of death, dialysis, delirium, stroke, or stay in the ICU.

This might be expected in the group that was already on statins – after all, skipping two days of the drug might not be enough to make a real difference. But if we look at the statin-naïve group, the rate of AKI was 22% in the statin group and 13% in the placebo group. This was not statistically significant, but the trend here is clearly in the wrong direction. In fact, if we look at absolute creatinine change (where higher levels are worse), those on the statin had a small, but statistically significant, increase in creatinine compared to those on placebo.

But take these numbers with a grain of salt. The data safety and monitoring board forced the study authors to stop recruitment of statin-naïve patients about two-thirds of the way through the study. They did this because they saw a signal of harm from statins in that group. So the fact that we see harm may be biased by the DSMB's choice to stop that part of the study early.

Now, would I have stopped the study if I were on the DSMB?  Probably. The odds that continuing to recruit would show a benefit of statin were really low, and you don’t want to expose patients to potential harm. But despite that, we can’t accept the results of an early-termination arm of a trial with the same gusto that we would had the trial continued to completion.  In short, the jury is still out as to whether statin use is actually harmful in terms of AKI. But I can say for now I’m pretty convinced it ain’t helping anybody.

 

Marijuana: Really a gateway drug?

pot.jpg

It's hard to figure out whether marijuana is a gateway drug, because those wet blanket bioethicists think it would be "wrong" to randomize teenagers to toking or not. But a technique called instrumental variable analysis may hold the key to determining causality in this situation. Take a look at my full blog post, which is hosted here at MedPage Today.

Pregnant women, don't stop eating fish!

pregnant-woman-eating-fish.jpg

Tuna, shark, king mackerel, tilefish, swordfish. If you’ve ever been pregnant, or known someone who has been pregnant, this list of seemingly random aquatic vertebrates is all too familiar to you. It’s the “avoid while pregnant” list of seafoods, and it’s just one of the confusing set of messages surrounding pregnancy and fish consumption.

(For the video version of this post, click here).

Because aren’t we supposed to be eating more fish? Fish are the main dietary source for omega-3 fatty acids, which can cross the placenta, and may promote healthy brain development. Of course, some of these fish contain mercury which, as Jeremy Piven taught us all, may be detrimental to cognitive development.

Thankfully not while pregnant

These contradictory facts led the US FDA, in 2014, to recommend that pregnant women consume more fish, but not more than 3 times a week.  You have to love the government sometimes.

A study appearing in JAMA Pediatrics is making some waves with its claim that high levels of fish consumption, more than 3 times per week during pregnancy, are associated with more rapid neonatal growth as well as higher BMIs throughout a child's young life. Now, contrary to what your mother-in-law has been telling you, more rapid infant growth is not necessarily a good thing, as rapid infant growth is associated with overweight and obesity in childhood and adulthood.

But fish as the culprit here? That strikes me as a bit odd. Indeed, prior studies of antenatal fish consumption have shown beneficial or null effects on childhood weight gain.  What is going on here?

The authors combined data from 15 pregnancy cohort studies across Europe and the US, leading to a final dataset including over 25,000 individuals. This is the study's greatest strength, but also its Achilles' heel, as we'll see in a moment.

But first the basic results. Fish consumption was based on a food frequency questionnaire, a survey instrument that I, and others, have a lot of concerns about. Women who reported eating less than or equal to 3 servings of fish a week had no increased risk of rapid infant growth or overweight kids.  But among those eating more than 3 servings, there was around a 22% increased risk of rapid growth from birth to 2 and overweight at age 6.

These effects were pretty small, and, more importantly, ephemeral. The authors looked not only at the percentage of obese and overweight children, but the raw differences in weight. At 6 years, though the percent of overweight and obese kids was statistically higher, there was no significant weight difference between children of mothers who ate a lot of fish and those who didn’t. When statistics are weird like this, it usually suggests that the effect isn’t very robust.

In fact, this line from the stats section caught my eye, take a look:

[image: excerpt from the paper's statistical methods]

That means the authors used numbers predicted by a statistical model to get the weight of the children rather than the actual weight of the children. I asked the study’s lead author, Dr. Leda Chatzi, about this unusual approach and she wrote “Not all cohorts had available data on child measurement at the specific time points of interest… in an effort to increase sample size and…power in our analyses, we…estimated predicted values of weight and height”.

So we have a statistical model that contains as a covariate, another statistical model. This compounds error into the final estimate, and in a study like this, where the effect size is razor thin, that can easily bias you into the realm of significance.
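A toy simulation (invented numbers, not the cohort data) shows what plugging model predictions in where measurements belong does to an effect estimate: the extra prediction error widens the spread of the estimate, even when the true effect is exactly zero:

```python
import random

def slope(xs, ys):
    # Ordinary least-squares slope of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def sd(vals):
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

random.seed(7)
n, sims = 300, 500
direct, two_stage = [], []
for _ in range(sims):
    fish = [random.gauss(0, 1) for _ in range(n)]   # exposure
    weight = [random.gauss(0, 1) for _ in fish]     # outcome: NO true effect
    # "Predicted" weight: the measured weight plus the first model's error
    predicted = [w + random.gauss(0, 0.7) for w in weight]
    direct.append(slope(fish, weight))
    two_stage.append(slope(fish, predicted))

print(f"SD of effect estimate, measured outcome:  {sd(direct):.3f}")
print(f"SD of effect estimate, predicted outcome: {sd(two_stage):.3f}")
```

More spread means more estimates that look impressive by chance alone – dangerous territory when, as here, the effect size is razor thin.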

Pimp My Ride bias

And, at this point it probably goes without saying, but studies looking at diet are always confounded. Always. While the authors adjusted for some things like maternal age, education, smoking, BMI and birth weight, there was no adjustment for things like socio-economic status, sunlight exposure, diabetes, race, or other dietary intake.

What have we learned? Certainly not, as the authors suggest, that

[image: the authors' suggested conclusion]

no. just no.

That they wrote this in a study with no measurement of said pollutants is what we call a reach.

Look, you probably don’t want to be eating fish with high levels of mercury when you are pregnant. But if my patients were choosing between a nice bit of salmon and a cheeseburger, well, this study doesn’t exactly tip the scales.

 

Spray this up your nose and you'll take more risks in social situations. No - it's not that.

SH_example_hunters-V-1-5001.jpg

For the video version of this post, click here.

You and a stranger are sitting, unable to see each other, in small cubicles, and 200 dollars is at stake. It's called the stag hunt game. You each can choose to hunt stag or rabbit. Hunting a stag requires two people, but gives you that big $200 payoff. Choosing rabbit gets you $90 if you hunt alone or $160 if your partner chooses rabbit too. Get it? The risky choice requires cooperation.
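The game's logic is easy to make concrete. Here's a sketch in Python; the $0 payoff for hunting stag while your partner defects is my assumption (the post doesn't state it, though it's the standard stag hunt setup):

```python
# Payoffs as described above; stag-while-partner-defects = $0 is assumed.
STAG_BOTH, STAG_ALONE = 200, 0
RABBIT_ALONE, RABBIT_BOTH = 90, 160

def expected_payoff(choice, p_partner_stag):
    q = p_partner_stag
    if choice == "stag":
        return q * STAG_BOTH + (1 - q) * STAG_ALONE
    return q * RABBIT_ALONE + (1 - q) * RABBIT_BOTH

# How confident must you be in your partner before stag beats rabbit?
threshold = next(q / 1000 for q in range(1001)
                 if expected_payoff("stag", q / 1000)
                 >= expected_payoff("rabbit", q / 1000))
print(f"Stag pays off once you're about {threshold:.0%} sure your partner cooperates")
```

Under these payoffs, stag is only worth it if you're roughly 59% sure your partner will cooperate, which is why the authors can frame it as risky cooperation rather than plain gambling.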

That a single intranasal shot of vasopressin could affect this decision seems crazy, but nevertheless that's what researchers publishing in the Proceedings of the National Academy of Sciences found.

Now, I'm a kidney doctor. To me, vasopressin is the hormone that makes you concentrate your urine. But neuroscientists have found that vasopressin exerts a diverse set of effects in the brain – stimulating social affiliation, aggression, monogamy, and, among men, paternal behaviors.

The experiment took 59 healthy men and randomized them to receive 40 units of vasopressin or placebo intranasally. They then played the stag hunt game. Those who got the vasopressin were significantly more likely to go all in – choosing the stag option – than those who got placebo.

Vasopressin's effects on the stag hunt game

The elegant part of the experiment was the way the researchers tried to pin down exactly why this was happening.  It wasn't just that vasopressin made you more tolerant of risk. They proved this by having the men choose between a high-risk high-reward and low-risk low-reward lottery game. Vasopressin had no effect.  It wasn't that vasopressin made you feel euphoric, or wakeful, or calm – self-reported measures of those factors didn't change.

Vasopressin didn't make you more trusting. When asked whether they thought their silent partner would choose "stag" over "rabbit", the presence of vasopressin didn't change the answer at all. No, the perception of risk didn't really change, just the willingness to participate in this very specific type of risky behavior which the authors refer to as "risky cooperative behavior".

Risky cooperative behaviors are basically anything you do that requires trusting that other people will do their part for mutual benefit. In short – vasopressin may be the hormone that gave rise to modern society.

How does it work? Well, a simultaneous fMRI study demonstrated decreased activity in the dorsolateral prefrontal cortex among those who got vasopressin. This part of the brain has roles in risk-inhibition, high-level planning, and behavioral inhibition, so vasopressin downregulating this territory makes some sense when you look at the outcome.

But the truth is that a full understanding of the myriad neuro-electro-chemico-hormonal influences that go into choosing whether to hunt stag or rabbit is beyond the scope of this study. Still, for a believer in free will such as myself, studies like this are always a stark reminder that it isn't necessarily clear who is in the driver's seat when we make those risky decisions.

Marijuana use and brain function in middle age

weeee.jpg

For the video version of this post, click here. The public attitude towards marijuana is changing. Though some continue to view the agent as a dangerous gateway to harder drugs like cocaine and heroin, increasing use of the drug for medical purposes, and outright legalization in a few states, will increase the number of recreational pot users. It's high time we had some solid data on the long-term effects of pot smoking, and a piece of the puzzle was published today in JAMA Internal Medicine.

Researchers leveraged an existing study (which was designed to examine risk factors for cardiac disease in young people) to determine if cumulative exposure to marijuana was associated with impaired cognitive function after 25 years. Note that I said "impaired cognitive function" and not "cognitive decline". The study didn't really assess the change, within an individual, over the 25-year period. It looked to see if smokers of the ganj had lower cognition scores than non-smokers.

That minor point aside, some signal was detected. After 25 years of follow-up, individuals with higher cumulative use had lower scores on a verbal memory test, a processing speed test, and a test of executive function.

But wait – those numbers are unadjusted. People with longer exposure time to weed were fairly different from non-users. They were less likely to have a college education, more likely to smoke cigarettes, and, importantly, much more likely to have puffed the magic dragon in the past 30 days.

Once the researchers accounted for these factors and removed from the study anyone with a recent exposure to the reefer, longer cumulative exposure was associated only with lower scores on the verbal memory test. Processing speed and executive function were unaffected.

Now, the authors make the point that there was a dose-dependent effect with "no evidence of non-linearity". That is code for saying there isn't a "threshold effect". According to their model, any amount of pot use would lead to lower verbal memory scores. Take a look at this graph:

Verbal memory scores based on cumulative pot exposure

What you see is a flexible model of verbal memory score versus marijuana-years (by the way, one marijuana-year means smoking one doobie a day for 365 days). The authors' point is that there isn't a kink in this line – the relationship is pretty linear. But look at the confidence intervals. The upper bound doesn't actually cross zero until five years. In short, the absence of an obvious threshold doesn't mean that no threshold exists. It is likely that the study was simply underpowered to detect threshold effects.

The most important limitation, though, was that the authors didn't account for age of use in the cognitive outcomes. With emerging evidence that pot use at younger ages may have worse effects on still-developing brains, this was a critical factor to examine. Five years of pot exposure may be much different in a 25-year-old than in an 18-year-old. These data were available – I'm not sure why the interaction wasn't evaluated.

In the final analysis, I think we can confirm what common sense has told us for a long time. Pot certainly isn't magical. It is a drug.  It's just not that bad a drug. For the time being, the data we have to work with is still half-baked.