Marijuana use and brain function in middle age

weeee.jpg

For the video version of this post, click here. The public attitude toward marijuana is changing. Though some continue to view the agent as a dangerous gateway to harder drugs like cocaine and heroin, increasing use of the drug for medical purposes, and outright legalization in a few states, will increase the number of recreational pot users. It's high time we had some solid data on the long-term effects of pot smoking, and a piece of the puzzle was published today in JAMA Internal Medicine.

Researchers leveraged an existing study (one designed to examine risk factors for cardiac disease in young people) to determine whether cumulative exposure to marijuana was associated with impaired cognitive function after 25 years. Note that I said "impaired cognitive function" and not "cognitive decline". The study didn't assess change within individuals over the 25-year period; it looked at whether smokers of the ganj had lower cognition scores than non-smokers.

That minor point aside, some signal was detected. After 25 years of follow-up, individuals with higher cumulative use had lower scores on a verbal memory test, a processing speed test, and a test of executive function.

But wait – those numbers are unadjusted. People with longer exposure time to weed were fairly different from non-users. They were less likely to have a college education, more likely to smoke cigarettes, and, importantly, much more likely to have puffed the magic dragon in the past 30 days.

After accounting for these factors and excluding anyone with a recent exposure to the reefer, longer cumulative exposure was associated only with differences on the verbal learning test. Processing speed and executive function were unaffected.

Now, the authors make the point that there was a dose-dependent effect with "no evidence of non-linearity". That is code for "no threshold effect": according to their model, any amount of pot would lead to lower verbal scores. Take a look at this graph:

Verbal memory scores based on cumulative pot exposure

What you see is a flexible model looking at marijuana-years (by the way, one year means smoking one doobie a day for 365 days). The authors' point is that there isn't a kink in this line – the relationship is pretty linear. But look at the confidence intervals. The upper bound doesn't actually cross zero until five years. In short, the absence of an obvious threshold doesn't mean that no threshold exists. It is likely that the study was simply underpowered to detect threshold effects.
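To see why "no evidence of non-linearity" isn't the same as "no threshold", here's a toy simulation in Python. Every number in it is made up for illustration – nothing comes from the study. Suppose verbal memory is truly flat below a 5-marijuana-year threshold and declines only after that; a plain straight-line fit still produces a smooth, "dose-dependent" negative slope:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: no effect below a threshold of 5 marijuana-years,
# then a loss of 0.5 points per additional year of exposure.
n = 120
exposure = rng.uniform(0, 15, n)                    # marijuana-years
true_effect = -0.5 * np.clip(exposure - 5, 0, None)
scores = 50 + true_effect + rng.normal(0, 3, n)     # noisy memory scores

# A linear fit still yields a negative slope across the whole range,
# even though the true curve is flat for the first 5 years.
slope, intercept = np.polyfit(exposure, scores, 1)
print(slope < 0)  # True
```

With noisy scores and relatively few heavy users, telling this hockey-stick shape apart from a straight line takes considerable statistical power – which is exactly the underpowered-to-detect-a-threshold point.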

The most important limitation, though, was that the authors didn't account for age of use on the cognitive outcomes. With emerging evidence that pot use at younger ages may have worse effects on still-developing brains, this was a critical factor to look at. Five years of pot exposure may be much different in a 25-year-old than in an 18-year-old. This data was available – I'm not sure why the interaction wasn't evaluated.

In the final analysis, I think we can confirm what common sense has told us for a long time. Pot certainly isn't magical. It is a drug.  It's just not that bad a drug. For the time being, the data we have to work with is still half-baked.

Solanezumab is not a breakthrough for Alzheimer's patients.

For the video version of this post, click here. Is there a breakthrough drug that, unlike currently available medications for Alzheimer’s disease, targets the disease itself and not just the symptoms?  That’s what various news outlets are reporting after the publication of a study in the journal Alzheimer’s & Dementia evaluating the anti-amyloid antibody solanezumab.

But, in my opinion, this is flat-out wrong.  In fact, the incredibly dense article, which reads somewhat like a love letter to the FDA, could be a lesson in how to try to get your failed drug approved.  Here’s the story:

Eli Lilly ran two phase 3 trials of solanezumab in patients with mild to moderate Alzheimer’s disease.  This is pretty standard practice, as the FDA requires two independent randomized trials to grant a new drug approval.  Expedition 1 failed to meet either of its two primary endpoints – performance on the Alzheimer’s Disease Assessment Scale Cognitive Subscale or the Alzheimer’s Disease Cooperative Study Activities of Daily Living Inventory.

Of course, the study looked at tens of outcomes, and, in Expedition 1, it appeared that there was a significant improvement in a different cognitive scale.  Expedition 2 was still going on, so they changed the primary outcome of Expedition 2 to be this new scale.  Clever.  But, swing and a miss – in Expedition 2 this outcome was not significantly different between the groups.

But – by combining the results of Expeditions 1 and 2 and (this is starting to seem desperate, frankly) limiting the analysis to just those with mild Alzheimer’s at baseline – they were finally able to demonstrate a statistically significant difference.  This is not surprising: the new endpoint was chosen precisely because it looked favorable in Expedition 1, so half of the pooled data was all but guaranteed to point the right way.

Now hopelessly out of the realm of quality trials, the company (and this paper is almost entirely authored by Lilly employees) performed what’s called a “delayed-start” analysis.  After Expeditions 1 and 2, participants randomized to placebo could stay on and switch to solanezumab.  So, the argument goes, you have an early-start group and a new “delayed-start” group.  If the “delayed-start” group catches up to the performance of the early-start group, the drug is merely masking symptoms.  If, instead, they fail to catch up, then the drug is fundamentally affecting the disease process itself.
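The delayed-start logic is easier to see with numbers. Here's a minimal sketch – entirely hypothetical trajectories, not trial data – comparing what a purely symptomatic drug and a disease-modifying drug would look like in this design, with early starters on drug for all 4 years and delayed starters for the last 2:

```python
# Hypothetical model: cognition starts at 80 and falls 2 points/year untreated.
# A symptomatic drug masks symptoms with a fixed +3 boost while on drug;
# a disease-modifying drug instead halves the rate of decline while on drug.

def symptomatic_final(start_year, horizon=4):
    underlying = 80.0 - 2.0 * horizon   # the decline itself is unchanged
    return underlying + 3.0             # both groups are on drug at the end

def modifying_final(start_year, horizon=4):
    years_on = horizon - start_year
    return 80.0 - 2.0 * start_year - 1.0 * years_on

# Gap between early start (year 0) and delayed start (year 2) at year 4:
gap_symptomatic = symptomatic_final(0) - symptomatic_final(2)
gap_modifying = modifying_final(0) - modifying_final(2)
print(gap_symptomatic, gap_modifying)  # 0.0 2.0
```

Under the symptomatic model the delayed group catches up completely (gap of zero); under the disease-modifying model a 2-point gap persists. The catch is that real data are noisy, so everything hinges on how tightly "catch up" is defined – a generous margin makes "failed to catch up" easy to declare.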

The delayed start group didn’t catch up, at least according to the incredibly broad definition of “catch up” used in this study.  The authors’ conclusion? Our drug targets the disease itself.  Cue press release and breathless excitement.

Listen, I really wish this were true.  But the likelihood is that this drug just doesn’t work - not in a way that will matter to patients at least.  You can tweak the statistics, you can market it however you want, but the data is the data. The major lesson we learn from this paper is how modern clinical trial design allows for many ways out when your primary hypothesis isn’t supported. We’ve got to stay skeptical. And if staying skeptical doesn’t work – try some other outcome – maybe the FDA will approve that.