Bayes Factors, Miracle Cures, and the FDA

A study appearing in PLOS ONE suggests that a statistic called the Bayes Factor will keep the FDA from approving bad drugs. It won't, but a rigorous defense of its regulatory powers will. For the video version, click here.

Solanezumab is not a breakthrough for Alzheimer's patients.

For the video version of this post, click here. Is there a breakthrough drug that, unlike currently available medications for Alzheimer's disease, targets the disease itself and not just the symptoms? That's what various news outlets are reporting after the publication of a study in the journal Alzheimer's & Dementia evaluating the anti-amyloid antibody solanezumab.

But, in my opinion, this is flat-out wrong. In fact, the incredibly dense article, which reads somewhat like a love letter to the FDA, could be a lesson in how to try to get your failed drug approved. Here's the story:

Eli Lilly ran two phase 3 trials of solanezumab in patients with mild to moderate Alzheimer's disease. This is pretty standard practice, as the FDA requires two independent randomized trials to grant a new drug approval. Expedition 1 failed to meet either of its two primary endpoints: performance on the Alzheimer's Disease Assessment Scale Cognitive Subscale and the Alzheimer's Disease Cooperative Study Activities of Daily Living Inventory.

Of course, the study looked at dozens of outcomes, and, in Expedition 1, there appeared to be a significant improvement on a different cognitive scale. Expedition 2 was still underway, so they changed the primary outcome of Expedition 2 to this new scale. Clever. But, swing and a miss: in Expedition 2, this outcome was not significantly different between the groups.
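
To see why outcome-shopping matters, consider a quick simulation. This is my own sketch with made-up numbers (20 outcomes, 500 patients per arm), not data from the Expedition trials: even when a drug does nothing at all, testing enough outcomes will usually turn up a "significant" one.

```python
# Sketch: how often does a trial of a drug with zero true effect "win"
# on at least one of many outcomes? All numbers here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_outcomes, n_per_arm = 10_000, 20, 500

false_wins = 0
for _ in range(n_sims):
    # The drug truly does nothing: both arms come from the same distribution.
    drug = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_arm))
    placebo = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_arm))
    pvals = stats.ttest_ind(drug, placebo, axis=1).pvalue
    if (pvals < 0.05).any():
        false_wins += 1

print(f"Null trials with at least one p < 0.05: {false_wins / n_sims:.0%}")
# With 20 independent outcomes, about 1 - 0.95**20, i.e. roughly 64%,
# of do-nothing trials produce at least one "significant" result.
```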

But by combining the results of Expedition 1 and 2 and (this is starting to seem desperate, frankly) limiting the analysis to just those patients with mild Alzheimer's at baseline, they were finally able to demonstrate a statistically significant difference. This is not surprising: the new endpoint was chosen precisely because it looked good in Expedition 1, so any pooled analysis that recycles the Expedition 1 data is biased toward significance.
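
The circularity is easy to demonstrate in code. In this sketch (again mine, with invented numbers), the drug has zero true effect; we simply pick whichever of ten outcomes looked best in trial 1 and then "confirm" it in a pooled analysis of trials 1 and 2:

```python
# Sketch of the circularity: pick the best-looking endpoint in trial 1,
# then pool trials 1 and 2 and test that same endpoint. All numbers are
# invented; the drug has zero true effect in every simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_outcomes, n = 10_000, 10, 500

pooled_wins = 0
for _ in range(n_sims):
    # Two null trials, each measuring the same 10 outcomes.
    t1_drug, t1_plc = rng.normal(size=(2, n_outcomes, n))
    t2_drug, t2_plc = rng.normal(size=(2, n_outcomes, n))

    # Choose the endpoint with the smallest p-value in trial 1.
    best = np.argmin(stats.ttest_ind(t1_drug, t1_plc, axis=1).pvalue)

    # "Confirm" it by pooling both trials on that chosen endpoint.
    pooled_drug = np.concatenate([t1_drug[best], t2_drug[best]])
    pooled_plc = np.concatenate([t1_plc[best], t2_plc[best]])
    if stats.ttest_ind(pooled_drug, pooled_plc).pvalue < 0.05:
        pooled_wins += 1

print(f"Pooled 'confirmations' of a null drug: {pooled_wins / n_sims:.0%}")
```

The pooled test rejects well above the nominal 5%, because trial 1's favorable noise is counted twice: once to choose the endpoint and once to test it.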

Now hopelessly out of the realm of quality trials, the company (and this paper is almost entirely authored by Lilly employees) performed what's called a "delayed-start" analysis. After Expedition 1 and 2 ended, participants randomized to placebo could stay on and switch to solanezumab. So, the argument goes, you have an early-start group and a new "delayed-start" group. If the delayed-start group catches up to the performance of the early-start group, the drug is merely masking symptoms. If, instead, it fails to catch up, the drug is fundamentally affecting the disease process itself.
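
To make the logic concrete, here is a toy model, my own illustration rather than anything from the paper, of how the two kinds of drugs would behave in a delayed-start design:

```python
# Toy model of the delayed-start logic. A purely symptomatic drug adds a
# fixed boost on top of an unchanged decline, so the delayed-start group
# catches up as soon as it gets the drug. A disease-modifying drug slows
# the decline itself, so ground lost during the placebo period is never
# recovered. All slopes and effect sizes below are invented.
import numpy as np

months = np.arange(0, 37, 6)   # 18 months blinded, then an 18-month extension
decline = -0.5                 # cognitive points lost per month, untreated
slow_factor = 0.7              # disease-modifying drug: decline 30% slower
boost = 3.0                    # symptomatic drug: flat score bump while on drug

def trajectory(start_month, symptomatic):
    """Cognitive score over time for a group starting the drug at start_month."""
    score = np.zeros(len(months))
    for i in range(1, len(months)):
        on_drug = months[i - 1] >= start_month
        rate = decline if symptomatic else decline * (slow_factor if on_drug else 1.0)
        score[i] = score[i - 1] + rate * (months[i] - months[i - 1])
    if symptomatic:
        score = score + np.where(months >= start_month, boost, 0.0)
    return score

for label, symptomatic in [("symptomatic drug", True), ("disease-modifying drug", False)]:
    early = trajectory(0, symptomatic)
    delayed = trajectory(18, symptomatic)
    print(f"{label}: final gap (early minus delayed) = {early[-1] - delayed[-1]:.1f} points")
# symptomatic drug: gap = 0.0 -- the delayed group catches up completely
# disease-modifying drug: gap persists -- time on placebo cost them for good
```

Whether a real trial can distinguish these two patterns depends entirely on how much separation you demand, which is why the definition of "catch up" matters so much.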

The delayed-start group didn't catch up, at least according to the incredibly broad definition of "catch up" used in this study. The authors' conclusion? Our drug targets the disease itself. Cue press release and breathless excitement.

Listen, I really wish this were true. But the likelihood is that this drug just doesn't work, at least not in a way that will matter to patients. You can tweak the statistics, you can market it however you want, but the data is the data. The major lesson we learn from this paper is how modern clinical trial design allows for many ways out when your primary hypothesis isn't supported. We've got to stay skeptical. And if staying skeptical doesn't work, try some other outcome; maybe the FDA will approve that.