Ivermectin: Breakthrough Coronavirus Cure or Bad Science?
And how we can figure out the difference once and for all.
This week, it’s time we talk about Ivermectin.
Since the first days of the COVID-19 pandemic, various existing drugs have been touted as near miracle cures for the disease. Often, the discussion of agents like hydroxychloroquine, lopinavir, and their ilk veered into the conspiratorial, squelching reasonable scientific discussion. Boosters would accuse detractors of hiding the truth of a safe and effective treatment at the behest of big pharma, or the deep state. Detractors would accuse boosters of bad data analysis and wishful thinking.
Enter Ivermectin, and this meta-analysis of randomized trials by Andrew Hill and his colleagues in Open Forum Infectious Diseases that seems to show that the drug has pretty remarkable efficacy against COVID-19.
But before we dig in, let’s put the mechanistic cards on the table.
Ivermectin is an anti-parasitic agent that has been used to treat scabies, river blindness, and filariasis, among other parasitic diseases. You may give it to your dog to prevent heartworm. Discovered in 1975, the drug has been in worldwide use for nearly five decades and appears on the WHO list of essential medicines.
Ivermectin binds to certain chloride channels on nerve and muscle cells, paralyzing the creature exposed to it. These channels are present in worm and insect nervous systems, which is why the drug works.
Humans have these channels too, but only in our brains and spinal cords. Since ivermectin can’t cross the blood-brain barrier, we are spared from its effects.
But, you will note, SARS-CoV-2 has no muscles or nerves. So why the interest in this drug for this virus?
A lot of the enthusiasm comes from this study, by a group that has done nice work showing that the drug may have anti-viral properties by affecting a protein called importin that a lot of viruses hijack for their own nefarious uses.
Researchers infected a cell culture with SARS-CoV-2 and added various concentrations of ivermectin. They then measured viral replication and found that the drug, in a petri dish at least, could dramatically inhibit the ability of the virus to reproduce.
But there’s a problem. The inhibitory concentration of the drug, around 2.5 micromolar, is not achievable in real live humans. In fact, standard Ivermectin dosing achieves blood concentrations of about 25 nanomolar – 100-fold less than what was needed in vitro. Lung concentrations are a bit higher than blood concentrations, but still 50-fold less than what is needed to inhibit the virus in cells in culture.
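The gap described above is simple unit arithmetic; a quick sketch (using the approximate concentrations quoted above) makes it concrete:

```python
# Approximate concentrations as quoted above, converted to molar units.
ic_invitro = 2.5e-6   # ~2.5 micromolar needed to inhibit SARS-CoV-2 in cell culture
plasma_conc = 25e-9   # ~25 nanomolar achieved with standard human dosing

fold_gap = ic_invitro / plasma_conc
print(f"In vitro inhibitory concentration is ~{fold_gap:.0f}x higher than plasma levels")
# Even granting somewhat higher lung-tissue concentrations (~2x blood),
# a roughly 50- to 100-fold gap remains, far beyond what dose escalation
# could safely close.
```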
So, if Ivermectin is going to work in humans with COVID-19, it has to be via some other mechanism, anti-inflammation or something. But as a starting point, biological plausibility here is not high.
But that never stopped us before. Multiple clinical trials at this point have evaluated the drug in COVID-19, and according to this meta-analysis at least, the results are compelling.
The authors combined data from 24 randomized trials of Ivermectin, a total of 3,328 patients, to examine a variety of outcomes ranging from resolution of symptoms to death. I’m going to focus on mortality because it’s a pretty important endpoint, but the results for other outcomes are broadly similar.
Eleven trials, with about 2,000 patients total, had death data available. Combining them gave a death rate of 3% in the ivermectin arm versus 8.7% in the comparator arm, a statistically significant difference.
These forest plots can be a bit tough to read, but basically each horizontal bar represents the effect estimate and confidence interval of a single study, and the diamond is the overall pooled effect. Anything that crosses the vertical line at 1 is not statistically significant. So you have several studies trending toward benefit that, when combined, give you that overall final result. This is how meta-analyses work.
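The pooling step described above can be sketched with the standard fixed-effect, inverse-variance method for risk ratios. The three "trials" below are invented numbers, chosen only to illustrate the phenomenon: each study’s confidence interval crosses 1, but the pooled estimate’s does not.

```python
import math

def meta_analyze(trials):
    """Fixed-effect inverse-variance pooling of log risk ratios.

    Each trial is (deaths_treat, n_treat, deaths_control, n_control).
    Returns (pooled_rr, ci_low, ci_high) on the risk-ratio scale.
    """
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        # Standard error of the log risk ratio
        se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
        w = 1 / se**2            # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical trials: each one alone is "trending" but not significant
trials = [(3, 100, 9, 100), (1, 60, 4, 60), (5, 150, 11, 150)]
rr, lo, hi = meta_analyze(trials)
print(f"Pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Run each invented trial through the same function on its own and its upper confidence bound exceeds 1, while the pooled interval sits entirely below 1. That is exactly how combining individually underpowered trials can yield a "significant" overall effect, for better or for worse.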
But there is a bit of a problem here.
Remember that the data you get out of a meta-analysis is only as good as the data you get in. And there are some things to criticize about this data.
The authors aggregated data from studies that were peer-reviewed, those that were hosted on pre-print servers, and those whose results they obtained through a network of researchers interested in ivermectin – even if the studies hadn’t been published anywhere. I broke down the mortality results by publication status here.
I’m worried about a few things here. First, the inclusion of completely unpublished studies is really problematic, since there is no way for anyone to vet the results. It’s possible that researchers running trials with promising data were more likely to hand it over to the meta-analysts than those whose trials turned up nothing.
I do get why you might want to include pre-prints in your meta-analysis: peer review is slow and the pandemic is happening fast. But peer review really does have a purpose. Some of these studies will probably never get published, and not because dark forces are conspiring against ivermectin, but because they have some real problems.
One study driving the mortality benefit is this one, out of Iran, hosted on a pre-print server.
It’s a six-arm randomized trial of patients hospitalized with COVID-19. But it’s weird. According to Table 1, 29% of them were PCR-negative for SARS-CoV-2. What’s worse, this percentage was much higher in the two control groups (47%) than in the ivermectin groups (20%). My math suggests that such a discrepancy would occur only about 2 out of 10,000 times if randomization was, well, random.
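That "2 in 10,000" figure is just a standard test of independence on the 2x2 table of PCR status by treatment arm. Here is a minimal sketch; the counts are an assumption on my part (roughly 30 patients per arm, giving about 60 control and 120 ivermectin patients), chosen purely so they reproduce the percentages quoted above.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (1 df) for a 2x2 table.

    Table layout:        PCR-negative  PCR-positive
      control arms            a             b
      ivermectin arms         c             d
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Assumed counts: 28/60 (~47%) PCR-negative in the control arms,
# 24/120 (20%) in the ivermectin arms, matching the quoted percentages.
chi2, p = chi2_2x2(28, 32, 24, 96)
print(f"chi-square = {chi2:.1f}, p = {p:.4f}")
```

With these assumed counts, the p-value lands around 0.0002, on the order of 2 in 10,000, which is why an imbalance this large casts doubt on the randomization.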
I’m not saying this is fake or anything, but this is exactly the sort of thing that peer review would pick up on, giving the authors a chance to correct it. And yet here, this trial is given equal weight to all the others.
The other trial that seems to drive these results, also not yet peer-reviewed, is the Elgazzar trial out of Egypt.
Here, ivermectin was compared with hydroxychloroquine in 400 individuals with COVID-19. The results were pretty stark in terms of death: in the moderately ill group, there were no deaths in the ivermectin group and 4 in the hydroxychloroquine group; in the severely ill group, there were 2 deaths in the ivermectin group and 20 in the hydroxychloroquine group.
Now, you’ll note that hydroxychloroquine is not placebo, so there is some chance that what we are seeing is a sign that hydroxychloroquine is harmful, not that ivermectin is helpful. But, inspired by the Iran paper, I went ahead and looked at Table 1 again: multiple statistically significant imbalances across baseline characteristics. Again, this is very unlikely if randomization was working properly.
All I’m doing is some peer review of a study that has not yet been peer reviewed. I’m not saying it’s wrong, but review would give the authors a chance to provide an explanation, or maybe even re-analyze their results. Mistakes get made all the time in research; it helps to have a critical eye.
Others have noted that if you remove the Iran and Elgazzar papers, the protective effect of ivermectin in this meta-analysis disappears.
Which doesn’t mean ivermectin isn’t useful. What it means is we still don’t know. We don’t have ironclad evidence. The meta-analysis authors note that there are multiple large clinical trials going on right now that should seal the deal one way or the other, but what do we do until then?
Well, the easy answer is to say: just cut the Gordian knot and get vaccinated; then you won’t even need ivermectin. But there are plenty of places around the world where vaccines are not available, and ivermectin is.
So, here’s my pitch. All of these trialists should, in the interest of public health, release not just their results but their analytic datasets, in a deidentified format, to a site like datadryad.com, so the community can review them directly. Instead of trying to parse what the authors mean in this or that sentence of the manuscript, just share the data. We’ll know right away whether we should believe it. This is easy, transparent, and perfectly legal. All of us, ivermectin boosters and detractors alike, should speak with one voice on this. We are in a public health emergency: release the data so we can know, for sure, what to believe.
A version of this commentary first appeared on medscape.com.