Migraine: A New Cardiovascular Risk Factor?


I'm going to get personal here. I had my first migraine – ever – about three weeks ago. For those of you who have been longtime sufferers, I am truly sorry. I was literally testing my own neck stiffness to make sure I didn't have meningitis. But aside from the blistering pain that knocks you out of commission for several hours (or several days), a new study appearing in the BMJ suggests there is something else migraine sufferers need to worry about – cardiovascular disease.

For the video version of this post, click here.

Researchers used data from the Nurses' Health Study II, a large, questionnaire-based prospective cohort study that began back in 1989 and enrolled over 100,000 nurses. The idea here was that the nurses (all female, by the way) would be more reliable when answering health-related questionnaires than the general public.

In 1989, 1993, and 1995 the questionnaire asked if the women had been diagnosed, by a physician, with migraine. That’s it. No information on treatment, severity, or the presence of aura – a factor that has been associated with cardiovascular disease in the past.

This response was linked to subsequent major cardiovascular events including heart attack, stroke, and coronary interventions.

The researchers found a higher rate of this outcome among those who had been diagnosed with migraine. In fact, even after adjusting for risk factors like age, cholesterol, diabetes, hypertension, smoking, and more, the risk was still elevated by about 50%. So those of us with migraines – is it time to freak out?

Not too much.  The overall rate of major cardiovascular events in this cohort was just over 1% - not exactly common. That means the absolute risk increase is 0.5%, which doesn’t sound quite as dramatic as the 50% relative risk increase.  Putting that another way, for every 200 patients in this cohort with migraine, there was one extra case of cardiovascular disease.  Not exactly a risk factor to write home about.
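For the arithmetically inclined, here's the back-of-the-envelope version – a sketch using the rounded figures above, not the paper's exact estimates:

```python
# Converting a relative risk increase into absolute terms, using the
# rounded figures quoted above (not the paper's exact estimates).
baseline_risk = 0.01       # ~1% rate of major cardiovascular events
relative_risk = 1.5        # ~50% relative increase with migraine

migraine_risk = baseline_risk * relative_risk         # 1.5%
absolute_increase = migraine_risk - baseline_risk     # 0.5%
number_needed_to_harm = 1 / absolute_increase         # 200

print(f"Absolute risk increase: {absolute_increase:.1%}")
print(f"One extra event per {number_needed_to_harm:.0f} migraine patients")
```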

But, to be fair, cardiovascular disease gets more common as we age – had the study had even longer follow-up, we might have seen a higher event rate.

Other studies have reported similar findings. The Women's Health Study, for instance, found a nearly two-fold increased risk of cardiovascular events, but only in those who had migraine with aura – a covariate missing from the current dataset.

Should women with migraine take precautions against cardiovascular disease? The jury is out. Since we don’t know the mechanism of the link, if any, we don’t know the best way to treat it.  But clearly any studies of migraine therapy would do well to keep an eye on cardiovascular endpoints.

Now or Never? When to Start Dialysis for Acute Kidney Injury


An embarrassment of riches this week as we got not one, but two randomized clinical trials evaluating the timing of dialysis initiation in acute kidney injury. Of course, the results don't agree at all. Back to the drawing board, folks. For the video version of this post, click here.

OK here’s the issue – we nephrologists see hospitalized people with severe acute kidney injury – the abrupt loss of kidney function – all the time.  There is always this question – should we start dialysis before things get too bad, in order to get ahead of the metabolic disturbances, or should we hold off, watch carefully, and jump when the time is right? Several people – and, full disclosure, I’m one of them – have examined this question and the answers have been confusing.  The question begs for a randomized trial.

And, as I mentioned, we have two.  One, appearing in the Journal of the American Medical Association, says yes, earlier dialysis is better with mortality rates of 40% in the early arm versus 55% in the late arm. The other, appearing in the New England Journal of Medicine says there is no difference – mortality rates were 50% no matter what you did.


Figure 1: JAMA Trial - GO TEAM EARLY!


Figure 2: NEJM Trial - D'Oh!

Sometimes, rival movie studios put out very similar movies at the same time to undercut each other's bottom line. So which of these trials is Deep Impact, and which is Armageddon? Which is EdTV and which is The Truman Show? Which is Jobs and which is Steve Jobs?


Figure 3: It's like looking in a mirror...


In this table I highlight some of the main differences:

[Table: key differences between the JAMA and NEJM trials]

The NEJM trial was bigger, and multi-center, so that certainly gives it an edge, but what draws my eye is the difference in definitions of early and late.

The NEJM study only enrolled people with stage 3 AKI – the most severe form of kidney injury. People in the early group got dialysis right away, and the late group got dialysis only if their laboratory parameters crossed certain critical thresholds. The JAMA paper enrolled people with stage 2 AKI. In that study, early meant dialysis right after enrollment, and late meant dialysis started when you hit stage 3.

OK so definitions matter. The NEJM trial defined early the way the JAMA trial defined late. So putting this together, we might be tempted to say that dialysis at stage 2 AKI is good, but once you get to stage 3, the horse is out of the barn – doesn’t matter when you start at that point.

That facile interpretation is undercut by one main issue: the rate of dialysis in the late group.

[Figure: rates of dialysis in the late groups of each trial]

See, one of the major reasons you want to hold off on dialysis is to see if people will recover on their own. In the JAMA study, enrolling people at stage 2 AKI, only 9% of the late group made it out without dialysis – and those who did get dialysis started it, on average, a mere 20 hours after randomization. In the NEJM study, using the more severe inclusion criterion, roughly 50% of the late group required no dialysis. To my mind, if 91% of the late group got dialysis, you're not helping anybody – the whole point of not starting is so that you never have to start, not that you can delay the inevitable.

Regardless of your interpretation, these studies remind us not to put too much stock in any one study. They should also remind us that replication – honest to goodness, same protocol replication – is an incredibly valuable thing that should be celebrated and, dare I say, funded.

For now, should you dialyze that person with AKI? Take heart – there’s good evidence to support that you should keep doing whatever you’ve been doing.

Does Hope Hurt? Predicting Death at the End of Life


"There's always hope". This is a statement I have used many times when discussing the care of a patient with a terminal illness, but I have to admit it always felt a bit pablum. I think it ends up being short hand for "none of us are ready to accept reality so here we are". A few years ago I stopped saying that when I believe a patient is terminally ill.  Instead I state that the patient has reached the end of his or her life, and its time to plan for that.

For the video version of this post, click here.

Because hope can harm dying patients. Hope leads to unnecessary medical interventions, invasive treatments, and delayed palliative care. Up until now, we haven't had great data on how physicians' and caregivers' perceptions of a patient's prognosis line up, and why they differ. Appearing in the Journal of the American Medical Association, a well-designed study finally sheds some light on this issue.

Researchers at UCSF enrolled 227 surrogates of patients who were mechanically ventilated for at least 5 days in the ICU. Overall, 43% of these patients would die during their hospitalization. On that fifth day, the surrogate and the physician were asked, independently, what they thought the patient's chances of surviving the hospitalization were. Estimates that differed by 20% or more were classified as "discordant".
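Here's a minimal sketch of that classification rule – the function and its inputs are my illustration, not the study's code:

```python
# A minimal sketch of the discordance rule: survival estimates that
# differ by 20 percentage points or more are "discordant". The exact
# boundary convention (>= vs >) is my assumption.
def is_discordant(surrogate_estimate: float, physician_estimate: float,
                  margin: float = 0.20) -> bool:
    return abs(surrogate_estimate - physician_estimate) >= margin

print(is_discordant(0.90, 0.40))  # True – surrogate far more optimistic
print(is_discordant(0.55, 0.45))  # False – within the 20% margin
```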

And 53% of the estimates were discordant. In the vast majority, 80%, of discordant cases, the surrogate caregiver was more optimistic than the physician.

What sets the study apart for me is that it didn't end with this fact. Rather, using structured interviews, the researchers identified factors that led to this overly optimistic view. They fell into several broad categories. Most commonly cited was the sense that holding out hope – or thinking positively – would directly benefit the patient. One participant, for instance, stated, "I almost feel like if I circle 50%, then it may come true. If I circle 50%... I'm not putting all my positive energy towards my dad".

The other explanations for discordance included a feeling that the patient had secret strengths unknown to the physician. And finally, religious beliefs – the idea that, ultimately, God would intervene on behalf of the patient – were also frequently cited.

As I mentioned, some surrogates were more pessimistic than the providers, and typically cited self-preservation for that outlook.  As one individual put it "Maybe I'm just trying to protect myself… I'm trying not to get too excited or… optimistic about anything".

Physicians' prognoses were statistically better than surrogates' at predicting the eventual outcome, but pride in this fact would be misplaced. "Doctor" comes from the Latin word for teacher, and we need to do a better job educating patients' families about their loved one's prognosis. Those conversations are hard, and offering some hope is what every empathetic human would do, but maybe it's time that, in some cases, we offer hope for a noble and peaceful death as opposed to a miraculous return to life.

Amyotrophic Lateral Sclerosis and Environmental Toxins: A New Link?


An article appearing in JAMA Neurology links exposure to certain environmental toxins, like pesticides, to amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease). While I could spend these 150 seconds talking about whether or not we should run home and clean all the Roundup out of our garage, I'd like to take this chance to talk about 3 methodologic issues a study like this brings to the fore. For the video version of this post, click here.

But first, the details:

Researchers from the University of Michigan performed a case-control study of 156 individuals with ALS and 128 controls. They administered a survey, asking about all sorts of environmental exposure factors, and, importantly, they drew some blood to directly measure 122 environmental pollutants. The bottom line was that there did seem to be an association between some pollutants (like pentachlorobenzene – a now-banned pesticide) and ALS.

So – on to the three issues.

Number 1 – multiple comparisons. As I mentioned, the authors looked at over 100 pollutants in the blood of the participants. Given no effect of the pollutants, chance alone would leave you with several apparently statistically significant relationships.  In fact, a robust demonstration of the multiple comparisons problem is that lead exposure, in this study, was quite protective against ALS. This is not biologically plausible, but reflects that multiple comparisons can cut both ways – it can make measured factors seem to be positively, or negatively associated with the disease. Indeed, several pollutants seemed to protect against ALS.
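To see how easily that happens, here's a purely illustrative simulation: 122 "pollutants" with no true effect at all, tested in groups sized like this study's.

```python
# Purely illustrative: simulate 122 pollutants with NO true effect,
# measured in 156 cases and 128 controls, and count how many look
# "significant" at p < 0.05 by chance alone (expected: 122 * 0.05 ≈ 6).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_pollutants = 122
cases = rng.normal(size=(n_pollutants, 156))     # simulated ALS cases
controls = rng.normal(size=(n_pollutants, 128))  # same distribution: null

p_values = np.array([stats.ttest_ind(c, k).pvalue
                     for c, k in zip(cases, controls)])
print((p_values < 0.05).sum())                 # typically around 6
print((p_values < 0.05 / n_pollutants).sum())  # Bonferroni-corrected: ~0
```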

The authors say they account for multiple comparisons, but I’m not sure this is true. In their statistics section, they write that they used a Bonferroni correction to lower the threshold p-value (from the standard 0.05 to 0.0018 to account for all the comparisons). But they never actually do this.  Rather, they report the odds ratios associated with the various pesticides and just don’t report the p-values at all, except in multivariable models where the Bonferroni correction isn’t used.

Number 2 – the perils of self-reported data. The survey exposure data – questions like "do you store pesticides in your garage?" – and the measured blood data were hardly correlated at all. This should be read as a warning to anyone who wants to take self-reported exposure data seriously (I'm looking at you, diet studies). When in doubt, find something you can actually measure.

And Number 3 – the lack of variance explained. Studies like this one that look at risk factors for an outcome are building models to predict that outcome. The variables in the model are things like age, race, family history, and the level of pentachlorobenzene in the blood. It's a simple matter of statistics to tell us how well that model fits – how much of the incidence of ALS can be explained by the model. We almost never get this number, and I suspect it's because you can have a highly significant model that only explains, say, 1% of the variance in disease occurrence. It doesn't make for impressive headlines.
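Here's a toy demonstration of the point, with simulated data and made-up variable names – a risk factor can be wildly "significant" in a model that explains almost nothing:

```python
# Toy demonstration with simulated data: a highly "significant" risk
# factor can sit in a model that explains almost none of the variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
exposure = rng.normal(size=n)        # hypothetical pollutant level
log_odds = -2.0 + 0.15 * exposure    # weak true effect on disease risk
disease = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

model = sm.Logit(disease, sm.add_constant(exposure)).fit(disp=False)
print(model.pvalues[1])   # tiny p-value: "significant!"
print(model.prsquared)    # McFadden's pseudo-R^2: only a percent or so
```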

So while we haven’t learned which, if any, organic pollutant causes ALS, hopefully we’ve learned something about the perils of risk factor research.

"Price Transparency" Doesn't Curb Spending In Medicine


I love price transparency. When I book an airline seat, I will base my entire decision around the fact that one flight is $3 cheaper. Leg room be damned. But does price transparency work in the healthcare industry? A study appearing in the Journal of the American Medical Association may be telling us something important: Healthcare isn't like other industries.

For the video version of this post, click here.

What you will pay for a given office visit or procedure is a nebulous thing at the best of times. While websites have sprung up offering comparison shopping for things like mammograms, colonoscopies, and hernia repairs, it's often hard to know exactly how that advertised price will interact with your own insurance plan, deductible, and various co-pays. In other words, price shopping in medicine is really hard.

The JAMA study looked at two very large corporations that partnered with Truven Health Analytics (recently purchased by IBM's Watson group) to give their employees access to a robust cost-comparison tool. The cool part is that the tool included information about your own health plan, including how much of your deductible you'd spent so far, to give really accurate estimates of out-of-pocket and total costs for various procedures and visits.

The researchers compared spending habits among employees in the year prior to the tool being available with the year after, using matched controls to account for secular trends.

And it didn't work.  At least, if the hope was to get people to spend less on healthcare.  In fact, those who had access to the tool spent a bit more than those who didn't (roughly 60 dollars a year more – not much, but hardly the "billions saved" that the Truven website promises). Moreover, the transparency tool users were more likely to use pricey hospital-based outpatient departments instead of freestanding clinics.

The more interesting question is – why?

Well, for one thing, not many people bothered to use the tool – about 10% of employees tried it out in that first year. Additionally, the tool reported both out-of-pocket and total costs. It's conceivable that, when presented with the same out-of-pocket cost, a reasonable human might choose the service with a higher total cost – after all that's the better deal, right? The researchers point out that most of the searches on the web tool were for procedures that would exceed the deductible, making price-shopping more or less moot.

Finally, let's not forget that healthcare is not really a commodity. Patients like their doctors, their health system. There is real value in getting care all in one place.

So healthcare is not where the airline industry is, which I'm sure is a relief to hospital CEOs nationwide. For price transparency to really matter, we would need a radical change to our insurance policies. But that is something most patients, and most politicians, wouldn't buy.

Arsenic in the Baby Food - Time to Panic?


Giving a baby their first bite of real food – it's an indelible memory. That breathless moment as you wait to see whether it will be swallowed or unceremoniously rejected, the look of astonishment on their little face. For many of us, that first bite was rice cereal – gentle on the stomach, easy to mix with breast milk or formula, safe, trusted, traditional. Well, it turns out we've been poisoning our children all along. At least, that's what a paper appearing in JAMA Pediatrics would have you believe.

For the video version of this post, click here.

The relevant background here is that arsenic, in sufficient quantities, kills you. And rice, in part because it is often grown in flooded paddies, concentrates arsenic. And between rice cereal, rice-based formula, and those little puffy rice treats, infants eat a fair amount of rice.

In this study, researchers from Dartmouth examined 759 infants enrolled in the New Hampshire Birth Cohort study. Rice consumption was pretty common – when surveyed at 12 months of age, the majority of babies had consumed some rice product within the past 2 days.

In a subgroup of 129 infants, the researchers examined total urinary arsenic levels and correlated them with food diaries taken at several points over their first year of life. Sure enough, the kids who had eaten more rice products had higher levels of urinary arsenic. Kids who had no rice consumption had an average urinary arsenic concentration of around 3 parts per billion, compared to around 6 parts per billion among those who had been eating white or brown rice. Breaking it down further, the highest arsenic levels were seen in kids eating baby rice cereal – around 9 parts per billion.

But… does it matter? The CDC lists arsenic as a known carcinogen, but it is often hard to find precise toxic dose numbers. Here's what I've dug up. It looks like the lethal dose is around 2 mg/kg. To get that dose, a 5 kilogram infant would need to ingest, in a short period of time, roughly 50 kilograms of strawberry flavored puffed-grain snacks. That was the food with the highest arsenic levels in this study.
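Here's that back-of-the-envelope math. The ~2 mg/kg lethal dose is the figure quoted above; the snack's arsenic concentration (~200 ppb) is inferred from the post's own numbers, so treat it as an assumption:

```python
# Back-of-the-envelope check of the lethal-dose arithmetic above.
# The ~2 mg/kg lethal dose is quoted in the post; the snack arsenic
# concentration (~200 ppb) is inferred from the post's numbers.
lethal_dose_mg_per_kg_body = 2.0
infant_mass_kg = 5.0
snack_arsenic_ppb = 200.0  # ppb = micrograms of arsenic per kg of food

lethal_dose_mg = lethal_dose_mg_per_kg_body * infant_mass_kg  # 10 mg
arsenic_mg_per_kg_food = snack_arsenic_ppb / 1000.0           # 0.2 mg/kg
food_needed_kg = lethal_dose_mg / arsenic_mg_per_kg_food
print(f"{food_needed_kg:.0f} kg of puffed-grain snacks")      # ~50 kg
```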

But chronic, sub-lethal exposure to arsenic may also be harmful. As I mentioned above, arsenic is a known carcinogen. There is also some mixed data that suggests that high arsenic exposure can lead to lower intelligence scores in children, though the levels measured in those studies are about ten times what we see here.

The bottom line is, we don’t know if this is a big problem. My impression is that arsenic contamination of drinking water is more problematic than the arsenic content of foods.  So yeah, avoiding rice-containing products may get the arsenic levels in infants from very low to very very very low, but what shall we give them instead? Arsenic is just one potential toxin in one group of foods. In this modern world, you may have to pick your poison.

New Drugs Hold Real Promise for Metastatic Melanoma


I'm going to show you a survival curve for metastatic melanoma:

[Figure: survival rate in metastatic melanoma]

That data was analyzed in 2001, but sadly, even current 5-year survival for metastatic melanoma sits around 15%. But some new drugs might change this.

For the video version of this post, click here.

Here's a chart examining melanoma-associated mortality rates over time:

[Figure: death rates in advanced melanoma]

Compare that to breast cancer, which has seen some dramatic therapeutic advances over the past few decades:

[Figure: breast cancer mortality rate declining]

But melanoma is riding a wave of novel immunotherapies that hold promise to change the treatment landscape substantially.

Appearing in the Journal of the American Medical Association is a type of study we don't see too much of these days.  It's not really a clinical trial. It's not really a meta-analysis.  Frankly, I'm not sure what to call it – an aggregate analysis perhaps?

The study examines 655 patients treated with the PD-1 inhibitor pembrolizumab from 2011 to 2013. Yup, that's the same pembrolizumab which was used so successfully to treat this charming former president:

[Photo: "Malaise my ass"]

A brief aside here. Pembrolizumab is a monoclonal antibody directed towards programmed cell death protein 1, PD-1. PD-1 acts to prevent immune cells from attacking your own cells – it's an immune "checkpoint" making pembrolizumab one in a class of "checkpoint inhibitors".  Basically, by blocking PD-1, pembrolizumab allows your immune system to attack your own cells. Not something you want under ordinary circumstances, but perhaps beneficial when your own cells have turned against you.

Merck has bet big on pembrolizumab, with clinical trials ongoing or planned in melanoma, non-small-cell lung cancer, small-cell lung cancer, ovarian cancer, glioma, colorectal cancer, and on and on. What happens when a company is doing so many trials like this is a kind of fractionation, where you lose the aggregate knowledge of patient experiences because they are spread out across so many trials.

So I was gratified to see this aggregate analysis which examined patients with advanced melanoma receiving pembrolizumab across four different trials. See, if you do four trials, and one is nice and positive, and the others are equivocal, and you are a for-profit drug company, maybe you're more likely to try to get that positive trial into some high-profile journal, and let the others either languish in peer-review hell or get published in an out-of-the-way rag.

What we get in JAMA, though, is a study with adequate power to demonstrate that pembrolizumab might make a difference.

Among all the patients treated with pembrolizumab (and yes, there is no control group reported here), the objective response rate was 33%. The median overall survival was 23 months, and 31 months among those for whom pembrolizumab was the first systemic cancer therapy.  Compared to the historical median survival of under a year, this represents a substantial improvement.

Interestingly, among those who responded to the drug initially, the duration of response was fairly long. In fact, at 2 years, around 70% of people who initially responded to the drug were still responding.  This is a good thing, as it demonstrates that development of resistance to therapy might be limited.

Now, before we bestow too many accolades on Merck for giving us this aggregate data, we might ask whether they would have been as forthcoming if the trials weren't quite as successful. But, placing cynicism aside for the moment, it seems that this drug, or one of its competitors, will have a place at the table in the treatment of advanced melanoma.


Peanuts, Peanut Avoidance, and the Development of Allergy


I love a nice clinical trial that answers an important question, and one of my favorites from the recent past was the "Learning Early About Peanut Allergy" or LEAP trial, published in February of 2015 in the New England Journal. I probably don't need to reiterate the results of this truly landmark study, but basically, it upended about two decades' worth of advice to parents to avoid exposing their infants to foods containing potential allergens, such as peanuts.

For the video version of this post, click here.

The trial, which enrolled infants at high risk of peanut allergy, found that the rate of peanut allergy at 5 years was 18.8% among those randomized to peanut avoidance, but only 3.6% among those randomized to peanut consumption. That's a number needed to treat of around 7, making eating peanut products in the first five years of life about 7 times more efficacious than taking aspirin for an ST-elevation MI. OK, apples and oranges – or peanuts – but still.
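Here's the number-needed-to-treat arithmetic, using the trial's reported rates:

```python
# Number needed to treat, from the LEAP trial's reported allergy rates.
risk_avoidance = 0.188    # peanut allergy at 5 years, avoidance arm
risk_consumption = 0.036  # peanut allergy at 5 years, consumption arm

absolute_risk_reduction = risk_avoidance - risk_consumption   # 0.152
nnt = 1 / absolute_risk_reduction
print(f"NNT = {nnt:.1f}")  # ~6.6 – round up to 7
```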

But lingering questions remained.  Would these kids be protected in the long-term? Did the study just kick the peanut allergy ball down the field?

To answer the question, the LEAP researchers conducted the LEAP-ON study, in which individuals in the initial study were instructed to avoid all peanut products for 12 months. Without exposure to peanuts, would allergy come roaring back? Would these kids be doomed to eat peanuts three times a week for the rest of their lives?

Well, around 90% of the original trial participants signed on to the no-peanuts-for-12-months pledge. Overall, adherence was OK. As you might expect, those who had originally been randomized to avoid peanuts had an easier time staying off the sauce – 80% of them reported complete peanut avoidance. Only 40% of those who had been randomized to eat peanuts originally were able to stay away for the year. No shame there, peanuts are delicious.

Bottom line, after 12 months of avoidance there were 6 new cases of peanut allergy – three from each group. In other words, you didn't see a "rebound" in peanut allergy among those kids initially randomized to eating peanuts. By the end of this study, 18.6% of those who had initially avoided peanuts and 5% of those who had eaten peanuts from a young age had confirmed allergy.

The point here is that the protection from allergy conferred from early exposure to peanuts persisted even through a year of not eating peanuts. This is a very good thing for the rare kid out there who doesn’t like peanuts – it seems like the protection you gained in infancy will stick around.

Now, I should mention that there was no control group here. I’m curious what might have happened to kids instructed to keep right on eating lots of peanuts. We also don’t know if avoidance for more than a year might let allergy recrudesce.

But taking this study with the results of the original trial, it’s not exactly a leap to say that early exposure to peanuts might dramatically curb the rising tide of peanut allergy in the developed world.

Huge Chinese Study Suggests 20% of Heart Disease due to Low Fruit Consumption


A pomelo a day keeps the doctor away? Appearing in the New England Journal this week is a juicy study suggesting that consuming fresh fruit once daily can substantially lower your risk of cardiovascular disease. In fact, the study suggests that 16% of cardiovascular deaths can be attributed to low fruit consumption. For those of you keeping score, that's pretty similar to the 17% of cardiovascular deaths that could be prevented if older people stopped smoking.

For the video version of this post, click here.

What we're dealing with here is a prospective, observational cohort of over 500,000 Chinese adults without a history of cardiovascular disease.  At baseline, they were asked how often they consumed a variety of foods, and gave a qualitative answer. Most of the analyses compare people eating fruit "daily" to those who ate fruit "rarely or never".

Those fruit-eaters were substantially different from the non-fruit-eaters, but not, perhaps, in the way you might expect. For example, waist circumference and BMI were higher in the fruit-eaters, and fruit-eaters were much more likely to live in urban rather than rural areas. Fruit-eaters also ate more meat, all suggesting that, in China at least, eating more fruit might be a marker of better nutrition overall. Reporting the cardiovascular effects of more frequent consumption of other foods would reveal whether this is the case, but that data was not shown.

More in line with our Western expectations, fruit-eaters had a substantially higher income, more education, and were less likely to smoke or drink alcohol.

After more than 3 million person-years of follow-up, there were 5,173 cardiovascular deaths. If you followed a group of 1000 fruit-eaters for a year, you'd expect less than 1 cardiovascular death. Following a similar-sized group of never-fruit eaters, you'd expect 3.7 deaths.

These observations withstood adjustment for socioeconomic factors, smoking, physical activity, BMI and consumption of other types of food, though unmeasured confounding always plays a role in dietary studies.

Why does it work? We don't know.  Though the frequent fruit-eaters had lower blood pressure and lower blood sugar, these factors did not explain the protective effects of the fruit.

Indeed, maybe it's not something in fruit that is beneficial at all, but something that isn't in it. Like sodium. Fresh fruit isn't salty, and salt intake was not captured in this study. Missing data like that makes it hard to trust that the observed relationship is truly causal.

Still, there isn't much harm in advising patients to eat fresh fruit more regularly, which is, I suppose, what makes studies like these so appealing.

Naltrexone to Prevent Opioid Relapse: A New Weapon in the Fight


There are a few experiences nearly every physician remembers. Delivering that first baby, running that first code, a stranger showing you a rash at a dinner party. Some things are universal. Likewise the first time you injected naloxone into someone suffering from an opioid overdose. For the non-medical folks, naloxone blocks the receptor in the brain that gives opiates their punch. Injecting someone with the stuff essentially puts them into immediate, full-blown withdrawal. It’s lifesaving, but rough. Think that scene from Trainspotting.

For the video version of this post, click here.

Naltrexone is naloxone’s longer-acting cousin. The oral formulation has a half-life of around four hours, and there have been several studies examining the use of this drug to treat opiate addiction. But overall the results have been disappointing. Skipping a dose is simply too easy.

But we may have a new weapon in the fight against opioid abuse in the form of extended-release naltrexone, a depot injection that lasts roughly a month. A randomized trial appearing in the New England Journal of Medicine examined the efficacy of this agent in a group of previously incarcerated individuals with opioid dependence. Needless to say, this group is at high risk of relapse.

In this open-label trial, 153 individuals received monthly injections of extended-release naltrexone for six months, and 155 received what you might call usual care (basically counseling and referral to community support programs). The primary outcome was time to relapse – defined, liberally in my opinion, as ten days of self-reported opioid use or two consecutive positive bimonthly urine specimens.

And relapse rates were high – 43% in the extended-release naltrexone group and 64% in the usual care group. Still, that difference translates into a number needed to treat of 5 – meaning this drug is actually pretty efficacious.

Though time to relapse was the primary outcome, I was more interested in some of the ancillary measures. For instance, there was no difference in the re-incarceration rate in the two groups: both were around 25%. Naltrexone has also been advocated as a drug that might curb binge-drinking, but there was no difference in self-reported heavy drinking which occurred in about 15% of both groups.

After six months, naltrexone injections were stopped, but the participants were followed out for a year after that. Without the injections, the relapse rate in the treatment group quickly caught up with the usual care group. This drug may be a long-term proposition.

Knowing the mechanism of action of the drug, it’s not surprising that gastrointestinal side effects were much higher in the treatment group. Fortunately, there were no overdose events in the treatment group, even after treatment ended. This is a critical finding, as there is a theoretical concern that treatment with naltrexone might condition individuals to take higher doses of opioids. When naltrexone is stopped, those higher doses can be fatal. That this effect wasn’t seen should encourage those of us who might want to use this agent in clinical practice.

I should point out that extended-release naltrexone was not compared to what many consider the more standard approach to therapy – namely buprenorphine or methadone maintenance, though that trial is proceeding apace.

In the meantime, this is what we have. And with the epidemic of opioid addiction growing worse by the day, I for one am glad to have one more weapon in the fight.