Lax FDA Regulation for Device Approvals
For the video version, click here.
This week, we take a walk along the rocky path of medical device regulation with this study, appearing in the Journal of the American Medical Association.
Devices are regulated in a different way than drugs. Once approved, a drug doesn't really get changed. Indications may be expanded, but the drug is still the drug. But devices get modified frequently – think different pacemaker leads. [Edit 8/16/17: Swiss statistician and polymath Stephen Senn correctly pointed out that drugs are frequently changed after initial approval, particularly with regard to formulation and route of administration, with occasionally shocking effects. Follow him on Twitter @stephensenn.]
The FDA does not require a clinical trial demonstrating proof of safety and efficacy for all those changes – far from it. In fact, in the vast majority of cases the FDA requires no clinical data at all for approval.
One exception to that rule is for high-risk devices, which include things like cardiac stents. Modifications of these devices must be approved under the most rigorous standard, known as the "panel-track".
But, as the JAMA paper suggests, the panel-track is not really that rigorous. There were only 78 panel-track approvals between 2006 and 2016, underscoring how rarely manufacturers use this pathway. In contrast, from 1979 through 2012 there were 5,800 non-panel-track supplements for cardiac implantable electronic devices alone.
According to the study, the data supporting the changes rarely measure up to the quality we might expect. Of 83 studies supporting these approvals, only 45% were randomized. Only 30% were blinded.
Almost a quarter didn't specify a primary endpoint. And shockingly, only 87% reported the number of patients enrolled. Only 84% reported the mean age of enrollees. These are pretty basic stats, folks.
Obviously, we could spin this data to make it look like the FDA is asleep at the wheel, but before we grab our pitchforks, let me ask this question: Why were some studies randomized, and some not?
The reason is that there are civil servants at the FDA whose job it is to interface with manufacturers to decide how these studies should be conducted. They are charged with determining the "least burdensome" standard of data. That's the law. In other words, sometimes a blinded, randomized trial is the least burdensome thing you can do to make sure the device is still safe and effective. But not always. We'd need to review each of these 78 approvals separately to determine if we, as a medical community, think the data presented was inadequate.
I'm actually OK with this system, with one caveat: rigorous post-approval research must be conducted to ensure safety, especially as indications are expanded. And here the FDA has not done a great job; it has been lax about enforcing post-marketing surveillance. According to the study authors, only 13% of post-marketing safety studies are completed between 3 and 5 years after FDA approval, and the FDA has never issued a warning letter, penalty, or fine against a manufacturer for noncompliance.
Getting these products to patients quickly may be laudable, but once they are in the wild manufacturers should not be left entirely to their own devices.
After this post was published, I heard from lead author Rita Redberg regarding some questions I had about the manuscript.
I was curious about the "denominator" for these approvals. The study looks at 78 device supplement approvals, but we are not told how many applications were rejected. Her response: "FDA does not make available the number of applications they receive and do not approve... Anecdotally, I have heard it is about 80% approved".
Dr. Redberg also stated that she feels the current standard for approvals is relatively lax, citing a recent cluster of deaths associated with a rapidly-approved gastric balloon. She suggests that high-risk devices should face the same standard as drugs, for which two clinical trials, preferably randomized, with meaningful endpoints are required for approval. Finally, she notes that post-marketing surveillance may not be the best solution to this problem (as I had suggested), since while drugs can be quickly pulled from the market, many devices are not easily removed from patients.