The Methods Man


Are Pre-Print Medical Research Articles Over-hyped?

“Spin” is higher on pre-print servers, at least among studies that never end up published.

Cards on the table: I’ve got mixed feelings about pre-print servers like medRxiv. On the surface, the ability to publish research findings prior to peer review, especially when the results may have a major impact on public health, is laudable. But I always wonder whether the quality of the research suffers for that lack of oversight.

On the other hand, peer review is painfully slow.

True story. I submitted the results of a randomized trial to a journal, which shall remain nameless, on July 8th, 2022. It was sent for peer review, and I was asked to respond to reviewer comments on November 30th, 2022 – 145 days later. I responded to the reviewers and resubmitted on January 9th, 2023. The resubmission was sent to reviewers, and I was asked for further revisions on March 16th, 2023 – 66 days later. I responded to the reviewers and resubmitted again on March 23rd, 2023. That was 19 days ago at the time of this writing.

I’ve heard no news since then. From initial submission it has been 283 days at the same journal. That is more than nine months. I could have had a baby by now.

So, yes, the appeal of pre-print servers is huge.

But peer review is not just about mindlessly treading water waiting for reviewer comments, although sometimes it feels like that. In fact, the comments the reviewers provided on my manuscript substantially strengthened it – they suggested new analyses that complement the primary findings and, equally importantly, they forced me to describe my findings more impartially. In other words, they forced me to reduce my own “spin”. Without that check, are pre-print server manuscripts overhyped?

In the COVID era, preprint server usage – particularly medRxiv, which tends to publish clinical research studies – exploded. The public health need was clear: we needed data fast, not a year after initial submission. But what did we get for the bargain? Was the research of reasonable quality? Could it have been overspun?

We’re talking about a nice research letter from David Schriger and colleagues at UCLA, appearing in JAMA. The goal of the analysis was to assess the level of “spin” in COVID-related randomized trials first posted on medRxiv compared with the final published versions of those articles.

Source: JAMA

They reviewed 236 pre-prints from January 2020 to December 2021, and it won’t surprise you to hear that, by November 2022, 54 had not yet been published. Of those that were published, the median time from preprint submission to peer-reviewed publication was 134 days… but it’s OK, I’m not jealous.

The authors basically identify three categories of abstracts: pre-prints that never got published in peer-reviewed journals, pre-prints that did eventually get published, and the final peer-reviewed versions of those published articles.

The authors first scored the categories on abstract completeness: did they provide the results for their primary outcome, for instance? Across the board, preprints that never got published were, in general, less complete than those that did. For example, just 30% of the never-published preprints provided their primary outcome results, compared with 53.4% of preprint abstracts that eventually got published and 57.8% of the final published articles. So – ok – the pre-prints that will get published are pretty similar to the published versions. But seriously? More than 40% of peer-reviewed, published articles aren’t reporting their primary outcomes in the abstract? This feels more like an indictment of peer review than of pre-print servers.

But the real point here is to talk about spin. The authors devised their own system to score spin – including things like highlighting positive secondary outcomes when the primary outcome was negative, and extending claims beyond the target population of the study. Spin was much higher in the pre-print articles that never got published. Among those that did get published, spin was a little lower after peer review – but not dramatically so.

When individual pre-print abstracts were compared with their peer-reviewed counterparts, the peer-reviewed abstracts were, in general, judged more complete in their scientific communication and less spun overall. But see those big beige areas here? These are pairs of pre-print and published abstracts that were no different with regard to spin – the majority, really. Again – to me this reads as more damning of peer review than of pre-print servers.

Source: JAMA

Some caveats here. Randomized trials are, in my opinion, probably less subject to spin than observational research, given that with the latter, authors have a lot more flexibility to pick and choose important outcomes and analyses. And COVID-19 research is not necessarily a proxy for all clinical research. Also – we know that the preprints that never got published were more highly spun to begin with. That may seem reassuring, but remember that when a manuscript appears on a pre-print server, we have no way of knowing whether it will eventually be published. We are flying blind.

It's for that reason that the authors caution that “adoption of covid-19 treatment protocols based on erroneous preprints suggests potential problems associated with less complete, more highly spun abstracts”.

Indeed. But to me, the difference between the good pre-prints and the bad pre-prints doesn’t seem to lie in peer review. If anything, maybe it lies with those poor associate editors at the medical journals – you know, the ones who reject a paper without even sending it out to peer reviewers, often within a day or two. Could such a model apply to pre-print servers? A simple, quick, perhaps unilateral “no” to separate the wheat from the chaff? Or would that defeat the whole purpose?

Maybe I’ll write a manuscript about it. You can find it in a well-respected, peer-reviewed journal sometime in the next 12 to 18 months.

A version of this commentary first appeared on Medscape.com.