"The Real Problem is Reproducibility": Doc-to-Doc with Retraction Watch Co-Founder Ivan Oransky

Retractions in the scientific literature occur for a variety of reasons, ranging from benign error to truly malignant fraud. Whatever the cause, retractions are tracked by Retraction Watch, a blog that seeks to provide insight into the scientific method by investigating some of its most visible failures. I had the opportunity to speak with Dr. Ivan Oransky, co-founder of Retraction Watch, in our Yale studio. The video of our interview appears below, followed by a transcript:

Perry Wilson: I'm joined today by Dr. Ivan Oransky, who is co-founder of Retraction Watch, an Editor-at-Large at MedPageToday.com, and Distinguished Writer in Residence at NYU. Dr. Oransky, thanks so much for coming today.

Dr. Oransky: Thanks for having me.

Perry Wilson: So Retraction Watch is a website that follows scientific literature that has been withdrawn or changed due to errors. Give us an example. What's one that sticks in your mind as sort of a classic Retraction Watch piece?

Dr. Oransky: Well, they do run the gamut. So you have stories that are about honest error. Early on… we've been around for about six and a half years now. So early on, there was someone whose lab ordered the wrong mice. This was a basic science lab doing immunology work, and they realized it when they tried to go forward with the next experiments, trying to extend the work and maybe reproduce it, and the results just were very, very strange. And they realized that when you order from, I think it was Jackson Labs or something like that, one of these places where people order transgenic mice…

            When you click on it, it's very easy to pick C6, B7- when there's a plus that you don't really see, and so it was an honest error. They felt pretty terrible about it for many reasons, which you can understand. So that's an example of an honest error. We like to not so much promote honest error, but promote it when people do the right thing. That's actually a category on the site. So there are those sorts of cases. That's about 20%.

            But then you look at some of the really bad frauds, where you have people making up entire datasets. You have people making up entire patients. So there was a pretty famous case, which we didn't cover at Retraction Watch because it happened before we were born. But Adam Marcus, my co-founder, used to be the editor-in-chief of Anesthesiology News, so he knows a lot about anesthesiology and talks to people at those journals all the time.

            There was someone named Scott Reuben who, it turned out, was working on Celebrex. So this is going back a bit now. But it turned out he had made up, out of whole cloth, all of the patients in his studies. So there's a sort of dry, cynical way to think about that as a funny thing: “Oh, no patients were harmed in this experiment,” sort of like the disclaimer at the end of movies with animals, but that's pretty serious.

Perry Wilson: Yeah.

Dr. Oransky: These were not the core Celebrex experiments. I want to be clear about that, but they were pretty important. He was a pretty leading anesthesiology researcher. He ended up going to prison, to federal prison, for healthcare fraud, which is unusual, but happens.

            So if I were to sort of paint the spectrum a little bit, there's a lot in between those two. But between honest error and complete and utter fraud leading to a prison sentence, that's kind of the range we're talking about here.

Perry Wilson: So I wonder why you think your job exists. In other words, why is this happening? What are the incentives pushing people to do this? Honest errors happen, but in the fraudulent cases, why? Is it money? What is it?

Dr. Oransky: There's some evidence now that, in fact, misconduct is on the rise. For a long time, we didn't really have any evidence of that. Then somebody actually went through and looked at 20,000 papers, which was a gargantuan task. Her name is Elisabeth Bik; she was formerly at Stanford. She went through 20,000 papers and found inappropriate image manipulation, image duplication actually, in something like 4% of them, and that rate had gone up. So what you're looking at here is pressure to publish.

Perry Wilson: Yeah.

Dr. Oransky: So we all know that the only way to get ahead in academia is to publish. It's the old publish or perish. So people do what they have to do in order to publish, particularly in the high-impact-factor journals, in the highly ranked journals.

            If you look at those journals, they actually have a higher rate of retraction than others. Now that could be interpreted a lot of different ways, and I still think a lot of it comes down to eyeballs. Let's face it: more people are reading The New England Journal of Medicine than the Wilson Journal of Immunology or Nephrology. I mean, all respect to The Wilson Journal.

Perry Wilson: Which has yet to publish. But any day now.

Dr. Oransky: Well, that's… then you can't have retracted anything either, so you're good.

Perry Wilson: Right. That's right. Zero percent.

Dr. Oransky: Exactly. So, again, there's probably a screening effect there in terms of more people looking at them. But it's become fairly clear that the system we have, in terms of tenure, in terms of promotion, in terms of grants, overly rewards and really almost fetishizes publishing in these big, high-impact-factor journals.

Perry Wilson: Yeah.

Dr. Oransky: You can't help but see how that's contributing to the problem.

Perry Wilson: So I want to close out on… well, I don't know if this note is going to be optimistic or pessimistic, so you tell me. The rate of retractions is going up. What's the state of the medical literature right now? Are retractions just the tip of the iceberg? Are we in trouble? Is most of the literature out there good? How should we look at it en masse?

Dr. Oransky: So I always say that about the only thing retractions can really tell you is something about retractions. What the rate of retractions tells you is what the rate of retractions is. A lot of people try to extrapolate from that, and sometimes they have agendas they use this information for, and they're free to do that. It just may not really be consistent with the data.

            But I think when people take a step back… and we try and do this as much as we can… the real problem in the scientific literature, and in the medical and clinical literature, is not retractions. It's not fraud. Fraud is a big problem, and I'm certainly not condoning it, and anything we can do to foster a conversation where we look at the whole picture is what we're really interested in.

            But you're talking about a 0.02% rate of retractions. Maybe that number should be much higher than that, but should it be 1% of the literature? Even if it's 5% of the literature, which it probably shouldn't be, that's still a minority. The real problem is reproducibility. Sometimes people get confused, which I can understand: they think that failure to reproduce is because of fraud. As far as we can tell, only a very small percentage of it is, and I think that's legitimate. That's true.

            So the real state of the literature is that we're not very interested, or at least I should say scientists and researchers aren't all that interested and invested, in correcting it. Science is wonderful when it works well. It is the best way to learn anything and gain knowledge. I'm a huge proponent of that, of the scientific method. What I'm not necessarily a proponent of is the approach to scientific self-correction that a lot of researchers, journals, and others have taken, and we need to get our heads around that.

            So I think some of the most progressive, most interesting, and most forward-thinking ideas in this system, the ones that make me optimistic, are systems that look at post-publication peer review. So sites like PubMed Commons and PubPeer.com, which are worth checking out if people haven't looked at them yet. People are actually having conversations and critiquing papers right there. It's live. It's in real time. It's in public. A journal club is great, and everybody rips up a paper, but then you finish your coffee and your donuts, and you sort of leave the room, and that's it.

Perry Wilson: Yeah.

Dr. Oransky: Well, this is an online journal club, basically, for all the world's literature. I actually hope that more clinical researchers will pick it up and start posting on PubPeer and PubMed Commons. Most of what's there now is actually basic science, and that's great, and I want that to continue and grow, too. But when I talk about PubPeer with clinical researchers, most have never heard of it. And that's fine; so if I can proselytize a little bit, great. We don't get any funding from them or anything else, but we think they're really important. So that, to us, is one of the ways forward.

Perry Wilson: Dr. Oransky, thanks so much.

Dr. Oransky: Thank you very much, Perry.