Imagine a software company that solicits user feedback with, “Please let us know what does and does not work in the current release and what you would like to see in the future. However, keep in mind that we will not be making any updates to our products and the version you have is the final one.” This is the state of post-publication peer review today. We ask scientists to comment on static, final, published versions of papers, with virtually no potential to improve the articles. We ask scientists to waste their time and then take the lack of participation as evidence against post-publication peer review.
For two years now, I have heard the argument that efforts to encourage post-publication commentary have failed and therefore cannot succeed in the future. This is the classic “has not worked so far and therefore never will” mentality (just as people tell me that lack of mobile devices in the lab right now is proof that scientists will not use phones and tablets for research in the future). Proponents of post-publication versus pre-publication review are still viewed as the crazy fringe that is about to derail all that is good about science publishing.
Much has been written on the failure of the current publishing model in science [1–10]. I want to focus here on ways to incentivize post-publication peer review, and specifically on ways to incentivize constructive criticism. By far the best demonstration of the power and potential of post-pub review is the PubPeer website. Not only has PubPeer succeeded where most journals have failed – encouraging comments after publication – but the comments on the site have led to a number of high-profile retractions. PubPeer clearly demonstrates that post-publication review can catch problems that pre-publication peer review is simply incapable of flagging.
A common criticism of PubPeer is that the comments are overwhelmingly negative. But as should be clear from the above, this is not a problem with PubPeer; it is a fault of our publishing structure. Scientists are already sleep-deprived and overwhelmed with work. Why spend time commenting on a paper if the paper won’t improve? Naturally, the comments that appear are those likely to lead to a retraction or a major correction, as these are effectively the only actions that can still be applied to a published manuscript.
The good news is that solutions are already live. The journal F1000Research has broken ground with support for versions of manuscripts: authors can return after publication and edit their papers, with clearly tracked and stamped versions (example here). This is a big deal. After publishing my papers, I have received countless e-mails with important references I missed and with great questions that suggested easy clarifications to strengthen the manuscripts. If only I could easily edit and improve my papers! As more publishers enable versioning, the incentive to provide constructive rather than destructive feedback on PubPeer will increase exponentially.
Luckily, you don’t have to wait a decade or two for other publishers to catch up to F1000 and enable versioning. Just submit your manuscript to a preprint server like bioRxiv or arXiv before you send it to a journal, and then solicit reviews from scientists and encourage them to publish these on PubPeer. Both bioRxiv and arXiv have versioning, and you can continue to improve your paper even after it is published in a traditional journal.
There are many great reasons to deposit to a preprint server, but even if you don’t, or for papers you have already published, you can easily contribute to productive and constructive post-publication commentary. We are constantly answering questions about our publications at seminars, at conferences, and via e-mail. These discussions would be far more helpful if made public. By e-mail, you answer one person; on PubPeer, you answer 1,000. You can quickly build an FAQ for your paper from the questions you commonly get, or you can copy-paste entire e-mail threads (I have done just that on my recent publication). Ideally, in the future, these discussions will happen directly on PubPeer instead of privately by e-mail.
Finally, there are thousands of journal clubs happening each week with deep and careful discussions of papers. This is post-publication peer review! You spend hours preparing to present the paper. You have concerns, questions, and positive feedback. Why not share it openly or anonymously on PubPeer? After all, PubPeer calls itself the “online journal club”. PubPeer engages the authors for you so you can get clarifications and additional information. You can help other scholars interested in this work. You can help the authors to improve the understanding of their work, and if they published on a pre-print server or with a journal that has versions, possibly help the authors to improve the manuscript itself.
There is no reason to wait for publishers to innovate. With a few exceptions, innovation is neither the forte nor the goal of publishers. As scientists, with just a few minutes of our time, we can already contribute to the online annotation and discussion of published research. We can push for constructive post-publication discussion and peer review as both authors and readers. The tools are at our disposal. Let’s use them. Let’s elevate the tone of the commentary, and let’s comment on the vast majority of papers that are good and not headed for Retraction Watch. If we make the effort now, we will naturally validate post-publication peer review and move toward healthier scientific publishing and discourse.
—————————-
[Because I am a fan of making frequently asked questions public, below are the common motifs in defending the status quo of peer review and publishing.]
- Pre-pub peer review improves papers
This seems obvious on the surface. Certainly all of my manuscripts improved thanks to peer review. But by how much? And at what cost (see here and here)? Is the 9-month average delay helping or hurting science?
I am a strong advocate for academic peer review. I don’t know anyone who argues against improving papers and quality of science through review. But why pre-publication? I think the current system does more harm than good to the quality of science. Just consider the fact that the paper does not see the light of day until the reviewers and the editor have been satisfied. This is a ludicrous level of pressure; pressure not to improve but to provide the desired results. Not only does this contribute to outright fraud, but as Arjun Raj points out, this constantly leads to inadvertent bad science.
And any good from pre-pub peer review will still be there with post-publication peer review. In fact, papers will improve more rapidly and will gain more reviews with the post-pub structure with versions. We’ll have higher quality manuscripts, faster, with fewer retractions, and fewer dubious results.
- The current system is stretched and has weaknesses, but is it really broken? Good science still gets published.
It’s broken beyond repair. The average 9-month delay from the moment of submission to publication is inexcusable given our current tools. We are publishing the same way Gregor Mendel did, despite the advent of computers, the internet, social networks, and mobile devices. Good science gets published despite, not because of, the current system. The current publishing structure is pushing people out of science. It is demoralizing, exhausting, and destructive.
A professor at the University of Washington, in response to an invitation to share a story about published research on our PubChase Essays: “Thanks for the invitation to share a story [about my research]. I’ll see if I can come up with one, but to be honest, publishing has become such a war that once a paper is out, I think I try to forget the details of what went on as soon as I can.”
A professor at Brandeis, also in response to the story request: “Not sure what dirt I want to dish about my papers. I could tell how it took 6 tries to publish our recent manuscript or how other of our most frequently cited papers were rejected from various journals.”
Professor Arjun Raj: “For myself, I can say that by the time a paper comes out, I usually never want to see it again. The process just takes so long and is so painful that all the joy has long since been squeezed out of the paper itself.”
- Pre-pub is a filter so we don’t have to read crap. Too much is published already.
It’s a bad filter. It approves bad papers and rejects good ones.
Yes, the volume of publications is overwhelming. Over 100,000 papers are deposited into PubMed each month. But the solution isn’t to reject more; that just leads to delays. Most rejected papers are eventually published anyway, just later. Rejections are often arbitrary. And how much more would we have to reject to make the information flow manageable? With the internet and today’s technology, we have far better filtering tools than we did 300–400 years ago. The solution is to improve tools like PubChase that solve the problem of discovery via personalized recommendations. And, of course, post-pub review can serve as the same filter, only faster and better.
1. Peer review is f***ed up – let’s fix it [↩]
2. Stop deifying “peer review” of journal publications [↩]
3. The Seer of Science Publishing [↩]
4. End the wasteful tyranny of reviewer experiments [↩]
5. Is peer review broken? [↩]
6. I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals [↩]
7. The Cost Of The Rejection-Resubmission Cycle [↩]
8. The gift that keeps on giving [↩]
9. The magical results of reviewer experiments [↩]
10. Dear Academia, I loved you, but I’m leaving you. This relationship is hurting me. [↩]
You’ve put into words one of the main reasons why no one comments: commenting doesn’t do anything unless it causes a retraction, so any criticism tends to make authors worried. I think versioning is a great idea, and now that PubMed has comments, it’s easy to point to the updated version of your article in a repository such as bioRxiv.
Agreed on all points. Part of the problem is who is doing the reviewing. It’s usually a bunch of junior faculty, who are under a lot of stress and don’t have the same perspective as more senior faculty, and so give much harsher reviews. Editors say this as well. The problem is that they are typically the ones doing the reviewing, though, because senior faculty decline or farm it out to a student or postdoc, which can be even worse.
For the time being, we have to work with the current system. Here are some thoughts on how to make that more productive:
http://rajlaboratory.blogspot.com/2014/04/how-to-review-paper.html
That is a terrific set of guidelines for improving the reviews!
I also want to point out that a switch to post-publication peer review will automatically lead to better reviews. The paper is already published, so you are not demanding experiments to make it worthy of publishing. And of course, in post-pub, the review is independent of journal considerations.
Very timely post! I have two comments:
1) “The average 9-month delay from the moment of submission to the publication” reflects only successful submissions. If we were to account for papers that go through several rounds of rejection, including rejection by the same journal that eventually accepts them (the dreaded “rejected with an opportunity to resubmit”), the average delay would be well over a year.
Just to illustrate this delay with a personal example: over 20 months ago, I first submitted my meta-analytic study showing that rising CO2 significantly alters crop quality. If versioning and pre-acceptance publication were permitted, the paper could have informed some conclusions in the new IPCC report. Instead, the IPCC relies on my preliminary 2002 paper and some contradictory studies. Thus, an opportunity to present this issue fully in the new report was missed.
2) For versioning to really take off, mechanisms need to be in place to prevent citation fragmentation. Since much of the impact of a paper, for better or worse, is judged by the number of its citations, it is important for citations to the various preprints and versions to be summed. Otherwise, citation fragmentation might discourage authors from versioning their papers.
[I asked Rebecca Lawrence, Managing Director of F1000Research, about versioning.]
Dear Lenny,
In answer to that comment, Scopus, for example, has agreed that citations to all article versions in F1000Research will be summed, as the commenter suggests. We have used versioned DOIs to make this easier for them to do.
Best wishes
Rebecca
That’s good to know! Thanks for advancing the issue. I checked out how F1000Research does versioning and really like their implementation. A somewhat more challenging issue faces bioRxiv: the preprint and the published version of the same paper have two entirely different DOIs (one from bioRxiv and one from the hosting journal). I am not sure how citation databases will go about linking the two DOIs, but nothing here is infeasible.
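To make the citation-summing concrete, here is a rough sketch of how counts for versioned DOIs could be aggregated to a per-article total. The F1000Research-style convention of a base DOI plus a trailing “.N” version suffix, and the example DOIs below, are illustrative assumptions, not a description of how Scopus actually implements this:

```python
# Sketch: summing citation counts across article versions that share a base DOI.
# Assumes versioned DOIs end in a numeric ".N" suffix (an assumption here);
# the DOIs below are hypothetical examples.
import re
from collections import defaultdict

VERSION_SUFFIX = re.compile(r"\.(\d+)$")

def base_doi(doi: str) -> str:
    """Strip a trailing numeric version suffix, if present."""
    return VERSION_SUFFIX.sub("", doi)

def sum_citations(counts_by_version: dict) -> dict:
    """Sum citation counts over all DOIs that normalize to the same base DOI."""
    totals = defaultdict(int)
    for doi, n in counts_by_version.items():
        totals[base_doi(doi)] += n
    return dict(totals)

# Hypothetical counts for two versions of one article:
counts = {"10.12688/f1000research.1234.1": 7,
          "10.12688/f1000research.1234.2": 3}
print(sum_citations(counts))  # {'10.12688/f1000research.1234': 10}
```

Linking a bioRxiv DOI to a journal DOI could not use suffix stripping like this; it would need an explicit mapping between the two identifiers, which is exactly the open question above.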
There’s a saying among software developers that the only code that doesn’t have to be maintained is code that’s headed for the dustbin. I think that this is a nice metaphor for my ONLY objection to versioning: will papers then become something that have to be indefinitely maintained, like software?
Before I get flamed, I’m an advocate of post-pub peer review, and I think that something like versioning will eventually become standard, it’s just important to define upfront what the parameters are. Should authors be expected to continue to respond as other papers come out, and their models are invalidated (as all models are)? Or are they only expected to fix “mistakes”?
To varying extents, all of this applies to pre-pub peer review as well, and perhaps the “final” version of a paper may still appear faster than it would in a traditional journal. There’s disagreement even now about whether papers should be retracted for being “wrong” (most papers are wrong, at some level).
I think we all have a sense - intuitively - of what peer review (pre or post) is supposed to accomplish … post-pub has the opportunity to get us closer to that goal but I’m not sure we as a community have quite articulated it yet.
Good comment, Casey. I think it should be up to the authors to improve and correct. People leave and switch to new projects, so the end of versioning/maintenance will happen naturally. I am not arguing for placing a responsibility on authors to keep their papers current and to respond to every new publication. The problem is that today, even if I want to improve my paper as an author, even with a trivial modification like adding a citation I missed, there is simply no infrastructure for doing so.
I think good examples of systems allowing for both limited and perpetual versioning already exist. For example, F1000Research versioning appears to be limited to the authors of a paper, so it is unlikely to spawn perpetual maintenance. On the other hand, the wiki-to-publish model advocated by PLoSWiki is conducive to perpetual maintenance because anybody, not only the original authors, can make updates.
[Comments from Mark Johnston, Editor-in-Chief of Genetics]
1. It seems to me that what you propose (post-pub peer review and manuscript revision in response) is more appropriate pre-publication. We’ve always done that (sent preprints to colleagues for their input), and things like arXiv are good because they make it easier to ask for input and potentially expand the community from which input is obtained. BUT, I think extending that to post-pub is undesirable because…
2. …who has time for that? At some point we (scientists) have to put the story out there and get on to the next question. If we linger too long over our last story, it suggests to me that we might not have much of a next question to address.
3. Historically science has achieved what you suggest because published papers stimulate others to explore the subject, which (almost always) modifies the previous findings of others, and so on, ad infinitum. The edifice is built brick-by-brick. I think that’s a better way to advance knowledge than fussing over how a completed story is presented after it’s been presented. Give it your best shot, then get on with your science.
Regarding preprints - this is exactly what I have been telling people - it’s not that radical. We do it all the time already.
On pre- versus post- publication review, I agree that we do not want infinite obsession with a given paper. There are certainly many cases where simple mistakes in strain names, primers, or even figures can be quickly fixed with a great benefit to the scientists following up on the work. However, this is far from the biggest benefit of versions and post-publication peer review. I don’t care about perfect papers. If I did, I would have never published, and certainly not nearly as many papers as I have.
But I care about and will be investing energy into promoting post-pub peer review as much as I can. I envision a simple model - put your paper on bioRxiv, solicit reviews, then contact individual journals, forwarding the reviews (hopefully many of them open reviews) along. The real benefits are:
1. If you are reviewing a bioRxiv preprint, the work is already public, so you are not demanding “reviewer experiments” before the paper sees the light of day. The tone of reviews should shift from destructive to constructive.
2. The 1+ year delay in getting the paper out is gone. The quality of the pre-print is, of course, below that of the ultimate version in the journal, but the work is out.
3. Review happening on a bioRxiv preprint is independent of any journal, so impact/sexiness is not part of the review.
4. Reviews are not discarded when a paper is rejected, with wasteful cycles of researchers reviewing and making the same comments on the same paper at different journals.
5. The importance of the impact factor is greatly undermined, because it is not the journal’s stamp of approval but the actual peer reviews that determine which papers can and cannot be trusted, and which are important.
6. The brick-by-brick progress of science is accelerated if the results of followup work, questions, and corrections become publicly visible in discussions of a given paper.
“PubPeer succeed(ed) where most journals failed…”
While PubPeer has definitely had its success stories, the site is just a little over a year old, if I’m not mistaken. We’ve seen other efforts like PubPeer that have already come and gone. I mention some here: http://www.cell.com/neuron/abstract/S0896-6273%2814%2900288-8
Great post! A very passionate and thorough explanation of what is wrong and how to fix it.
One could perhaps broaden the scope of the issues: clearly, what we need is an ecosystem of services, as your arguments apply equally to code and data. However, for neither code nor data do we even have a publication infrastructure, let alone commenting, versioning, and improvement infrastructure!
What we need is an ecosystem in which we can see who is contributing to science in a constructive way, either by making comments, or by heeding comments and improving their science. What about scientists who receive a lot of comments but never improve their science? What about scientists who never comment? This is like teaching and reviewing: all valuable contributions that don’t factor in how we promote scientists through the ranks.
It’s not only peer review that’s broken beyond repair: it’s our entire way of doing science. Priorities are off, the infrastructure is from the digital stone age, and as a consequence public trust in science is rapidly eroding.
Obviously, your post would have been a lot longer if you had included all of that. Fixing peer review is important, but it is only one tiny piece of a huge puzzle we need to put together.
OK, so imagine we do switch to a world where papers are continually improved in response to careful criticism.
Imagine telling all those “sleep-deprived and overwhelmed” scientists that the paper they spent a few hours reading and understanding two years ago has changed, and they ought to check back to see what’s different. Now imagine telling them that in fact everything they’ve read in the last five years has changed, often trivially but sometimes in crucial ways.
Given that we’re imagining a ‘publish then review’ world, the researcher must read and understand a constantly changing torrent of literature. It would be completely impossible to keep up, even focusing on just the very best papers. That’s why the version of record is so important: you read it, understand it, and it’s still the same paper when you come back to it again. Having every paper constantly change would be both bewildering and utterly demoralizing.
Your concerns are shared by several commenters above. Please see my replies:
http://blog.pubchase.com/we-can-fix-peer-review-now/#comment-6188
http://blog.pubchase.com/we-can-fix-peer-review-now/#comment-6183
My biggest worry is that the current pre-publication peer review system places an enormous pressure to provide the positive results the reviewers and editor have requested. In some fields, it’s so bad that you just cannot trust published literature.
http://anothersb.blogspot.com/2014/04/dear-academia-i-loved-you-but-im.html
I’d rather be overwhelmed with corrections for a given paper than read it, understand it, and not trust it. Besides, the original version is still there. Oh, and with PubChase, you don’t have to “check back”. The comments and corrections come to you.
http://blog.pubchase.com/knowledge-should-come-to-you/
We have the technology to make scientists’ lives easier, and coupled with post-publication peer review, we can improve science publishing, understanding of the publications, and the progress of research.