How does the peer-review process work? How do scientists critically review each other's work to ensure that the most robust results and thorough analyses are published and that only the best research proposals are awarded grants? How do scientists' papers and articles change between submission and publication? It's a process that has advantages and shortcomings, and maybe it's time for us as a community to try to improve it. (I'm not the only person who's written about this stuff, and you may be interested in other scientists' perspectives, such as those of Sarah Kendrew, Andrew Pontzen, and Kelle Cruz. This blog on the Guardian has interesting related posts too.)
For us scientists, writing about our research and writing proposals for planned research are critically important aspects of the job. The ubiquity of the "publish or perish" maxim highlights their significance for advancing one's career. Publishing research and evaluating and responding to others' publications are crucial if scientists are to debate and eventually reach a consensus on particular issues. We want to make sure that we are learning something new and converging on important ideas and questions rather than being led astray by poorly vetted results. Therefore, we want to make the peer-review process as effective and efficient as possible.
For readers unfamiliar with the process, it basically goes like this: scientist Dr. A and her colleagues are working on a research project. They obtain a preliminary result (which may be a detection of something, the development of a new model, the refutation of a previously held assumption, etc.), which they test and hone until they have something they deem publishable. Then Dr. A's group writes a paper explaining the research they conducted (so that it could potentially be repeated by an independent group) and lays out their arguments and conclusions while putting them in the context of other scientists' work. If they can put together a sufficiently high-quality paper, they submit it to a journal. An independent "referee" then reviews the work and writes a report. (Science is an international enterprise, so like the World Cup, referees can come from around the world.) The paper goes through a few or many iterations between the authors and referee(s) until it is either rejected or accepted for publication, and these interactions may be facilitated by an editor. At that point, the paper is typeset, the authors and copy editors check that the proof is accurate, and a couple of months later the paper is published online and in print.
(In my fields, people commonly publish their work in the Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, Astronomy & Astrophysics, Physical Review D, and many others, including Nature, where they publish the most controversial and provocative results, which sometimes turn out later to be wrong.)
In general, this system works rather well, but there are inherent problems with it. For example, authors are dependent on the whim of a single referee, and some referees do not spend enough time and effort reviewing papers and writing reports for authors. On the other hand, sometimes authors do not write sufficiently clearly or do not sufficiently double-check all of their work before submitting a paper. Also, great papers can sometimes be delayed for long periods because of nitpicking or unpunctual referees, while other papers may appear as though they were not subjected to much critical scrutiny, though these things are often subjective and depend on one's perspective.
There are other questions worth discussing and considering. For example, how should a scientific editor select an appropriate referee to review a particular paper? When should a referee choose to remain anonymous or not? How should authors, referees, and editors deal with language barriers? What criteria should we use for accepting or rejecting a paper, and in a dispute, when and in what way should an editor intervene?
Some authors post their papers online for the community on arXiv.org (the astronomy page is here) before publication, while others wait until a paper is in press. It's important to get results out to the community, especially for junior scientists early in their careers. The early online posting of papers can yield interesting discussions and helpful feedback that can improve the quality of a paper before it is published. On the other hand, some of these discussions can be premature; some papers evolve significantly, and quantitative and qualitative conclusions can change while a paper is being revised during the referee process. It is easy to jump to conclusions or to waste time on a paper that still needs further revision and analysis, or that may even be fundamentally flawed. Of course, this can be said about some published papers as well.
implications for science journalists
These issues are also important to consider when scientists and journalists communicate and when journalists write about or present scientific achievements or discoveries. Everyone is pressed for time, and journalists are under pressure to write copy within strict deadlines, but it's very important to critically review the relevant science whenever possible. Also, in my opinion, it's a good idea for journalists to talk to a scientist's colleagues and competitors to try to learn about multiple perspectives and to determine which issues might be contentious. We should also keep in mind that achievements and discoveries are rarely accomplished by a single person; they are usually the product of a collaboration and are made possible by the work of other scientists upon which they've built. (Isaac Newton once said, "If I have seen further it is by standing on the shoulders of giants.")
Furthermore, while one might tweet about a new unpublished scientific result, for more investigative journalism it's better, of course, to avoid rushing the analysis. We all like to learn about and comment on the new scientific study that everyone's talking about, but unfortunately people will generally pay the most attention to what they hear first rather than to retractions or corrections that might be issued later on. We're living in a fast-paced society and there is often demand for a quick turnaround for "content", but the scientific enterprise goes on for generations, a much longer time-scale than the meme of the week.
improving the peer-review process
And how can this peer-review system be improved? I've heard a variety of suggestions, some of which are probably worth experimenting with. We could consider having more than one person review papers, with the extra referees playing an advisory role. We could consider paying scientists for fulfilling their refereeing duties. We could make it possible for the scientific community to comment on papers on the arXiv (or VoxCharta or elsewhere), thus making these archives of papers and proceedings more like social media (or rather like a "social medium", but I never hear anyone say that).
Another related issue is that of "open-access journals" as opposed to journals whose paywalls make papers inaccessible to many readers. Public access to scientific research is very important, and there are many advantages to promoting open-access journals and to scientists using them more often. Scientists (including me) should think more seriously about how we can move in that direction.