For those of my readers who are not in academia, “peer-review process” can sound like a weird and barbaric name. But behind this name hides what is, in my opinion and that of many researchers, a crucial part of the making of new knowledge.

When researchers work on a new idea, they create theories, make hypotheses, perform experiments and write a paper about it. This is true across all fields of science. To become part of the scientific corpus, it is usually admitted that the paper then needs to pass the peer-review process: it will be read by experts in the field who are able to assess its quality. They will make sure that the paper is correct (mathematical demonstrations are valid, experiments make sense…), that the claims are supported, that the state of the literature is good1 and that the contribution is novel. Once the paper has passed the peer-review process, it is accepted for publication. The format of the publication depends a lot on the field. In Computer Science, a very important part of publication happens in conferences, while in other fields conferences play a more limited role. In any case, as soon as the paper is published, it is part of the scientific corpus.

Is this a flawless process? No, of course not, and for various reasons which may vary from field to field. For this reason, from now on, I will focus mostly on the field of Artificial Intelligence.

Single vs Double Blind

In Artificial Intelligence, there are two types of review: single-blind and double-blind. In a single-blind review, the reviewers know the names of the authors of the paper and their affiliations. In a double-blind review, the authors are anonymous. In both cases, the authors do not know the names of the reviewers.

Single-blind reviews tend to favor well-known and established teams and researchers. Indeed, few reviewers would cast doubt on a paper written by the leading team in the domain, even when the quality isn’t there2. In addition, a “wrong” author name3 can easily be discriminated against in single-blind reviews, often subconsciously.

Double-blind reviews are supposed to avoid this kind of problem by making the authors anonymous. However, let’s be honest here. In a lot of fields, researchers work on very narrow parts of the field and there are not that many experts capable of reviewing their work. When you have been in the circuit long enough, you know who is working on what, and it is sometimes feasible to “break” the double-blind and guess which team produced which paper. Still, it has been shown that double-blind reviews reduce discrimination4.

A flawed process

So, double-blind is good. Why is the process flawed then?

First of all, peer review and publication are long… very long… Months pass between the moment the authors send a paper for peer review and the moment they get the reviews5, and Artificial Intelligence progresses fast. Very fast. And you know what, researchers usually keep working on their projects and topics in between! Which means that by the time the paper is actually published, it is most probably already obsolete, which can be frustrating.

Second, as I said, peer review should be the first gatekeeper of Science, making sure that the papers which pass it are correct. So how come papers are regularly retracted or published with so many mistakes? Because reviewers and journals are not flawless, and they are overloaded with work and requests. It is important to know that reviewers work on submitted papers voluntarily: they are not paid for this work. And this task adds up on top of all the others they have to do, such as teaching, supervising students, administrative work and maybe some research if they have time. The more active in the community a researcher is, the more reviews they will be asked to do. Doing a review properly is not just reading the paper and saying yes or no. A good reviewer needs to ensure that the paper is well explained; they check the references provided in the paper, to see if they are thorough and if the authors are aware of all the important work already done in the field; they need to check the proofs and theorems carefully, if there are any, and, optimally, re-do the proofs to ensure they are without mistake; they should be able to check the raw data and the algorithms to reproduce the experiments and verify that the results are correct… A good review takes hours of work, and often the reviewers do not have access to everything they need (such as the raw data or code, which are usually not included when submitting a paper). For all these reasons, it happens more often than we would want that reviewers do their reviews lightly, checking the paper roughly for obvious mistakes but not considering it with the appropriate attention to detail. Often, this is not such a big deal, since this level of scrutiny is enough to distinguish between obviously flawed papers and papers worth being published. Sometimes, however, one detail makes the difference between a good and a bad paper, and the reviewers might miss it.

There is a second aspect of reviewing that I didn’t really mention: the feedback. Writing a paper and getting reviews for it is one of the (rare) ways to get feedback about your work from outside your lab and co-authors6. Reviews should always aim at making the paper, and the work behind it, better. However, very often, reviews are useless from this point of view. For the reasons I explained previously, a lot of reviewers do not take the time to provide meaningful feedback to the authors, which impacts the whole scientific process, as it becomes harder to discuss your work in order to improve it.

Isn’t there any hope?

So, as we saw, the review process is far from perfect. This is a known problem, and researchers are talking about it and trying to find solutions. However, not all is bad, in my opinion. Despite its flaws, the peer-review process still manages to keep very bad papers out of the scientific corpus. It still plays its role of gatekeeper… It just tends to be an old and slow gatekeeper instead of Cerberus, but it still does some work.

Some alternatives have been suggested to replace this old gatekeeper. For instance, some journals choose post-publication review: the initial reviews only check that the paper is sound and not flawed7, and the community reviews and discusses the paper after it is published to see if it is good. Others offer a rating at each revision instead of the classic accept/reject, and the authors can then decide whether they want to work more to improve their rating or whether they are satisfied with the current state.

All these alternatives are still new and have been suggested by leading journals (PLOS One, Nature…). It will take time to see whether they can be a viable alternative to the classic review process.

Featured Image by GollyGForce, CC-BY 2.0

  1. i.e. that the authors know well enough what has been done before on the topic
  2. And it happens! It doesn’t mean that the team or researcher is not good anymore; we all have our bad days… and bad papers :D
  3. A female name, or a name sounding like it comes “from the wrong country”, for instance
  5. Regardless of whether the paper has been accepted for publication or not
  6. Another way being to present said paper at a conference and get feedback from the audience
  7. which is basically making official what happens de facto