I recently submitted four papers to a conference with different co-authors. After the peer review process, one was accepted for a talk and the other three for posters. I will not copy all the reports here, since that would be boring, but just the marks we received for the question “is this worth a talk?”, ranging from +3 to -3. You will see a pattern emerge.
Paper 1 (the one that was accepted) had three reviewers: marks 3, 2 and 1
Paper 2 had three reviewers: 2, 0, -2
Paper 3 had two reviewers: 1, -2
Paper 4 had two reviewers: -3, 2 (the first reviewer, having noticed a few typos, mentioned “poor right up” [sic] as one of the reasons not to consider our submission).
Do you see the pattern? No? Look more closely… YES, you’ve got it: peer reviewing is random number generation 😉
(1) With a little post-processing, any correlation with the content of the paper can be removed for papers 2-4.
(2) Paper 1 is special, not because there is no spread, but because the average is not centered at 0. This bias is robust and can be eliminated only by suppressing buzzwords.
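For the skeptics, the joke checks out numerically. Here is a quick sketch (in Python, purely for illustration) computing the mean and spread of the marks listed above:

```python
from statistics import mean, pstdev

# Marks to the question "is this worth a talk?" (+3 to -3), as listed above
marks = {
    "Paper 1": [3, 2, 1],
    "Paper 2": [2, 0, -2],
    "Paper 3": [1, -2],
    "Paper 4": [-3, 2],
}

for paper, scores in marks.items():
    print(f"{paper}: mean = {mean(scores):+.2f}, spread = {pstdev(scores):.2f}")
```

Papers 2-4 all average within half a point of zero while the spread grows as large as 2.5, whereas paper 1 has the smallest spread and a mean of +2: noise centered at zero for three papers, a shifted distribution for the fourth.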