“Studies have suggested that reviewers are likely to be influenced by a paper’s authors and…

“… top ML conferences are now turning to double-blind peer review, wherein a submitted paper’s authors are not revealed to reviewers. But that doesn’t mean double-blind peer review is a perfect solution …

The subsequent anecdote is hardly an indictment of double-blind peer review! It is only an indictment of ICLR 2019’s extension of that model to allow anonymous drive-by comments. (I believe that in some years of ICLR, drive-by commenters were not anonymous, presumably to prevent this kind of astroturfing.)

Normal double-blind peer review has been used in ML and most other CS conferences for ages. It is generally considered a good thing, and has recently been tightened in the NLP community to discourage authors from leaking their identity via arXiv preprints.

The article also does a poor job of summarizing Ghahramani’s 2009 proposal for a “market-based system” in which eligible reviewers decide for themselves which papers are worth their time to review. This should not be characterized as “conferences and journals should limit the number of submitted papers assigned to each reviewer,” which is already done, of course.

Tom Dietterich also posted an unrelated correction. It’s ironic that this piece about poorly vetted articles seems to be such an article itself. Perhaps tech journalism could benefit from some peer review?

Read the original article at https://medium.com/@eclecticos/studies-have-suggested-that-reviewers-are-likely-to-be-influenced-by-a-papers-authors-and-8195f87e3cc5