Here’s a headline from today’s Washington Post: “Sexism in science: Peer editor tells female researchers their study needs a male author.” Peer review is the usually anonymous process by which articles submitted to academic journals are assessed for quality and relevance to determine whether they will be published. Over the past several years, numerous scandals have emerged, made possible by the anonymity at the heart of that process.

The justification for anonymity is that it allows reviewers to write more freely than they would if forced to put their names on their reviews. But scientists are increasingly admitting, and the public is increasingly noticing, that the process is… imperfect. As the Guardian newspaper wrote last summer about a leading journal, Nature:

Nature […] has had to retract two papers it published in January after mistakes were spotted in the figures, some of the methods descriptions were found to be plagiarised and early attempts to replicate the work failed. This is the second time in recent weeks that the God-like omniscience that non-scientists often attribute to scientific journals was found to be exaggerated.

In the 1990s I sat on the peer review board of an academic journal, and over the years I have occasionally submitted to and been published by such journals. Peer reviews vary wildly in depth and quality. Some reviewers appear to have only skimmed the submitted paper, while others have clearly read it carefully. Some understand the submissions fully; others don’t. Some double-check numbers and sources; others don’t. It’s plausible that this variability (particularly on the weak end) is a side-effect of reviewers’ anonymity. I have seen terse, badly argued reviews to which I doubt the reviewer would have voluntarily attached his or her name. Personally, I try never to write anything as a peer reviewer that I would not happily sign.

Six years ago, that inspired an idea: to found a journal, called Litmus, consisting of signed peer reviews of already published papers, with authors’ responses where possible. My expectation was that this would lead to a much higher average quality of reviews, and would reveal to readers the extent of disagreement among scholars on the issues discussed, alternative evidence, and so on.

Alas, contributing to such a journal could also be risky for young scholars, were they to rub a prospective employer the wrong way. In the end, I was unable to interest enough top-notch scholars to fill out a sufficiently large editorial board. One professor declined, saying:

This strikes me as an interesting idea, but one that is sufficiently outside of what is normal that you might have quite a difficult time getting a consensus that would lead to participation high enough to sustain the journal. Some people would probably feel that signed reviews were not of the same quality as blind ones. Others would feel that signed reviews required formality so much beyond that of blind reviews (which at their best are candid and informal but accurate) that they would be unwilling to participate for lack of time. I am not saying that it is a bad idea, but I think that you’re in for an uphill battle to get the idea off the ground.

Eventually I abandoned the project. But as the failure of the status quo in journal peer reviewing becomes more evident, perhaps someone will rekindle the idea. Conventional journals would have to be on their toes if they knew there was a chance their articles would be publicly placed under a microscope by other reviewers.