Jay Greene and Eduwonkette (an anonymous education blogger who Greene thinks is married to Eduwonk, but who I suspect is the original Wonkette’s kindergarten teacher) are having a tiff about the supposed superiority of peer-reviewed papers over think-tank reports. Unfortunately, Eduwonkette trots out the old saw that you can’t trust think-tank reports because most think tanks have “stated ideological” agendas.

This ignore-the-report-because-of-the-messenger thing is getting pretty tiresome. Greene’s colleague Greg Forster has dealt with the phenomenon before, as have Cato’s Andrew Coulson and former AEI president Christopher DeMuth, but it will probably never go away. People will always dismiss the work of those who are upfront about their convictions in favor of those who are supposedly “objective.” But too often that is a sad excuse to ignore the merits of what the intellectually transparent have to say, and, worse, it blinds us to the reality that all people are to some degree self-interested and, hence, biased.

In a stroke of serendipity, Inside Higher Ed reported yesterday on a new study finding that peer-reviewed research is often fraught with citation errors. So much for the assumption that “peer review” is synonymous with “quality.” Making matters worse, Inside Higher Ed notes, these errors are heaped on top of the “well-documented” bias in academic research that emphasizes evidence supporting authors’ points of view, includes citations intended to curry favor with influential colleagues, or plays down contrary evidence:

Like any self-enclosed, loosely policed network, citations are far from perfect. It’s well documented, for example, that researchers tend to cite papers that support their conclusions and downplay or ignore work that calls them into question. Scholars also have ambitions and reputations, so it’s not surprising to hear that they might weave in a few citations to articles written by colleagues they’re trying to impress — or fail to cite work by competitors. Maybe they overlook research written in other languages, or aren’t familiar with relevant work in a related but different field, or spelled an author’s name wrong, or listed the wrong journal.

All of these shortcomings are reviewed and discussed in an article published this year in the management science journal Interfaces along with the critical responses to it.

As it turns out, scholars have already done some work quantifying problem citations, which fall into two categories: “incorrect references” and “quotation errors.” The authors of the paper, J. Scott Armstrong of the University of Pennsylvania’s Wharton School and Malcolm Wright of the Ehrenberg-Bass Institute at the University of South Australia, Adelaide, write of the former type, “This problem has been extensively studied in the health literature … 31 percent of the references in public health journals contained errors, and three percent of these were so severe that the referenced material could not be located.”
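
By my arithmetic, if those rates held for a journal with 1,000 references, roughly 310 would contain an error, and around nine of those erroneous citations would be so badly botched that the cited material couldn’t be tracked down at all.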

In the end, all research must be seriously scrutinized, and that will happen only when we accept that everyone has biases and take every report, paper, or pronouncement with a healthy grain of salt.