You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

A couple of interesting pieces dealing with the serious issue of “publication bias” in science have recently come to our attention.


In our Cato Working Paper No. 29, we describe “publication bias” as an underreporting of negative results from scientific experiments—a practice which can bias science in the direction of supporting existing hypotheses, even highly questionable ones.


In “Is the Government Buying Science or Support?” we lay out a framework for detecting whether federal funding of scientific research is perpetuating publication bias through expectations of results supporting existing federal science interests. We suspect it is.


A few years back, we looked into whether evidence of publication bias was present in the climate change results being published in Science and Nature, the most influential science periodicals on earth. Not surprisingly, we found strong evidence that findings that anthropogenic climate change was “worse than expected” were reported disproportionately often compared with findings that climate change impacts would not be as great as predicted. In the absence of bias, each type of finding should be about equally likely. We concluded:

“This [finding] has considerable implications for the popular perception of global warming science, for the nature of “compendia” of climate change research, such as the reports of the United Nations’ Intergovernmental Panel on Climate change, and for the political process that uses those compendia as the basis for policy.”
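
To make the “equally likely” benchmark concrete, an observed split of findings can be checked against a 50/50 expectation with a simple two-sided binomial test. The sketch below uses purely hypothetical counts for illustration; it is not the analysis from our original study.

```python
# Minimal sketch: test whether "worse than expected" findings appear more
# often than an unbiased 50/50 split would predict.
# The counts are hypothetical, for illustration only.
from scipy.stats import binomtest

worse_than_expected = 70   # hypothetical number of "worse than expected" findings
total_findings = 100       # hypothetical total number of findings examined

result = binomtest(worse_than_expected, total_findings, p=0.5)
print(f"Observed share: {worse_than_expected / total_findings:.2f}")
print(f"Two-sided p-value against an unbiased split: {result.pvalue:.4f}")
```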

Rather than being uncommon, publication bias is probably closer to the norm.


Two recent reports highlight its occurrence in two other major scientific research areas—the social cost of carbon and psychotherapy.


The social cost of carbon (SCC) is on its way to becoming perhaps the strongest influence on public policy both nationally and internationally. It represents the present value of all future damages expected from the emission of an additional ton of carbon dioxide into the atmosphere from human activities. It currently factors into virtually all newly proposed regulations from the federal government and will no doubt be a central focus of the international climate negotiations to take place in Paris this December. If the SCC is low (or negative), there is no need to regulate carbon dioxide emissions; the higher the SCC, the greater the sense of urgency.
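
To see why the underlying assumptions dominate this calculation, here is a minimal present-value sketch with hypothetical numbers. It is not any agency’s actual integrated assessment model, but it shows how strongly the chosen discount rate alone moves the result.

```python
# Minimal sketch of the present-value idea behind the SCC: an extra ton of
# CO2 causes a stream of future damages, which is discounted back to today.
# The damage stream and discount rates below are hypothetical.
def present_value(damages_per_year, discount_rate):
    """Discount a stream of annual damages (year 0, 1, 2, ...) to today."""
    return sum(d / (1.0 + discount_rate) ** t for t, d in enumerate(damages_per_year))

hypothetical_damages = [0.5] * 100      # $0.50 of damage per year for 100 years

for rate in (0.025, 0.05):              # the discount-rate choice drives the answer
    print(f"discount rate {rate:.1%}: present value = ${present_value(hypothetical_damages, rate):.2f}")
```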


But nobody knows what the true value of the SCC is—and, in fact, it is probably unknowable. That doesn’t stop a whole lot of people from trying to assign a value to it. The more extreme you think climate change is going to be, and the less confidence you have in mankind’s ability to adapt to changing conditions, the higher your estimate of the SCC will be.


Since you probably only bother to study the SCC if you think climate change is going to be a major problem, SCC researchers are probably somewhat preconditioned to pursue methodologies that lead to high SCC estimates and to eschew those that don’t. This is a situation ripe for publication bias. And a research team led by Dr. Tomáš Havránek of the Institute of Economic Studies at Charles University in Prague set out to see if they could find evidence of it.


Yessiree.


From the abstract of their paper:

We examine potential selective reporting in the literature on the social cost of carbon (SCC) by conducting a meta-analysis of 809 estimates of the SCC reported in 101 studies. Our results indicate that estimates for which the 95% confidence interval includes zero are less likely to be reported than estimates excluding negative values of the SCC, which might [how about the word will—eds] create an upward bias in the literature. The evidence for selective reporting is stronger for studies published in peer-reviewed journals than for unpublished papers.
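
The mechanism they describe is easy to demonstrate. Below is a minimal simulation—hypothetical numbers, and not the meta-analysis method Havránek and colleagues actually use—showing how discarding estimates whose 95% confidence interval includes zero pushes the average reported value upward.

```python
# Illustrative simulation: draw unbiased estimates around a "true" value,
# then keep only those whose 95% confidence interval excludes zero.
# All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0                 # hypothetical true SCC, dollars per ton
std_error = 15.0                  # hypothetical standard error of each estimate

estimates = rng.normal(true_value, std_error, size=10_000)
ci_excludes_zero = (estimates - 1.96 * std_error) > 0   # lower CI bound above zero

print(f"Mean of all estimates:          {estimates.mean():.1f}")
print(f"Mean of the 'publishable' ones: {estimates[ci_excludes_zero].mean():.1f}")
```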

Additionally, and rather importantly, Havránek and colleagues note that selective reporting likely plagues other aspects of the climate change literature as well, further inflating SCC estimates:

Moreover, other studies suggest that some of the parameters used for the calibration of integrated assessment models [used to determine the SCC], such as climate sensitivity or the elasticity of intertemporal substitution in consumption, are likely to be exaggerated themselves because of selective reporting, which might further contribute to the exaggeration of the SCC reported in individual studies—including the results of the [U.S.] Interagency Working Group [responsible for establishing a SCC value used in federal cost/benefit analyses].

You really ought to have a look at the Havránek study in its entirety to see the nature and depth of the problem. It is available here.


Another example of publication bias infiltrating a major field of science—psychotherapy—comes from a study just published by a team led by Ellen Driessen of VU University Amsterdam.


An excellent article in Vox by Julia Belluz laid out the implications of what Driessen and colleagues found:

For years, doctors have had two main strategies for treating depression: antidepressants and psychotherapy. These practices, according to the published research, can be fairly effective.


Or at least that’s what we thought. But recent research now suggests that we’ve actually been overestimating the effectiveness of our best treatments for depression — in part because published studies were giving a biased picture of the medical evidence.


The reason has to do with something called publication bias. Often there are lots of different scientists conducting studies on whether, say, a particular drug or therapy can alleviate the symptoms of depression. Not all of those studies, however, get published. Journal editors tend to be more interested in papers finding that a particular treatment had a big effect instead of studies showing little or no effect.

Driessen’s paper comes to this unsettling conclusion:

The efficacy of psychological interventions for depression has been overestimated in the published literature, just as it has been for pharmacotherapy. Both are efficacious but not to the extent that the published literature would suggest. Funding agencies and journals should archive both original protocols and raw data from treatment trials to allow the detection and correction of outcome reporting bias. Clinicians, guidelines developers, and decision makers should be aware that the published literature overestimates the effects of the predominant treatments for depression.

While few even realize it exists, publication bias has serious real-world implications, from environmental policy to mental health.


References:


Driessen, E., et al., 2015. Does Publication Bias Inflate the Apparent Efficacy of Psychological Treatment for Major Depressive Disorder? A Systematic Review and Meta-Analysis of US National Institutes of Health-Funded Trials. PLOS ONE, DOI: 10.1371/journal.pone.0137864.


Havranek, T., et al., 2015. Selective Reporting and the Social Cost of Carbon. CERGE-EI Working Paper Series No. 533. Center for Economic Research and Graduate Education—Economics Institute, Charles University, Prague. 42 pp.