While doing some historical studies in preparation for an article in Cato’s Regulation magazine, we were reminded that we had once discovered the information equivalent of antimatter, namely, “anti-information”.


This breakthrough came when we were reviewing the first “National Assessment” of climate change impacts in the United States in the 21st century, published by the U.S. Global Change Research Program (USGCRP) in 2000. The Assessments are mandated by the Global Change Research Act of 1990. According to that law, they are, among other things, for “the Environmental Protection Agency for use in the formulation of a coordinated national policy on global climate change…”


One cannot project future climate without some type of model for what it will be. In this case, the USGCRP examined a suite of nine climate models and selected two for the Assessment. One was the Canadian Climate Model, which forecast the most extreme 21st-century warming of all the models, and the other was from the Hadley Center at the U.K. Met Office, which predicted the greatest changes in precipitation.


We thought this odd and were told by the USGCRP that they wanted to examine the plausible limits of climate change. Fair enough, we said, but we also noted that there was no test of whether the models could simulate even the most rudimentary climate behavior of the past (20th) century.


So, we tested them on ten-year running means of annual temperature over the lower 48 states.
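For readers who want the mechanics, a ten-year running mean is simply the average of each consecutive ten-year window of annual values. Here is a minimal sketch in Python, using made-up annual temperatures rather than the actual lower-48 record:

```python
import numpy as np

def running_mean(annual_values, window=10):
    """Running mean over consecutive windows of annual values."""
    x = np.asarray(annual_values, dtype=float)
    # Each output value is the average of `window` consecutive years
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Hypothetical annual mean temperatures (degrees C), not real data
annual = [11.0, 11.2, 10.9, 11.4, 11.1, 11.3, 11.5, 11.2, 11.6, 11.4, 11.7, 11.5]
print(running_mean(annual))  # three overlapping ten-year averages
```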


One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data, and it can continue to be tested and entertained.
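In code, that check looks something like the sketch below, which assumes two hypothetical arrays of ten-year running means, one observed and one predicted by a model. It illustrates the general method, not our original analysis:

```python
import numpy as np

def residual_variance_test(observed, predicted):
    """Compare the variance of the residuals (observed minus predicted)
    to the variance of the observations themselves."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)

    residuals = observed - predicted
    var_resid = np.var(residuals)   # variability left unexplained by the model
    var_data = np.var(observed)     # variability of the raw data

    return var_resid, var_data

# Hypothetical ten-year running means (degrees C), not the actual record
obs = [11.20, 11.25, 11.30, 11.28, 11.35, 11.40, 11.38, 11.45]
mod = [10.60, 11.90, 10.70, 12.00, 10.80, 12.10, 10.90, 12.20]

var_resid, var_data = residual_variance_test(obs, mod)
print(f"residual variance: {var_resid:.3f}")
print(f"raw-data variance: {var_data:.3f}")
# If var_resid < var_data, the model has explained part of the record.
# If var_resid > var_data, the "forecast" is worse than knowing nothing.
```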


A model can’t do worse than explaining nothing, right?


Not these models! The variability of the differences between their predictions and the observed temperatures was significantly greater (by a factor of two) than what one would get by simply applying random numbers.


Ponder this: Suppose a multiple-choice test asks for the correct temperature forecast for each of 100 temperature observations, with four choices per question. Guessing at random, you would average one-in-four correct, or 25%. But the models in the National Assessment somehow managed only 12.5%!
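A quick way to see what “no information” buys you is a toy simulation of that test: guess randomly among the four choices and count the hits. On average you land right at 25%. This is a hypothetical illustration of the analogy, not the Assessment’s actual scoring:

```python
import random

def random_guess_score(n_questions=100, n_choices=4, trials=10_000):
    """Average fraction correct from pure random guessing."""
    hits = 0
    for _ in range(trials):
        answers = [random.randrange(n_choices) for _ in range(n_questions)]
        guesses = [random.randrange(n_choices) for _ in range(n_questions)]
        hits += sum(a == g for a, g in zip(answers, guesses))
    return hits / (trials * n_questions)

print(f"Random guessing: {random_guess_score():.1%} correct")  # ~25%
# The models' equivalent score in this analogy: 12.5%, half of that.
```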


“No information”—a random number simulation—yields 25% correct in this example, which means that anything less is anti-information. It seems impossible, but it happened.


We informed the USGCRP of this problem when we discovered it, and they wrote back that we were right. Then they went on to publish their Assessment, undisturbed that they were basing it on models that had just done the impossible.