Patrick Frank is a scientist at the Stanford Synchrotron Radiation Lightsource (SSRL), part of the SLAC National Accelerator Laboratory (formerly the Stanford Linear Accelerator Center) at Stanford University. The SSRL produces extremely bright X-rays that researchers use to study our world at the atomic and molecular level.

In a bit of a shift, Frank has shone a bright light on general circulation models (GCMs)—models used to predict long-term changes in climate—and illuminated some fatal flaws. His bottom line is that these models, as they stand today, are useless for helping us understand the relationship between greenhouse gas emissions and global temperatures. This means that all the predictions of dramatic impending warming and ancillary calls for strong government action are based on conjecture.

Modeling climate / The atmosphere is about 0.8° Celsius warmer than it was 100 years ago. Given that the atmospheric concentration of carbon dioxide has risen 40% since 1750 and that CO2 is a greenhouse gas, we have the antecedents for a compelling hypothesis: the increase in CO2 has caused, and is causing, global warming.

But a hypothesis is just that. For obvious reasons, we can't test this hypothesis by running a controlled experiment in which we raise and lower CO2 levels around the globe and measure the resulting change in temperatures. Instead, scientists build sophisticated GCMs based on known and assumed physical properties and run them on supercomputers. They then compare the forecasted results for various scenarios to each other and to reality.

GCM forecasts for the years 1998–2014 predicted much greater warming than actually happened. If the models were doing a good job, their predictions would cluster symmetrically around the actual measured temperatures. That is not the case here: a mere 2.4% of the predictions undershot actual temperatures and 97.6% overshot, according to Cato Institute researchers Patrick Michaels, Richard Lindzen, and Chip Knappenberger. Climate models as a group have been "running hot," predicting about 2.2 times as much warming as actually occurred over 1998–2014. Of course, this doesn't mean that no warming is occurring, but rather that the models' dire forecasts have been wrong, at least so far.
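To make the "running hot" comparison concrete, here is a minimal sketch of the diagnostic being described. The trend values below are placeholders, not the actual model-ensemble data; only the logic (count how many models overshoot the observation, then compare the mean predicted warming to the observed warming) follows the text above.

```python
# Sketch of the "running hot" check described above.
# The trend values are illustrative placeholders, NOT real
# ensemble output; only the comparison logic mirrors the text.

observed_trend = 0.10  # hypothetical observed warming, deg C per decade

# Hypothetical per-model predicted trends, deg C per decade.
model_trends = [0.25, 0.22, 0.19, 0.28, 0.21, 0.24, 0.08, 0.26]

overshoot = sum(t > observed_trend for t in model_trends)
pct_overshoot = 100 * overshoot / len(model_trends)

mean_predicted = sum(model_trends) / len(model_trends)
hot_factor = mean_predicted / observed_trend  # "running hot" ratio

print(f"{pct_overshoot:.1f}% of models overshoot the observation")
print(f"Models predict {hot_factor:.1f}x the observed warming")
```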

Clouds / Another way to assess models is to look at internal errors and systematic flaws. Here's where Frank comes in.

To build a successful climate model, researchers need to include all the factors that can affect atmospheric temperatures. One factor to include is CO2 levels. Another is clouds. Clouds both reflect incoming radiation and trap outgoing radiation. A world entirely encompassed by clouds would have dramatically different atmospheric temperatures than one devoid of clouds. But modeling clouds and their effects has proven very difficult. The Intergovernmental Panel on Climate Change (IPCC), the established global authority on climate change, acknowledges this in its most recent Assessment Report:

The simulation of clouds in climate models remains challenging. There is very high confidence that uncertainties in cloud processes explain much of the spread in modelled climate sensitivity. [bold and italics in original]

What is the net effect of cloudiness? A cooler atmosphere. Some 342 watts per square meter (W/m²) of solar energy reach the earth's atmosphere, on average, keeping it warm enough for us to thrive. Clouds both reflect incoming solar radiation, providing a cooling effect, and prevent the escape of infrared energy back into space, supplying a warming effect. The net effect of clouds is to cool the atmosphere by about 25 W/m². This means that without clouds, more energy would reach the ground and our atmosphere would be much warmer. There's the rub.

Clouds are hard to measure and predict, and climate models carry an uncertainty of ±4.0 W/m² due purely to clouds. This error is 114 times as large as the estimated extra energy from excess CO2 (±4.0 W/m² versus 0.035 W/m²). In totality, the combined errors in climate models produce an uncertainty of about ±150 W/m², which is equal to 44% of all incoming energy and is over 4,000 times as large as the estimated extra energy from higher CO2 concentrations. The underlying question that Frank raises needs to be addressed by climate scientists: how can the faint CO2 signal possibly be detected by climate models hampered by such gigantic errors?
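The ratios in the preceding paragraph are straightforward arithmetic on the cited figures and can be checked directly. A minimal sketch, using only the numbers given above:

```python
# Back-of-envelope check of the uncertainty ratios cited above.
incoming       = 342.0   # average energy reaching the atmosphere, W/m^2
cloud_error    = 4.0     # cloud-related model uncertainty, +/- W/m^2
combined_error = 150.0   # combined model uncertainty, +/- W/m^2
co2_signal     = 0.035   # estimated extra energy from excess CO2, W/m^2

print(cloud_error / co2_signal)     # ~114: cloud error vs. the CO2 signal
print(combined_error / incoming)    # ~0.44: 44% of all incoming energy
print(combined_error / co2_signal)  # ~4286: over 4,000x the CO2 signal
```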

Frank points out that systematic cloud errors are the same across climate models, and shows that this could occur only if two conditions hold: (1) all the climate models employ the same theory, and (2) that theory is flawed. Further, Frank has published papers that explain how the errors in temperatures recorded by weather stations have been incorrectly handled and how temperature readings have an error of ±0.46°C, not the ±0.2°C claimed by others. In a 2011 article in the journal Energy & Environment, he states:

The 1856–2004 global surface air temperature anomaly with its 95% confidence interval is 0.8°C ± 0.98°C. Thus, the global average surface air temperature trend is statistically indistinguishable from 0°C.

For our purposes, we will focus on the fact that the CO2 "signal" that climate scientists say is responsible for increasing temperatures is overwhelmed by the error in their own models.

It's really a signal-versus-instrument-resolution issue. Suppose you are timing a high school athlete running 400 meters at the beginning of the school year. You measure 56 seconds with a stopwatch that reads to ±0.01 seconds, and your imperfect reaction time adds another ±0.2 seconds of error. With that equipment you can still clearly measure an improvement to 53 seconds by the end of the year: the difference between the two times is far larger than the resolution of the stopwatch combined with your reaction-time error, allowing you to conclude that the runner is indeed now faster.

What if this runner then dons some high-tech running shoes designed to knock 0.05 seconds off the 53-second time? Your hypothesis is that the runner would be faster because of the fancy shoes, but can you actually measure such a small difference with the instrumentation at hand? No. There is no point in even running the experiment, because you will have no way of knowing whether the runner is slightly faster, is running at the same speed, or is slightly slower. That's the case with climate models and CO2.
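Here is a minimal sketch of the stopwatch reasoning, assuming (purely for this sketch) that the stopwatch resolution and the reaction-time error combine in quadrature:

```python
import math

# Combined timing uncertainty: stopwatch resolution plus reaction time.
# Quadrature combination is an assumption of this sketch.
stopwatch_res = 0.01  # seconds
reaction_err = 0.2    # seconds
resolution = math.sqrt(stopwatch_res**2 + reaction_err**2)  # ~0.2 s

def resolvable(effect_seconds: float) -> bool:
    """Can a change of this size be distinguished from measurement noise?"""
    return abs(effect_seconds) > resolution

print(resolvable(56 - 53))  # True: a 3 s improvement dwarfs ~0.2 s of noise
print(resolvable(0.05))     # False: the shoes' 0.05 s is below resolution
```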

That's our way of putting it. In a 2008 article in the journal Skeptic, Frank puts it this way:

It's as though a stronger and stronger distorting lens was placed in front of your eyes every time you turned around. First the flowers disappear, then the people, then the cars, the houses, and finally the large skyscrapers. Everything fuzzes out leaving indistinct blobs, and even large-scale motions can't be resolved. Claiming GCMs yield a reliable picture of future climate is like insisting that an indefinable blurry blob is really a house with a cat in the window.

The IPCC has looked at a number of different scenarios and reports that temperatures could be, in the worst case, up to 4°C higher by 2100. However, based on Frank's work, when considering the errors in clouds and CO2 levels only, the error bars around that prediction are ±15°C. This does not mean, thankfully, that it could be 19°C warmer in 2100. Rather, it means the models are looking for a signal of a few degrees when they cannot differentiate within 15°C in either direction; their internal errors and uncertainties are too large. The models are thus unable to validate even the existence of a CO2 fingerprint because of their poor resolution, just as you wouldn't claim to see DNA with a household magnifying glass.
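To see how a modest per-year flux error can swell into error bars of that size, here is a toy propagation, not a reproduction of Frank's published calculation: assume each simulated year contributes an independent temperature error of λ × (±4 W/m²), where the sensitivity λ = 0.375°C per W/m² is a hypothetical value chosen for this sketch, and let the annual errors accumulate in quadrature over a century.

```python
import math

# Toy propagation of a per-year cloud-forcing error through a
# century-long projection. This illustrates quadrature error growth;
# it is NOT a reproduction of Frank's published calculation.
cloud_error = 4.0    # per-year cloud forcing uncertainty, +/- W/m^2
sensitivity = 0.375  # hypothetical sensitivity, deg C per W/m^2
years = 100          # projection length, e.g. 2000 to 2100

per_year = sensitivity * cloud_error  # +/- 1.5 deg C per year step
total = per_year * math.sqrt(years)   # uncertainty grows as sqrt(N)

print(f"+/- {total:.0f} deg C after {years} years")  # +/- 15 deg C
```

Because independent errors add in quadrature, the total uncertainty grows as the square root of the number of steps, which is how a ±1.5°C per-step error balloons to ±15°C over 100 steps.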

In a recent podcast for the Center for Industrial Progress, Frank concludes thus:

Large systematic errors make projections of future Earth temperatures entirely unreliable. What do climate models reveal about a human [greenhouse gas] fingerprint on the terrestrial climate? Nothing.

He adds:

One cannot say that CO2 has definitely caused the mild warming in the climate. We have absolutely no idea what is going on. The climate has warmed and cooled in the past without any changes at all from us or from changes in carbon dioxide or apparently in greenhouse gases, and the changes that we've seen are well within natural variability, and so as far as we can tell, nothing important is going on.

He argues that, given this, predictions of dramatic future warming have no scientific merit, the IPCC's warnings are unreliable, and there is no evidence of a looming climate disaster from CO2 emissions. The most rational thing to do right now about the "problem" of CO2 emissions? Have the courage to do nothing.

For all the drumbeat of "the science is settled," the relationship between CO2 emissions and climate change is a topic where our ignorance is far greater than our understanding. It would be wiser to wait until we have a more complete understanding of the atmosphere before committing to expensive policies. To do otherwise is to act based on conjecture.

Readings

  • "A Climate of Belief," by Patrick Frank. Skeptic, Vol. 14, No. 1, 2008.
  • "Dr. Patrick Frank on the Accuracy of Climate Models," podcast hosted by Alex Epstein. Power Hour, Center for Industrial Progress, Sept. 8, 2016.
  • "Evaluation of Climate Models," Chapter 9. In Fifth Assessment Report, International Panel on Climate Change, 2014.
  • "Imposed and Neglected Uncertainty in the Global Average Surface Air Temperature Index," by Patrick Frank. Energy & Environment, Vol. 22, No. 4 (2011).
  • "Is There No 'Hiatus' in Global Warming After All?" by Patrick J. Michaels, Richard Lindzen, and Paul C. "Chip" Knappenberger. Cato@Liberty (blog), Cato Institute, June 5, 2015.
  • "No Certain Doom: On Physical Accuracy in Projected Global Air Temperatures," speech by Patrick Frank to Doctors for Disaster Preparedness, Omaha, Neb., July 10, 2016.