To begin the first chapter of Superforecasting, Philip Tetlock and Dan Gardner make the point that “we are all forecasters. When we think about changing jobs, getting married, buying a home, making an investment, launching a product, or retiring, we decide based on how we expect the future will unfold.” The stated goal of the book is to pick out of the general population those rare souls (roughly 2%) who are amazingly good forecasters or, to use the authors’ coinage, “superforecasters.” The authors aim not only to explain in detail why these forecasters are so good, but also to pass on to others the knowledge of how they forecast.

The secrets of superforecasters are unearthed in the old-fashioned way: by tracking thousands of predictions made by the superforecasters, as well as by the much more numerous not-so-super forecasters, that have been “dated, recorded and assessed for accuracy by independent scientific observers.” Secondarily, the authors want to make the reader think about the integral role that forecasting plays in our lives, both in the forecasts we make ourselves and in the forecasts others make that affect our lives.

I agree from my own experience that “we are all forecasters.” I regularly undertake forecasting as part of my professional work in finance and as an important part of managing my personal finances. Less often, some of my policy work has also involved forecasting. But as Tetlock and Gardner described the forecasting process in detail and developed the traits of superforecasters, I found myself thinking hard about the methods I have used and about how useful and accurate my own forecasting has actually been.

Tetlock is a professor at the University of Pennsylvania. In 2011, he launched the Good Judgment Project (GJP) with his research (and life) partner Barbara Mellers, inviting volunteers to sign up and forecast the future. The GJP, with more than 20,000 volunteer participants, became the platform for the book’s analysis of forecasting. Since the project’s initiation, these participants have given their best forecasts on a variety of current events, including “if protests in Russia would spread, the price of gold would plummet, the Nikkei would close above 9,500, war would erupt on the Korean peninsula, and many other questions about complex, challenging global issues.” Gardner is a journalist whose book credits include titles on the intermingled issues of science, politics, and fear. Readers will note that the book is often written in the first person, apparently from Tetlock’s perspective.

Hedgehogs and foxes / The authors, likely drawing on a theme of one of Gardner’s previous books, make a nice distinction throughout the book between “hedgehogs,” who organize their thinking around Big Ideas and make high-profile public pronouncements expressed with a high degree of certainty, and “foxes,” more pragmatic experts who draw on a range of approaches and talk about possibilities and probabilities, not absolutes. The authors seem to hold a grudge against the hedgehogs: “Despite my all but begging the highest-profile pundits to take part, none would participate.” They proceed to cast aspersions on such high-profile “experts” who make predictions, bluntly stating that “Foxes beat hedgehogs” and citing the “inverse correlation between fame and accuracy.” In particular, they skewer supply-sider and Reaganite Larry Kudlow of CNBC fame for his rosy predictions on the Bush 43 economy circa 2007 and 2008. They also criticize Paul Krugman, although not for the quality of his predictions (they don’t pass judgment on that), but for his boorish behavior in engaging in public arguments that “looked less like a debate between great minds and more like a food fight between rival fraternities.”

Can you identify a superforecaster? / So what are some of the characteristics of these so-called superforecasters that Tetlock and Gardner spend so much time dissecting?

  • Although forecasters in general have above-average intelligence (higher than about 70% of the population), superforecasters are even more intelligent (higher than about 80% of the population).
  • As you might guess, those with a quantitative background are good candidates to be superforecasters: “I have yet to find a superforecaster who isn’t comfortable with numbers and most are more than capable of putting them to practical use.”
  • They regularly update their forecasts as new information becomes available, poring over the news on topics related to their forecasts to discern nuggets of useful information that might be applied to improve their accuracy. They note, “Superforecasters update much more frequently, on average, than regular forecasters.”
  • Superforecasters also have what the authors call a “growth mindset.” Unlike people with a fixed mindset, exemplified by someone who believes he or she is bad at math and that this is an immutable trait, those with a growth mindset have a completely different outlook. They believe that their abilities are “largely the product of effort—that you can ‘grow’ to the extent that you are willing to work and learn hard.”
  • Going beyond the realm of individual superforecasters, the authors also determined that forecasting is a team sport. “The results were unequivocal: teams were 23% more accurate than individuals.” This holds for a number of reasons: the benefits of gathering and sharing information and perspectives outweigh any adverse inclination toward “cognitive loafing,” whereby team members slack off in the hope that others will do the heavy lifting needed to develop good forecasts.

But Tetlock and Gardner do not leave these superforecasters to wallow in anonymity. They get personal and “out” superforecasters like Doug Lorch: “He looks like a computer programmer, which he was, for IBM…. Doug has no special expertise in international affairs, but he has a healthy curiosity about what’s happening. He reads the New York Times. He can find Kazakhstan on a map.” Lorch developed forecasts for about 104 of the project questions and earned an overall Brier score of 0.22 (on a scale from 0 to 2 that measures the distance between forecast and subsequent reality, so the lower the better). Devyn Duffy volunteered for the GJP because he was unemployed: “My most useful talent is the ability to do well on tests, especially multiple-choice. This has made me appear more intelligent than I actually am, often even to myself.”
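
The book gives the scale but not the formula. For readers who want the arithmetic, what follows is the standard Brier score as originally defined by Glenn Brier in 1950; I am assuming, based on the 0-to-2 scale the book cites, that this is the formulation the GJP used:

\[
\text{Brier score} \;=\; \frac{1}{N}\sum_{t=1}^{N}\sum_{i=1}^{R}\left(f_{t,i}-o_{t,i}\right)^{2}
\]

Here N is the number of questions, R the number of possible outcomes per question, f the probability the forecaster assigned to outcome i of question t, and o equal to 1 if that outcome occurred and 0 otherwise. For a yes/no question the inner sum reduces to 2(f - o)^2, so a forecaster who says 80% and is right scores 2 × (0.8 - 1)^2 = 0.08, while one who hedges at 50% scores 2 × (0.5 - 1)^2 = 0.5 no matter what happens. That 0.5 benchmark for pure guessing puts Lorch’s 0.22 average in context.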

Detour and dead end / Superforecasting takes a bit of a detour at one point and tries to be a management book. One chapter delves into the use of forecasting by leaders, particularly military leaders, examining situations so uncertain and fast-changing that plans must be abandoned and improvisation takes over: “No plan survives contact with the enemy.” This discussion diverts attention from pure forecasting issues, and I don’t see how it fits within the confines of the superforecasting theme.

I am not a superforecaster / As for trying to judge my own forecasts by the methodologies set out in Superforecasting, the results are a mixed bag. In a Cato Policy Analysis (#293, December 1997), I referred to Fannie Mae and Freddie Mac, the government-sponsored mortgage giants, as “financial time bombs” based on the contingent liability they posed for the government, their inherent mix of leverage, hidden off-budget status, and high-profile political engagement, and a long history of similar government bailouts. I was publicly and roundly criticized by Adolfo Marzol, a senior Fannie Mae official, for making such a prediction.

Of course, Fannie and Freddie did go boom in 2008. Although I have always thought that mine was quite the impressive forecast, especially given that only a handful of people publicly predicted the failure, Tetlock and Gardner’s method casts some doubt on my positive self-assessment: “Forecasts must have clearly defined terms and timelines. They must use numbers.” So although what I predicted did ultimately occur, the fact that I made an open-ended forecast, with no specific date or estimated magnitude of the failure, makes it a much less impressive feat.

I think I have done a little better when it comes to personal finance–related forecasts. Like the superforecasters, I update my forecasts on a regular basis. I also get a range of advice from what might be called a “team” of forecasters who make similar personal finance forecasts.

What a policy reader will like / Although the phenomenon of superforecasters is interesting enough to keep a reader engaged, I think a policy audience will find the hedgehog vs. fox distinction and related analysis the most interesting of the topics addressed. The aftermath of the Brexit vote earlier this year is consistent with this useful lesson of Superforecasting. In the Brexit case we had the crazy whiplash of self-appointed “experts” shouting their disdain for the concept and forecasting a market crash, in some cases one of Lehman Brothers proportions, if Brexit happened. Market losses became a self-fulfilling prophecy in the immediate aftermath. However, within a few days, the markets made up most of those losses, with the exception of a select few sectors.

Based on Tetlock and Gardner’s research, if you see well-known experts at a policy forum talking about what the future holds, you can conclude that they are likely not very good forecasters, notwithstanding their boasting to the contrary. And as Tetlock and Gardner also found, if you ask those experts for historical data on their prior forecasts, in most cases they will be at a loss to provide it.