Thanks to news cycles and short attention spans, pundits get away with murder. Columnists and talking heads can issue endless prognostications about what Iraq will look like in another six months, and because nobody’s going to remember to follow up six months on, it doesn’t matter whether they were right.

Last week, pro-war liberals Michael O’Hanlon and Kenneth Pollack wrote a New York Times op-ed arguing that the Iraq troop surge was working and should be extended into 2008. They may be right, but based on their track records, it’s doubtful. And track records ought to matter. Rather than hiding behind arguments that are couched in conditionals and mushy language, pundits should put specific predictions on the record, in clear, falsifiable language, so that the public can better determine who among us actually knows what he’s talking about.

Foreign-policy analysts have an incredibly difficult task: to make predictions about the future based on particular policy choices in Washington. These difficulties extend into the world of intelligence as well. The National Intelligence Council issues reports with impossibly ambitious titles like “Mapping the Global Future”, as if anyone could actually do that. The father of American strategic analysis, Sherman Kent, grappled with these difficulties in his days at the OSS and the CIA. When Kent finally grew tired of the vapid language used for making predictions, such as “good chance of”, “real likelihood that” and the like, he ordered his analysts to start putting odds on their assessments. When a colleague complained that Kent was “turning us into the biggest bookie shop in town”, Kent replied that he’d “rather be a bookie than a [expletive] poet.”
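What putting odds on an assessment means in practice is easy to sketch. Below is a minimal illustration in Python; the phrase-to-probability ranges are invented for demonstration, loosely in the spirit of Kent's later "words of estimative probability" proposal rather than quoted from it:

    # Illustrative mapping from mushy estimative language to explicit odds.
    # The ranges below are assumptions for demonstration, not Kent's canon.
    ESTIMATIVE_ODDS = {
        "almost certain":       (0.87, 0.99),
        "probable":             (0.63, 0.87),
        "chances about even":   (0.40, 0.60),
        "probably not":         (0.20, 0.40),
        "almost certainly not": (0.02, 0.13),
    }

    def translate(phrase):
        """Replace a vague estimative phrase with a numeric probability range."""
        low, high = ESTIMATIVE_ODDS[phrase]
        return f'"{phrase}" means {low:.0%}-{high:.0%} odds'

    print(translate("probable"))  # "probable" means 63%-87% odds

The point is not the particular numbers but the discipline: once "probable" is pinned to something like three-to-one odds, an analyst can be checked against the record.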

Kent’s instinct was right. More bookies and fewer poets are what the United States needs, both in intelligence analysis and in foreign-policy punditry. University of California, Berkeley professor Philip Tetlock examined large data sets in which experts on various topics made predictions about the future. He was troubled to discover “an inverse relationship between how well experts do on scientific indicators of good judgment and how attractive these experts are to the media and other consumers of expertise.” He proposed one way to reform the situation: conditioning experts’ appearances in high-profile media venues on “proven track records in drawing correct inferences from relevant real-world events unfolding in real time.”
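What would such a track record look like? One standard tool, in the spirit of the measures Tetlock's research relies on, is the Brier score: the mean squared error between a forecaster's stated probabilities and what actually happened. A minimal sketch, with forecasters and numbers invented for illustration:

    # Scoring probabilistic predictions with the Brier score: the mean
    # squared error between forecast and outcome. Lower is better; someone
    # who always says 50-50 scores 0.25. All names and numbers are invented.

    def brier_score(forecasts, outcomes):
        """Mean of (p - o)^2, where o is 1 if the event happened, else 0."""
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    records = {
        # forecaster: (probabilities assigned, what actually happened)
        "confident pundit": ([0.90, 0.80, 0.95, 0.90], [0, 1, 0, 0]),
        "cautious analyst": ([0.60, 0.70, 0.40, 0.30], [0, 1, 0, 0]),
    }

    for name, (probs, actual) in records.items():
        print(f"{name}: Brier score = {brier_score(probs, actual):.3f}")
    # confident pundit: Brier score = 0.641
    # cautious analyst: Brier score = 0.175

Kept over years, scores like these would make a pundit's judgment as legible as a batting average.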

Which brings us back to the authors of the New York Times piece. Michael O’Hanlon, for example, argued in February 2004 that the “dead-enders are few in number and have little ability to inspire a broader following among the Iraqi people.” Kenneth Pollack gained notoriety for The Threatening Storm, a book arguing that Saddam Hussein was close to obtaining nuclear weapons and was not a deterrable actor.

So, the argument goes, why should they be revered as authorities, given that they’ve been so wrong in the past?

It’s a fair question. The best way to correct the situation is to develop a predictions database, where experts can weigh in on specific, falsifiable claims about the future, putting their reputations on the line. Something like this was envisioned in a DARPA program developed under Admiral John Poindexter in 2003. The so-called Policy Analysis Market was designed to allow analysts to buy futures contracts for various scenarios. As the value of these contracts went up or down, other analysts could observe and investigate why, determining how and why others were “putting their money where their mouths were”, and whether they should do the same.
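The mechanics are worth a sketch. A common way to run such a market is Hanson's logarithmic market scoring rule (LMSR), an automated market maker used in many prediction markets; the code below illustrates that general mechanism, not the actual design of the Policy Analysis Market:

    import math

    # Minimal two-outcome prediction market using Hanson's logarithmic
    # market scoring rule (LMSR). An illustration of the general mechanism,
    # not the actual design of DARPA's Policy Analysis Market.

    class LMSRMarket:
        def __init__(self, liquidity=100.0):
            self.b = liquidity          # higher b means prices move less per trade
            self.shares = [0.0, 0.0]    # outstanding shares: [event, no event]

        def _cost(self, shares):
            return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

        def price(self, outcome):
            """Current price of a contract paying $1 if `outcome` occurs."""
            exps = [math.exp(q / self.b) for q in self.shares]
            return exps[outcome] / sum(exps)

        def buy(self, outcome, quantity):
            """Buy `quantity` shares of `outcome`; returns the cost in dollars."""
            before = self._cost(self.shares)
            self.shares[outcome] += quantity
            return self._cost(self.shares) - before

    market = LMSRMarket()
    print(f"price before any trades: {market.price(0):.2f}")  # 0.50
    cost = market.buy(0, 60)  # a trader with private information buys in
    print(f"trade cost ${cost:.2f}; new price {market.price(0):.2f}")  # about $34; 0.65

The rising price is the signal: everyone else can see that someone is paying real money to back a scenario, and has an incentive to find out why.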

But the Policy Analysis Market sank beneath a wave of demagoguery from congressmen who had an astonishing lack of understanding of how prediction markets are used to great effect in investment banking, insurance and other industries.

Consider one historical counterfactual. Had such a market existed before 9/11, Coleen Rowley, the FBI whistle-blower whose Minneapolis field office detained Zacarias Moussaoui and whose attempts to further investigate the conspiracy were stymied, could have taken her suspicions to the futures market. As her trades moved the market, other observers would have had an incentive to investigate why she was so certain that a dangerous plot was afoot.

A number of similar enterprises have sprung up since 9/11. Foreign Policy magazine publishes a “terrorism index” in which foreign-policy experts predict the likelihood of various events. The results are not encouraging: in the 2006 edition, 57 percent of experts said that an attack on the United States “on the scale of those that took place in London and Madrid” was either “likely or certain” before the end of 2006.

Predicting the future is hard, and if nothing else, pundits are experts at explaining why their failed predictions are somebody else’s fault. It may be that even the best experts rarely predict important events accurately. But the only way to improve our predictions is to learn not just who gets things right, but why. Putting our reputations where our mouths are would teach us a great deal.