Beginning in the late 1990s, Oakland Athletics general manager Billy Beane gained acclaim for using statistical analysis to identify undervalued players and baseball strategies, which he then used to turn the small-budget team into a consistent winner. Since then, all sorts of analysts have proposed applying similar “Moneyball” strategies to other human endeavors, including government.

Who could oppose collecting data about government spending, building evidence to implement effective programs, and directing funds away from failing policies? In the new book Moneyball for Government, an assortment of writers and policy wonks, along with two former heads of the federal Office of Management and Budget, make the case that a data-driven approach to government and regulation would produce better results at a lower cost to taxpayers.

Data and analytics about how government operates could certainly be improved. But whatever apparatus policymakers establish to measure government, self-interested politicians must still pay attention to the findings and be willing to cut failing programs. Bill Niskanen made that point in these pages years ago (“More Lonely Numbers,” Fall 2003), and I share his skepticism about that possibility.

But that skepticism may be uncalled for, claim book contributors and political advisers Kevin Madden (a Republican) and Howard Wolfson (a Democrat). They argue that both sides of the aisle have incentive to play Moneyball with government.

Republicans presumably would benefit by pushing for more efficient government rather than being labeled “antigovernment,” though that distinction may be lost on some Republican politicians. For Democrats, Wolfson proudly trumpets the party’s strong record of fiscal responsibility. He points out that President Obama has recently been reducing the deficit at the fastest rate since World War II, though he neglects to mention that the reduction started from the trillion-dollar deficits Obama rang up early in his presidency. If this represents the authors’ idea of an honest use of data, then maybe we should forget the Moneyball endeavor altogether. Wolfson also recites standard attack lines on Republicans that sound as if they were taken straight from a Senate communications director’s cheat sheet, which makes his portion of the book a chore for anyone who dislikes hackery. Fortunately, he ultimately circles back to the idea that data-driven government will make “people’s lives better.”

Of course, Madden and Wolfson are right in theory that both parties have incentive to learn more about government programs and regulation in order to drive better policy. The hurdle for applying Moneyball to government—as opposed to just one or two instances that happen to follow party dogma—is that statistical analysis will sometimes indicate that a strongly favored program is failing. Perhaps politicians of both parties can accept “Moneyballing” USAID, but what about Social Security or defense appropriations?

Rest assured, Madden, Wolfson, and other book contributors are willing to criticize some government programs. But too often their policy recommendations call for more government, such as establishing whole new offices for policy evaluation. The book calls for a “chief evaluation officer” in every federal agency, for agencies to set aside up to 1 percent of their budgets for evaluation, and for “cross-government prizes for innovative approaches to evaluation.” Supposedly, those actions would spur agency innovation where before the agencies were content with mediocrity. Whether the benefits of the new measuring apparatus would be worth the costs is up for debate.

From an agency perspective, the biggest obstacle to evaluation may be fear. In earlier Moneyball-style initiatives (and there have been several), agencies proved reluctant to change, in part because they feared that success would invite budget cuts from appropriators.

The book devotes significant attention to the distinction between data and evidence. There is plenty of data on government programs but, as the authors argue, little evidence of what is working and what is failing. Initial evidence from a randomized controlled trial may reveal that a specific regulation or program is not generating the promised benefits. But even some of the book’s contributors don’t seem willing to heed such findings: the former agency heads caution that the initial results of such analysis should not spell the end of a program.

That’s the problem with policymakers. Scores of analysts can point to failing or wasteful programs, but there will always be a constituency or special interest prepared to defend each one, and those defenders have more at stake in the spending battle than good-government advocates do. More evaluation data will yield a more efficient government only if politicians care enough about the data, and there is plenty of evidence today that they do not.

The book makes several references to the Office of Information and Regulatory Affairs (OIRA) as a paragon of good data and program evaluation. Any critical follower of OIRA will question that praise. OIRA reviews less than 10 percent of all federal rules each year, and while that review can be extensive, political pressure from OIRA’s White House overseers is common. What’s more, wide swaths of the economy are exempt from its oversight: Dodd-Frank rulemaking, for instance, is virtually exempt from OIRA review.

More faint praise for OIRA is inspired by its “government-wide” retrospective regulatory review, launched in 2011 and supposedly continuing today. First, the review wasn’t truly government-wide, as it wasn’t mandatory for independent agencies. And while the book’s authors may claim that “the lookback process yielded scores of measures to update regulatory regimes,” the reality is decidedly different. As Ike Brannon and I have argued in these pages (“First-Year Grades on Obama Regulatory Reform,” Spring 2012), many of those updates were in fact new regulations implementing new programs rather than an honest review of past regimes. Retrospective review has succeeded mainly at increasing the nation’s regulatory tab, all under the guise of “reform.”

There are dangers in embracing so-called Moneyball for government. New hires designated for program evaluation could face resistance from agencies, just as the Government Accountability Office does today. The 1 percent budget set-asides for evaluation could evolve into 1 percent budget add-ons, with few politicians willing to act on the resulting recommendations. And it’s hard to see how regulatory capture wouldn’t rear its ugly head sooner rather than later in such an arrangement.

That’s not to say there aren’t good ideas in this book. For one, interagency data sharing that allows the public to view which programs are failing and which are the most efficient will undoubtedly place additional pressure on legislators. But to expect such efforts to result in a government that functions as well as the private sector is optimistic. The federal government is unlikely to function as efficiently as Beane’s A’s, but performance akin to last season’s New York Yankees is within reach.