Below, Mark links to a fascinating-looking paper pointing out that government regulators are human, too, and therefore subject to the same cognitive foibles as the rest of us.


It might seem pretty surprising to Cato-style classical liberals that this sort of application of behavioral research didn’t immediately leap out to researchers. But on reflection, it makes sense that the first batch of policy implications drawn from a new area of research will tend to reflect the ideological preferences of the investigators. This need not imply any kind of willful axe-grinding bias. This kind of unwitting bias, in fact, illustrates a few of the main points of behavioral economics: We don’t have unbounded cognitive capacities, the mind uses lots of quick and dirty rules of thumb, and we can’t count on those speedy cognitive tricks to conform to canonical standards of rationality. Even brilliant economists and sage government regulators simplify the complexity of the real world by passing it through sometimes shoddy ideological filters, even while attempting to draw out the implications of that very phenomenon.


Here are a couple more examples of papers drawing on behavioral research that don’t have an obvious ideological tendency. In a paper under review at Public Choice, “Behavioral Economics and Perverse Effects of the Welfare State” [doc], Bryan Caplan and Scott Beaulier write:

Critics often argue that government poverty programs perversely make the poor worse off by discouraging labor force participation, encouraging out-of-wedlock births, and so on. However, basic microeconomic theory tells us that you cannot make an agent worse off by expanding his choice set. The current paper argues that familiar findings in behavioral economics can be used to resolve this paradox. Insofar as the standard rational actor model is wrong, additional choices can make agents worse off. More importantly, existing empirical evidence suggests that the poor deviate from the rational actor model to an unusually large degree. The paper then considers the policy implications of our alternative perspective.

The policy implications would make Charles Murray smile. And here is a working paper by Daniel Benjamin, Sebastian Brown, and Jesse Shapiro showing that

… higher cognitive ability — especially mathematical ability — is predictive of much lower levels of small-stakes risk aversion and short-run impatience. For example, we calculate that a one-standard-deviation increase in measured mathematical ability is associated with an increase of about 8 percentage points in the probability of behaving in a risk-neutral fashion over small stakes (as against a mean probability of about 10%) and an increase of about 10 percentage points in the probability of behaving patiently over short-run trade-offs (with a mean of about 28%).

And what are we to make of that? The authors somewhat tepidly suggest that better education might improve poor cognitive ability a bit, though they recognize that differences in ability run deeper than differences in schooling. More intriguingly, their results go to the heart of the currently raging inequality debate. Because the more “cognitively able” are less likely to make errors relative to normative standards of risk and expected utility, they’re likely to do better at choosing the elements of an investment portfolio:

Our results also suggest additional reasons why the overall returns to cognitive ability may be underestimated by focusing solely on the labor market returns … we might conjecture that a one-standard-deviation increase in cognitive ability is worth about 0.3% of lifetime wealth due to improved portfolio allocation alone. Since portfolio choice is only one of many important household decisions that are affected by cognitive ability, the total value of cognitive ability’s effect on decision-making could be quite substantial.

If changes in the economy have increased the payoff to the decisions affected by cognitive ability, that might explain some changes in wealth inequality.
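To make the first quoted finding a bit more concrete, here is a minimal sketch of what “behaving in a risk-neutral fashion over small stakes” means in expected-utility terms. This is my own illustration, not the authors’ model, and the wealth level, gamble, and degree of utility curvature are hypothetical numbers chosen only for the example:

```python
# Illustrative sketch only -- not the authors' model. A risk-neutral agent
# (gamma = 0) takes any positive-expected-value gamble, while even a modestly
# curved utility function can turn one down. All numbers are hypothetical.
import math

def crra_utility(wealth, gamma):
    """Constant relative risk aversion utility; gamma = 0 is risk neutrality."""
    if gamma == 1:
        return math.log(wealth)
    return wealth ** (1 - gamma) / (1 - gamma)

def accepts_gamble(wealth, gain, loss, p_win, gamma):
    """Accept the gamble iff its expected utility beats standing pat."""
    eu_accept = (p_win * crra_utility(wealth + gain, gamma)
                 + (1 - p_win) * crra_utility(wealth - loss, gamma))
    return eu_accept > crra_utility(wealth, gamma)

# A 50/50 bet: win $11, lose $10 -- expected value +$0.50.
wealth, gain, loss, p_win = 300.0, 11.0, 10.0, 0.5

print(accepts_gamble(wealth, gain, loss, p_win, gamma=0.0))   # risk-neutral: accepts
print(accepts_gamble(wealth, gain, loss, p_win, gamma=10.0))  # sharply risk-averse: rejects
```

The point is just that, under the standard normative benchmark, utility over wealth is nearly linear at stakes this small, so refusing the positive-expected-value bet counts as the kind of small-stakes risk aversion the paper measures.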


Behavioral economics done right is just good science. The real peril lies in the leap from psychology to policy, because big philosophical and ideological assumptions lurk in that gap. It’s very important to make those assumptions explicit and defend them. Unfortunately, that’s too rarely done.