In the layman’s way of thinking, a regulation that saves just one life is worthwhile regardless of its costs. In the economic way of thinking, a regulation that saves just one life is worthwhile only if it costs less than the value of a life. Cass Sunstein expands on this economic way of thinking in his new book, Valuing Life. In it, he documents his experience overseeing the Office of Information and Regulatory Affairs (OIRA) in the early years of the Obama administration, explains how the regulatory apparatus works, and shares his views on “humanizing” the process.

By “humanizing the regulatory state,” Sunstein wants to accomplish four objectives. The first is to justify a widespread application of cost-benefit analysis. He puts it this way: cost-benefit analysis “should see costs and benefits not as arithmetic abstractions, but as efforts to capture qualitatively diverse goods and to promote sensible trade-offs among them.” His second objective is to recognize “nonquantifiable” factors such as “dignity,” “equity,” and “privacy.” The third is to incorporate behavioral economics into cost-benefit analysis, and the fourth is “to collect the dispersed information held by a nation’s citizenry” and use that to formulate regulations.

OIRA’s work / Sunstein wants to clarify the “poorly understood” operation of OIRA, which neither originates nor rubber-stamps regulations. “It would not be excessive,” in his view, “to describe OIRA as, in large part, an information aggregator.” By gathering such “specialized information” and “dispersed information,” he sees OIRA as trying to overcome the “knowledge problem” outlined by Friedrich Hayek.

OIRA aims “to promote a well-functioning process of public comment, including state and local governments, businesses large and small, and public interest groups.” Sunstein describes this as “regulatory due process.” There are four essential aspects of OIRA procedure:

  • OIRA incorporates “interagency views.”
  • The discussion of these interagency views explains why the OIRA procedure may be long and drawn out.
  • Although OIRA is best known for weighing costs and benefits, it concentrates more on “interagency concerns, promoting the receipt of public comments (for proposed rules), ensuring discussion of alternatives, and promoting consideration of public comments (for final rules).”
  • OIRA’s work is “technical,” relating to economics, science, the law, etc.

Valuing Life is not a manual with detailed instructions on how to calculate costs and benefits. Sunstein presents dozens of “highly stylized problems” with the costs and benefits given, and tells us how the process will continue from there. The simplest problem he presents is a regulation that costs $200 million per year and generates $400 million in benefits per year. “In the process of OIRA review, the numbers will be carefully scrutinized, and many questions will be asked about their accuracy and meaning,” he assures us, “but if those questions have good answers, this is an easy one in favor of proceeding.”

As part of that work, government agencies estimate a “value of a statistical life (VSL).” Sunstein explains: “Suppose that workers must be paid $900, on average, to assume a risk of 1:10,000. If so, the VSL would be said to be $9 million.” For an alternative route to the same VSL, suppose that one in 10,000 firefighters will die on the job and the typical firefighter is willing to pay $900 to eliminate that risk. Multiplying $900 per firefighter by 10,000 firefighters yields $9 million, the amount the group would collectively pay to eliminate a risk expected to claim one firefighter’s life. The simplest problem Sunstein presents that involves a life-saving regulation costs $300 million per year and saves 40 lives. Using the $9 million VSL, the benefits of the regulation amount to $360 million and “the regulation will likely go forward.” Note that if this regulation were to save just one life, it would be a mistake to implement it because the costs would exceed the benefits.
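The VSL arithmetic is simple enough to check directly. What follows is a minimal sketch of this reviewer’s own devising (the function names are invented for illustration); the figures are taken from Sunstein’s stylized examples.

```python
def implied_vsl(wage_premium: float, fatal_risk: float) -> float:
    """VSL implied by the pay workers demand to bear a small fatal risk."""
    # $900 to accept a 1-in-10,000 risk of death implies 900 / (1/10,000) = $9 million.
    return wage_premium / fatal_risk

def benefits_exceed_costs(annual_cost: float, lives_saved: float, vsl: float) -> bool:
    """True when the monetized life-saving benefits of a rule exceed its annual cost."""
    return lives_saved * vsl > annual_cost

vsl = implied_vsl(900, 1 / 10_000)            # $9,000,000
print(benefits_exceed_costs(300e6, 40, vsl))  # $360M in benefits vs. $300M in costs: True
print(benefits_exceed_costs(300e6, 1, vsl))   # one life saved, $9M vs. $300M in costs: False
```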

Humanizing regulation / What circumstances make a problem more challenging? Take a regulation that costs $200 million per year. The agency proposing it estimates that it will save 24 people from dying of cancer, and the VSL is $8 million. Although the monetized benefits (24 lives at $8 million each, or $192 million) fall short of the costs, “it will be acceptable for the agency to do a sensitivity analysis in which it increases the VSL because cancer is involved.” This is one instance in which Sunstein advocates “humanizing” regulations.
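A rough sketch of how such a sensitivity analysis might flip the verdict appears below. The 50 percent cancer premium is purely hypothetical, chosen only to illustrate the mechanism; the other figures come from the example just described.

```python
base_vsl = 8e6        # VSL in the agency's example
annual_cost = 200e6   # cost of the proposed rule, per year
lives_saved = 24      # projected cancer deaths avoided, per year

for cancer_premium in (0.0, 0.5):  # 0.5 is a hypothetical premium, not a figure from the book
    benefits = lives_saved * base_vsl * (1 + cancer_premium)
    verdict = "justify" if benefits > annual_cost else "do not justify"
    print(f"premium {cancer_premium:.0%}: benefits ${benefits / 1e6:.0f}M "
          f"{verdict} ${annual_cost / 1e6:.0f}M in costs")
# premium 0%: benefits $192M do not justify $200M in costs
# premium 50%: benefits $288M justify $200M in costs
```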

Consider his reasons for humanizing regulations related to cancer. “For example,” he reports, “some evidence suggests that people are willing to pay high amounts to avoid cancer risks, and hence there is reason to think that people’s VSL is higher for cancer risks than sudden, unanticipated deaths.” Perhaps a greater aversion to death from cancer than, say, a heart attack is rational. Sunstein adds that “all cancer fatalities are not the same; informed people would surely make distinctions between those that involve long periods of suffering and those that do not.” Despite people’s greater willingness to pay to avoid some risks over others, OIRA does not “distinguish among mortality risks” and never has. However, an agency may do “sensitivity analysis,” add a “cancer premium” to the VSL, and possibly “conclude that the benefits ‘justify’ the costs.” That would amount to the same thing as computing different VSLs based on different risks, would it not?

Sunstein is well known for using insights from behavioral economics to shape public policies. In Valuing Life, he describes a behavioral slip-up dubbed “probability neglect” that relates to cancer risk. “People fall victim to probability neglect,” he explains, “if and to the extent that the intensity of their reaction does not greatly vary even with large differences in the likelihood of harm.” Take the results of an experiment Sunstein and a colleague conducted with their law students. They queried a first group “to state their maximum willingness to pay to eliminate a cancer risk of 1 in 1 million.” They put the same question to another group but increased the risk of cancer to one in 100,000. A third group faced the same question as the first, plus “the cancer was described in vivid terms, as ‘very gruesome and intensely painful, as the cancer eats away at the internal organs of the body.’ ” The fourth group faced the same question as the second, along with the “emotional description” of cancer.

If the subject of cancer causes people to neglect probability, their willingness to pay to eliminate a risk of one in 100,000 will be less than 10 times that for a risk of one in 1 million. That is what Sunstein and his colleague found when asking questions both with and without the “emotional description” of cancer. They also expected that adding the emotional description would cause greater probability neglect, and confirmed that, too. Subjects who heard the emotional description stated a mean willingness to pay of $211.67 to eliminate the one in 1 million risk of cancer, compared to $250 to eliminate the risk of one in 100,000. “When the cancer was described in emotionally gripping terms,” in other words, “people were insensitive to probability variations.”
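The degree of probability neglect can be read straight off those means. The short calculation below is this reviewer’s own check, using only the two figures Sunstein reports for the groups that heard the emotional description.

```python
wtp_low_risk = 211.67   # mean WTP to eliminate a 1-in-1,000,000 cancer risk (emotional description)
wtp_high_risk = 250.00  # mean WTP to eliminate a 1-in-100,000 cancer risk (emotional description)

ratio = wtp_high_risk / wtp_low_risk
print(f"observed WTP ratio: {ratio:.2f}; a fully probability-sensitive ratio would be about 10")
# observed WTP ratio: 1.18; a fully probability-sensitive ratio would be about 10
```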

This reviewer doubts that we should attach much weight to a single experiment involving 67 students at Harvard Law School. Nonetheless, Sunstein draws from it “two implications for the public reaction to emotionally gripping events.” One is that “simply because such events arouse strong feelings, they are likely to trigger a larger behavioral response than do statistically identical risks that do not produce emotional reactions.” This is the rationale an agency uses when considering a “cancer premium” along with other benefits of a regulation designed to reduce the risk of cancer. Another implication is “that probability neglect might well play a role in the government’s reaction to emotionally gripping events, in part because many people will focus on the badness of the outcome, rather than on its likelihood.”

This point raises the question of what the government should do when events such as terrorism raise the public’s alarm. Sometimes the wise course is to do nothing. “There is a strong argument that government should not respond,” Sunstein reasons, “if the relevant risks are very small and if the requested steps have costs in excess of benefits.” There is also a role for the government “to inform and educate people” whenever the probability of a tragic event is low. “But if information and education do not work,” Sunstein continues, “government might be willing to consider regulatory responses to fears that are not fully rational, but real and by hypothesis difficult to eradicate.”

He is not using probability neglect as an excuse to open the door wide for more regulations. He warns that “a special difficulty here consists in the problem of quantifying and monetizing fear and its consequences.” Quantification and monetization are more ways of “humanizing” regulations.

He presents a scenario in which the costs of a regulation “to make buildings more accessible to people who use wheelchairs” exceed the “monetized benefits.” Officials proceed with the regulation, nevertheless, by making a case that the value of “human dignity” to wheelchair users makes up for the deficiency of monetized benefits. Sunstein cites the actual reasoning of Justice Department officials from a document pertaining to the Americans with Disabilities Act: “Dividing the $32.6 million annual cost by the 677 million annual uses [of water closets with doors that open outward, making them more accessible], we conclude that for the costs and benefits to break even in this context, people with the relevant disabilities will have to value safety, independence, and the avoidance of stigma and humiliation at just under 5 cents per use.” It seems plausible that this lower bound on the value of human dignity would justify the costs of modifying such water closets. Sunstein recognizes “objections” to quantifying or monetizing benefits, though he continues to advocate those practices.
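The Justice Department’s break-even figure is easy to reproduce; the two inputs below are the ones quoted in the passage.

```python
annual_cost = 32.6e6   # dollars per year to make the water closets accessible
annual_uses = 677e6    # annual uses of the affected water closets, per the quoted passage

break_even_cents = 100 * annual_cost / annual_uses
print(f"break-even value per use: {break_even_cents:.1f} cents")
# break-even value per use: 4.8 cents
```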

Moral heuristics / I had suspected that Sunstein was eager to regulate. His presentation of behavioral economics in Valuing Life, however, caused me to reconsider.

“Heuristics” are “mental shortcuts” that people use when making decisions. Though often reliable, they can also produce bad outcomes. For anyone who has wondered whether behavioral economists ever cite the anomalies they are so fond of to make a case against regulation, Sunstein’s discussion of “moral heuristics” is evidence that he, for one, does.

Take the “Precautionary Principle,” which according to Sunstein “is designed to insert a ‘margin of safety’ into all decision making, and to impose a burden of proof on proponents of activities or processes to establish that they are ‘safe.’ ” (See “The Paralyzing Principle,” Winter 2002–2003.) That idea sounds reasonable initially. But Sunstein deems it “incoherent.” “The reason,” he explains, “is that risk regulation often introduces risks of its own.”

His critique is so effective that this reviewer wonders why the Precautionary Principle remains so popular. “For example,” he continues, “regulation of nuclear power might increase the likelihood that societies will depend on fossil fuels, which create air pollution and emit greenhouse gases.” The following point counts not only against the Precautionary Principle but against excessive regulation in general: “By its very nature, costly regulation threatens to increase unemployment and poverty, and both of these increase risks of mortality.”

Heuristics explain the principle’s intuitive appeal. One is the “act-omission distinction,” whereby regulators prohibit endeavors with visible risks (such as the Keystone XL pipeline) even though prohibition entails less visible risks of its own (such as war over oil in the Middle East). To be clear, Sunstein does not recognize moral heuristics in order to reduce the number of pages in the Federal Register. His goal is to refine cost-benefit analysis. Whether his acolytes show as much restraint when applying behavioral economics to formulate regulations remains an open question.

Conclusion / Valuing Life contains no battle stories involving regulators, politicians, and lobbyists arguing over any regulation. There are a few glaring errors that may be excused. Pertaining to “a regulation designed to reduce the incidence of prison rape,” Sunstein writes, “If a single rape is valued at $500,000, the rule would be easily justified if it prevented only 1,600 rapes.” It is safe to assume that he intended to write “if preventing a single rape is valued at $500,000.” Likewise, when he wrote “a dollar today is worth less than a dollar tomorrow,” he intended to write “a dollar today is worth more than a dollar tomorrow.”

Readers might be surprised to learn that OIRA listens to “businesses and others” who resist regulations on more occasions than it listens to “public interest groups” who favor them. Sunstein claims that OIRA avoids politics. “At least in my experience (and some people will find this surprising),” he admits, “ ‘politics,’ in the sense of interest-group pressures and electoral considerations, usually does not play a significant role in the regulatory process.”

Although he teaches that there may be too much regulation as well as too little, he maintains that “the financial crisis of 2008 and succeeding years was, in part, a product of insufficient regulation, which could have provided safeguards against systemic risks.” Even the Financial Crisis Inquiry Commission Report, which faults free-market ideology for the crisis, blames regulatory forbearance. Sunstein emphasizes his appreciation of Hayek. “The Hayekian theme,” he explains, “emphasizes the dispersed nature of human knowledge and OIRA’s role in attempting to acquire as much of that knowledge as possible, above all through careful attention to public comments.” In his conclusion he acknowledges that “it is an understatement to say that [Hayek] would not have endorsed all of the arguments in this book (much less all of the regulations that the United States has issued in the name of public health, safety, and environmental protection).” Sunstein believes that OIRA’s role as “an information aggregator” is an appropriate response to the knowledge problem. One wonders whether Hayek would endorse that approach or judge it quixotic.