Cost-benefit analysis (CBA) has become the standard method used by regulatory agencies like the Environmental Protection Agency to evaluate potential regulations, yet there is a major, puzzling exception. The financial regulatory agencies—the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Federal Reserve, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency (OCC), among others—do not use CBA. This is a bizarre anomaly in our regulatory system, and Congress and the judiciary have recently woken up to it. In 2011, the D.C. Circuit Court of Appeals struck down an SEC regulation because the SEC failed to provide an adequate CBA for it. And in Congress, Sens. Mike Crapo (R‑Idaho) and Richard Shelby (R‑Ala.) have introduced a bill that would require the financial regulators to conduct CBA of proposed regulations.

Those developments have thrown financial regulators into a panic. When the two of us conducted interviews at a major financial regulator last summer about a separate regulatory proposal, we heard again and again about worries that the agency would not be able to conduct cost-benefit analyses that would satisfy the D.C. Circuit. The problem, as we will discuss below, is that because regulators never did adequate cost-benefit analyses in the past, they do not know how to do them now, nor is there an academic literature that tells them how. They are like blind men locked in a dark room, scrambling to find the key to the door.

To address that problem, we convened a conference on “Cost-Benefit Analysis for Financial Regulation” at the University of Chicago with funding from the Alfred P. Sloan Foundation. We invited leading financial economists and lawyers to discuss ways for regulators to conduct financial CBA. Below we discuss what we learned from the conference.

CBA OF FEDERAL REGULATIONS

First, some background: Regulators receive authority to regulate from Congress, which typically provides them with very broad discretion. Congress tells regulators to control air or water pollution but, with some exceptions, does not tell them exactly what clean air and water standards to use. Similarly, Congress doesn’t tell the Occupational Safety and Health Administration how high safety railings in workplaces must be; OSHA itself makes such judgments pursuant to a broad grant of power. Thus, regulators must decide on their own how strictly to regulate. When the EPA sought to remove arsenic from water supplies, it could not order municipal water systems to reduce the level to zero because that is essentially impossible and trace amounts of arsenic cause little or no harm. The EPA needed to figure out a standard that is strict enough to advance human health but not so strict as to raise the price of water unduly.

Before the era of CBA, agencies appeared to use a kind of intuitive balancing where they took into account the qualitative benefits from stricter regulation (health improvements) and the costs, while also considering other factors such as job loss, political pushback, and so on. The explanations that accompanied the regulations overflowed with unilluminating boilerplate and it was difficult to understand why regulators chose a particular standard rather than one that was stricter or less strict.

All of that changed in 1981 when President Ronald Reagan issued an executive order directing regulatory agencies to conduct CBA of “major” regulations, meaning regulations that had an economic impact of at least $100 million per year. Regulations that failed a CBA or were not accompanied by a valid CBA would be blocked by the Office of Information and Regulatory Affairs (OIRA), an office within the Office of Management and Budget, and returned to the regulator for additional work.

Many liberals criticized President Reagan because they believed the cost-benefit requirement was just a bureaucratic hurdle that would slow down regulation or compel deregulation. But eventually many of the critics came to see the advantages of CBA. They discovered that it does not block regulation; it just requires that the regulation be cost-justified. In practice, it requires agencies to be more rigorous and precise, not to give up on regulation altogether. For arsenic regulation, the EPA must determine the benefits from reducing arsenic by an incremental amount, and the costs of doing so. Benefits take the form of positive health effects such as reduced mortality; the costs involve the manpower or equipment needed to filter out the arsenic. Difficult questions about how to value the reduction of the risk of death to populations had to be confronted, but economists supplied answers that are now used routinely. CBA has helped enhance the legitimacy of agency action by making clear that many regulations really do advance the public good while imposing limited costs on industry.

President Reagan’s executive order applied to ordinary agencies like the EPA and OSHA, but not to so-called independent agencies. The difference between the two is that the president may remove the leaders of ordinary agencies for any reason, whereas the leaders of independent agencies may be removed only for cause. Thus, through the executive order the president could implicitly threaten to fire agency heads who refused to comply with CBA, which is what gave the agencies an incentive to comply with the cost-benefit executive order. For the most part, judicial enforcement played no role because the statutes that authorize agency action mostly do not require agencies to comply with CBA.

Most of the financial regulators—including the SEC, the Fed, the FDIC, and the CFTC—are independent agencies and so were not required to conduct CBA under the Reagan executive order. One financial regulator—the OCC, a major bank regulator lodged in the Treasury Department—is a regular agency, but it and the White House entered into a memorandum of understanding excusing the OCC from the cost-benefit executive order. The reason appears to have been that the OCC must coordinate with the other financial regulators. As a matter of lawyerly caution, the OCC did produce documents that it called cost-benefit analyses when it issued regulations, but these were extremely crude exercises that bore little relation to the real thing.

Presidents since Reagan have all renewed the cost-benefit executive order, and so the EPA and the other ordinary agencies have become accustomed to using CBA, and indeed their methodologies have improved over the decades. While current cost-benefit analyses are far from perfect, they do show a considerable amount of methodological sophistication, thanks in part to the guidance and coordination of OIRA. OIRA has ensured that regulators use common valuations—like the valuation of a statistical life—and discount factors. It has drafted best practices. And it sends regulations back to agencies when they do not pass a CBA to OIRA’s satisfaction.

APPLYING CBA TO FINANCIAL REGULATION

The reasons for requiring agencies to use CBA are varied. The most obvious reason is that we want regulations that advance the public interest, and regulations that produce gains greater than losses will normally do just that. CBA also improves transparency by forcing regulators to lay bare their assumptions about the effects of regulations on the public. Consistent use of CBA makes it easy for regulated parties to plan and predict how regulators will respond to new industrial practices. CBA has also helped calm the ideological battles over whether industry should be regulated or deregulated by channeling the debate into a technocratic battle of the experts where empirical data substitute for rhetoric.

Financial regulation would seem ideal for CBA, and it is indeed surprising that CBA began with environmental regulation rather than financial regulation. In the case of environmental regulation, one must contend with hard-to-value effects on life, health, and wilderness. By contrast, financial regulation is mostly about money. A good financial regulation does not save hard-to-measure lives; it saves easy-to-measure dollars.

So why are financial regulators panicking? Because they lack experience with CBA, and so must master a new methodology that may trip them up, and because the academic literature does not contain a guide or protocol that they can turn to. Fortunately, economists have developed a sophisticated understanding of financial markets, and while they have not said much about financial CBA, the financial literature can be used to develop the principles of CBA for financial regulation. Furthermore, if mandates for CBA are adopted, a large consulting industry for conducting such analyses is likely to develop in the financial area, as has occurred in the environmental, health, safety, and antitrust arenas.

To help stimulate this process, we invited to our conference leading financial economists and legal academics to help us refine our ideas about how financial CBA should work. In the interest of full disclosure, we should mention that the scholars disagreed on a range of issues touching on valuations and whether certain activities are harmful or not. (We mention one below.) But nearly everyone agreed that CBA for financial regulation is sensible and makes more sense than what agencies do now. In particular, it became clear from the conference that the most productive discussions were those that came out of attempts to quantify the benefits and costs of different policies rather than debates over policies conducted on the basis of abstract ideology.

CBA STRATEGIES

To explain how financial CBA might work, we will use minimum capital requirements as a running example. Capital requirements are rules that require banks and other financial institutions to maintain a minimum ratio of capital to assets. To use a simple example, a 5 percent capital requirement implies that a firm that buys $100 in assets cannot finance the purchase with more than $95 in debt, so that at least $5 in equity is held by investors. Real capital requirements are much more complex, allowing firms to take on more debt when assets are safe and less when they are risky, and to treat certain safe types of debt as capital for regulatory purposes. But the goal of capital requirements is straightforward: they prevent a firm from becoming overleveraged and thus minimize the risk of insolvency if asset values decline or the cost of borrowing increases. The rules are premised on the assumption that insolvency causes harm to others, which is not normally true outside the financial sector, but is true within the financial sector because of the implicit taxpayer guarantee and the risk of contagion, which can result in the sudden withdrawal of credit from the economy.
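To make the arithmetic concrete, here is a minimal sketch (in Python, with purely illustrative numbers) of the flat capital-ratio constraint just described; as noted, real requirements are far more complex.

```python
# A minimal sketch of a flat capital-ratio constraint.
# All figures are purely illustrative, not regulatory parameters.

def max_debt_financing(assets: float, capital_ratio: float) -> float:
    """Largest debt a bank may carry against `assets` under a flat
    minimum capital requirement of `capital_ratio`."""
    required_equity = capital_ratio * assets
    return assets - required_equity

# The example from the text: a 5 percent requirement on $100 of assets.
print(max_debt_financing(assets=100.0, capital_ratio=0.05))  # 95.0
```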

Let us start with the cost side of CBA. Banks would maintain capital reserves even if they weren’t required to; this protects them from insolvency when changes in the markets cause the value of their assets to decline by limited amounts. But because they ignore the negative externalities associated with leverage, they will maintain less capital than is socially optimal. A minimum capital regulation that requires a bank to maintain more capital than it otherwise would imposes an easily estimated cost: the opportunity cost of the resources that could otherwise be loaned out or invested. Thus, for a particular bank, the cost of the regulation is the forgone revenue from loans and investments minus the interest it would otherwise have had to pay to depositors and other lenders. One can estimate the cost for each bank and sum it across all banks that are subject to the regulation.
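That calculation can be summarized in a short sketch. The per-bank figures and rates below are hypothetical placeholders that a regulator would replace with estimates drawn from supervisory data; only the structure of the computation follows the discussion above.

```python
# A hedged sketch of the cost side: for each bank, the forgone revenue on
# funds that could otherwise have been loaned out or invested, net of the
# interest the bank no longer pays to depositors and other lenders, summed
# across all covered banks. Every figure is a hypothetical placeholder.

def regulation_cost_for_bank(constrained_funds, lending_return, funding_rate):
    """Annual opportunity cost of the regulation for one bank."""
    return constrained_funds * (lending_return - funding_rate)

banks = [  # hypothetical banks subject to the rule
    {"constrained_funds": 2.0e9, "lending_return": 0.06, "funding_rate": 0.02},
    {"constrained_funds": 0.5e9, "lending_return": 0.05, "funding_rate": 0.02},
]
total_cost = sum(regulation_cost_for_bank(**b) for b in banks)
print(f"Estimated annual cost of the rule: ${total_cost:,.0f}")
```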

The benefit side is more complex. The regulator must calculate the avoided expected cost of a bailout that would occur but for the regulation, or of a financial crisis if a bailout does not occur. The financial crisis of 2008 provides some evidence as to the cost of a bailout. The federal government used money from the Troubled Asset Relief Program to bail out a group of banks, and although the government eventually turned a profit, the loss in terms of market price at the time is the appropriate measure. This amount then must be multiplied by the risk of a financial crisis in the first place, which can be roughly calculated from historical data on financial crises collected by Carmen Reinhart and Kenneth Rogoff in their 2009 book, This Time Is Different. Similar data can be used to calculate the risk and cost of a financial crisis if bailouts do not take place or do not succeed, which (based on the data in the Reinhart and Rogoff book) can be estimated as in the neighborhood of $1–2 trillion. Thus, data can be used to calculate a value of a statistical bailout and a value of a statistical financial crisis, analogous to the value of a statistical life in health regulation. Those values can then be used by all financial regulators who issue regulations that affect the risks of these harms. We realize that some readers may regard such estimates as impossible, but we ask them to suspend disbelief until we return to this issue below.
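A stylized sketch of that expected-cost calculation might look as follows. The probability figures and the crisis-cost number are assumptions chosen only to illustrate the mechanics, in the spirit of (but not derived from) the Reinhart and Rogoff data.

```python
# A minimal sketch, under loudly assumed numbers, of the "value of a
# statistical financial crisis" logic sketched above. The probabilities and
# the $1-2 trillion cost band are placeholders, not estimates we assert.

crisis_cost = 1.5e12         # assumed social cost of a crisis (midpoint of $1-2 trillion)
baseline_crisis_prob = 0.04  # assumed annual probability of a crisis without the rule
prob_with_rule = 0.03        # assumed annual probability with the capital rule in place

value_of_statistical_crisis = baseline_crisis_prob * crisis_cost
annual_benefit_of_rule = (baseline_crisis_prob - prob_with_rule) * crisis_cost

print(f"Expected annual crisis cost without the rule: ${value_of_statistical_crisis:,.0f}")
print(f"Expected annual benefit of the rule:          ${annual_benefit_of_rule:,.0f}")
```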

The next benefit is reduction of capital misallocation. Capital is misallocated when resources are used on projects that do not generate positive social welfare. In a market economy, it is reasonable to presume that capital allocations are socially beneficial, but there are some well-known exceptions. As has been known among economists for decades, destructive “races” can take place when investors spend resources to be first to obtain information and trade on it—for example, by building high-speed information networks that enable them to obtain information a few microseconds before others. As far as we know, capital adequacy regulations do not directly stop destructive races, but other regulations—for example, rules that limit the number of trades that can be made over small periods of time—could, in which case this benefit should count in favor of them in cost-benefit analyses.

A third potential benefit of financial regulation is that it can reduce the amount of “gambling.” Traders often trade in order to modify the risk level in their portfolios. Trades that reduce risk (known as hedging) improve their well-being to the extent that they are risk-averse, as most people are. However, traders often speculate, in many cases using other people’s money (including taxpayers’). There is no social benefit to such speculative trading, although it is controversial whether (as we believe) it is actually socially costly or (as others believe) it is not socially costly as long as the traders consent to the gamble. More research is needed to resolve this issue, but it certainly should not count against capital regulations that they may reduce the ability of banks to engage in, or facilitate through market-making, such speculative trades.

There are other benefits from financial regulation, above all consumer protection. Consumers often make bad investments because they do not understand financial markets, or are afflicted by well-known cognitive biases, or both. The 2010 Dodd-Frank Act created the Consumer Financial Protection Bureau to improve consumer financial regulation. Such regulation produces benefits to the extent that it prevents consumers from making financial trades that undermine rather than advance their financial well-being. Standard techniques for valuing investments and portfolios can be used to calculate the benefits of consumer financial protection regulations.

For capital adequacy regulations, the consumer-protection benefits are probably small, as are the capital misallocation and gambling benefits. Thus, the major benefit of the regulation is its reduction of the risk of a bailout or financial crisis. The optimal strictness of the regulation—whether it should be 4 percent, or 10 percent, or higher—is the level that maximizes the excess of benefits over costs.
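Under assumed cost and benefit schedules, the search for the optimal strictness is a simple maximization, as in the following sketch. The functional forms are stylized stand-ins for the bank-level cost and crisis-risk estimates discussed above, not claims about actual magnitudes.

```python
# A hedged sketch of choosing the optimal strictness: scan candidate capital
# ratios and keep the one with the largest net benefit.

def total_cost(ratio):
    # Assumed: the opportunity cost rises roughly in proportion to the ratio.
    return 5e9 * ratio

def total_benefit(ratio):
    # Assumed: crisis-risk reduction rises with the ratio but flattens out.
    return 1.2e9 * (1 - (1 - ratio) ** 20)

candidates = [r / 100 for r in range(1, 21)]  # ratios from 1% to 20%
best = max(candidates, key=lambda r: total_benefit(r) - total_cost(r))
print(f"Net-benefit-maximizing ratio under these assumptions: {best:.0%}")
```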

For another example, we consider the Volcker rule, which bans proprietary trading by banks. The Dodd-Frank Act directed regulators to implement the rule, leaving them considerable discretion as to how exactly to do that.

What would a CBA of the Volcker rule look like? The major goal of the rule is to prevent financial crises from occurring, or to prevent losses from proprietary trading from falling on taxpayers in a bailout. A bank that takes on too much risk may collapse, resulting in a costly government bailout or, if that does not take place, an economic crisis if contagion results. Restrictions on proprietary trading would have predictable effects on banks’ bottom lines—they would lose a certain amount of profits per year, which can be estimated from historical data. The lost profits across all banks to which the rule applies would be the cost of the regulation.

The major benefit is, as in the case of capital adequacy regulations, the reduction of the risk of a bailout or financial crisis. The estimates used for capital adequacy regulations would be used for the Volcker rule as well.
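A back-of-the-envelope sketch of that comparison, reusing the same illustrative crisis values as before, might look like this; every number is an assumption made for exposition.

```python
# A hedged sketch applying the same logic to the Volcker rule: the cost is
# the sum of lost proprietary-trading profits across covered banks; the
# benefit is the reduction in expected crisis costs. All numbers are
# assumed placeholders, not estimates of the rule's actual effects.

lost_prop_trading_profits = [1.0e9, 0.6e9, 0.3e9]  # assumed, per covered bank
annual_cost = sum(lost_prop_trading_profits)

crisis_cost = 1.5e12    # same assumed social cost of a crisis as above
risk_reduction = 0.002  # assumed drop in the annual probability of a crisis
annual_benefit = risk_reduction * crisis_cost

print(f"Annual cost:        ${annual_cost:,.0f}")
print(f"Annual benefit:     ${annual_benefit:,.0f}")
print(f"Net annual benefit: ${annual_benefit - annual_cost:,.0f}")
```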

Our approach suggests some other factors for regulators to consider when they calculate the costs and benefits of the rule. Profits from proprietary trading that are extremely short-term or reflect tiny gains on large-volume transactions probably reflect capital misallocation, gambling, or regulatory arbitrage, in which case the loss of such profits should not count as a “cost” in the CBA. The risk that banks will evade the rule should also be considered, although realistically the only way to address this arbitrage risk may be to update the rule as newly developed arbitrages come to light.

Some commentators will be skeptical that the benefits we have described can be calculated. It may seem impossible to calculate the effect of a capital adequacy rule or the Volcker rule on the risk of a financial crisis, or to estimate the social cost of a financial crisis. It may seem that banks can easily evade the rules through arbitrage, as they have in the past.

There are several responses to these concerns. First, the exact same objections could have been, and were, directed at CBA of health and safety regulations. Critics argued at the time that the value of a statistical life could not be calculated and was inherently arbitrary, and that, because of the ambiguity of dose-response curves and other data used to estimate the effect of a regulation on human health, the benefits of regulations could not be calculated. But once regulators were forced to use CBA, an academic industry arose to address those problems. While arbitrariness has not been eliminated, the controversy about the use of CBA has abated.

An example from recent history is OIRA’s development of a social cost of carbon (SCC), to be used by all agencies that issue regulations that affect climate change. OIRA convened experts from all the relevant agencies and those experts used computer models from academia to estimate the economic impact of different levels of carbon emissions, and thus the economic cost of the emission of an additional ton of carbon. The SCC number that was derived could then be used by different agencies as they performed cost-benefit analyses within their jurisdictions. Estimating the impact of greenhouse gases on the economy over hundreds of years is a far more daunting task than estimating the impact of capital adequacy regulations. And there are many legitimate criticisms of the process that the interagency working group used. Yet, its report has stimulated additional research and refinement, and it cannot be denied that this methodological rigor is socially beneficial.

Second, there already is a large amount of experience with valuing the impact of government interventions in financial markets. Litigation over government interventions—for example, the government contracts that attempted to help financial institutions survive the savings and loan crisis of the late 1980s and early 1990s—gives rise to a demand for expert work that untangles causation and determines the magnitude of losses attributable to specific government acts. This expert work is largely invisible because it is proprietary and appears in only summary form in judicial opinions. But the methods developed by experts can easily be deployed to CBA of financial regulation.

Third, financial regulators already do make assumptions about the cost of financial crises and about the causal effect of a regulation on the risk of a financial crisis. But those assumptions are implicit and likely inconsistent. The FDIC, for example, calculates the premiums banks must pay for deposit insurance; those premiums are implicitly based on an estimate of the risk that the banks will default on their obligations to depositors. The Volcker rule proposed by agencies necessarily makes assumptions about the risk and cost of a financial panic if, as agencies claim, the purpose of the rule is to prevent a repeat of the 2007–2008 financial crisis. The problem is that regulators do not make their assumptions explicit, which makes it impossible for the public to challenge the assumptions as unrealistic. At a minimum, if agencies were forced to conduct cost-benefit analyses and hence be explicit about their assumptions, they would be forced to adopt consistent assumptions. This would enhance regulatory transparency and stimulate academic and public debate that would further improve regulation.

For example, such a process might reveal that the parameter values necessary to justify the Volcker rule would also justify far higher capital requirements than exist at present. That would suggest that raising capital requirements should be the first priority and the Volcker rule only a secondary measure. Or it might be found that the Volcker rule is easily justified by parameters consistent with current capital levels, in which case the Volcker rule would be an easy choice. Without clear and consistent quantitative valuation, such determinations of the relative merits of policies are difficult to make.

Fourth, critics of CBA of financial regulation have never explained what the alternative is. We suspect that the only viable alternative, and indeed the status quo, is either what we called “intuitive balancing” above or a kind of pragmatic response to prior experience. Regulators believe that they have a rough sense of the risks and benefits of regulations of different strictness, based on their past experience. If capital adequacy rules did not stop a financial panic in the past, they should be tightened, but not “too much,” where “too much” probably reflects the political pushback of banks. If financial panics have not occurred recently, then maybe capital adequacy rules should be relaxed, but again not “too much.” This type of pragmatic balancing is not useless, and is better than flipping a coin, but it is just a degenerate version of CBA, which, done properly, demands better methods of using data to calculate valuations based on best practices developed in academia.

Lastly, with respect to the arbitrage problem, there is no magic bullet that would enable regulators to issue regulations that will be good for all time. Regulated firms will always find new ways to arbitrage—by which we mean avoid the literal application of a rule by adjusting behavior so that it is no longer covered by the rule but nonetheless causes the harm that the rule was intended to block. And the risk of arbitrage is indeed greater in financial markets than in other sectors, such as manufacturing. Financial arbitrage involves developing new and better algorithms that can be deployed almost costlessly, whereas in manufacturing arbitrage typically involves expensive actions like moving plants from one location to another or implementing new production processes.

But CBA provides a useful device for combating regulatory arbitrage. Because CBA involves a predictable method and relies on data that are generally available, firms can more easily predict how agencies will react to new activities. Agencies can respond quickly to socially harmful arbitrage by issuing interpretive guidance letters based on an extension of the CBA contained in the rules being arbitraged. Because firms can predict this reaction, they may avoid engaging in the arbitrage in the first place. Arbitrage is much easier when the “spirit” behind a rule is obscure, so it is difficult for an agency to justify an interpretive guidance letter blocking a new activity on the basis of the reasoning (if any) that underlies the original rule. By contrast, when the spirit of the rule is publicly articulated as maximizing benefits over costs, it can be easily extended to apply to new behavior that attempts to evade it along the margins.

CONCLUSION

As former OIRA administrator Susan Dudley recently explained in these pages, CBA is no panacea. (See “OMB’s Reported Benefits of Regulation: Too Good to Be True?” Summer 2013.) Agencies often exaggerate the benefits of regulations or underestimate their costs, and OIRA does not have the power or resources or sometimes the political or bureaucratic incentive to keep all agencies in line all the time. But it is hard to deny that the twin requirements of CBA and OIRA supervision have improved the performance of agencies compared to the pre-1981 status quo, and there is every reason to believe that the lesson will hold good for financial regulators as well.

Readings

  • “Benefit-Cost Analysis for Financial Regulation,” by Eric A. Posner and E. Glen Weyl. American Economic Review: Papers and Proceedings, Vol. 103 (2013).
  • “Benefit-Cost Paradigms in Financial Regulation,” by Eric A. Posner and E. Glen Weyl. Journal of Legal Studies, forthcoming.
  • “OMB’s Reported Benefits of Regulation: Too Good to Be True?” by Susan E. Dudley. Regulation, Vol. 36, No. 2 (Summer 2013).
  • This Time Is Different: Eight Centuries of Financial Folly, by Carmen M. Reinhart and Kenneth S. Rogoff. Princeton University Press, 2009.