Oil Markets

“The Changing Face of World Oil Markets,” by James D. Hamilton. July 2014. NBER #20355.

Crude oil prices, as measured by the U.S. Energy Information Administration's domestic first-purchase price, rose from an annual average low of $15.37 a barrel in 1970 to an annual average high of $71.48 in 1981 (all prices in inflation-adjusted second-quarter 2014 dollars). By 1988, the price had fallen to $22.06, and by 1998 to $14.94, below its 1970 level. By 2008, it had risen to $101.15. As I write this in October 2014, the price is around $82.

Will oil prices ever return to their 1998 and 1970 levels? James Hamilton of the University of California, San Diego, argues no. Today's historically high prices are the result of an extraordinary increase in oil demand from the developing world and stagnating conventional supply.

Developing countries accounted for one-third of the world's oil consumption in 1980, but they consume 55 percent of it now. Their consumption growth since 2005 has been well above their 1980–2005 trend; China alone accounts for 57 percent of the global increase in consumption since 2005.

Consumption trends in the developed world have moved in the opposite direction. During the low-price era from 1984 to 2005, oil consumption in the United States, Canada, Japan, and Europe grew along a linear trend, adding about 400,000 barrels a day of consumption each year. Since 2005, consumption has instead fallen by about 700,000 barrels a day each year. As a result, oil consumption in the developed world was 8 million barrels per day (mbd) lower at the end of 2012 than one would have predicted from the 1984–2005 trend.
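
As a rough check on that arithmetic, here is a minimal sketch. Only the two growth rates from the text matter; the 2005 consumption level is a placeholder and cancels out of the shortfall.

```python
# Back-of-the-envelope check of the trend shortfall described above.
# From the text: the 1984-2005 trend added about 0.4 mbd per year, while
# actual consumption fell by about 0.7 mbd per year after 2005.

TREND_GROWTH = 0.4      # mbd added per year along the 1984-2005 trend
ACTUAL_CHANGE = -0.7    # mbd lost per year after 2005
YEARS = 7               # 2005 through the end of 2012

baseline_2005 = 49.0    # hypothetical 2005 level in mbd; cancels out below

predicted_2012 = baseline_2005 + TREND_GROWTH * YEARS
actual_2012 = baseline_2005 + ACTUAL_CHANGE * YEARS

print(f"Shortfall vs. trend: {predicted_2012 - actual_2012:.1f} mbd")
# ~7.7 mbd, in line with the roughly 8 mbd gap cited above
```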

That reduction was the result of higher oil prices and not simply a fall in demand because of the Great Recession. Since 2009, U.S. gross domestic product has grown at about the same rate it did before the recession, but U.S. oil consumption continues to decline.

Crude oil production is accompanied by natural gas. The aggregate oil production data include "natural gas liquids" (NGLs), chiefly ethane and propane, which are not liquids at normal pressures and temperatures but are so called because they can be liquefied cheaply relative to methane. In 2005, NGLs made up only 9 percent of total oil supply; since then, they have accounted for 29 percent of the increase in total supply. That matters because, although NGLs add to total supply, they cannot easily be converted into gasoline, jet fuel, or diesel fuel. Thus there is an increasing mismatch between the characteristics of the marginal barrel of "oil" and oil's primary use as transportation fuel. The mismatch shows up in relative prices: we pay roughly four times as much per British thermal unit for crude oil as for natural gas.
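
To see where a ratio of roughly four comes from, here is a minimal sketch. The $82-per-barrel price is the one quoted above; the natural gas price is an assumed, illustrative spot price, and 5.8 million Btu per barrel is the standard conversion factor.

```python
# Rough per-Btu comparison of crude oil and natural gas prices.

OIL_PRICE_PER_BBL = 82.0     # dollars per barrel (price quoted in the text)
BTU_PER_BBL = 5.8e6          # Btu per barrel of crude (standard conversion)
GAS_PRICE_PER_MMBTU = 3.8    # dollars per million Btu (assumed spot price)

oil_price_per_mmbtu = OIL_PRICE_PER_BBL / (BTU_PER_BBL / 1e6)
ratio = oil_price_per_mmbtu / GAS_PRICE_PER_MMBTU

print(f"Crude: ${oil_price_per_mmbtu:.1f} per MMBtu, "
      f"or {ratio:.1f} times the price of natural gas")
# roughly $14 per MMBtu for crude, about four times the gas price
```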

Not only is the composition of the oil supply increasingly mismatched with the needs of the transportation market, but total production of crude oil is relatively stagnant. A linear extrapolation of the pre-2005 trend would have production growing by 8.7 mbd between 2005 and 2013; it actually grew by only 2.2 mbd.

Why has world oil production not increased? Libya, Iran, Iraq, and Nigeria have been affected by regime instability or international sanctions. Saudi production has been almost flat since 2005, averaging 9.6 mbd in 2005, 9.7 mbd in 2013, and 9.8 mbd so far in 2014. The major international oil companies have seen rising capital expenditures and falling production since 2005. U.S. conventional production in the "Lower 48" states was 5.5 mbd lower in 2013 than in 1970. That decline has been partially offset by Alaskan and offshore production, but those sources peaked in 1988 and 2003, respectively. Recent shale (tight) oil production, which is 2.9 mbd higher than in 2005, has more than offset the 0.6 mbd decline in conventional Lower 48 production over the same period. More important, the resulting net 2.3 mbd increase in U.S. production is essentially the entire increase in world production since 2005. Thus the current historically high price of oil is unlikely to fall dramatically, given the apparent limits on conventional supply and the tremendous increase in developing-world demand.

Payment Card Regulation

“The Impact of the U.S. Debit Card Interchange Fee Regulation on Consumer Welfare: An Event Study Analysis,” by David S. Evans, Howard Chang, and Steven Joyce. October 2013. SSRN #2342593.

“Regulating Consumer Financial Products: Evidence from Credit Cards,” by Sumit Agarwal, Souphala Chomsisengphet, Neale Mahoney, and Johannes Stroebel. September 2013. NBER #19484.

In payment card markets, banks create gains to trade between consumers and firms by facilitating transactions in which cards substitute for cash or checks. Banks have to decide how much to charge consumers and how much to charge merchants for that service. Because cash and checks have very low or zero marginal costs for consumers, banks have concluded that, to induce consumers to switch to cards from the other two payment forms, most of the costs of payment cards must be placed on merchants rather than consumers.

Merchants reacted to this politically by seeking the aid of Congress in reducing their charges. The Durbin Amendment to the 2010 Dodd-Frank Act instructed the Federal Reserve to issue rules limiting debit card interchange fees to the costs of authorizing, clearing, and settling debit card transactions, thus eliminating the fees as a source of profits for banks. In December 2010, the Fed proposed a rule that reduced the fee by a surprisingly large amount, to 12 cents per transaction from the unregulated rate of approximately 44 cents. The final rule, issued in June 2011, gave unexpected relief to the banks by limiting the fee to 24 cents. Still, in 2012 banks received an estimated $7.3 billion less in debit card processing revenue because of the fee reduction.

How were the benefits and costs of this fee reduction distributed? David Evans et al. use the initial surprise of the 12-cent proposal and the subsequent surprise of the much higher 24-cent limit as the basis for a study of how bank and retailer stock values changed. They conduct a traditional stock price event study, in which changes in the value of bank and retailer stocks relative to all other stocks immediately after the surprise announcements are attributed to those announcements.
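
The basic mechanics of such an event study can be sketched in a few lines. The regression parameters, daily returns, and market capitalization below are placeholders for illustration, not the authors' data or their exact specification.

```python
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, alpha, beta):
    """Sum of (actual - predicted) daily returns over the event window,
    where predicted returns come from a market model fit before the event."""
    predicted = alpha + beta * market_ret
    return float(np.sum(stock_ret - predicted))

# Hypothetical three-day window around a surprise announcement.
bank_ret = np.array([-0.031, -0.012, -0.004])   # placeholder daily returns
market_ret = np.array([0.002, -0.001, 0.003])
alpha, beta = 0.0001, 1.1                        # placeholder market-model fit

car = cumulative_abnormal_return(bank_ret, market_ret, alpha, beta)
market_cap = 150e9                               # hypothetical bank market value
print(f"CAR: {car:.2%}, implied value change: {car * market_cap / 1e9:.1f} billion dollars")
```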

The event study sample consisted of the 66 largest publicly traded retail firms in the United States and the 57 publicly traded financial firms with the largest debit card transaction volume. The draft rule created a capitalization gain of $2.5–$5.3 billion for retailers and a capitalization loss of $9.7–$10.8 billion for banks. The final rule resulted in retailers losing $6.2–$8.6 billion and banks gaining $9.4–$11.2 billion. Scaling the results up to represent all banks and merchants, the net effect of the rule as finally adopted was to increase merchant profits by $38.1–$41.1 billion and decrease bank profits by $15.9–$16.4 billion. Because the increase in merchant profits exceeds the decrease in bank profits, consumers are worse off by the $22–$25 billion difference: investors evidently expect banks to recoup much of the lost fee revenue from consumers, while retailers keep most of the interchange savings as profit rather than passing them on.

Consumer groups and some economists, particularly those whose research falls into the "behavioral" rather than the "neoclassical" tradition, favor regulation of financial transactions to aid unsophisticated consumers in their dealings with banks. They especially have in mind fees and other charges that are less than transparent and require diligence to understand. Banks and more traditional economists argue that there is no free lunch and that attempts to regulate fees will raise consumer costs along other, unregulated dimensions. For fee regulation to benefit consumers, credit card markets must not be fully competitive and consumers must not be equally responsive to all types of charges and fees.

Sumit Agarwal et al. examine how credit card charges responded to the Credit CARD Act of 2009. The act, which draws on behavioral economics, requires that consumers be notified of, and explicitly approve, transactions that exceed their credit limit, including notification of the resulting fees. Before the act, consumers could opt for a simple transaction denial once their credit limit was reached, an option few exercised. The act also requires that over-limit fees be charged only once per billing cycle rather than for each transaction, because many consumers did not realize they had exceeded their limit until they received their account statements and saw hundreds of dollars in fees. Finally, the act requires monthly statements to state explicitly how long it would take to pay off the balance if only the minimum payment were made and how large the payment would have to be to pay off the balance in 36 months.

Agarwal et al. studied the effects of the CARD Act on the "near universe" of credit card accounts at the eight largest banks. Before the act's provisions took effect (April 2008 to January 2010), consumers as a group paid 21.9 percent per dollar borrowed in interest and fees, cost the banks 15.6 percent in charge-offs, and generated a net bank profit of 1.6 percent. Consumers with FICO scores (a measure of creditworthiness) below 620, considered a bad score, paid 43.9 percent per dollar borrowed in interest and fees and generated net profits of 7.9 percent.

The regulations limiting fees had large effects on the fees paid by consumers with low credit scores. Late fees and over-limit fees dropped by an amount equal to 2.8 percent of borrowing volume ($744 billion in 2010), or $20.8 billion. For those with FICO scores below 620, fees dropped from 23 percent to 9 percent of the average daily balance. The authors found no offsetting change in interest rates or credit limits.

Because interest rates were generally declining during this period, some have argued that the absence of any change in the rates charged to consumers is itself evidence that banks effectively raised rates relative to the decline that otherwise would have occurred. To test this argument, the authors conducted a difference-in-differences analysis comparing interest rate changes for accounts with low and high FICO scores. If banks had offset the lost fee revenue through interest rates, one would expect rates to rise on the low-FICO accounts, which lost the most fee revenue, relative to the high-FICO accounts, whose revenue was little changed. The authors found no such difference.
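
The logic of that comparison can be written out in a short sketch. The interest rate figures below are placeholders, not numbers from the paper; the point is only that the estimate nets out the market-wide decline in rates.

```python
# Difference-in-differences: the change for the treated group (low-FICO
# accounts, most affected by the fee limits) minus the change for the
# comparison group (high-FICO accounts, little affected).

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical average annual interest rates, in percent.
low_fico_pre, low_fico_post = 21.0, 20.6     # FICO < 620
high_fico_pre, high_fico_post = 14.0, 13.6   # FICO >= 620

estimate = diff_in_diff(low_fico_pre, low_fico_post, high_fico_pre, high_fico_post)
print(f"DiD estimate: {estimate:+.1f} percentage points")
# An estimate near zero is consistent with the authors' finding that rates on
# the accounts that lost the most fee revenue did not rise relative to others.
```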

The authors find that the regulation that mandated information about length of time to full balance repayment had little effect. The number of accounts repaying at a rate that would extinguish balances within 36 months increased by a mere 0.5 percentage points.

Risk Retention by Mortgage Securitizers

“Qualified Residential Mortgages and Default Risk,” by Ioannis Floros and Joshua T. White. August 2014. SSRN #2480579.

In late October, federal regulators issued the final rules required by the Dodd-Frank Act defining which mortgages are deemed risky enough that the securitizer must retain at least a 5 percent interest in them. The original proposed rule in 2011 defined a "Qualified Residential Mortgage" (QRM), which would be exempt from risk retention, as a mortgage with at least a 20 percent down payment. The proposed rule also exempted mortgages with an explicit government guarantee, including those sold to Fannie Mae and Freddie Mac as long as the two firms remained under government conservatorship.

Liberal community housing groups, mortgage bankers, and home builders lobbied extensively over the following three years against the down payment requirement. The final rule as adopted contained no down payment requirement and exempted mortgages sold to Fannie and Freddie as long as the firms remain under federal conservatorship with explicit government backing.

The reaction to the final rule from commentators has been negative. Barney Frank, former chairman of the House Financial Services Committee, said, “The loophole has eaten the rule, and there is no residential mortgage risk retention.”

How important are down payments and other loan characteristics in predicting mortgage delinquency? Ioannis Floros and Joshua White examine the performance through the end of 2012 of private-label (non-government agency) loans securitized from 1997 to 2009, about 2.7 million loans in all. The share of loans that became seriously delinquent (defined as 90 days or more in arrears or in foreclosure) was 44.6 percent for the entire sample, ranging from a low of 13.7 percent in 1998 to a high of 57.8 percent in 2006. That compares with a seriously delinquent (SDQ) rate of only 5.3 percent for agency loans over the same period. Higher FICO scores are associated with lower delinquency, but 26.8 percent of SDQ loans had scores of 720 or above (considered a good score). Restricting the sample to qualified mortgages (QMs), which excludes loans with negative amortization, interest-only or balloon payments, or no income or asset documentation, lowers the SDQ rate from 44.6 percent to 33.8 percent. Further excluding loans with a FICO score below 690 or a loan-to-value ratio above 90 percent (less than 10 percent down) lowers the SDQ rate from 33.8 percent to 10.7 percent. Thus loans meeting the QM definition plus just two components of the proposed 2011 QRM definition (a FICO score of at least 690 and a 10 percent down payment) were roughly one-fourth as likely to become seriously delinquent as the private-label sample as a whole.
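
The successive screens can be illustrated with a toy sketch; the loan records and field layout below are hypothetical, and the rates reported in the study come from the full 2.7 million-loan sample.

```python
# Apply the paper's successive sample restrictions to a handful of made-up
# loan records: all private-label loans, then QM-only, then QM plus a
# FICO >= 690 and loan-to-value <= 90 percent screen.

loans = [
    # (seriously_delinquent, is_qm, fico, ltv_percent)  -- hypothetical records
    (True,  False, 610, 100),
    (True,  True,  650,  95),
    (False, True,  720,  85),
    (False, True,  700,  80),
    (True,  True,  705,  97),
]

def sdq_rate(sample):
    return sum(1 for sdq, *_ in sample if sdq) / len(sample)

qm_only = [loan for loan in loans if loan[1]]
qm_screened = [loan for loan in qm_only if loan[2] >= 690 and loan[3] <= 90]

print(f"All: {sdq_rate(loans):.0%}, QM only: {sdq_rate(qm_only):.0%}, "
      f"QM + FICO/LTV screen: {sdq_rate(qm_screened):.0%}")
# The study's corresponding rates are 44.6%, 33.8%, and 10.7%.
```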

Low down payments and low FICO scores predict bad loans. So why were those provisions left out of the final QRM definition, while the QM criteria (no negative amortization, interest-only or balloon payments, and full documentation), which were not very predictive, were included? The simple explanation is that the more stringent rules would have shut down the private market, and members of Congress made it clear that they did not want that to happen. Fewer than 2 percent of the loans in the data had both a loan-to-value ratio below 80 percent (at least 20 percent down) and a FICO score above 690.

Risk Analysis

“Pricing Lives for Corporate Risk Decisions,” by W. Kip Viscusi. September 2014. SSRN #2491735.

In 2014, General Motors was fined $35 million by the National Highway Traffic Safety Administration (NHTSA), the maximum allowed under the law, for failure to report safety problems related to ignition switches. Those problems were associated with 13 fatalities. The consent decree released by NHTSA also revealed that GM had no internal systematic discussion of risk versus cost in the design of the switch.

For Vanderbilt University economist Kip Viscusi, that fine is too low. NHTSA may impose a fine of only $7,000 per violation, and the total fine for a related series of violations is capped at $35 million. The value of a statistical life (VSL) that the U.S. Department of Transportation (of which NHTSA is a part) uses to judge the cost-effectiveness of its regulations is $9.1 million. Thus the 13 lives lost have an aggregate value of about $118 million, which is what GM should have been fined. A recall to fix the switch in 2007 would have cost an estimated $100 million. Had GM decisionmakers faced the prospect of a $118 million fine rather than one capped at $35 million, the $100 million recall would have been the cost-minimizing choice.
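
The arithmetic behind that argument takes only a few lines, using the figures quoted above.

```python
# Fine-versus-recall arithmetic from Viscusi's argument.

VSL = 9.1e6           # DOT value of a statistical life, in dollars
FATALITIES = 13
RECALL_COST = 100e6   # estimated cost of a 2007 recall, in dollars
STATUTORY_CAP = 35e6  # maximum NHTSA fine under current law, in dollars

vsl_based_fine = VSL * FATALITIES
print(f"VSL-based fine: ${vsl_based_fine / 1e6:.0f} million")               # ~$118 million
print("Recall cheaper than VSL-based fine?", RECALL_COST < vsl_based_fine)  # True
print("Recall cheaper than the capped fine?", RECALL_COST < STATUTORY_CAP)  # False
```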

GM had no internal discussion of the costs and benefits of risk reduction because explicit discussions by automakers in the past (e.g., Ford's infamous decision to adopt a less costly gasoline tank design for its Pinto subcompact) led to vilification in the press as well as punitive awards from juries. To be sure, Ford's Pinto analysis was flawed: it valued a life using only lost earnings ($40,000 a year for 40 years, or just $1.6 million) rather than the much higher VSL. Had Ford used the VSL, it probably would have concluded that the safer design was cost-effective.

Juries do not confront the ex ante choice of how much to spend to save statistical lives. Instead, they weigh the loss of an identified life against a trivial expenditure per car rather than the aggregate expenditure for an entire model run. In the Pinto gasoline tank cases, the jury weighed an extra $11 per car against the loss of a life. That framing makes juries very sympathetic to plaintiffs and very unsympathetic to companies that engage in explicit discussions of risk-reduction tradeoffs.

Viscusi has studied jury behavior using random samples of ordinary people who are given various VSL estimates and then vote on damage awards in hypothetical cases. In those cases, the award was higher when the company had conducted an explicit cost-benefit analysis. Viscusi proposes a safe harbor for corporate risk analyses that are conducted correctly using the DOT VSL figure.

Financial Market Regulation

“Cost‐​Benefit Analysis of Financial Regulation: Case Studies and Implications,” by John C. Coates IV. January 2014. SSRN #2375396.

“Towards Better Cost‐​Benefit Analysis: An Essay on Regulatory Management,” by John C. Coates IV. July 2014. SSRN #2471682.

In 2011, the D.C. Circuit Court of Appeals struck down a Securities and Exchange Commission regulation because the agency failed to provide an adequate cost‐​benefit analysis (CBA) for the regulation. In Congress, Sens. Mike Crapo (R‑Idaho) and Richard Shelby (R‑Ala.) have introduced a bill that would require the financial regulators to conduct CBA of all future proposed regulations.

In a previous issue of Regulation, University of Chicago professors Eric Posner and Glen Weyl argue that the time has come to require CBA by the independent financial regulatory agencies (“The Case for Cost‐​Benefit Analysis of Financial Regulation,” Winter 2013–2014).

In an earlier issue, Richard Zerbe and two of his doctoral students at the University of Washington examined CBA as conducted by an actual government agency ("Benefit-Cost Analysis in the Chehalis Basin," Summer 2013). They concluded that

the results of bureaucratic [CBA] reflect costs and benefits that are readily countable, rather than a careful consideration of economic standing or economically significant cost or benefit flows. … Bureaucratic [CBA] tends to find positive net benefits for a given alternative when conducted or commissioned by project supporters, and negative net benefits when conducted or commissioned by project detractors. Both positions may be supported by legitimate bodies of credible evidence.

John Coates IV has written two papers in the same vein as Zerbe, arguing that proponents of CBA of financial regulations (CBA/FR) have oversold its capabilities. In the first paper, he conducts a hypothetical CBA of the monetary policy rule known as the "Taylor rule," proposed by Stanford economics professor John Taylor. The rule would set the federal funds rate at 1 + 1.5 × the inflation rate + 0.5 × the "output gap," defined as the percentage deviation of actual GDP from "potential" GDP. Coates argues that no one currently has the capacity to conduct quantified CBA with any real precision or confidence for important financial regulations like a Taylor rule: the necessary data are not available, and the analysis rests on contested macroeconomics. He even quotes Taylor himself, at a 2013 congressional hearing, saying that "while discretion [by the Federal Reserve] would be constrained [by the rule], it would not be eliminated." Coates asks, rhetorically, how anyone would evaluate such a regulation.
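
For concreteness, the rule as stated can be computed directly; the inflation and output-gap scenarios below are hypothetical.

```python
# The Taylor rule as described in the text: federal funds rate =
# 1 + 1.5 * inflation + 0.5 * output gap (all in percent).

def taylor_rule_rate(inflation, output_gap):
    return 1.0 + 1.5 * inflation + 0.5 * output_gap

# Hypothetical (inflation %, output gap %) scenarios.
for inflation, gap in [(2.0, 0.0), (2.0, -3.0), (4.0, 1.0)]:
    rate = taylor_rule_rate(inflation, gap)
    print(f"inflation {inflation:.1f}%, output gap {gap:+.1f}% -> funds rate {rate:.2f}%")
# e.g., 2 percent inflation with a closed output gap implies a 4 percent funds rate
```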

In the second paper, Coates argues that it "is CBA supporters themselves who need to show that CBA is anything different than judgment in drag." "Any guestimates that emerge from superficial CBA/FR," he writes, "will only reflect crude assumptions based on the prior judgmental beliefs (i.e., theoretical guesses, informed by experience and ideology) of researchers about the value of regulation."

He spends the remainder of the paper suggesting how to encourage regulators to engage in meaningful "conceptual" CBA (that is, do not regulate unless real market failures exist, be careful about reducing competition, etc.) and thereby foster the development of actual quantitative CBA. His recommendations include giving agencies deference and restricting court review to those cases in which an agency is expanding its jurisdiction (or at least some believe it is) and using "bad" CBA to cover up that fact; appointing more economically literate regulatory commissioners so that the staff know that CBA is really important; allowing staff to release CBA without commissioner approval, as inspector general reports are released; and using the equivalent of clinical trials to develop true knowledge about regulatory effects.

Some of these recommendations have as much of an "assume-a-can-opener" feel to them as the CBA recommendations Coates criticizes. In the end, I am reminded of Bill Niskanen's thoughts on the struggle over the proper role of economically informed analysis in policy decisions ("More Lonely Numbers," Fall 2003), which are worth repeating: "If lawmakers want more or better regulatory analysis, then [such analysis] would be valuable…. But it is not at all evident that Congress wants better regulatory analysis."