Nuclear Power

  • “Sources of Cost Overrun in Nuclear Power Plant Construction Call for a New Approach to Engineering Design,” by Philip Eash‐​Gates, Magdalena M. Klemun, Goksin Kavlak, et al. Joule 4(11): 2348–2373 (November 2020).

In theory, low‐​cost nuclear power has been the answer to many energy and environmental policy questions ever since the 1950s. In practice, its costs have increased inexorably. Why? This paper documents the history of nuclear power plant construction in the United States and the increase in costs. From 1967 through 1978, 107 plants were built. Rather than declining over that period through learning by doing, plant costs more than doubled with each doubling of cumulative U.S. capacity. The problem was declining “materials deployment productivity” — that is, the amount of concrete and steel that workers put together per unit of time.
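
In experience-curve terms (a generic parameterization, not the paper's exact specification), unit cost scales with cumulative capacity Q as

$$C(Q) = C_0 \left(\frac{Q}{Q_0}\right)^{b},$$

so each doubling of capacity multiplies cost by 2^b. Learning by doing normally implies b < 0; the U.S. construction record described here corresponds to b > 1, with costs more than doubling at each doubling of cumulative capacity.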

About 30% of the productivity reduction stems from nuclear regulatory safety concerns. According to the authors:

While our analysis identifies the rebar density in reinforced concrete as the most influential variable for cost decrease, changes to the amount and composition of containment concrete are constrained by safety regulations, most notably the requirement for containment structures to withstand commercial aircraft impacts. New plant designs with underground (embedded) reactors could allow for thinner containment walls. However, these designs are still under development and pose the risk of high excavation costs in areas or at sites with low productivity.

The sources of the other 70% of productivity slowdown were construction management and workflow issues, including lack of material and tool availability, overcrowded work areas, and scheduling conflicts between crews of different trades. Craft laborers, for example, were unproductive during 75% of scheduled working hours.

Plant builders attempted to address these problems by increasing the use of standardized prefabricated modules that could be shipped to site and installed. These were employed in later reactors, but whatever advantages they provided did not improve aggregate productivity.

Since the 1990s, two nuclear projects have begun construction, both two‐​reactor expansions of existing generating stations. The VC Summer project in South Carolina was abandoned in 2017 with sunk costs of $9 billion, and the Vogtle project in Georgia is severely delayed. Current estimates place the total price of the Vogtle expansion at $25 billion, almost twice as high as the initial estimate of $14 billion, and costs are anticipated to rise further.

These problems are not unique to the United States. Projects in Finland and France have also experienced cost escalation, cost overrun, and schedule delays, as I noted in a previous Working Papers column (Spring 2012).

This paper provides an important reality check for those who believe nuclear power is an essential component of any strategy to reduce greenhouse gas emissions. — Peter Van Doren

Banking Fees

  • “Who Pays the Price? Overdraft Fee Ceilings and the Unbanked,” by Jennifer L. Dlugosz, Brian T. Melzer, and Donald P. Morgan. Working paper, November 2020.

Nearly 25% of low‐​income U.S. households are unbanked. Many observers believe that overdraft fees are the cause. In 2015, banks collected nearly $12 billion in overdraft and bounced check fees, constituting nearly two‐​thirds of their deposit account fees. Sen. Cory Booker (D‑NJ) has introduced legislation to limit such charges.

This paper examines a “natural experiment” in which a 2001 federal regulation relaxed the state‐​level overdraft fee limits that previously constrained nationally chartered banks in four states. During the late 1990s and early 2000s, Alaska, Illinois, Missouri, and Tennessee imposed caps on the overdraft and bounced check fees that banks could charge their residents. In 2001, the Office of the Comptroller of the Currency (OCC) issued a ruling that federal law preempted those state laws in the case of nationally chartered banks, which had roughly 50% deposit market share in the affected states.

In response to the OCC ruling, nationally chartered banks in those states increased their overdraft fees by about 10% but cut their returned check rates by 10% relative to other states. National banks increased the supply of overdraft credit once the fee limits were relaxed.

National banks also decreased the minimum deposit balance required to avoid a monthly maintenance fee in interest‐​bearing checking accounts in those states by between $376 and $538 from an average minimum balance of $1,345. That is a decline of 28%-40%. There was no change in minimum required balances on non‐​interest checking accounts nor any change in monthly account maintenance fees. As a result, checking account ownership by low‐​income households rose by 4.8 percentage points in the affected states relative to other states following the preemption ruling, a more than 10% increase from a 44% baseline ownership rate. Those households above the bottom quintile in income showed no change in bank account ownership.

A relaxation in price controls increased the use of checking accounts among low‐​income households. — P.V.D.

Research on the Minimum Wage and Employment

  • “Myth or Measurement: What Does the New Minimum Wage Research Say about Minimum Wages and Job Loss in the United States?” by David Neumark and Peter Shirley. NBER Working Paper no. 28388, January 2021.

There is a sense — abetted by the media — that economics is divided on the effect of a minimum wage on employment. It’s certainly an impression that many economists have tried to foster. For instance, an oft‐​cited 2015 survey by the University of Chicago’s Initiative on Global Markets found that one‐​fourth of economists surveyed (mostly elite ones with some political experience) rejected the idea that a minimum wage increase would reduce jobs, while another 40% said its effect is uncertain.

In 1994, David Card and Alan Krueger published “Minimum Wages and Employment: A Case Study of the Fast‐​Food Industry in New Jersey and Pennsylvania” (American Economic Review 84[4]: 772–793). It purported to show that a minimum wage increase in New Jersey resulted in higher employment in chain fast‐​food restaurants as compared to the Pennsylvania communities directly across the border where the minimum wage was unchanged. The article triggered a flurry of new studies hoping to replicate its results.

David Neumark of the University of California, Irvine and William Wascher of the Federal Reserve Board of Governors reexamined Card and Krueger’s study (“Minimum Wages and Employment: A Case Study of the Fast‐​Food Industry in New Jersey and Pennsylvania: Comment,” American Economic Review 90[5]: 1362–1396 [2000]) and found numerous errors in Card and Krueger’s survey data that, by their account, drove most of the counterintuitive results. Neumark and Wascher’s reanalysis, using payroll records instead of survey responses, found that the New Jersey minimum wage increase did indeed reduce employment.

The competing papers cleaved the labor economist community, with liberal‐​leaning members embracing Card and Krueger’s work and conservative‐​leaning ones supporting Neumark and Wascher’s. This left many with the impression that the question is unsettled.

Since those studies appeared, there have been two increases in the federal minimum wage and a number of states and municipalities have raised their own minimum wage. The changes over time and differences across states create enough variation to ostensibly discern the effect an increase has on employment levels, and researchers have produced a raft of such analyses. Neumark has collected these studies and attempted an “analysis of analyses.”

There are two competing economic views that can lead economists to support the minimum wage. The first holds that increasing the wage does reduce employment, but only slightly. In this view, demand for unskilled labor is quite inelastic; it’s difficult to replace these occupations with capital or do without them altogether. Economists in this camp argue that the societal benefits from wage increases for workers who keep their jobs after the increase more than outweigh the societal costs of the few who lose their jobs because of the increase.
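
A back-of-envelope calculation (a standard elasticity exercise, not taken from the paper) shows what is at stake in this view. With a demand elasticity η for low-wage labor, a minimum wage increase of Δw/w changes employment by roughly

$$\frac{\Delta E}{E} \approx \eta \cdot \frac{\Delta w}{w},$$

so if |η| < 1, the percentage of jobs lost is smaller than the percentage wage gain and the total earnings of affected workers as a group rise.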

The second view is that a minimum wage increase does not affect employment at all. This contention is at odds with neoclassical economics unless we assume that many minimum wage workers are in monopsonistic labor markets, where there is only one buyer of labor. Such labor markets were not uncommon a century ago, when company towns still existed and a community’s entire workforce was beholden to a single employer. However, it’s one thing to assert that these labor market conditions persist and another thing altogether to identify them. The (limited) research on this tends to look at labor markets by industry instead of community, a stratification that makes little sense when applied to low‐​skilled workers who presumably possess no industry‐​specific skills and tend to focus their job search in their own community.

Because a monopsonist employer must raise wages for everyone a little bit to entice the marginal worker to take a job (unless the employer has the power to wage‐​discriminate between existing and new workers), the marginal cost of that worker is much higher than his wage. Hence, the monopsonistic employer pays wages below the socially optimal rate and employs less than the socially optimal number of workers. A minimum wage would make the firm a price taker regarding wages, effectively lowering the marginal cost of labor. In that situation, a minimum wage would serve to increase employment.
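
The logic can be made explicit with the textbook monopsony model (a generic sketch, not any particular paper's notation). Facing an upward-sloping labor supply curve w(L), the firm's marginal cost of the Lth worker is

$$MC(L) = \frac{d\,[w(L)\,L]}{dL} = w(L) + L\,w'(L) > w(L).$$

The firm hires where marginal revenue product equals MC(L), which yields less employment and a lower wage than the competitive outcome. A binding minimum wage set between the monopsony wage and the competitive wage becomes the firm's marginal cost of labor over the relevant range, so the profit-maximizing response is to hire more workers.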

Many commentators have apparently concluded that the U.S. economy contains many monopsonistic labor markets. Freelance journalist Noah Smith recently blogged his own summary of the new minimum wage research, arguing that its results support the notion that, because of monopsonistic labor markets, a minimum wage increase’s effect on employment would be exceedingly modest (“Why $15 Minimum Wage Is Fairly Safe,” Noahpinion (blog), January 15, 2021).

But is that really what the recent empirical research indicates? In this working paper, Neumark and Peter Shirley, a policy analyst for the West Virginia Legislature who earned his economics Ph.D. under Neumark, present an analysis akin to a meta‐​analysis showing no evidence of a resurgent monopsony. Their careful work, conducted using the data from the original studies, suggests that labor demand curves do indeed bend downward, meaning that a minimum wage increase will reduce employment. They conclude that these job losses are especially pronounced for teens and unskilled workers.

Nearly 80% of the studies in their analysis have a negative elasticity of labor demand, and almost half of those are significant at the 95% confidence level. For low‐​wage industries, those proportions are slightly smaller, with two‐​thirds of studies finding a negative demand elasticity and one‐​third being negative and significant at the 95% confidence level.

In essence, the talk about myriad monopsonies redeveloping in recent years makes little sense. The wealth of new studies on the minimum wage is far from showing that this issue is unsettled; in fact, the studies show that a higher minimum wage does indeed reduce employment. Whether that loss is outweighed by the benefits of boosting other workers’ wages is an open debate, but pretending that job losses do not happen is at odds with most research. — Ike Brannon

Special Purpose Acquisition Companies

  • “A Sober Look at SPACs,” by Michael Klausner, Michael Ohlrogge, and Emily Ruan. SSRN Working Paper no. 3720919, January 2021.

Investors have complained for some time that the mechanism behind initial public offerings (IPOs) does not appear to achieve an optimal outcome. It usually begins with an investment bank canvassing big investors to get a rough sense of what they might pay for the new company’s stock, to establish a price range. The bank then attempts to steer the company to select an offer price at the lower end of that range.

The investment bank’s intent is to create a scenario in which the price “pops” as soon as the stock hits the market. The initial jump in price helps the bank’s reputation with investors, makes the bank some money (it typically receives some of the stock as payment), and garners attention for the newly public firm. However, the steep initial jump comes at a cost: had the offer price been set higher, the company would have raised more capital.

What’s more, doing an IPO is not cheap: underwriting fees can be as much as 7% of the proceeds raised.

Alternatives to the standard IPO process have been tried. Google famously used a Dutch auction for its initial offering, but it was widely viewed as a disappointment because it raised less money than the company hoped. Some attributed the outcome to its investment banks, none of which wanted the status quo upended. Few auction‐​based IPOs have been attempted since then.

More recently, the music app Spotify did a direct listing, whereby it simply listed its outstanding shares on the market without an underwritten offering. The perceived drawback of this is that existing shareholders are not required to hold onto their shares for a certain period — unlike in most IPOs — so the fear is that without that backstop the shares could tank. That, in turn, dampens demand and the new shares raise less money. While investors watched Spotify’s direct listing carefully, it failed to spur many imitators.

Another alternative to the IPO gained popularity in 2020: the Special Purpose Acquisition Company (SPAC).

A SPAC is a “blank check” shell company created specifically to acquire a private firm and take it public. Investors buy shares in the shell company at a fixed price of $10, and the principals park the money raised while they search for a promising firm to acquire. Once the principals have identified a target, they inform the shareholders, who vote on whether to approve the acquisition. Shareholders who don’t approve can get their original investment back with interest. If the process isn’t completed within two years, all shareholders get their investment returned.

The popularity of SPACs exploded in 2020, when companies executed 165 of them, compared to just 59 in 2019. And that popularity is growing: Bloomberg estimates that in January 2021 alone, investors initiated 90 SPACs.

The rise in their use has led some to declare that the SPAC is a superior tool for taking startups public and that it may someday achieve what the Dutch auction and direct listing couldn’t and supplant the IPO.

However, the popularity of SPACs may prove fleeting, the authors of this paper argue. The problem, they aver, is that SPACs may make a lot of money for the principals who found them, but the investors who buy stock in them often end up not doing very well.

While a SPAC may be simpler to execute than an IPO, it does not appear to be superior for investors. In their analysis of 47 SPACs that merged between January 2019 and June 2020, the authors found that the average stock price was well below the $10 starting price. The authors attribute this to dilution, which comes from the sponsor’s “promote” (shares the sponsor receives at nominal cost), underwriting fees, and the warrants that remain outstanding after the merger. These costs are significant, the authors say, but largely hidden, and they fall on the shareholders who do not redeem their shares.
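
To make the dilution mechanics concrete, here is a stylized calculation in Python. The promote share, fee rate, and redemption rate are hypothetical round numbers chosen for illustration; they are not the authors' estimates.

```python
# Stylized SPAC dilution arithmetic (hypothetical inputs, for illustration only).

ipo_shares = 50_000_000                 # public shares sold at the customary $10.00 each
trust_cash = ipo_shares * 10.00         # cash placed in trust: $500 million

promote_shares = 0.25 * ipo_shares      # sponsor "promote": shares issued to the sponsor
                                        # at nominal cost (assumed 25% of IPO shares)
underwriting_fees = 0.055 * trust_cash  # assumed 5.5% of IPO proceeds
redeemed_fraction = 0.30                # assumed share of public holders who redeem at $10

# Cash actually delivered to the merged company, after redemptions and fees.
cash_remaining = trust_cash * (1 - redeemed_fraction) - underwriting_fees

# Shares outstanding after the merger (ignoring warrants, which dilute further).
shares_outstanding = ipo_shares * (1 - redeemed_fraction) + promote_shares

print(f"Cash per share delivered in the merger: ${cash_remaining / shares_outstanding:.2f}")
# With these inputs, each $10.00 share brings only about $6.79 of cash into the
# merger, which is the kind of dilution the authors describe.
```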

And while stock prices for the SPACs in the data set tend to fall in the first year after the transaction is completed, the authors find that most of the initial investors exit the stock shortly after the acquisition closes. Investors who bought into the SPAC after the acquisition bore the brunt of the losses. Such an outcome is not a sustainable long‐​run equilibrium, the authors suggest.

They conclude that the benefits of SPACs appear to be overstated, for several reasons. There is little evidence that smaller investors are, in fact, participating in SPACs. The cost of executing a SPAC does not appear to be appreciably less than that of a regular IPO. And the drop in price after a SPAC’s merger closes suggests that savvy investors may soon begin avoiding them.

In the two decades since the dot‐​com bubble burst, we have seen Congress pass the Sarbanes‐​Oxley Act of 2002 and the Dodd‐​Frank Act of 2010 to give regulators more tools with which to govern financial markets. Many people have written about how these laws have increased the cost of doing an IPO, and economists have been predicting a continuing diminution of U.S. IPOs for two decades. In fact, in the last year a large number of companies have gone public, many of which have chosen to do so via SPACs.

The authors conclude that the SPAC does not offer a less expensive route to going public, although it may be easier to accomplish. As the market digests the fact that SPACs tend to lag the market post‐​acquisition because of the higher costs, future SPAC transactions may be forced to become more efficient — although that hasn’t yet occurred in the IPO market.

But, the authors note, the higher cost of a SPAC is not inherent in the structure and could be addressed if the market forced the issue.

This paper sheds light on a new phenomenon. It merits a rapid update that performs the same analysis for all SPACs that occurred in 2019 to see whether the explosion of the phenomenon widened the gap between post‐​SPAC stock performance and the broader market indices, and whether the dilution problem was lessened. — I.B.

Securities Regulation

  • “Quantifying the High‐​Frequency Trading ‘Arms Race’: A Simple New Methodology and Estimates,” by Matteo Aquilina, Eric Budish, and Peter O’Neill. SSRN Working Paper no. 3636323, June 2020.
  • “Innovation in the Stock Market and Alternative Trading Systems,” by Gabriel Rauterberg. SSRN Working Paper no. 3728768, December 2020.

No development in financial markets currently causes more discussion and disagreement than high‐​frequency trading (HFT). Forty years ago, the “making” of a market in equities was done by “specialists” who owned seats on exchanges. They were compensated by the “spread” — the difference between the price they offered sellers and charged buyers. Those differences were large enough to more than cover costs. The excess profits were capitalized in the prices that specialists paid for the right to trade on an exchange.

Now liquidity is provided by traders using computers. In a previous Working Papers column (Winter 2013–2014) I reported that many commentators view this change positively because the costs of trading have been dramatically reduced along with the rents to specialists. Bid‐​ask spreads have narrowed over time, and market‐​makers’ revenues have fallen from 1.46% of traded face value in 1980 to just 0.11% in 2006 and 0.03% in 2015. HFT also reduces stock price volatility: during the temporary 2008 ban on short sales of financial stocks, the financial stocks with the biggest decline in HFT had the biggest increase in volatility.

Those who emphasize the costs of HFT focus on an “arms race” among HFT participants to locate their servers closer and closer to the servers of electronic exchanges. The arms race exists because transmitting buy and sell offers from the individual computerized exchanges to the National Market System (NMS) takes real time. That lag creates the possibility of learning about prices at one exchange and trading on that information through the NMS before the NMS posts it. Traders have responded by paying to place their servers in the same facilities as the exchanges’ servers, shaving transmission times so they can arbitrage price differences measured in thousandths of a second (latency arbitrage).

In a previous Working Papers column (Fall 2015) I described research that demonstrated that the arms race is the result of exchanges’ use of a “continuous‐​limit‐​order‐​book” design (that is, orders are taken continuously and executed in the order of arrival). In a continuous auction market, someone is always first. In contrast, in a “frequent batch” auction (in which trades are executed by auction at stipulated times that can be as little as a fraction of a second apart), the advantage of incremental speed improvements disappears. To end the latency arbitrage arms race, the 2015 paper proposed that exchanges switch to batch auctions conducted every 10th of a second.
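
To illustrate the idea, here is a minimal sketch of the matching step in a frequent batch auction, written in Python. It is my own simplification for exposition (a single clearing price taken at the marginal match), not the design proposed in the research or used by any exchange. The point is that every order arriving within the same interval is treated identically, so shaving a few microseconds off transmission time confers no advantage.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    price: float   # limit price
    qty: int       # shares

def batch_auction(orders):
    """Clear one batch: match orders at a single price, maximizing traded quantity."""
    buys = sorted((o for o in orders if o.side == "buy"), key=lambda o: -o.price)
    sells = sorted((o for o in orders if o.side == "sell"), key=lambda o: o.price)
    traded, clearing_price = 0, None
    bi = si = 0
    buy_left = buys[0].qty if buys else 0
    sell_left = sells[0].qty if sells else 0
    while bi < len(buys) and si < len(sells) and buys[bi].price >= sells[si].price:
        q = min(buy_left, sell_left)
        traded += q
        # One simple uniform-price rule: midpoint of the marginal matched pair.
        clearing_price = round((buys[bi].price + sells[si].price) / 2, 2)
        buy_left -= q
        sell_left -= q
        if buy_left == 0:
            bi += 1
            buy_left = buys[bi].qty if bi < len(buys) else 0
        if sell_left == 0:
            si += 1
            sell_left = sells[si].qty if si < len(sells) else 0
    return traded, clearing_price

# Orders that arrive at any point within the same interval (say, a tenth of a
# second) enter the same batch; arrival order within the batch is irrelevant.
batch = [Order("buy", 100.02, 300), Order("buy", 100.01, 200),
         Order("sell", 100.00, 250), Order("sell", 100.03, 400)]
print(batch_auction(batch))   # (250, 100.01): 250 shares clear at a single price
```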

What are the costs of the arms race? If multiple participants are engaged in a speed race, some will succeed and some will fail. But conventional limit‐​order‐​book data contain no record of the losers. The first paper in this review relies on the simple insight that messages sent to the exchange’s trading computers that fail to result in trades provide a complete record of speed‐​sensitive trading.

The authors obtained from the London Stock Exchange all message activity for all stocks in the Financial Times Stock Exchange (FTSE) 350 index for a nine‐​week period in the fall of 2015. The messages are time‐​stamped with accuracy to the microsecond (one‐​millionth of a second).

Their main results:

  • The average FTSE 100 symbol has 537 latency‐​arbitrage races per day. That is about one race per minute per symbol.
  • Races are fast. In the modal race, the winner beats the first loser by just 5–10 microseconds, or 0.000005 to 0.000010 seconds.
  • For the FTSE 100 index, about 22% of daily trading volume is in races.

The “latency arbitrage tax,” defined as latency arbitrage profits divided by all trading volume, is about 0.5 basis points (0.005%). The average bid‐​ask spread (the effective charge for liquidity provision) in the data is just over 3 basis points. Thus, the latency arbitrage tax amounts to about 17% of the spread (0.5 ÷ 3). If liquidity providers did not have to bear the adverse selection costs of losing races, the cost of trading would be reduced by 17%. This amounts to $5 billion per year across all global equity markets.

Will exchanges adopt batch trading? The conventional answer is no because they make most of their revenue from charges for the co‐​location of traders’ servers with the exchange servers to allow faster trades. But the second paper in this review argues that an important component of the explanation for the lack of adoption of batch trading by exchanges is Securities and Exchange Commission regulatory requirements for exchanges.

The main cost an exchange would face in adopting frequent batch auctions would be the cost of obtaining SEC approval. And once that approval is won, at significant cost, other exchanges could free‐​ride by adopting the same structure themselves without incurring the legal cost.

In addition, the design of a batch auction cannot be patented. As a result, the only way for an exchange to make money from a batch design would be to keep it a trade secret, but under SEC regulations exchanges must disclose everything for public comment and subsequent SEC approval.

Alternative Trading Systems (ATSs) are not exchanges and are subject to much less disclosure and regulation than exchanges. ATSs can change their rules more quickly, and their innovations can remain trade secrets. To make a change, an ATS simply files a form; there is no notice‐​and‐​comment period or obligation to respond to comments.

How expensive are the regulatory differences? When Intercontinental Exchange (ICE), the owner of the New York Stock Exchange, purchased the Chicago Stock Exchange, Chicago had trivial volume. ICE was purchasing “an exchange license” for $70 million, which suggests the costs of the formal SEC exchange application rules are quite substantial. When the ATS Investors Exchange (IEX) applied in the fall of 2015 to become a stock exchange and conduct its trading with its “speed bump” design (a slight delay imposed on incoming orders), its application was followed by months of disputes even though its proposed market structure was basically identical to its operation as an ATS. SEC approval took nine months. Two ATSs, OneChronos and IntelligentCross, have adopted batch trading, while IEX has not had much success as an exchange. — P.V.D.

CAFE Standards

  • “Who Values Future Energy Savings? Evidence from American Drivers,” by Arik Levinson and Lutz Sager. NBER Working Paper no. 28219, December 2020.

The original rationale for Corporate Average Fuel Economy (CAFE) standards is that consumers supposedly fail to correctly appreciate the tradeoffs between the higher initial expense of a vehicle with better fuel economy and the subsequent savings in fuel costs over the lifetime of the vehicle. Regulation has discussed the evidence for this rationale several times (see “Do Consumers Value Fuel Economy?” Winter 2005–2006, as well as Working Papers columns for Winter 2015–2016, Spring 2017, and Spring 2018). The evidence suggests that, in the aggregate, consumers trade off higher initial costs and subsequent fuel savings correctly.

This paper examines data at the individual level using U.S. National Household Travel Survey (NHTS) data from 2009 and 2017. Instead of asking whether the price premium for a hybrid Toyota Camry, for example, is justified given the average annual mileage such cars are driven, the authors examine the actual annual miles driven and calculate the gasoline expenses of each participant in the survey.

Fuel economy is only one of many attributes that consumers consider in their purchase decision. The authors use a clever research design to separate the effect of fuel economy from other vehicle attributes: They examine the 24,362 drivers of vehicle models available in either a gas or hybrid version, 2,337 of whom drive hybrids. For each driver, they calculate the annual fuel cost difference between the two versions, discounted at 7% over a 14‐​year vehicle lifetime. There are 2,430 drivers who would save money in hybrids, and almost exactly that many, 2,337, actually drive hybrids. But they are mostly the wrong 2,337 drivers. Of the 2,430 for whom a hybrid would save money (given actual gasoline prices and actual miles driven), only 12% (286) actually drive hybrids. The remaining 2,144 drive gas versions even though they drive enough miles annually that a hybrid would save them money over the vehicle’s life.
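
The per-driver comparison is, at bottom, a present-value calculation. Here is a sketch in Python; the gas price, fuel-economy figures, annual mileage, and hybrid price premium are hypothetical, while the 7% discount rate and 14-year lifetime follow the paper.

```python
# Compare gas and hybrid versions of the same model for an individual driver.
# Mileage, mpg, gas price, and the hybrid's price premium below are hypothetical;
# the 7% discount rate and 14-year lifetime follow the paper's assumptions.

def npv_fuel_savings(miles_per_year, mpg_gas, mpg_hybrid, gas_price,
                     rate=0.07, years=14):
    """Present value of the hybrid's annual fuel-cost savings over the vehicle lifetime."""
    annual_saving = miles_per_year * gas_price * (1 / mpg_gas - 1 / mpg_hybrid)
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

hybrid_premium = 3_000   # hypothetical extra purchase price of the hybrid version

# A 15,000-mile-per-year driver: discounted savings of roughly $5,700 exceed the premium.
print(npv_fuel_savings(15_000, 28, 47, 3.00) > hybrid_premium)   # True

# A 5,000-mile-per-year driver of the same model: savings of roughly $1,900 fall short.
print(npv_fuel_savings(5_000, 28, 47, 3.00) > hybrid_premium)    # False
```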

More drivers (21,932) would save money in the gas cars. Of those, 91% correctly choose the gas versions. The other 2,051 drive hybrids, overinvesting in fuel efficiency.

A second research design examines all vehicles in the survey data and statistically controls for attributes other than fuel economy to estimate the initial extra cost of a 1‑mile‐​per‐​gallon increase in fuel economy. The authors find that the cost is between $110 and $340. Stated differently, the average incremental cost of purchasing a car that gets 1 additional mpg (say, 26 rather than 25 mpg, or 34 rather than 33) ranges from $110 to $340.

The benefit from that higher initial expense is the reduced use of fuel over the vehicle’s lifetime. Given the gasoline prices consumers face, the average annual savings from 1 extra mpg in vehicle fuel economy is $56. Received annually, discounted at 3%, and summed over 14 years, that’s $618. Thus, on average consumers could spend $110-$340 initially to save $618.

If the lower price for an extra mpg is used ($110), then 142,924 (92%) of the 155,572 drivers in the sample bought cars with too little fuel economy. The other 12,648 drivers overinvested in fuel savings. The 142,924 drivers who underinvested incur $6,000 more in average driving costs over 14 years relative to cars with the optimal level of fuel economy, whereas the 12,648 who overinvested lose only $400.

If future fuel savings are discounted at 7% and the high estimate of the fuel economy price ($340 per mpg) is used, the two types of mistakes are about equally likely: 53% of survey participants underinvest and 47% overinvest. The average driver who underinvested incurs an extra $2,400 in total lifetime driving costs, whereas the average driver who overinvested loses about $1,300.

The interesting policy insight from these results is that CAFE, with its emphasis on increasing aggregate fuel economy, seems to still leave most drivers with the “wrong” vehicle from a fuel economy perspective. In the 2017 NHTS sample, a fleet‐​wide increase in fuel economy of 1 mpg would save the average driver about $50 per year. Merely swapping cars among drivers so that the most heavily driven vehicles are 1 mpg more efficient (by placing the more fuel‐​efficient cars with the drivers who drive the most miles) would save an average of $30 per driver per year. In other words, better sorting at constant technologies achieves 60% of the savings from a fleet‐​wide fuel economy improvement. — P.V.D.