“Without Roosevelt’s intervention, the economic recovery that lasted from 1933 to 1937 would have been weaker and shorter—not unlike our own recovery after the Great Recession.” (David Weiman, “Imagining a World without the New Deal,” The Washington Post, August 12th, 2011.)
***
In the previous installment in this series, I showed how the U.S. economy owed most of the recovery it experienced between 1933 and 1937 to an onrush of “hot money” from Europe that revived aggregate spending. I also explained how that hot money owed its warmth, not to any steps FDR took in the U.S., but to the martial maneuverings of Benito Mussolini and Adolf Hitler.
In today’s post I plan to show that the 1933–37 recovery fell far short of reversing the collapse the U.S. economy suffered between 1929 and 1933, and that this disappointing outcome was the result of New Deal policies aimed at boosting wage rates. The resulting higher wage rates prevented the revival of spending from sponsoring a corresponding revival of employment.
A Disappointing Recovery
Although any sort of recovery would have been welcome after the disastrous Great Contraction, the recovery between March 1933 and May 1937 was disappointing in several respects. Most obviously, it didn’t last. Instead, a depression followed it that, though short-lived, was severe enough to reverse most of the previous years’ gains. And although, as the chart below shows, industrial production eventually caught up to, and even slightly surpassed, its 1929 level, it remained far below the trend to which a full recovery would have returned it.
But the biggest disappointment was the persistence of high unemployment. Most of the gain in output was due to remarkable post-1933 gains in total factor productivity rather than increased employment. The unemployment rate as conventionally measured (with workers in government relief programs counted as unemployed) stayed obstinately above 15 percent until 1936, and was still above 10 percent in the late summer of 1937. It then rose sharply again as the new depression took hold. To put these numbers in perspective, between April and September 1929, the highest recorded unemployment rate was just above 2 percent.
Writing of the situation as of 1935, Ellis Hawley, a sympathetic critic of the New Deal, observes (pp. 131–2) that the gains achieved up to then “were hardly reasons for jubilation”:
Over ten and a half million workers were still unemployed, approximately twenty million people were still dependent upon relief, basic industries were still operating at little more than half their capacity, and the real income of the average family was still thirteen percent below that of 1929.
Why didn’t all that gold flowing in from Europe translate into large employment gains? One reason was banks’ inclination to accumulate excess reserves after the bank holiday, which reduced the inflows’ contribution to broad money growth: banks could either use new reserves to support increased lending or security purchases, or they could simply hold on to them. To the extent that they accumulated reserves, as gold reserves grew, the money stock would grow less than proportionately. As the chart below shows, for most of the period before 1936, and especially until the start of 1935, banks accumulated excess reserves almost as quickly as the Fed acquired gold. So the money stock grew at a relatively modest (but hardly trivial) rate. Total spending likewise grew less rapidly, and contributed less to recovery, than it might otherwise have.
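To make the mechanics concrete, here is a minimal sketch of the textbook money-multiplier arithmetic (the figures are purely illustrative, not actual 1930s data):

```python
# Textbook money-multiplier sketch: M = B * (1 + c) / (c + r), where
# B is the monetary base, c the public's currency-to-deposit ratio,
# and r banks' total (required plus excess) reserve-to-deposit ratio.
# All figures are illustrative, not actual 1930s data.

def money_stock(base: float, currency_ratio: float, reserve_ratio: float) -> float:
    multiplier = (1 + currency_ratio) / (currency_ratio + reserve_ratio)
    return base * multiplier

c = 0.2                       # currency/deposit ratio, held fixed here
base, inflow = 100.0, 50.0    # initial base; new base money from gold purchases

print(money_stock(base, c, 0.10))           # before the inflow:           M = 400
print(money_stock(base + inflow, c, 0.10))  # banks lend the new reserves: M = 600
print(money_stock(base + inflow, c, 0.25))  # banks hoard excess reserves: M = 400

# In the hoarding case, a 50 percent jump in the base leaves the
# money stock unchanged: excess-reserve accumulation absorbs it all.
```

In the actual event, banks hoarded less than everything, so the money stock did grow; it just grew far less than in proportion to the gold inflows.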
The Wages Puzzle
Yet the extent of demand growth was hardly so limited as to account for the persistence of double-digit unemployment. By 1937, as the next chart shows, nominal GNP was almost 70 percent above its 1933 level, and just 10 percent below its 1929 level. So why didn’t employment increase correspondingly? The reason is that, instead of serving to pay going wage rates to a larger number of workers, a substantial part of the increase in spending was offset by higher wage rates: instead of using their increased earnings to hire more workers, employers were using them, or a big chunk of them, to give raises to their existing workers. The chart’s purple line traces manufacturing workers’ hourly earnings, which rose sharply after mid-1933. In two years they’d risen 35 percent, and they would go on rising until the middle of the “Roosevelt Depression.”[1]
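Some back-of-the-envelope arithmetic, using the round figures just cited and assuming (purely for illustration) that the share of spending going to labor stayed constant, shows how much of the potential employment gain higher wage rates absorbed:

```python
# Rough decomposition, not an estimate: if the nominal spending
# devoted to labor moves in step with nominal GNP, the labor hours
# it can pay for grow only by the ratio of spending growth to
# wage-rate growth. Round figures from the text.

spending_growth = 1.70    # nominal GNP, 1937 relative to 1933
wage_rate_growth = 1.35   # hourly earnings, two years after mid-1933

hours_growth = spending_growth / wage_rate_growth
print(f"labor hours supportable: {hours_growth - 1:+.0%}")   # about +26%

# Had wage rates stayed put, the same spending growth could have
# supported roughly 70 percent more labor hours instead.
```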
To anyone unfamiliar with the government policies of the period, this pattern of hourly wage changes should seem not just perplexing but perverse: how could wage rates rise sharply while more than a fifth of the labor force was out of work? With so many workers desperately seeking jobs, and most plants operating at well below capacity, why would manufacturers, faced with a choice between giving their workers a raise and hiring more workers at going wage rates, if not lower ones, not make the second choice?
The answer isn’t to be found among the laws of supply and demand. But it can be found easily enough among the laws passed by the Roosevelt administration. As economic historian Christopher Hanes observes in a recent, excellent paper on the topic, those laws supply
an obvious explanation. Starting in 1933 the Roosevelt administration adopted “New Deal” labor policies embodied in the National Industrial Recovery Act (NIRA), which established the National Recovery Administration (NRA) codes, and the 1935 National Labor Relations Act (NLRA or Wagner Act). NRA codes boosted nominal wage rates, and imposed “labor standards” such as overtime premium pay, by regulatory fiat. Both the NIRA and the NLRA promoted formation of labor unions and strengthened workers’ bargaining power over employers. These policies could boost prices, as well as wages, as they gave an extraordinary boost to labor costs of production.
The NIRA
Although the March bank holiday brought the banking crisis, and with it the Great Contraction, to an end, the economy that emerged from it was in dire straits, with over a quarter of its workers unemployed. The demand for some further government action, Ellis Hawley writes (p. 53), “was overwhelming.” The Roosevelt administration’s first response to this demand was the Agricultural Adjustment Act of May 10th, 1933. Its second, passed five weeks later, was the National Industrial Recovery Act (NIRA). The NIRA was to be the New Deal’s centerpiece. It was, FDR declared, “the most important and far-reaching legislation ever enacted by the American Congress.”
The NIRA consisted of three parts or “titles.” Title II established the PWA (Public Works Administration) and assigned it a budget of $3.3 billion. Title III contained various miscellaneous provisions, including one transferring the RFC’s public works activities to the PWA. Title I, which most concerns us, addressed “Industrial Recovery.” It provided for the establishment of industry “codes of fair competition.” Among other things, the codes were supposed to assure adequate working conditions, set maximum working hours and minimum ordinary and overtime wage rates, and affirm workers’ right to engage in collective bargaining. Businesses operating under approved codes were to be exempt from federal antitrust laws. Four days after the NIRA was passed the National Recovery Administration (NRA) was established to see to the codes’ development and enforcement.
Establishing minimum wage rates had in fact been the original aim of what became the NIRA. Like many people, FDR believed that higher wage rates would translate into an overall increase in workers’ “purchasing power,” which, by enabling them to spend more, would in turn promote recovery. He therefore asked Labor Secretary Frances Perkins to suggest amendments providing for minimum wage rates to Hugo Black’s “Thirty Hours” bill, a measure first introduced in 1932 that the Senate had passed that April. At the time, Perkins says in her memoir (p. 197), FDR’s “mind was as innocent as a child’s of any such program as the NRA.” Eventually FDR decided to offer a substitute for Black’s bill instead of amending it. The result was a scheme in which industrial and trade associations would come up with codes establishing minimum wage rates and other working conditions that all firms in an industry would have to meet. Industries that abided by approved codes would be exempt from antitrust laws that would otherwise have prohibited them from colluding.
In practice the NRA worked out far differently than Perkins and other champions of higher wages intended. Although Advisory Boards featuring consumer as well as labor representatives took part in the code-writing process, industry representatives paid little heed to them. Instead they took advantage of both the code-writing opportunity and the suspension of antitrust laws to regulate product prices and output so as to turn formerly competitive (albeit imperfectly competitive) industries into so many cartels. Although the codes did provide for minimum wage rates, and although actual wage rates in some cases were set even higher than those minimums, they often were written so that, if wage rates rose, product prices would rise just as much.
“In practice,” Hawley observes (pp. 33–4), “the NRA became a mechanism that conflicting groups sought to use for their own ends, an agency that was unable to define and enforce a consistent line of policy.” The result disappointed practically everyone, including the businessmen who hoped to profit by dominating the code-writing boards. Before it was over even FDR himself couldn’t help admitting, albeit only privately (to Frances Perkins), that “the whole thing is a mess” (Robert McElvaine, The Great Depression, p. 162).
The High-Wages Fallacy
Considering that verdict, it should not be surprising to discover that William Randolph Hearst had a point when he quipped that NRA really stood for “No Recovery Allowed.” The NRA codes themselves did nothing to boost aggregate spending, which was the real key to recovery. Instead they only prevented the increased spending that did occur from boosting total employment. And so far as this outcome is concerned, the codes failed not because they veered from their original purpose, but because they did just what they were supposed to do!
Of course the misuse of industry codes to establish and enforce monopoly prices itself contributed to unemployment, because monopoly prices meant less output, which, other things equal, meant less demand for labor. But it was mainly by boosting wage rates, just as they were meant to, that the codes undermined the employment-generating capacity of increased spending, by reducing the number of jobs any given level of spending could support.
Before we explore the magnitude of the codes’ unintended consequences, I’d better explain why their champions thought they’d have just the opposite effect, and why they were so tragically mistaken.
The belief that compelling firms to increase their workers’ wage rates would promote recovery, or the “high-wage doctrine,” had its roots in the “underconsumption” theory of depression, which reached its peak of popularity during the 1920s. According to that theory, depressions happened when workers collectively weren’t paid enough to buy the output their employers produced. The problem, Hartley Withers wrote in 1914 (Poverty and Waste, p. 83, my emphasis), was that while it might be obvious enough to business owners “that the workers of other industries should be better paid,” because that would allow them to spend more, few were willing to increase their own workers’ wage rates. Instead “every company or firm would naturally wait for others to begin” (ibid., p. 84). Government action might be needed to overcome this free-rider problem.
The severe though short-lived depression of 1920–1921 led to a surge in the high-wage doctrine’s popularity, by convincing many that the sharp (perhaps 20 percent) decline in wage rates during it was not merely a symptom of a collapse in total spending, and certainly not a way of restoring employment, but an aggravating cause of the collapse. Thanks to that episode, and to William Foster and Waddill Catchings’s 1927 book, Business Without a Buyer, which popularized the high-wage doctrine, by the time the Great Depression started, the doctrine was conventional wisdom. Hoover and Roosevelt may have disagreed about many things, but both of them subscribed to it. The difference between them was that, while Hoover merely tried to persuade businessmen to resist cutting their workers’ wage rates, FDR compelled firms to abide by the NRA’s codes: besides risking criminal prosecution and fines of up to $500 per violation, firms that failed to follow the codes could have their business licenses revoked—the business equivalent of the death penalty.
Had the high-wage doctrine been sound, that difference would have been a good thing. But it wasn’t. Instead, as Jason Taylor and I pointed out some years ago, the doctrine rested on the confusion of “wage rates,” meaning the amount businesses paid for any given quantity of labor, with total wages paid (or the “wage bill”), meaning the total amount businesses spent on labor. That the same term, “wages,” could refer to either of these very different things contributed to that confusion. While it’s obviously true that, holding product prices constant, raising workers’ total wages also increases their “purchasing power,” allowing them to spend more, it doesn’t follow that compelling businesses to pay higher wage rates will have that effect, because firms might respond by scaling back employment. What’s more, during a depression, when firms are making few if any profits, this is likely to be the only way they can afford to pay higher wage rates. In that case, the only way to have higher wage rates without sacrificing employment is to somehow see to it that total spending goes up before wage rates do. But doing that calls for expansionary monetary or fiscal policy.
In short, although higher wage rates and increased employment can be results of increased spending, mandating higher wage rates doesn’t itself generally serve to increase spending. And if it doesn’t increase spending, it’s likely to lead to less employment, not more. The graph below, in which LS is the labor supply schedule, illustrates this last point. It shows how, taking product prices as given, as wage rates rise, so does the quantity of labor supplied. LD represents firms’ demand for labor, which is a function of their earnings. N* and w* are the market-clearing level of employment and the market-clearing money wage rate, respectively. Unless LD changes, firms have only so much revenue they can devote to paying workers. Consequently, imposing a minimum wage rate, w’, above w*, causes employment to decline from N* to N’.
By the same token, when aggregate spending does grow, as it did after 1933 thanks to all that hot money, employers can take advantage of the growth to either increase wage rates or employ more workers. But the more they do the first, the less they can do the second. In the next graph, LD*, w*, and N* represent pre-Great Depression levels of labor demand, wage rates, and employment, while LD0, w0, and N0 represent their levels after the Great Contraction. After that, hot-money-fueled spending growth raised the demand for labor back toward LD*. On its own, that increase would eventually have restored employment to N*, while also raising w back to w*. But imposing beforehand a mandatory minimum wage (w’ once again) limits the potential gain in employment to N’.
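For readers who prefer arithmetic to diagrams, here is a minimal numerical version of the two graphs, using made-up linear schedules (the numbers have no historical significance):

```python
# Linear labor-market sketch of the two graphs above. The schedules
# and numbers are invented purely for illustration.
#   Labor demand:  N_d = a - b*w   (a shifts out as spending grows)
#   Labor supply:  N_s = s*w

def employment(a: float, b: float, s: float, wage_floor: float = 0.0):
    """Return (wage, employment), with an optional minimum wage."""
    w_star = a / (b + s)           # market-clearing wage
    w = max(w_star, wage_floor)    # a floor above w* binds
    n = a - b * w                  # employment is demand-determined
    return w, n

b, s = 10.0, 10.0

# Post-contraction (depressed) labor demand:
print(employment(a=100, b=b, s=s))                     # w0 = 5,  N0 = 50
# Demand restored by spending growth, no wage floor:
print(employment(a=160, b=b, s=s))                     # w* = 8,  N* = 80
# Same recovery, but with a mandated floor w' above w*:
print(employment(a=160, b=b, s=s, wage_floor=10.0))    # w' = 10, N' = 60
```

On these invented numbers, the recovery in demand buys 30 extra jobs at the market-clearing wage, but only 10 once the wage floor soaks up part of the increased spending.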
Keynes on the NIRA
Because underconsumptionist theories superficially resemble Keynes’s theory attributing depressions to a lack of spending, some have been tempted to treat Title I of the NIRA as an application of Keynes’s ideas. Frances Perkins, for one, claimed that by setting minimum wage rates, the NIRA “constituted an effective demonstration of the theories which John Maynard Keynes had been preaching and urging.” But that was hardly Keynes’s own view. In his famous December 1933 open letter to FDR in the New York Times, he criticized the theory that boosting wage rates (among other “prime costs”) would raise workers’ purchasing power in much the same terms as I have:
Now there are indications that two technical fallacies may have affected the policy of your administration. The first relates to the part played in recovery by rising prices. Rising prices are to be welcomed because they are usually a symptom of rising output and employment. When more purchasing power is spent, one expects rising output at rising prices. [Therefore] it is essential to ensure that the recovery shall not be held back by the insufficiency of the supply of money to support the increased monetary turn-over. But there is much less to be said in favour of rising prices, if they are brought about at the expense of rising output. Some debtors may be helped, but the national recovery as a whole will be retarded. Thus rising prices caused by deliberately increasing prime costs or by restricting output have a vastly inferior value to rising prices which are the natural result of an increase in the nation’s purchasing power.
Keynes went on to say that, while he approved of the NIRA’s attempts to achieve a more equal distribution of incomes “in principle,” he also considered it a mistake to treat higher prices as an end in itself: “The stimulation of output by increasing aggregate purchasing power is the right way to get prices up; and not the other way round.” And the most reliable way to get purchasing power up was through “governmental expenditure which is financed by Loans and not by taxing present incomes,” that is, by deficit spending:
The set-back which American recovery experienced this autumn was the predictable consequence of the failure of your administration to organise any material increase in new Loan expenditure during your first six months of office.
Keynes wrote this, it bears recalling, a month before the dollar was devalued, and well before “hot money” started to pour into the U.S. from Europe, supplying the basis for an increase in U.S. consumers’ purchasing power far greater than that achieved by FDR’s fiscal policies.
Code Blues
So much for the theory. But just how well do the facts fit it? Very well, actually.
First, concerning the codes, these were of two sorts: approved NIRA codes developed for and by representatives of specific industries, and a “blanket code,” adopted in late July, 1933, that firms had to abide by until their industries’ NIRA codes were approved. The vast majority of employers—Henry Ford was a notorious exception—agreed to take part in the program, earning the right to display the NRA’s blue eagle attesting to the fact that they were “doing their part” to end the depression.
All of the codes prevented wage rate cuts and set minimum wage rates; but they also tended to raise the wage rates of non-minimum-wage workers, and they raised rates for all workers indirectly by mandating across-the-board cuts in individual employees’ weekly hours while forbidding cuts in their total pay. “From June 1933 to June 1934,” Frances Perkins writes (p. 208), “the average hourly earnings in manufacturing increased thirty-one per cent. The downward spiral of hourly earnings was checked and an upward spiral was set in motion.”
But what about employment? According to Jason Taylor’s estimates, it was only by virtue of the blanket code’s “work sharing” provision that the codes succeeded in reducing unemployment at all. Although it compelled firms to spread work among more workers, the blanket code actually caused aggregate hours worked to decline by over 9 percent. Yet that was nothing compared to what the NIRA codes did: once firms adopted them, they tended to hire less labor. Other things equal (again, according to Taylor’s estimates), although the blanket code by itself would have created 1.34 million new jobs, albeit only by work sharing, the NIRA codes would have destroyed almost as many. Together both sets of codes might have reduced workers’ aggregate hours by as much as one third—a calamitous figure. Fortunately, Taylor observes, other things weren’t equal. Instead, fiscal and (especially) monetary stimulus “offset much, but not all, of the negative effect that cartelization and high wage rates had on aggregate hours worked.”
After carefully studying the timing and extent of post-1933 wage rate increases beyond those consistent with underlying growth in spending, Christopher Hanes concludes that the NRA codes and increased union activity (encouraged first by the NRA, and then by the Wagner Act) together offer “a plausible and complete explanation” for those increases (emphasis in original).
The Wagner Act
Hanes also shows that the timing of the period’s “anomalous” wage rate movements is not consistent with the alternative hypothesis, proposed in a 2008 paper by Gauti Eggertsson, that wage rates rose in response to policy changes, such as the dollar’s devaluation, that “raised the expected long-run future price level.”
Eggertsson’s view is especially hard to square with what took place between November 1936 and October 1937. During that time, both the Fed and the Treasury adopted anti-inflation policies that ought to have reduced the public’s future price level expectations. Yet wage rates rose substantially above levels consistent with current spending. Here’s a chart showing Hanes’s hourly earnings series (note that it differs somewhat from the FRED series shown in a previous chart) and unemployment. The vertical lines mark the dates of Fed reserve-requirement increases (black) and the start of the Treasury’s gold sterilization program (gold):
The NIRA also can’t explain those wage hikes: it ceased to exist, except as a minor bureau devoted to publishing economic reports, when the Supreme Court declared it unconstitutional on May 27th, 1935. However, the Wagner or National Labor Relations Act passed two months later does explain them. Besides continuing many of the NIRA’s labor provisions, the Wagner Act and the NLRB (National Labor Relations Board), its enforcement agency, substantially strengthened workers’ right to form unions and engage in strikes. Thanks mainly to them, Harold Cole and Lee Ohanian report, both the share of unionized workers and the number of strikes they held doubled between 1935 and 1939. And because the NLRB was able for a time to force firms to rehire workers they’d fired for taking part in sit-down strikes, those strikes tended to be very effective.[2]
Hanes shows that the nominal wage rate hikes of 1936–7 coincided perfectly with “a wave of union organization and strikes” that followed the passage of the Wagner Act. It took the “Roosevelt Depression” to put a stop to them, which it did only by setting output and unemployment back to where they’d been just before the dollar was devalued. I plan to discuss the causes of that sad episode in a later post.
Great Expectations?
In his influential 2008 paper, “Great Expectations and the End of the Great Depression,” Gauti Eggertsson uses a New Keynesian modeling framework to argue that various New Deal policies, such as the bank holiday and devaluation of the dollar, helped to promote recovery not simply by boosting prices and wages but by boosting the expected rate of inflation. By doing so, Eggertsson argues, those policies eased monetary policy by lowering the real policy rate, even or especially when the nominal policy rate was at its “zero lower bound.” Elsewhere, Eggertsson observes that the opposite could happen as well, as it did in 1937, following the Treasury’s late-December 1936 decision to start sterilizing gold inflows.
But if strikes or NRA codes were effective in boosting nominal wage rates, why shouldn’t they themselves have boosted inflation expectations, thereby lowering real interest rates and stimulating employment and output?
In a 2012 paper, “Was the New Deal Contractionary?” Eggertsson claims that strikes and wage codes did in fact promote recovery this way. Codes and strikes could, he suggests, be just as effective in boosting inflation expectations as expansionary monetary or fiscal policies. And, no matter what policy achieves them, “higher inflation expectations decrease real interest rates and thereby stimulate demand. Expectations of similar policy in the future increase demand further by increasing expectations about future income.” Thus Eggertsson arrives at the seemingly paradoxical conclusion that “policies that reduce the natural level of output [can] increase actual output.”
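The channel Eggertsson has in mind is just the Fisher relation operating at the zero lower bound. A toy calculation (with invented numbers) shows why, within that framework, anything that raises expected inflation looks stimulative:

```python
# The mechanism in Eggertsson's argument: with the nominal policy
# rate stuck at zero, expected inflation alone determines the real
# rate borrowers and savers face. Numbers are invented.

def real_rate(nominal_rate: float, expected_inflation: float) -> float:
    """Fisher relation: r = i - E[pi]."""
    return nominal_rate - expected_inflation

i = 0.0  # nominal policy rate at the zero lower bound

print(real_rate(i, expected_inflation=-0.05))   # deflation expected: r = +5%
print(real_rate(i, expected_inflation=0.03))    # inflation expected: r = -3%

# In the New Keynesian setup, the lower real rate stimulates demand
# no matter what raised expected inflation; the rest of this section
# questions precisely that step.
```

That, in a nutshell, is how codes and strikes come out looking expansionary in Eggertsson’s model.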
It’s as clever a defense of the NIRA and NLRB as has yet been put forward—clever enough to have won over some very smart economists, including Miles Kimball. But with all due respect to them and to Eggertsson himself, I believe Eggertsson here pushes his compelling 2008 argument a step too far. In particular, I believe that his attempt to treat NRA codes and such supply-restricting policy measures as having consequences similar to those of demand-expanding ones rests upon the false premise that any policy that raises the present and expected future level of prices also raises “expectations about future income.”
In a response to Kimball, Scott Sumner hits the nail on the head: Why, he asks, isn’t it always safe to assume that higher inflation expectations are expansionary? “The simple answer,” he says,
is that looking at inflation expectations is reasoning from a price change. Thus if inflation rises due to a positive AD shock (more NGDP) then both prices and output will rise in the short run. But if NGDP is unchanged, then higher inflation leads to lower output. That’s why supply shocks are contractionary.
The New Keynesian models are simply wrong. The role played by inflation expectations in NK models should be played by NGDP growth expectations. Switching to NGDP allows us to avoid reasoning from a price change, and avoiding mistakes like the claim that adverse supply shocks can be expansionary.
Companies do not care about inflation expectations; they care about total revenue expectations—which are tied in to NGDP growth.
Workers, by the same token, don’t necessarily care about wage rate expectations; they care about their expected future earnings, which are also tied to NGDP (aggregate spending) growth. When wage rates are expected to go up only because codes or strikes force them up, a rational, representative member of the labor force has no reason to expect to earn more. That’s one possible outcome, to be sure. But so is a higher probability of being unemployed. In the final analysis, Eggertsson’s clever argument turns out to be but a high-tech version of that highly destructive old fallacy, the high-wage doctrine. In the original, workers only had to insist on higher wage rates, regardless of the state of aggregate spending, to make themselves better off. In Eggertsson’s version, they need only anticipate higher future wage rates to do so.
Alternatively, one might fault Eggertsson for treating the natural (“efficient”) interest rate as varying in response to what he calls a “preference shock” only, and not in response to changes in the actual and expected natural level of output. In contrast, it is generally understood (p. 12 and surrounding) that “if the average household becomes more optimistic regarding future [real income] growth rates, this will lead to an increase in the natural interest rate,” and that if households expect real income growth to decline, the natural real interest rate will decline as well.
But if a leftward shift in aggregate supply, stemming from the establishment of NIRA codes, is allowed to both raise the expected inflation rate and lower the natural (or efficient) real rate of interest, one can no longer conclude that the codes will serve to reduce the gap between the efficient real rate and the real policy rate. It follows that Eggertsson’s story would only make sense if a representative household in the 1930s was rational enough to revise its inflation expectations upwards in response to the passage of the NIRA, but not rational enough to reduce its anticipated real earnings by a corresponding amount.
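To make the offset concrete, here is a sketch using the standard consumption-Euler approximation for the natural rate, with the natural rate written as the sum of a time-preference term and a term proportional to expected real income growth (all parameter values invented):

```python
# Sketch of the offset just described. The natural real rate is
# approximated by the consumption-Euler relation
#     r_nat = rho + sigma * expected_real_growth,
# and the real policy rate at the zero bound is -E[inflation].
# All parameter values are invented for illustration.

rho, sigma = 0.02, 1.0

def policy_gap(exp_inflation: float, exp_growth: float) -> float:
    """Real policy rate minus natural rate; positive = contractionary."""
    r_policy = 0.0 - exp_inflation         # nominal rate stuck at zero
    r_natural = rho + sigma * exp_growth   # Euler approximation
    return r_policy - r_natural

# Before the codes: no expected inflation, a depressed growth outlook.
print(f"{policy_gap(exp_inflation=0.00, exp_growth=-0.04):+.2%}")  # +2.00%

# Codes raise expected inflation two points; but a household that is
# consistent also marks down expected real income growth, say three:
print(f"{policy_gap(exp_inflation=0.02, exp_growth=-0.07):+.2%}")  # +3.00%
```

On these invented numbers, the codes leave monetary conditions effectively tighter, not easier, despite the rise in expected inflation.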
Continue Reading The New Deal and Recovery:
- Intro
- Part 1: The Record
- Part 2: Inventing the New Deal
- Part 3: The Fiscal Stimulus Myth
- Part 4: FDR’s Fed
- Part 5: The Banking Crises
- Part 6: The National Banking Holiday
- Part 7: FDR and Gold
- Part 8: The NRA
_______________________
[1] As Christopher Hanes explains (p. 21), because “hourly earnings” can change as a result of increased overtime pay or workers shifting from lower to higher wage-rate jobs, movements in that measure give only an approximate indication of movements in average hourly wage rates. Unfortunately, the only wage rate index available for the Great Depression period stops in 1935. Up to that point, however, the two measures do appear to move together.
[2] In their highly influential paper, Cole and Ohanian develop a general equilibrium model of the bargaining process between firms and workers. They then derive predicted paths for real wage rates for both cartel and competitive versions of the model, and conclude that the cartel version accounts well for the actual pattern of New Deal wage-rate changes. So far this is all consistent with the general thrust of this post. However, an important difference between my own perspective and Cole and Ohanian’s is that while they blame New Deal programs for boosting real wage rates, I hold that they mainly did harm by raising nominal wage rates: for any given level of aggregate demand and corresponding revenues, the number of labor hours firms can afford declines as hourly wage rates increase, regardless of how output prices behave. And, as Barro and Grossman explained back in 1971, it is in fact perfectly possible to have high unemployment despite the fact that real wage rates are at their competitive general equilibrium levels, provided that nominal wage rates and prices are both set at excessively high levels.
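A bare-bones numerical version of the Barro-Grossman point (all figures invented): hold nominal spending fixed and keep the real wage at its “equilibrium” value, and doubling both wages and prices still halves employment.

```python
# Barro-Grossman point in miniature: aggregate nominal demand is
# fixed and the real wage W/P never changes, yet doubling both W
# and P halves real sales and hence employment. Figures invented.

nominal_spending = 1000.0   # fixed aggregate nominal demand
hours_per_unit = 1.0        # labor hours needed per unit of output

for W, P in [(10.0, 20.0), (20.0, 40.0)]:
    output_sold = nominal_spending / P         # real demand at price P
    employment = hours_per_unit * output_sold  # hours firms can profitably use
    print(f"W/P = {W / P:.2f}, employment = {employment:.0f} hours")

# W/P = 0.50, employment = 50 hours
# W/P = 0.50, employment = 25 hours
```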