Of course I didn’t expect my recent post, listing “Ten Things Every Economist Should Know about the Gold Standard,” to stop economists from repeating the same old misinformation. So I’m not surprised to find two of them, from the New York Fed, recently repeating some of the very myths that I would have liked to lay to rest.
The subject of James Narron and Don Morgan’s August 7th Liberty Street Economics post is the California gold rush. After describing the discovery at Sutter’s mill and the “stampede” of prospectors anxious to get their hands on part of the “vast quantities of gold” whose existence that discovery had revealed, Narron and Morgan observe that the
large gold discovery functioned like a monetary easing by a central bank, with more gold chasing the same amount of goods and services. The increase in spending ultimately led to higher prices because nothing real had changed except the availability of a shiny yellow metal.
No economist worthy of the name would deny that, other things being equal, under a gold standard more gold means higher prices. But other things evidently weren’t equal in the U.S. in the late 1840s and early 1850s, for if they had been, the path taken by the U.S. CPI between 1830 and 1880 would not have looked as it does in the chart shown below, which was also in my above-mentioned post:
[Chart: U.S. Consumer Price Index, 1830–1880. Source: “Graphing Various Historical Economic Series,” MeasuringWorth, 2015.]
As you can see, the gold rush didn’t even cause a blip in the CPI, which was about as stable from 1840 to 1860 as it has ever been. Indeed, prices fell slightly, making for an annual inflation rate of minus .19 percent. For the shorter period of 1845 to 1860 the inflation rate is, admittedly, much higher: a whopping .63 percent. But even this higher rate is, according to the Fed’s current credo, dangerously low. Were one to assume that a 2 percent inflation rate was as desirable 167 years ago as Fed officials claim it to be today, one would have to conclude that the gold rush, far from having made the U.S. money stock grow too rapidly, didn’t suffice to make it grow rapidly enough.
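For readers who want to check the arithmetic, here is a minimal sketch of the compound-annual-rate calculation. The endpoint index values are illustrative only, chosen to be roughly consistent with the rates just quoted; the actual figures should be taken from the MeasuringWorth CPI series.

```python
def annualized_inflation(cpi_start, cpi_end, years):
    """Compound annual inflation rate implied by two CPI readings."""
    return (cpi_end / cpi_start) ** (1.0 / years) - 1.0

# Illustrative endpoint values only, picked to match the rates quoted
# above; consult the MeasuringWorth CPI series for the actual figures.
print(f"1840-1860: {annualized_inflation(100.0, 96.3, 20):+.2%}")   # about -0.19%
print(f"1845-1860: {annualized_inflation(100.0, 109.9, 15):+.2%}")  # about +0.63%
```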
Having left their readers with a quite false impression regarding the inflationary effects of the gold rush, the New York Fed economists go on to claim that “the gold standard led to more volatile short-term prices (including bouts of pernicious deflation) and more volatile real economic activity (because a gold standard limits the government’s discretion to offset aggregate demand shock [sic]).”
Here again, a little more attention to both the statistics themselves and the economic forces underlying them casts doubt upon the Fed experts’ conclusions.
It is, first of all, notorious that early macroeconomic statistics tend to be based on smaller samples, and ones that lean more heavily on relatively volatile components, than modern ones. Christina Romer documented this fact with respect to early real GNP estimates, but the same goes for early price-level measures. Consequently it is more than likely that at least some of the gold standard’s apparent short-run price level volatility is nothing more than a statistical artifact.
Second, and more fundamentally, the authors’ implicit premise — that an ideal monetary standard avoids short-run price level volatility — is false. What’s desirable isn’t that the price level not fluctuate, or that it only fluctuate within narrow limits, but that it should fluctuate only to the extent that is needed to reflect corresponding changes in the general scarcity of final goods. In other words, the price level ought to vary in response to shocks to “aggregate supply,” but not because of shocks to total spending or “aggregate demand,” which an ideal monetary system will prevent.
A sharp rise in prices connected to a drought-induced harvest failure, for example, isn’t the same thing as one caused by a surplus of exchange media. The former supplies a desirable signal of underlying real economic conditions. Far from making anyone better off, a monetary system that kept prices from rising under the circumstances would have to do so by reducing the flow of spending, which would only mean adding the hardship of tight money to the damage done by the drought itself.
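The point can be put in terms of the familiar equation of exchange; the 5 percent figure below is purely illustrative:

```latex
MV = Py \quad\Longrightarrow\quad P = \frac{MV}{y}
```

Holding the flow of spending MV constant, a harvest failure that reduces real output y by 5 percent raises the price level P by roughly 5 percent. A system that instead insisted on a constant P could achieve it only by shrinking MV in the same proportion, that is, by superimposing a spending contraction on the supply shock.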
As numerous studies (including several that I, Bill Lastrapes, and Larry White cite in “Has the Fed Been a Failure?”) have shown, harvest failures and other sorts of aggregate supply shocks were a relatively much more important cause of macroeconomic volatility during the gold standard era than they have been in more recent times. It follows that one would expect both the price level and real output to have varied more during the gold standard days than they do now even if, instead of having been governed by a gold standard, the money supply back then had been regulated by a responsible central bank. As a matter of fact, according to a fairly recent study by Gabriel Fagan, James Lothian, and Paul McNelis, had a Taylor Rule been in effect during the gold standard period, it would not have resulted in any welfare gain.
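For reference, the canonical Taylor (1993) rule that such counterfactual exercises are usually built around (I have not checked which exact variant Fagan, Lothian, and McNelis estimate) sets the policy rate as

```latex
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - y^*)
```

where i_t is the nominal policy rate, r* the equilibrium real rate, π_t inflation, π* the inflation target, and y_t − y* the output gap, with r* and π* both set at 2 percent in Taylor’s original calibration.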
Just as there are good reasons for allowing adverse supply shocks to be reflected in higher prices, so too are there good reasons for allowing the price level to decline in response to positive supply innovations. Those reasons can be found both in my writings defending a “productivity norm” and in arguments by Scott Sumner and others for targeting NGDP.
Consideration of these arguments brings me to Narron and Morgan’s claim that the gold standard was responsible for “bouts of pernicious deflation.” That the gold standard did bring periods of deflation no one would deny. But it doesn’t follow that those deflationary episodes, or most of them, were “pernicious.” In fact, Michael Bordo, whom Narron and Morgan give as the source for their claim, has himself denied that “pernicious” deflation was a frequent occurrence under the classical gold standard. According to the abstract of Bordo’s paper, “Good versus Bad Deflation: Lessons from the Gold Standard Era,” written with John Landon-Lane and Angela Redish,
the deflation of the late nineteenth century reflected both positive aggregate supply shocks and negative money supply shocks. However, the negative money supply shocks had little effect on output. This we posit is because the aggregate supply curve was very steep in the short run during this period. This contrasts greatly with the deflation experience during the Great Depression. Thus our empirical evidence suggests that deflation in the nineteenth century was primarily good.
Several other recent studies reach broadly similar conclusions, including a brief research note from another Federal Reserve economist.[1]
To say that deflation can be either “good” or “bad,” depending on whether it stems from goods becoming more abundant or from money becoming more scarce, and to observe that, under the gold standard, deflation was mostly good, isn’t to deny that there’s such a thing as bad deflation. But if it’s striking examples of bad deflation that one seeks, one will find them, not by peering back into the days before the Fed’s establishment, but by looking no further back than the recession of 1920–21, or the Great Contraction of 1930–33, or the Roosevelt Recession of 1937–8. Heck, instead of even going back that far, one could just consider the subprime deflation of 2008–9. According to the linked sources, in each of these instances, deflation was to some considerable extent an avoidable consequence of the Fed’s deliberately chosen policies, rather than something beyond the Fed’s control.[2]
Besides exaggerating both the inflationary and the deflationary risks posed by the classical gold standard, Narron and Morgan repeat the myth that a gold standard costs considerably more than a fiat standard:
Apart from their macroeconomic disadvantages, gold standards are also expensive; Milton Friedman estimated the cost of mining the gold to maintain a gold standard for the United States in 1960 at 2.5 percent of GDP ($442 billion in today’s terms).
What Friedman’s calculation actually showed was not that “gold standards” are quite expensive, but that one very peculiar sort of gold standard is so, namely, a “pure” gold standard arrangement in which gold coins alone serve as exchange media, without the help of any fractionally-backed substitutes! Not a single one of modern history’s actual “gold standards” ever even came close to Friedman’s fictional case. (Even mid-17th-century goldsmith banks are said to have kept specie reserves equal to about a third of their “running cash” liabilities.) If one assumes that banks in 1960 would have required 10 percent gold reserves, one arrives at a gold-standard cost estimate of .25 percent of GDP; if one assumes (still more plausibly) that 2 percent reserves would have sufficed, one arrives at an estimate one-fiftieth as large as Friedman’s! When, oh when, will economists stop taking Milton Friedman’s absurd 2.5 percent estimate seriously?
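To make the reserve-ratio arithmetic explicit, here is a back-of-the-envelope sketch. It simply assumes, as the argument above does, that the resource cost scales linearly with the fraction of the money stock actually backed by gold, and it backs the dollar base out of the “$442 billion” figure quoted from Narron and Morgan:

```python
FRIEDMAN_PURE_COST = 0.025             # Friedman's estimate: 2.5% of output for a 100%-gold system
GDP_BASE = 442e9 / FRIEDMAN_PURE_COST  # dollar base implied by the "$442 billion" figure

# Assumption: resource cost is proportional to the gold reserve ratio.
for reserve_ratio in (1.00, 0.10, 0.02):
    cost_share = FRIEDMAN_PURE_COST * reserve_ratio
    print(f"{reserve_ratio:4.0%} gold reserves -> {cost_share:.2%} of GDP "
          f"(about ${cost_share * GDP_BASE / 1e9:,.0f} billion)")
```

The 2 percent case comes to 0.05 percent of GDP, one-fiftieth of Friedman’s figure, as stated above.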
Yet correcting Friedman’s estimate is only part of the story. All sorts of other things are wrong with the claim that the gold standard was expensive. Those interested in a quick summary may consult item #2 of my “Ten Things” post. I will only add here that even Friedman himself came to doubt that fiat money was a bargain.
Narron and Morgan conclude their article thus:
Despite the demonstrable disadvantages of a gold standard, some observers still call for the United States to return to a classical gold standard. Should we? Let us know what you think.
What I think is that, if the gold standard really does have “demonstrable disadvantages,” Messrs. Narron and Morgan haven’t managed to put a finger on any of them.
________________________________
[1] See also Atkeson and Kehoe, Borio et al., and Beckworth.
[2] The U.S. did, of course, experience several less-severe cases of “bad” deflation during the classical gold standard era. But these episodes resulted, not from the ordinary working of the gold standard, but from financial crises that were peculiar consequences of misguided U.S. banking and currency laws.