The unanimous decision of the Federal Open Market Committee (FOMC) to shift from inflation targeting to average inflation targeting is another step away from its mandate to achieve long-run price stability. Section 2A of the Federal Reserve Act does not say the Fed’s long-run objective should be 2 percent inflation. It calls for maintaining the growth of money and credit to “promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” The legal basis for price stability has not changed, but the Fed’s interpretation of that responsibility has drifted, so that “price stability” now means an increase in the price level (P) that averages 2 percent over time, with the proviso that the average inflation target (AIT) must be “flexible.”
Cato at Liberty
Two Sorts of Average Inflation Targeting
It occurs to me that recent discussions of the Fed’s new average inflation targeting plan gloss over a subtle distinction between two different kinds of Average Inflation Targeting (AIT). Hence this post explaining the difference, and why I think it matters.
The difference between the two sorts of AIT that I have in mind is subtle, so pay close attention! It hinges not on any different central bank objectives or reaction function parameters or that sort of thing, but on two different reasons why a central bank might find that it has veered from its inflation target in the first place. A central bank may fail to hit its target because the authorities fail to correctly anticipate upcoming changes in various price level determinants. Call such misses “unexpected target deviations.” Or it may fail because, although its forecasts are correct, circumstances prevent it from adjusting its stance as needed, given its forecast, to keep the price level on target. Call these “expected target deviations.” The “zero lower bound” (ZLB) problem is the most conspicuous example of a circumstance that could lead to expected target deviations. Allowing that unconventional policies are either impractical or inadequate, a central bank stuck at the ZLB may know perfectly well that it’s about to undershoot its inflation target, without being able to avoid doing so.
Postal Savings: A Third-Class Remedy?
This essay, the first of a series on government efforts to bank the “unbanked,” reviews the history of postal banking and its deleterious role in the Great Depression. Subsequent posts will discuss contemporary proposals for government involvement in retail banking, such as through the U.S. Postal Service and the Federal Reserve, in light of this experience.
Some 8.4 million U.S. households (6.5 percent of the total) have no bank account. For a modern economy, that’s a high number. Some experts and politicians believe that the best way to lower it would be to get the U.S. Postal Service to offer bank accounts, with many claiming that America’s experience of postal savings between 1911 and 1966 shows how a similar system could help bank today’s “unbanked.”
Superficial evidence might seem to corroborate their case. Immigrants and minorities make up a disproportionate share of the unbanked, and a recent study found immigrants were particularly heavy users of postal savings in its early years. But most contemporary accounts, especially those that claim postal savings as a model for the present, fail to give due consideration to how a poor legislative design made it a force for ill in the toughest years of the Great Depression. Far from strengthening the argument for post-office banking today, the postal savings experience is a cautionary tale against government participation in activities that have historically been the remit of commercial banks.
Fall 2020 Issue of the Cato Journal: Monetary Highlights
![The Fall 2020 Cato Journal.](/sites/cato.org/files/styles/pubs_2x/public/wp-content/uploads/2020/09/cj2.png?itok=mXgUc4GM)
The Fed’s use of unconventional monetary policy has greatly expanded its balance sheet and engaged the Fed in credit policy that is rife with fiscal implications. Esther George, president and CEO of the Federal Reserve Bank of Kansas City, explores some of those implications in her tribute to Marvin Goodfriend, who sadly passed away last December. She concludes that Goodfriend was “keenly focused on preserving the central bank’s integrity and independence”—a view that “undoubtedly would serve us well today.”
Since the global financial crisis in 2008, negative interest rates have taken the policy world by storm. Most notably, they have been a popular point of contention between Federal Reserve Chairman Jerome Powell and President Donald Trump. To shed light on some of the issues at stake, Swedish economists Fredrik N. G. Andersson and Lars Jonung, professors at Lund University, analyze the Riksbank’s experience with negative interest rates from 2015 to 2019. In doing so, they draw lessons for other central banks that are considering turning to negative rates to boost inflation and spur their economies. With respect to the United States, they conclude: “Evidence from Sweden suggests that negative policy rates in the United States would lead to a rapid increase in housing prices, greater demand pressure, and a depreciating dollar, with only minor effects on consumer inflation rates.”
In addition to negative interest rates, the global financial crisis also spawned an ongoing discussion of using financial transaction taxes (FTTs) to stabilize financial markets. Diego Zuluaga, associate director of Cato’s Financial Regulation Studies, considers the arguments for and against FTTs. He concludes that they typically raise little revenue and increase the cost of capital, thereby distorting financial markets.
Finally, I argue that the classical gold standard, if properly understood, can help inform monetary policy. It is not, as many claim, “a nutty idea.” Rather than relying on central bank guidance, the pre-1914 gold standard, which defined the dollar as a physical quantity of gold, anchored the long-run price level and stabilized exchange rates. It also discouraged resort to debt monetization. Whether one favors a gold standard or not, it is a mistake to dismiss the value of those attributes.
Abstracts of these four articles follow. Jump directly to the Cato Journal to read them and seven other excellent articles covering issues unrelated to monetary policy.
Perspectives on Balance Sheet and Credit Policies: A Tribute to Marvin Goodfriend
By Esther L. George
Marvin Goodfriend (1950–2019) was keenly focused on preserving the central bank’s integrity and independence. I share several concerns with Marvin. To the extent that large-scale asset purchases succeeded in their aim of creating a wealth effect, they also played some role in contributing to elevated asset valuations. These effects, together with the perception that interest rates will remain at historically low levels for a prolonged period, can lead to a buildup of financial imbalances that ultimately pose risks to the real economy. Another concern I share with Marvin is the risk that income from the Fed’s large balance sheet combined with our capital surplus could tempt fiscal authorities to view the Fed as a source of funding for government programs. Central bank independence requires that a bright line exist between monetary and fiscal policy. Marvin therefore proposed that the 1951 Treasury-Federal Reserve Accord on monetary policy be supplemented with a Treasury-Fed Accord on credit policy.
Lessons from the Swedish Experience with Negative Central Bank Rates
By Fredrik N. G. Andersson and Lars Jonung
Negative interest rates were once seen as impossible outside the realm of economic theory. However, several central banks have recently adopted negative policy rates. The Federal Reserve is coming under increasing pressure to follow suit in the wake of the coronavirus crisis. This paper investigates the actual effects of negative interest rates using the Swedish experience from 2015 to 2019. The Swedish Riksbank was one of the first central banks to introduce a negative interest rate in 2015 and the first central bank to abandon a negative rate in 2019. We find that negative rates had a modest effect on consumer price inflation due to globalization, but significant effects on the exchange rate and domestic asset prices, thus fostering financial imbalances. We conclude by discussing the implications of our results for larger economies such as the United States. Our view is that the lesson from Sweden is clear: a negative central bank policy rate is not a panacea.
Financial Transactions Taxes: Inaccessible and Expensive
By Diego Zuluaga
Financial transactions taxes (FTTs) have become a plank of the Democratic policy platform, part of a global resurgence of FTTs since the 2008 financial crisis. While politicians argue for them as a way to raise revenue, economists regard FTTs as inefficient for that purpose because of the behavioral changes they cause. Instead, John Maynard Keynes and James Tobin, among others, proposed FTTs precisely to discourage transactions and thereby reduce the supposedly destabilizing effects of “excessive” trading. But evidence from around the world shows FTTs’ impact on market efficiency to be ambiguous, as they sometimes increase volatility and hamper price discovery. FTTs also raise very little revenue because listed firms and traders move to other jurisdictions in response to the tax. Imposing an FTT in the United States would be particularly harmful during the present crisis, as it would raise the cost of capital for firms struggling to adapt to the post-COVID-19 environment.
How the Classical Gold Standard Can Inform Monetary Policy
By James A. Dorn
The operation of the classical gold standard offers many lessons for policymakers, including ones concerning the consequences of a credible commitment to a rules-based monetary regime and to enforceable private contracts under a just rule of law. These lessons don’t mean we should necessarily return to a gold standard, but they do suggest that we should not dismiss the gold standard as a “nutty idea.” The gold standard should be understood as one approach for attaining monetary stability. It is a rules-based system that brings about long-run price stability via market forces and the free flow of gold. It is also a monetary regime that is consistent with individual freedom and the rule of law. The real gold standard ended in 1914.
Is the Fed Getting Warmer?
Relax: the Fed isn’t about to catch fire or melt. This isn’t about that sort of warming. It’s about a different, more benign sort of Fed warming that my pals at the Mercatus Center claim to have discerned. Still, I don’t believe them. Call me a Fed warming skeptic if you like, but so far as I’m concerned, it’s all fake news.
Since Jay Powell announced the Fed’s new average inflation targeting (AIT) strategy last week, both Scott Sumner and David Beckworth have welcomed it as a step, albeit only a tenuous one, toward their own (and my) preferred policy of NGDP level targeting. Scott calls “average inflation targeting…a tiny step forward,” though one that will allow the Fed more discretion than a move to price-level targeting would. David likewise observes that, although it isn’t quite an NGDP level target, AIT “is a step in that direction.”
The Fed’s New Policy Rule: Taylor v. Semi-Wicksell
![average inflation targeting: a semi-Wicksellian effort](/sites/cato.org/files/styles/pubs_2x/public/wp-content/uploads/2020/09/Screen-Shot-2020-09-02-at-12.21.08-PM-e1599063853420.png?itok=2N0jY66F)
The new policy rule nominally retains the Fed’s 2012 inflation target of 2 percent. However, it changes how the Fed will react both to shortfalls of inflation from its target level and to deviations of employment from its “maximum level.” It turns out that both of these changes will likely bias inflation to exceed the Fed’s professed target.
Inflation Shortfall Offsets
Under a simple Taylor Rule, the Fed attempts to control near-term inflation by manipulating short-term interest rates, with no reference to past inflation except insofar as it enters a proxy for the public’s inflationary expectations. The Fed usually misses its inflation target one way or the other, because it has mis-estimated the neutral real interest rate r*, because it has misjudged the public’s inflationary expectations, or simply because of random micro-shocks to the economy. All of this is to say that even medium-run average inflation will ordinarily be randomly above or below the 2 percent target.
The Fed’s new plan is to try to reduce these long-run misses by attempting to offset past inflation shortfalls with deliberate excess future inflation. For example, if inflation has been averaging only 1 percent for the past few years, it might temporarily target 3 percent inflation in order to bring long-run average inflation, and therefore inflationary expectations, more quickly in line with its 2 percent long-run target.
Neither the FOMC Statement nor Powell’s speech mentions whether these corrections will be symmetrical: If inflation has been running 3 percent or even 6 percent for a few years, will the Fed then temporarily target 1 percent inflation or even 2 percent deflation in order to bring the long-run average closer to its long-run target? One suspects that the Fed will offset the shortfalls but not the overages, with the result that under the new rule, inflation will in fact systematically average more than the announced 2 percent target over the long-run.
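A small calculation illustrates the makeup arithmetic and the suspected asymmetry. This is a hypothetical sketch: the Fed has specified neither an averaging window nor a makeup speed, so both are invented here for illustration.

```python
# Sketch of "makeup" average inflation targeting. The averaging window
# and makeup horizon are hypothetical; the FOMC has specified neither.

def makeup_target(past_inflation, long_run_target=2.0, horizon=None):
    """Temporary inflation target that restores the long-run average.

    past_inflation: realized annual inflation rates (percent)
    horizon: years over which to make up the cumulative shortfall
             (defaults to the length of the lookback window)
    """
    horizon = horizon or len(past_inflation)
    shortfall = sum(long_run_target - pi for pi in past_inflation)
    return long_run_target + shortfall / horizon

# Three years of 1 percent inflation, made up over the next three years:
print(makeup_target([1.0, 1.0, 1.0]))  # 3.0

# An asymmetric rule offsets shortfalls but never targets less than
# 2 percent after overages:
def asymmetric_target(past_inflation, long_run_target=2.0):
    return max(makeup_target(past_inflation, long_run_target),
               long_run_target)

print(asymmetric_target([3.0, 3.0, 3.0]))  # 2.0 -- overage not offset
```

Under the asymmetric variant, the temporary target is never below 2 percent, so realized inflation averages more than 2 percent over the long run — the systematic bias suggested above.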
The Wicksell Rule
Back in 1909, the Swedish economist Knut Wicksell proposed a rule that has attracted considerable academic interest of late and is somewhat similar to the Fed’s new policy. Like Taylor, Wicksell would manipulate inflation with tight or easy money as evidenced by the stance of short-term interest rates. However, Wicksell’s proposal was to target the price level (P) itself rather than the inflation rate. Wicksell had in mind a constant price target with zero inflation, but his rule could apply equally well to a price level trajectory that increased at a constant rate, say 2 percent per year, from a selected constant base date, say January 2000. If the actual price level (P) was below its target trajectory, his rule would have the central bank temporarily target more than 2 percent inflation until P was back on track. Symmetrically, if actual P was above the target trajectory, the central bank would temporarily target less than 2 percent until P was back on track. If, as Wicksell had in mind, the target trajectory was a constant, the central bank would have to temporarily target deflation until P was back on target.
A rigorous Wicksell Rule (with or without uptrend) has the theoretical advantage that under it, the price level will be what econometricians call trend stationary, in that it will tend to follow the target trajectory closely, with finite long-run variance, whereas under the Taylor rule, the accumulated policy errors make the price level non-stationary, so that the future price level will eventually drift arbitrarily far from its starting point plus target inflation.
Under the Taylor Rule, any mis-estimation of r* on the Fed’s part will cause inflation to average above or below the Fed’s target, in addition to the accumulated policy errors. Under the Wicksell rule, on the other hand, mis-estimation of r* will cause the price level to lie on average above or below the target trajectory, but long-run average inflation will still equal the target implied by the trajectory, with zero variance, and the long-run price level forecast error will have finite variance.
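The stationarity contrast can be seen in a stylized simulation, assuming the central bank hits each period's target up to an i.i.d. control error. This is a toy model, not either rule's actual reaction function: it simply shows errors accumulating under inflation targeting but being corrected under a price-level path.

```python
# Toy simulation: log price level under inflation targeting vs. a
# Wicksell (price-level path) rule, each hit up to an i.i.d. error.
import random

random.seed(0)
T = 400
target_pi = 0.02   # 2 percent trend inflation per period
sigma = 0.01       # i.i.d. control error each period

# Inflation targeting: each period aims at 2 percent growth of P;
# the errors accumulate, so log P is a random walk with drift.
logP_it = [0.0]
for t in range(T):
    logP_it.append(logP_it[-1] + target_pi + random.gauss(0, sigma))

# Wicksell rule: each period aims at the point on the fixed target
# trajectory, so last period's error is offset and log P is
# trend stationary around the path.
logP_w = [0.0]
for t in range(T):
    path = (t + 1) * target_pi
    logP_w.append(path + random.gauss(0, sigma))

gap_it = logP_it[-1] - T * target_pi  # drifts without bound as T grows
gap_w = logP_w[-1] - T * target_pi    # stays on the order of sigma
print(round(gap_it, 3), round(gap_w, 3))
```

After 400 periods the inflation-targeting gap has standard deviation sigma times the square root of T (here about 0.2), while the Wicksell gap's standard deviation is still just sigma — the finite long-run variance claimed above.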
Unlike the Wicksell Rule, however, the Fed’s new rule is not based on a fixed reference date in the past, but rather allows it to average past inflation over an unspecified period, to be determined in an ad hoc manner. Therefore, even if it were applied symmetrically, it would not achieve “trend stationarity.” Furthermore, the Fed likely will treat inflation overages differently than inflation shortfalls, so the rule is not even symmetrical. It is therefore at best only a “Semi-Wicksell Rule.”
Employment Policy
In addition to a proxy for the public’s experience-based inflation forecast, the Taylor Rule[1] ordinarily contains a term reflecting the unemployment gap (U‑gap), the deviation between current unemployment (U) and the estimated “Natural Unemployment Rate” that will result if inflation is fully in line with expectations. Typically, positive and negative values of U‑gap enter symmetrically, so that the Fed will be stimulative if U‑gap is positive, as in a recession, and equally restrictive if U‑gap is equally negative. Since unexpected inflation is zero on average, U‑gap will also be zero on average, abstracting from any non-linearity of the Phillips curve.
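A Taylor-type rule of the kind described here can be written out explicitly. The coefficients below are illustrative only (Taylor's 1993 formulation used 0.5 weights on both gaps, with an output gap rather than a U-gap):

```python
def taylor_rule_rate(expected_pi, u_gap, r_star=2.0, pi_target=2.0,
                     a_pi=0.5, a_u=1.0):
    """Illustrative Taylor-type rule with an unemployment gap.

    expected_pi: proxy for the public's inflation expectations (percent)
    u_gap: unemployment minus the natural rate (percentage points);
           it enters with a negative sign, so high unemployment
           lowers the policy rate. Coefficients are illustrative.
    """
    return (r_star + expected_pi
            + a_pi * (expected_pi - pi_target)
            - a_u * u_gap)

# Symmetric response: equal and opposite U-gaps move the policy rate
# by equal and opposite amounts around the neutral setting.
print(taylor_rule_rate(2.0, 1.0))   # 3.0 -- recession: easier policy
print(taylor_rule_rate(2.0, -1.0))  # 5.0 -- boom: tighter policy
```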
In its new monetary policy statement, the FOMC states that “the Committee’s policy decisions must be informed by assessments of the shortfalls of employment from its maximum level” (emphasis added). In his speech, Chairman Powell notes that this is in contrast to the 2012 Statement, which instead referred to deviations from its maximum level. Neither the Statement nor Powell’s speech indicates how “maximum employment” is to be measured—it can’t seriously mean 100% labor force participation with zero unemployment.
Powell does, however, provide a chart of “Real-Time Projections of Longer-Run Employment Rate,” with separate lines for the FOMC, Blue Chip, and CBO projections. His text explains that these are estimates of the natural rate of unemployment—in other words, the minimum sustainable level of U without unexpected inflation. If “shortfalls of employment from its maximum level” are interpreted to mean “unemployment rates in excess of the natural rate of unemployment,” then the Fed is proposing to react expansively to positive values of U‑gap, but not at all to negative values. But if it does this, it will be stimulative on average, relative to what is required to meet its purported inflation target of 2 percent. The Fed may as well admit it is increasing its inflation target to 3 or 4 percent and then responding symmetrically to U‑gap.
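The average-stimulus point can be checked with a quick simulation: if U‑gap averages zero, a rule that responds only to positive gaps eases on average, while the symmetric rule does not. This is a stylized illustration, not the Fed's actual reaction function.

```python
# Average policy-rate adjustment under symmetric vs. shortfalls-only
# responses to the U-gap, when the gap averages zero over time.
import random

random.seed(1)

def symmetric_adjust(u_gap, a_u=1.0):
    return -a_u * u_gap            # react to both signs of the gap

def shortfalls_only_adjust(u_gap, a_u=1.0):
    return -a_u * max(u_gap, 0.0)  # react only when unemployment is high

gaps = [random.gauss(0, 1) for _ in range(10_000)]  # U-gap averages ~0
avg_sym = sum(symmetric_adjust(g) for g in gaps) / len(gaps)
avg_asym = sum(shortfalls_only_adjust(g) for g in gaps) / len(gaps)

print(round(avg_sym, 2))   # near zero: no bias on average
print(round(avg_asym, 2))  # strictly negative: easier on average
```

The shortfalls-only rule shaves roughly 0.4 percentage points off the average policy rate in this setup, which is the systematic easing bias described above.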
The term “maximum employment” in the Statement was evidently dictated by the Fed’s mandate from Congress to promote “maximum employment” as well as “stable prices.” It is now generally recognized, by just about everybody but Congress, that using monetary policy to literally maximize employment would imply runaway inflation, and therefore be completely at odds with price stability. The Fed is already bending its “price stability” mandate to mean “just a little inflation.” In order to appear to be in compliance with its “maximum employment” mandate, the Fed is apparently likewise twisting it to mean “the maximum employment that is consistent with stable inflation.”
*********
[1] In John Taylor’s original 1993 formulation of what has come to be called the “Taylor Rule,” he in fact used what he called “y‑gap,” the gap between real income y and its trend level, with a positive coefficient, rather than U‑gap. However, it is now generally believed that real income is not trend-stationary, rendering y‑gap meaningless, so that most practitioners substitute an estimate of U‑gap, with a negative coefficient.
Disclaimer
This post was originally published at Alt‑M.org. The views and opinions expressed here are those of the author(s) and do not necessarily reflect the official policy or position of the Cato Institute. Any views or opinions are not intended to malign, defame, or insult any group, club, organization, company, or individual.
All content provided on this blog is for informational purposes only. The Cato Institute makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. Cato Institute, as a publisher of this article, shall not be liable for any misrepresentations, errors or omissions in this content nor for the unavailability of this information. By reading this article and/or using the content, you agree that Cato Institute shall not be liable for any losses, injuries, or damages from the display or use of this content.
The Fed’s New Strategy: From Missed Target to Missed Opportunity
Although dedicated Fed watchers long saw it coming, the big news from Jackson Hole is that the Fed now has a new monetary policy strategy—the crowning achievement of the much-ballyhooed review of its “Strategy, Tools, and Communications” it launched in early 2019. It’s called “average inflation targeting,” and chances are that if you’re reading this you’re wondering (1) what the heck it means and (2) what, if any, difference it will make.
Don’t feel bad. More than a few seasoned monetary economists have been asking themselves the same questions, myself included. Having come up with some answers, I thought to share them with you, together with my reasons for concluding, first, that the new strategy isn’t likely to improve things; and, second, that the Fed has missed a chance to adopt a different strategy that really would work better.