Current Working Papers
This paper studies the joint dynamics of U.S. inflation and a term structure of average inflation predictions taken from the Survey of Professional Forecasters (SPF). We combine an unobserved components (UC) model of inflation with a sticky information forecast mechanism to study these dynamics. The UC model decomposes inflation into a trend component, a gap component, and measurement error. We innovate by allowing inflation gap persistence and the frequency of sticky information inflation forecast updating to drift. Stochastic volatility is imposed on the innovations to trend and gap inflation. The result is a non-linear state space model. The model is estimated on a sample from 1968Q4 to 2018Q3 using sequential Monte Carlo methods that include a particle learning filter and a Rao-Blackwellized particle smoother. Our estimates show that (i) longer horizon average SPF inflation predictions inform estimates of trend inflation; (ii) inflation gap persistence is procyclical before the Volcker disinflation and acyclical afterwards; (iii) by 1990 sticky information inflation forecast updating is less frequent than it was earlier in the sample; and (iv) the drop in the frequency of sticky information forecast updating occurs at the same time persistent shocks become less important for explaining fluctuations in inflation. All told, the data call for drift in inflation gap persistence and in the frequency of updating sticky information forecasts. (The appendix for the paper is here. Please note that it is a large file.)
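As a rough illustration of the state-space structure the abstract describes, the sketch below simulates a trend-plus-gap decomposition with stochastic volatility and drifting gap persistence. All parameter values and the 200-quarter sample length are hypothetical choices for the illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200  # quarters, roughly the length of the 1968Q4-2018Q3 sample

# Illustrative settings (hypothetical, not estimated values from the paper).
phi_sv = 0.95        # persistence of the log volatilities
sig_eta = 0.1        # std dev of log-volatility innovations
rho_drift_sd = 0.02  # std dev of the random walk in gap persistence

tau = np.zeros(T)    # trend inflation: random walk with stochastic volatility
g = np.zeros(T)      # gap inflation: AR(1) with drifting persistence
rho = np.zeros(T)    # drifting gap persistence
h_tau = np.zeros(T)  # log variance of trend shocks
h_g = np.zeros(T)    # log variance of gap shocks
pi = np.zeros(T)     # observed inflation

rho[0] = 0.7
for t in range(1, T):
    # Stochastic volatility: AR(1) log variances for trend and gap shocks.
    h_tau[t] = phi_sv * h_tau[t-1] + sig_eta * rng.standard_normal()
    h_g[t] = phi_sv * h_g[t-1] + sig_eta * rng.standard_normal()
    # Drift in gap persistence, clipped to remain inside the unit interval.
    rho[t] = np.clip(rho[t-1] + rho_drift_sd * rng.standard_normal(), -0.98, 0.98)
    # Trend: driftless random walk; gap: AR(1) around zero.
    tau[t] = tau[t-1] + np.exp(h_tau[t] / 2) * rng.standard_normal()
    g[t] = rho[t] * g[t-1] + np.exp(h_g[t] / 2) * rng.standard_normal()
    # Observed inflation = trend + gap + measurement error.
    pi[t] = tau[t] + g[t] + 0.1 * rng.standard_normal()
```

Estimation in the paper runs this logic in reverse, recovering the latent trend, gap, volatilities, and drifting parameters from realized inflation and SPF predictions.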
The views herein are those of the authors and do not represent the views of the Deutsche Bundesbank or the Eurosystem.
2. Sticky Professional Forecasts and the Unobserved Components Model of US Inflation, with Gregor W. Smith (October 2016).
This paper represents a fork in the road for our paper, Measuring the Slowly Evolving Trend in US Inflation with Professional Forecasts, which appears below. Gregor and I report tests of a joint model of U.S. inflation and professional forecasts of U.S. inflation. The joint model consists of an unobserved components model that decomposes inflation into trend and gap and a model of sticky information (SI). We show that trend inflation can be identified using inflation predictions at short horizons made by the average member of the Survey of Professional Forecasters (SPF). Similarly, inflation forecasts generated by the SI model can be tied to average SPF inflation predictions. Inverting the joint model produces estimates of the innovations to trend and gap inflation. Our tests ask whether these innovations are serially correlated or persistent. The tests are robust to forecast horizon and to stochastic volatility in trend and gap inflation, and they place no restrictions on the covariance matrix of innovations to trend, gap, and/or SI inflation. On samples of GNP/GDP deflator inflation (CPI inflation) and the associated SPF predictions running from 1968Q4 (1981Q3) to 2016Q2, the results indicate U.S. inflation is sticky. However, the estimated innovations to trend and gap inflation are never jointly unpredictable. Changes to trend inflation exhibit statistically significant persistence. Our results suggest that either the trend-cycle model of inflation is misspecified, the SI model is, or both are.
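Tests of whether estimated innovations are serially correlated can be illustrated with a Ljung-Box statistic. The `ljung_box` helper below is a hypothetical numpy-only sketch, and the white-noise input is a stand-in for the estimated trend or gap innovations, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = rng.standard_normal(200)  # stand-in for estimated innovations

def ljung_box(x, lags=4):
    """Ljung-Box Q statistic for serial correlation up to `lags`."""
    n = len(x)
    xc = x - x.mean()
    denom = xc @ xc
    q = 0.0
    for k in range(1, lags + 1):
        rk = (xc[k:] @ xc[:-k]) / denom  # lag-k sample autocorrelation
        q += rk ** 2 / (n - k)
    return n * (n + 2) * q

# Compare with a chi-squared(4) critical value, about 9.49 at the 5% level;
# a large Q rejects the null that the innovations are unpredictable.
q_stat = ljung_box(eps)
```

A persistent series (for example, a random walk) produces a Q statistic far above the critical value, which is the pattern the paper finds in changes to trend inflation.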
3. Measuring the Slowly Evolving Trend in US Inflation with Professional Forecasts, with Gregor W. Smith (January 2015). Revise and resubmit at the Journal of Applied Econometrics.
Much research studies US inflation history with a trend-cycle model with unobserved components. A key feature of this model is that the trend may be viewed as the Fed’s evolving inflation target or long-horizon expected inflation. We provide a new way to measure the slowly evolving trend and the cycle (or inflation gap), based on forecasts from the Survey of Professional Forecasters. These forecasts may be treated either as rational expectations or as adjusting to those with sticky information. We find considerable evidence of inflation-gap persistence and some evidence of implicit sticky information. But statistical tests show we cannot reconcile these two widely used perspectives on US inflation and professional forecasts, the unobserved-components model and the sticky-information model.
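The sticky-information view of average forecasts has a simple recursive form: each period a fraction λ of forecasters updates to the current full-information forecast while the rest keep stale forecasts. A minimal sketch, with a hypothetical updating rate and a stand-in series for the rational-expectations forecasts:

```python
import numpy as np

lam = 0.4  # hypothetical fraction of forecasters updating each quarter
T = 120
rng = np.random.default_rng(1)

# Stand-in for full-information rational forecasts of a fixed target,
# here a random-walk proxy for slowly evolving trend inflation.
ratex = np.cumsum(0.2 * rng.standard_normal(T))

# Sticky-information average forecast: a share lam updates to the current
# rational forecast, the rest carry their old one forward, so
# F_t = (1 - lam) * F_{t-1} + lam * E_t.
si = np.zeros(T)
si[0] = ratex[0]
for t in range(1, T):
    si[t] = (1 - lam) * si[t-1] + lam * ratex[t]
```

Because the SI forecast is a geometrically weighted average of current and past rational forecasts, it adjusts sluggishly, which is the implicit stickiness the paper tests for in SPF predictions.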
This paper arms central bank policy makers with ways to think about interactions between financial stability and monetary policy. We frame the issue of whether to integrate financial stability into monetary policy operating rules by appealing to the observation that in actual economies financial markets are incomplete. Incomplete markets create financial market frictions that prevent economic agents from perfectly sharing risk; in the absence of frictions, financial (in)stability would be of no concern. Overcoming these frictions to improve risk sharing across economic agents is, in our view, the intent of policies geared toward ensuring financial stability. There are many definitions of financial stability. Although the definitions share the notion that financial stability becomes an issue for policy makers when a breakdown in risk-sharing arrangements in financial markets has a negative effect on real economic activity, we give several examples that show this notion is too general for thinking about the role monetary policy might have in smoothing shocks to financial stability. Examples include statistical models that seek to separate “good” from “bad” changes in private-sector debt aggregates, new Keynesian policy prescriptions grounded in neo-Wicksellian natural rate rules, and a historical episode involving the 1920s Federal Reserve. These examples raise a cautionary flag for policy attempts to control the growth and the composition of debt that financial markets produce. We conclude with some advice for revising central banks’ Monetary Policy Reports.
Slides for Invited Lectures
1. Semi-Plenary Talk "Realized and SPF Inflation Dynamics: Methods and Applications" at the 2019 North American Summer Meeting of the Econometric Society (at the University of Washington, Seattle, WA), 28 June 2019. The attached slides are a revised version of the talk, which presents papers by Elmar and me (see above) and Gregor Smith and me (see above). These projects begin from the premise that (a) inflation expectations express private agents’ beliefs about the underlying factors driving observed inflation dynamics and (b) these beliefs can be recovered from nonlinear state space models (SSMs) estimated on realized inflation and the average inflation predictions of the Survey of Professional Forecasters (SPF). The motivation is that professional inflation predictions are often superior to model-based inflation forecasts. Professional inflation predictions also contain information about the credibility of monetary policy in the sense that longer-horizon average SPF inflation predictions have often been less than 20 basis points from the FOMC’s 2% inflation target since 2012Q1. Next, the slides sketch the statistical and economic models developed in these papers that combine realized inflation and average SPF inflation predictions. The models are internally consistent in that the statistical and economic models yield term structures of inflation expectations and forecasts that impose cross-equation restrictions on the state space models. This is followed by an outline of the sequential Monte Carlo methods that Elmar and I and Gregor and I employ to estimate the states and parameters of the nonlinear state space models. The slides finish with several results reported in the paper by Elmar and me.
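The sequential Monte Carlo methods in these papers are more elaborate (particle learning, Rao-Blackwellized smoothing), but the core filtering idea can be sketched with a plain bootstrap particle filter on a toy local-level model. All settings below are illustrative assumptions, not the papers' specifications.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy local-level model: x_t = x_{t-1} + w_t, y_t = x_t + v_t.
T, N = 100, 2000
sig_w, sig_v = 0.3, 0.5
x = np.cumsum(sig_w * rng.standard_normal(T))  # latent state
y = x + sig_v * rng.standard_normal(T)         # noisy observations

# Bootstrap particle filter: propagate, weight by the likelihood, resample.
particles = rng.standard_normal(N)
means = np.zeros(T)
for t in range(T):
    particles = particles + sig_w * rng.standard_normal(N)  # propagate
    logw = -0.5 * ((y[t] - particles) / sig_v) ** 2         # log likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means[t] = w @ particles                                # filtered mean
    idx = rng.choice(N, size=N, p=w)                        # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((means - x) ** 2))
```

The filtered mean tracks the latent state more closely than the raw observations do; the papers extend this idea to learn static parameters and smooth the latent trend, gap, and volatility paths.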
2. Lecture on "Central Banker’s Modeling Toolbox: One-for-All or All-for-One?" at the Institute for Monetary and Economic Studies of the Bank of Japan, 12 October 2017. The IMES-Bank of Japan invited me to give this lecture, which is an extended version of the presentation I gave at the Bank of Canada in November 2016 (see below). The lecture is framed by the Folk Theorem that "All models are false." The question becomes how economists at central banks can use falsifiable models to advise policy makers. The first task is to review several approaches economists use to evaluate alternative policies conditional on “the models on the table.” A goal is to motivate discussion of using a diverse set of models at central banks to evaluate monetary policy, because a corollary to the Folk Theorem is "It takes a model to beat a model." This part of the lecture is also aimed at convincing economists to consider the ways in which policy makers engage with models. Next, there is a discussion of time series models with time-varying parameters (TVPs). Using models with TVPs gives economists a chance to analyze alternative policies that have the potential to alter the expectations of private agents. The lecture concludes with a survey of the current state of new Keynesian DSGE models used at central banks.
3. Lecture on "Bringing Financial Stability into Monetary Policy" at the Institute for Monetary and Economic Studies of the Bank of Japan, 13 October 2017. The IMES-Bank of Japan invited me to give this lecture, which is an updated version of the paper by Eric Leeper and me (see above). The lecture begins from the premise that in models with a complete set of Arrow-Debreu contingent claims markets, bankruptcy and default are states of the world. However, agents fully insure against these events because the AD securities are a means to completely diversify risk. Since actual modern economies always face an incomplete set of AD markets, the goals of financial stability policies are to improve existing risk sharing arrangements and to smooth shocks that prevent or inhibit risk sharing. From this point, the lecture discusses problems of defining and measuring financial trends and financial cycles. One question is whether the tools modern students of the business cycle employ can be used to define and measure financial trends and financial cycles without theory and models. A related question is how to integrate financial trends and cycles into a model that decomposes macro aggregates into growth trends and business cycles. The rest of the lecture is dedicated to a (brief) review of the several theories and models financial and macro economists have developed to study financial frictions in general equilibrium. The lecture argues that these theories and models have difficulties explaining insolvency shocks as the source and cause of financial crises.
Conference Presentations and Discussions
1. Discussion of "Welfare Effects of Tax Policy in Open Economies: Stabilization and Cooperation" by Jinill Kim and Sunghyun Kim in the session, Monetary Policy & Fiscal Policy when Interactions Matter, at the IJCB Annual Research Conference, The Interplay between Monetary Policy and Fiscal Policy, at the Czech National Bank, Prague, Czech Republic, 20 June 2017. (Unfortunately, I was unable to attend the conference.) My extended comments on the Kim and Kim paper are found here.
2. Discussion of "Endogenous Technology Adoption and R&D as Sources of Business Cycle Persistence" by Diego Anzoategui, Diego Comin, Mark Gertler, and Joseba Martinez in the AEA Session Slowdown Risk: The Quest for Sustainable Growth, Chicago, IL, 6 January 2017.
3. Presentation of "Central Banker's Modeling Toolbox: One-for-All or All-for-One?" in Session II of the Workshop on Central Bank Models (click on the link), at the Bank of Canada, Ottawa, Ontario, Canada, 17 November 2016. The Bank of Canada hosted a one day workshop for its staff and academics to discuss the current and potential future state of finance, macro, and forecast models at central banks. I was fortunate to be invited by the Bank of Canada to participate in a session under the title that my presentation uses. For the session, the Bank of Canada staff posed four questions. The questions are
a) How should central banks manage the trade‐off between internal consistency of models and their complexity?
b) In what situations is it desirable to use a single model for projection and policy analysis, and in what situations is it desirable to use multiple models? Why? Is it true for both monetary policy and financial stability risk assessment models?
c) What is the desired composition of the central bank modeling toolbox 5 to 10 years from now?
d) Is a single large‐scale projection model suitable for assessing financial vulnerabilities and analyzing risks around the macroeconomic outlook? Could a set of targeted minimalistic models provide better risk analysis?
My presentation, which responds to these questions, reflects my own views and does not represent the views of the Bank of Canada and/or its staff.
4. Discussion of "Real-Time Forecast Evaluation of DSGE Models with Stochastic Volatility" by Frank Diebold, Frank Schorfheide, and Minchul Shin at the Conference, New Developments in Measuring and Forecasting Financial Volatility, Duke University and the University of North Carolina, Chapel Hill, held in Durham, NC, 16-17 September 2016.
5. Discussion of "In Search of a Nominal Anchor: What Drives Inflation Expectations?" by Carlos Carvalho, Stefano Eusepi, Emanuel Moench, and Bruce Preston at the Workshop on Methods and Applications for DSGE Models, Federal Reserve Bank of Philadelphia, 16-17 October 2015.
6. Discussion of "A Survey-Based Term Structure of Inflation Expectations" by S. Boragan Aruoba at the 10th Conference on Real-Time Data Analysis, Methods and Applications, Federal Reserve Bank of Philadelphia, 10-11 October 2014.
7. Discussion of "Evaluating Conditional Forecasts from Vector Autoregressions" by Todd Clark and Michael McCracken at the CIREQ Econometrics Conference, Time Series and Financial Econometrics, Université de Montréal, Montréal, Québec, Canada, 9-10 May 2014.
8. Discussion of "Signaling Effects of Monetary Policy" by Leonardo Melosi at the Spring 2012 Bundesbank/Philly Fed Conference, Monetary Policy, Inflation, and International Linkages, Bundesbank's Training Centre, Eltville am Rhein, Germany, 24-25 May 2012.
9. Discussion of "Modeling Monetary Policy" by Samuel Reynard and Andreas Schabert at the Conference on Models and Policies for Economies with Credit and Financial Instability, Federal Reserve Bank of Cleveland, 14-15 October 2009.