Current Working Papers
This paper studies the joint dynamics of U.S. inflation and average inflation predictions of the Survey of Professional Forecasters (SPF) on a sample from 1968Q4 to 2016Q1. The joint data generating process (DGP) is the unobserved components (UC) model of Stock and Watson (2007, “Why has US inflation become harder to forecast?,” Journal of Money, Credit and Banking 39(S1), 3-33) and the Coibion and Gorodnichenko (2015, “Information rigidity and the expectations formation process: A simple framework and new facts,” American Economic Review 105, 2644-2678) version of the sticky information (SI) model of Mankiw and Reis (2002, “Sticky information versus sticky prices: A proposal to replace the New Keynesian Phillips curve,” Quarterly Journal of Economics 117, 1295-1328). We add drift to gap inflation persistence in the Stock and Watson (SW)-UC model and place a time-varying frequency of forecast updating into the SI model. The joint DGP is a nonlinear state space model, which is estimated using Bayesian tools
grounded in the auxiliary particle filter, particle learning, and the particle smoother. Estimates of trend inflation, which peak at about seven percent during the 1981-1982 recession, depend on the average SPF inflation predictions, especially at longer horizons. Gap inflation is estimated at nearly nine percent in 1975Q1, which is almost three times larger than at any other time in the sample. We also report that gap inflation stochastic volatility (SV) spikes during the 1973-1975 recession while the peak in trend inflation SV occurs during the 1981-1982 recession.
These estimates of SV co-move with the time variation in SI inflation updating: the average forecaster updates frequently during the inflation of the 1970s and the Volcker disinflation, while updating becomes less frequent in the 1990s. We conclude that the average member of the SPF is sensitive to the impact of permanent shocks on the conditional mean of inflation. (Slides for a recent workshop presentation. The slides discuss the Storvik (2002, "Particle filters for state-space models with the presence of unknown static parameters," IEEE Transactions on Signal Processing 50, 281-299) particle learning estimator, the Lindsten, Bunch, Särkkä, Schön, and Godsill (2016, "Rao-Blackwellized particle smoothers for conditionally linear Gaussian models," IEEE Journal of Selected Topics in Signal Processing 10, 353-365) particle smoother, and results that are to be added to the paper.) The views herein are those of the authors and do not represent the views of the Bank for International Settlements.
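The joint DGP described above can be summarized in a stylized sketch. The notation here is ours and purely illustrative; the paper's exact specification and timing conventions may differ.

```latex
% SW-UC model with drifting gap persistence and stochastic volatility:
\begin{align*}
\pi_t &= \tau_t + c_t, \\
\tau_t &= \tau_{t-1} + \sigma_{\tau,t}\,\eta_t, \qquad \eta_t \sim \mathcal{N}(0,1), \\
c_t &= \theta_t\, c_{t-1} + \sigma_{c,t}\,\nu_t, \qquad \theta_t = \theta_{t-1} + u_t, \\
\ln \sigma_{\tau,t}^2 &= \ln \sigma_{\tau,t-1}^2 + w_{\tau,t}, \qquad
\ln \sigma_{c,t}^2 = \ln \sigma_{c,t-1}^2 + w_{c,t}.
\end{align*}
% SI updating with a time-varying frequency \lambda_t
% (Coibion--Gorodnichenko form), where F_t denotes the average
% survey prediction and E_t the rational expectation:
\begin{equation*}
F_t \pi_{t+h} \;=\; (1-\lambda_t)\, E_t \pi_{t+h} \;+\; \lambda_t\, F_{t-1} \pi_{t+h}.
\end{equation*}
```

The random walks in $\theta_t$ and in the log volatilities are what make the state space nonlinear and motivate the particle-filter machinery.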
2. Sticky Professional Forecasts and the Unobserved Components Model of US Inflation, with Gregor W. Smith (October 2016).
This paper represents a fork in the road for our paper, Measuring the Slowly Evolving Trend in US Inflation with Professional Forecasts, which appears below. Gregor and I report tests of a joint model of U.S. inflation and professional forecasts of U.S. inflation. The joint model consists of an unobserved components model of inflation that decomposes it into trend and gap and a model of sticky information (SI). We show that trend inflation can be identified using inflation predictions at short horizons made by the average member of the Survey of Professional Forecasters (SPF). Similarly, inflation forecasts generated by the SI model can be tied to average SPF inflation predictions. Inverting the joint model produces estimates of the innovations to trend and gap inflation. Our tests ask whether these innovations are serially correlated or persistent. The tests are robust to forecast horizon and to stochastic volatility in trend and gap inflation, and place no restrictions on the covariance matrix of innovations to trend, gap, and/or SI inflation. On samples of GNP/GDP deflator inflation (CPI inflation) and the associated SPF predictions running from 1968Q4 (1981Q3) to 2016Q2, the results indicate U.S. inflation is sticky. However, the estimated innovations to trend and gap inflation are never jointly unpredictable. Changes to trend inflation exhibit statistically significant persistence. Our results suggest that either the trend-cycle model of inflation is misspecified, the SI model is, or both.
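The invert-then-test step can be illustrated with a minimal sketch. Everything here is hypothetical: the two simulated series merely stand in for recovered trend and gap innovations, and the statistic is a standard Ljung-Box Q rather than the paper's exact robust test.

```python
import numpy as np

def ljung_box(x, max_lag=4):
    """Ljung-Box Q statistic for serial correlation in series x.

    Under the null of no serial correlation, Q is approximately
    chi-squared with max_lag degrees of freedom (the 5% critical
    value is about 9.49 for 4 lags).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom  # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# Hypothetical stand-ins for recovered innovations: white noise
# (consistent with a well-specified model) versus a persistent AR(1)
# (the kind of predictability the paper's tests detect).
rng = np.random.default_rng(0)
white = rng.standard_normal(200)
ar1 = np.empty(200)
ar1[0] = 0.0
for t in range(1, 200):
    ar1[t] = 0.7 * ar1[t - 1] + rng.standard_normal()

q_white = ljung_box(white)
q_ar1 = ljung_box(ar1)
```

A Q statistic well above the chi-squared critical value flags the kind of serial correlation in the recovered innovations that, in the paper, points to misspecification of the trend-cycle model, the SI model, or both.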
3. Measuring the Slowly Evolving Trend in US Inflation with Professional Forecasts, with Gregor W. Smith (January 2015). Revise and resubmit at the Journal of Applied Econometrics.
Much research studies US inflation history with a trend-cycle model with unobserved components. A key feature of this model is that the trend may be viewed as the Fed’s evolving inflation target or long-horizon expected inflation. We provide a new way to measure the slowly evolving trend and the cycle (or inflation gap), based on forecasts from the Survey of Professional Forecasters. These forecasts may be treated either as rational expectations or as adjusting to those with sticky information. We find considerable evidence of inflation-gap persistence and some evidence of implicit sticky information. But statistical tests show we cannot reconcile these two widely used perspectives on US inflation and professional forecasts, the unobserved-components model and the sticky-information model.
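The measurement idea can be sketched as follows, in our own simplified notation rather than the paper's.

```latex
% Trend-cycle decomposition with a random-walk trend \tau_t and a
% mean-reverting gap c_t:
\begin{align*}
\pi_t &= \tau_t + c_t, \qquad \tau_t = \tau_{t-1} + \varepsilon_t, \\
E_t \pi_{t+h} &= \tau_t + E_t c_{t+h} \;\longrightarrow\; \tau_t
\quad \text{as } h \to \infty,
\end{align*}
% so survey predictions at longer horizons serve as a noisy measure of
% the slowly evolving trend; under sticky information the survey average
% adjusts toward this rational forecast only gradually.
```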
This paper arms central bank policy makers with ways to think about interactions between financial stability and monetary policy. We frame the issue of whether to integrate financial stability into monetary policy operating rules by appealing to the observation that in actual economies financial markets are incomplete. Incomplete markets create financial market frictions that prevent economic agents from perfectly sharing risk; in the absence of frictions, financial (in)stability would be of no concern. Overcoming these frictions to improve risk sharing across economic agents is, in our view, the intent of policies geared toward ensuring financial stability. There are many definitions of financial stability. Although the definitions share the notion that financial stability becomes an issue for policy makers when a breakdown in risk-sharing arrangements in financial markets has a negative effect on real economic activity, we give several examples that show this notion is too general for thinking about the role monetary policy might have in smoothing shocks to financial stability. Examples include statistical models that seek to separate “good” from “bad” changes in private-sector debt aggregates, new Keynesian policy prescriptions grounded in neo-Wicksellian natural rate rules, and a historical episode involving the 1920s Federal Reserve. These examples raise a cautionary flag for policy attempts to control the growth and the composition of debt that financial markets produce. We conclude with some advice for revising central banks’ Monetary Policy Reports.
Slides for Conference Presentations and Discussions
1. Discussion of "Endogenous Technology Adoption and R&D as Sources of Business Cycle Persistence" by Diego Anzoategui, Diego Comin, Mark Gertler, and Joseba Martinez in the AEA Session Slowdown Risk: The Quest for Sustainable Growth, Chicago, IL, 6 January 2017.
2. Presentation of "Central Banker's Modeling Toolbox: One-for-All or All-for-One?" in Session II of the Workshop on Central Bank Models (click on the link), at the Bank of Canada, Ottawa, Ontario, Canada, 17 November 2016. The Bank of Canada hosted a one-day workshop for its staff and academics to discuss the current and potential future state of finance, macro, and forecast models at central banks. I was fortunate to be invited by the Bank of Canada to participate in a session whose title my presentation borrows. For the session, the Bank of Canada staff posed four questions:
a) How should central banks manage the trade‐off between internal consistency of models and their complexity?
b) In what situations is it desirable to use a single model for projection and policy analysis, and in what situations is it desirable to use multiple models? Why? Is it true for both monetary policy and financial stability risk assessment models?
c) What is the desired composition of the central bank modeling toolbox 5 to 10 years from now?
d) Is a single large‐scale projection model suitable for assessing financial vulnerabilities and analyzing risks around the macroeconomic outlook? Could a set of targeted minimalistic models provide better risk analysis?
My presentation, which responds to these questions, reflects my own views and does not represent the views of the Bank of Canada or its staff.
3. Discussion of "Real-Time Forecast Evaluation of DSGE Models with Stochastic Volatility" by Frank Diebold, Frank Schorfheide, and Minchul Shin at the Conference, New Developments in Measuring and Forecasting Financial Volatility, Duke University and the University of North Carolina, Chapel Hill, held in Durham, NC, 16-17 September 2016.
4. Discussion of "In Search of a Nominal Anchor: What Drives Inflation Expectations?" by Carlos Carvalho, Stefano Eusepi, Emanuel Moench, and Bruce Preston at the Workshop on Methods and Applications for DSGE Models, Federal Reserve Bank of Philadelphia, 16-17 October 2015.
5. Discussion of "A Survey-Based Term Structure of Inflation Expectations" by S. Boragan Aruoba at the 10th Conference on Real-Time Data Analysis, Methods and Applications, Federal Reserve Bank of Philadelphia, 10-11 October 2014.
6. Discussion of "Evaluating Conditional Forecasts from Vector Autoregressions" by Todd Clark and Michael McCracken at the CIREQ Econometrics Conference, Time Series and Financial Econometrics, Université de Montréal, Montréal, Québec, Canada, 9-10 May 2014.
7. Discussion of "Signaling Effects of Monetary Policy" by Leonardo Melosi at the Spring 2012 Bundesbank/Philly Fed Conference, Monetary Policy, Inflation, and International Linkages, Bundesbank's Training Centre, Eltville am Rhein, Germany, 24-25 May 2012.
8. Discussion of "Modeling Monetary Policy" by Samuel Reynard and Andreas Schabert at the Conference on Models and Policies for Economies with Credit and Financial Instability, Federal Reserve Bank of Cleveland, 14-15 October 2009.