    Getting it Right When You Might Be Wrong: The Choice Between Price-Level and Inflation Targeting

    Canada’s 2 percent inflation targeting program works pretty well – but could targeting the price level work even better, especially when inflation and the price level might not be perfectly observed? Keywords: monetary policy, price-level targeting, inflation targeting, Bank of Canada
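
    The distinction at the heart of this comparison can be fixed with a stylized pair of targets (our notation, not the paper's): an inflation target treats past misses as bygones, while a price-level target requires them to be undone, which matters when inflation and the price level are measured with error.

        % Stylized targets (p_t = log price level, \pi_t = inflation, \pi^* = the 2 percent goal)
        \pi_t = p_t - p_{t-1}
        \text{Inflation target:}\quad \mathbb{E}_t[\pi_{t+1}] = \pi^*              % past misses are bygones
        \text{Price-level target:}\quad \mathbb{E}_t[p_{t+1}] = p_0 + \pi^*(t+1)   % past misses must be reversed later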

    Understanding and Comparing Factor-Based Forecasts

    Forecasting using "diffusion indices" has received a good deal of attention in recent years. The idea is to use the common factors estimated from a large panel of data to help forecast the series of interest. This paper assesses the extent to which the forecasts are influenced by (i) how the factors are estimated and/or (ii) how the forecasts are formulated. We find that for simple data-generating processes and when the dynamic structure of the data is known, no one method stands out to be systematically good or bad. All five methods considered have rather similar properties, though some methods are better in long-horizon forecasts, especially when the number of time series observations is small. However, when the dynamic structure is unknown and for more complex dynamics and error structures such as the ones encountered in practice, one method stands out to have smaller forecast errors. This method forecasts the series of interest directly, rather than the common and idiosyncratic components separately, and it leaves the dynamics of the factors unspecified. By imposing fewer constraints, and having to estimate a smaller number of auxiliary parameters, the method appears to be less vulnerable to misspecification, leading to improved forecasts.
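
    A minimal Python sketch of a direct, factor-augmented forecast of the kind described above, under illustrative assumptions: X is a T x N numpy array holding the panel, y is the length-T target series, and the function name and default settings are ours, not the paper's.

        import numpy as np

        def diffusion_index_forecast(X, y, h=12, n_factors=3, n_lags=4):
            """Direct h-step forecast of y from principal-component factors; illustrative only."""
            # Standardize the panel, then take the first n_factors principal components as factors.
            Z = (X - X.mean(axis=0)) / X.std(axis=0)
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)
            F = Z @ Vt[:n_factors].T                      # T x n_factors factor estimates
            # Direct regression of y_{t+h} on current factors and lags of y
            # (the dynamics of the factors themselves are left unspecified).
            rows, targets = [], []
            for t in range(n_lags - 1, len(y) - h):
                rows.append(np.r_[1.0, F[t], y[t - n_lags + 1:t + 1]])
                targets.append(y[t + h])
            beta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
            # Forecast from the last available observation.
            return np.r_[1.0, F[-1], y[-n_lags:]] @ beta

    With monthly data, diffusion_index_forecast(X, y, h=12) would then give a 12-month-ahead point forecast of the target series.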

    Assessing changes in the monetary transmission mechanism: a VAR approach

    Paper for a conference sponsored by the Federal Reserve Bank of New York entitled "Financial Innovation and Monetary Transmission." Keywords: Economic conditions - United States; Monetary policy

    Are More Data Always Better for Factor Analysis?

    Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and the composition of the data affect the factor estimates. In this paper, we ask whether it is possible to use more series to extract the factors and yet obtain factors that are less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is dominated in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Weighting the data by their properties when constructing the factors also leads to improved forecasts. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have stronger loadings on some groups of series than on others. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
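
    A minimal sketch of the kind of pre-screening described above, with hypothetical names and a simple correlation-based screen standing in for whatever criterion the paper actually uses: keep only the series most related to the forecast target, then extract factors from the smaller panel.

        import numpy as np

        def screened_factors(X, y, n_keep=40, n_factors=3):
            """Keep the n_keep series most correlated with the target, then extract PC factors."""
            Z = (X - X.mean(axis=0)) / X.std(axis=0)
            # Absolute sample correlation of each candidate series with the target y.
            corr = np.abs(Z.T @ ((y - y.mean()) / y.std())) / len(y)
            keep = np.argsort(corr)[::-1][:n_keep]        # indices of the screened series
            Zk = Z[:, keep]
            # Principal-component factors from the smaller, more targeted panel.
            _, _, Vt = np.linalg.svd(Zk, full_matrices=False)
            return Zk @ Vt[:n_factors].T, keep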

    DSGE Models in a Data-Rich Environment

    Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is summarized by a small number of data series. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models might be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set, and in particular of the latent factors, through the lens of a DSGE model. The estimation involves Markov chain Monte Carlo (MCMC) methods. We apply this estimation approach to a state-of-the-art DSGE monetary model. We find evidence of imperfect measurement of the model's theoretical concepts, in particular for inflation. We show that exploiting more information is important for accurate estimation of the model's concepts and shocks, and that it implies different conclusions about key structural parameters and the sources of economic fluctuations.
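
    A stylized version of the measurement setup the abstract describes, in our notation: a large vector of indicators X_t loads on the model's theoretical concepts S_t with measurement error, rather than each concept being equated with a single series, and the whole system is estimated jointly.

        X_t = \Lambda S_t + e_t                                   % measurement: many indicators per model concept
        S_t = G(\theta)\, S_{t-1} + H(\theta)\, \varepsilon_t     % transition: the solved DSGE model
        % The structural parameters \theta and the loadings \Lambda are then sampled jointly by MCMC.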

    Has Monetary Policy Become More Effective?

    Recent research provides evidence of important changes in the U.S. economic environment over the last 40 years. This appears to be associated with an alteration of the monetary transmission mechanism. In this paper we investigate the implications for the evolution of monetary policy effectiveness. Using an identified VAR over the pre- and post-1980 periods, we first provide evidence of a reduction in the effect of monetary policy shocks in the latter period. We then present and estimate a fully specified model that replicates well the dynamic response of output, inflation, and the federal funds rate to monetary policy shocks in both periods. Using the estimated structural model, we perform counterfactual experiments to determine the source of the observed change in the monetary transmission mechanism, as well as in the economy's response to supply and demand shocks. The main finding is that monetary policy has been more stabilizing in the recent past, both because of the way it has responded to shocks and because it has ruled out non-fundamental fluctuations.
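
    A minimal sketch of the pre-/post-1980 comparison under illustrative assumptions: a pandas DataFrame df with a DatetimeIndex and columns output, inflation, and fedfunds, and a standard recursive (Cholesky) identification with the funds rate ordered last; the column names and lag length are ours, not the paper's.

        import pandas as pd
        from statsmodels.tsa.api import VAR

        def policy_shock_irfs(df, split="1980-01-01", lags=4, horizon=20):
            """Impulse responses to a funds-rate shock, estimated separately on each subsample."""
            irfs = {}
            for label, sample in {"pre": df.loc[:split], "post": df.loc[split:]}.items():
                res = VAR(sample[["output", "inflation", "fedfunds"]]).fit(lags)
                # Orthogonalized responses of all three variables to the third (policy) shock.
                irfs[label] = res.irf(horizon).orth_irfs[:, :, 2]
            return irfs

    Plotting irfs["pre"] against irfs["post"] gives a direct visual check on whether the estimated effect of a policy shock has shrunk in the later sample.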

    Monetary Policy in a Data-Rich Environment

    Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Fed policymaking. We employ a factor-model approach, developed by Stock and Watson (1999a,b), that permits the systematic information in large data sets to be summarized by relatively few estimated factors. With this framework, we reconfirm Stock and Watson's result that the use of large data sets can improve forecast accuracy, and we show that this result does not seem to depend on the use of finally revised (as opposed to 'real-time') data. We estimate policy reaction functions for the Fed that take into account its data-rich environment and provide a test of the hypothesis that Fed actions are explained solely by its forecasts of inflation and real activity. Finally, we explore the possibility of developing an 'expert system' that could aggregate diverse information and provide benchmark policy settings.
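
    A minimal sketch of a factor-augmented reaction function in the spirit of the approach described above (the variable names and the plain OLS specification are ours): the funds rate is regressed on a handful of estimated factors summarizing the large panel.

        import numpy as np
        import statsmodels.api as sm

        def reaction_function(X_panel, fedfunds, n_factors=3):
            """Estimate r_t = a + b'F_t + e_t, with F_t principal-component factors of X_panel."""
            Z = (X_panel - X_panel.mean(axis=0)) / X_panel.std(axis=0)
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)
            F = Z @ Vt[:n_factors].T                      # factors summarizing the large data set
            return sm.OLS(fedfunds, sm.add_constant(F)).fit()

    Comparing the fit of this regression with one that uses only inflation and real-activity forecasts gives a simple version of the hypothesis test mentioned in the abstract.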