Confidence intervals in regressions with estimated factors and idiosyncratic components
This paper shows that HAC standard errors must be adjusted when constructing confidence intervals in regressions involving both the factors and idiosyncratic components estimated from a big dataset. This result contrasts with the seminal result of Bai and Ng (2006), where the assumption that √T/N → 0 is sufficient to eliminate the effect of estimation error, T and N being the time-series and cross-sectional dimensions. Simulations show vast improvements in the coverage rates of the adjusted confidence intervals over the unadjusted ones.
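As a rough illustration of the set-up (not the paper's adjustment), the sketch below simulates a factor panel, estimates the factor and idiosyncratic components by principal components, and reports standard unadjusted HAC (Newey-West) standard errors using statsmodels. The simulated data, lag choice and regression specification are illustrative assumptions only.

```python
# Illustrative only: unadjusted HAC standard errors in a regression on a
# PCA-estimated factor and one estimated idiosyncratic component.
# All data are simulated; the lag length and model are arbitrary choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, N = 200, 100

# Simulate a one-factor panel: x_it = lambda_i * f_t + e_it
f = rng.standard_normal(T)
lam = rng.standard_normal(N)
e = rng.standard_normal((T, N))
X = np.outer(f, lam) + e

# Principal components estimates of the factor, loadings and idiosyncratic terms
U, s, Vt = np.linalg.svd(X, full_matrices=False)
f_hat = np.sqrt(T) * U[:, 0]            # estimated factor (F'F/T = 1 normalisation)
lam_hat = X.T @ f_hat / T               # estimated loadings
e_hat = X - np.outer(f_hat, lam_hat)    # estimated idiosyncratic components

# Target depends on the factor and on variable 0's idiosyncratic component
y = 0.5 * f + 0.8 * e[:, 0] + rng.standard_normal(T)

Z = sm.add_constant(np.column_stack([f_hat, e_hat[:, 0]]))
res = sm.OLS(y, Z).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.summary())
```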
Revisiting targeted factors
This paper proposes new methods for 'targeting' factors estimated from a big dataset. We suggest that forecasts of economic variables can be improved by tuning factor estimates: (i) so that they are more relevant for a specific target variable; and (ii) so that variables with considerable idiosyncratic noise are down-weighted prior to factor estimation. Existing targeted factor methodologies are limited to estimating the factors with only one of these two objectives in mind. We therefore combine these ideas by providing new weighted principal components analysis (PCA) procedures and a targeted generalized PCA (TGPCA) procedure. These methods offer a flexible combination of both types of targeting that is new to the literature. We illustrate this empirically by forecasting a range of US macroeconomic variables, finding that our combined approach yields important improvements over competing methods, consistently surviving elimination in the model confidence set procedure.
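A minimal sketch of how the two targeting ideas can be combined, assuming simulated data and standard scikit-learn tools: the LASSO pre-selection and inverse-standard-deviation weighting below are simplified stand-ins for the paper's WPCA and TGPCA procedures, not their actual implementation.

```python
# Simplified sketch of the two targeting ideas combined (not the paper's exact
# WPCA/TGPCA estimators): (i) LASSO pre-selection of predictors relevant to the
# target, (ii) down-weighting noisy variables by an inverse-dispersion proxy
# before extracting principal components. All data are simulated.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
T, N = 200, 80
f = rng.standard_normal(T)
lam = rng.standard_normal(N)
noise_sd = rng.uniform(0.5, 3.0, N)            # heterogeneous idiosyncratic noise
X = np.outer(f, lam) + rng.standard_normal((T, N)) * noise_sd
y = 0.7 * f + rng.standard_normal(T)

# (i) Target the predictors: keep variables with nonzero LASSO coefficients
lasso = LassoCV(cv=5).fit(X, y)
keep = np.flatnonzero(lasso.coef_ != 0)
if keep.size == 0:                             # fall back to all variables
    keep = np.arange(N)

# (ii) Weight the selected variables by an inverse standard-deviation proxy
Xk = X[:, keep]
w = 1.0 / Xk.std(axis=0)
factors = PCA(n_components=1).fit_transform(Xk * w)

print(f"{keep.size} variables selected; corr(factor, y) = "
      f"{np.corrcoef(factors[:, 0], y)[0, 1]:.2f}")
```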
Testing Nowcast Monotonicity with Estimated Factors
This article proposes a test to determine whether 'big data' nowcasting methods, which have become an important tool for many public and private institutions, are monotonically improving as new information becomes available. The test is the first to formalize existing evaluation procedures from the nowcasting literature. We place particular emphasis on models involving estimated factors, since factor-based methods are a leading case in the high-dimensional empirical nowcasting literature, although our test is still applicable to small-dimensional set-ups like bridge equations and MIDAS models. Our approach extends a recent methodology for testing many moment inequalities to the case of nowcast monotonicity testing, which allows the number of inequalities to grow with the sample size. We provide results showing the conditions under which both parameter estimation error and factor estimation error can be accommodated in this high-dimensional setting when using the pseudo out-of-sample approach. The finite sample performance of our test is illustrated using a wide range of Monte Carlo simulations, and we conclude with an empirical application to nowcasting U.S. real gross domestic product (GDP) growth and five GDP sub-components. Our test results confirm monotonicity for all but one sub-component (government spending), suggesting that the factor-augmented model may be misspecified for this GDP constituent. Supplementary materials for this article are available online.
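The sketch below is a crude diagnostic in the spirit of nowcast monotonicity, not the article's moment-inequality test: it simply checks whether average squared nowcast errors weakly decline as the nowcast horizon shrinks, using simulated errors.

```python
# Crude illustration only (not the article's formal test): check whether average
# squared nowcast errors weakly decline as later, better-informed nowcasts
# replace earlier ones. errors[h] holds nowcast errors made h months before the
# release; all numbers are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 120
horizons = [3, 2, 1, 0]                        # months before the release
true_sd = {3: 1.0, 2: 0.8, 1: 0.6, 0: 0.5}     # errors shrink as information arrives
errors = {h: rng.standard_normal(n) * true_sd[h] for h in horizons}

mse = {h: np.mean(errors[h] ** 2) for h in horizons}
monotone = all(mse[a] >= mse[b] for a, b in zip(horizons, horizons[1:]))
for h in horizons:
    print(f"h = {h}: MSE = {mse[h]:.3f}")
print("weakly monotone improvement:", monotone)
```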
Testing Quantile Forecast Optimality
Quantile forecasts made across multiple horizons have become an important output of many financial institutions, central banks and international organisations. This paper proposes misspecification tests for such quantile forecasts that assess optimality over a set of multiple forecast horizons and/or quantiles. The tests build on multiple Mincer-Zarnowitz quantile regressions cast in a moment equality framework. Our main test is for the null hypothesis of autocalibration, a concept which assesses optimality with respect to the information contained in the forecasts themselves. We provide an extension that allows us to test for optimality with respect to larger information sets, as well as a multivariate extension. Importantly, our tests do not just inform about general violations of optimality, but may also provide useful insights into specific forms of sub-optimality. A simulation study investigates the finite sample performance of our tests, and two empirical applications to financial returns and U.S. macroeconomic series illustrate that our tests can yield interesting insights into quantile forecast sub-optimality and its causes.
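As a simplified illustration of the building block, the sketch below runs a single Mincer-Zarnowitz quantile regression with statsmodels and checks whether the intercept and slope are close to 0 and 1. The simulated forecasts and the single horizon/quantile are assumptions, and the paper's joint moment-equality tests are not reproduced.

```python
# Basic single-horizon Mincer-Zarnowitz quantile regression check (the paper's
# joint tests across horizons/quantiles are not reproduced): regress the
# realizations on the quantile forecasts and see whether the intercept and
# slope are close to 0 and 1. Forecasts and outcomes are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, tau = 500, 0.95
x = rng.standard_normal(n)                     # conditioning information
y = x + rng.standard_normal(n)                 # realizations, y | x ~ N(x, 1)
q_forecast = x + 1.645                         # ideal 95% quantile forecast

X = sm.add_constant(q_forecast)
res = sm.QuantReg(y, X).fit(q=tau)
print(res.params)                              # should be close to (0, 1)
print(res.conf_int(alpha=0.05))
```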
Replicating rockets and feathers
This paper revisits the literature on asymmetric adjustment in gasoline and diesel prices, also known as the 'rockets and feathers' hypothesis, to consider the issue of the replication of empirical research. We examine the notion of replication versus robustness proposed by Clemens (2017) and add to the literature with a further review of recent and historic work on replication in economics and other disciplines. We then focus on the rockets and feathers literature, finding that the majority of empirical work performs robustness checks rather than replication of earlier papers. We perform two contrasting replication case studies motivated by the ideas of misspecification analysis, dynamic specification, mark-up, pass-through and asymmetric adjustment. In the first case study we find that results are both replicable and robust, even when data specifications are not identical. However, in the second case study we find that the results using the original sample are overturned when reanalysing the problem with an improved model specification. Furthermore, when extending the sample with more recent data and using a more sophisticated method, asymmetry is detected in both petrol and diesel pricing, in contrast to the findings of the original study. Particular care must be taken in future rockets and feathers replications with regard to model specification and methodology.
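A stylized sketch of one common rockets-and-feathers regression, assuming simulated cost and retail price changes: pass-through coefficients are estimated separately for cost increases and decreases and their equality is tested. This is not a replication of either case study's specification.

```python
# Stylized sketch of one common "rockets and feathers" specification (not a
# replication of either case study): retail price changes are regressed on
# positive and negative cost changes separately, and equality of the two
# pass-through coefficients is tested. All series are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
d_cost = rng.standard_normal(n)
d_cost_pos = np.maximum(d_cost, 0.0)
d_cost_neg = np.minimum(d_cost, 0.0)
# Simulated asymmetry: cost rises passed through faster than cost falls
d_retail = 0.8 * d_cost_pos + 0.4 * d_cost_neg + 0.2 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([d_cost_pos, d_cost_neg]))
res = sm.OLS(d_retail, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.params)
print(res.f_test("x1 = x2"))                   # symmetry test: equal pass-through
```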
CO2 emissions and economic activity:a short-to-medium run perspective
This paper looks at the short-to-medium run impact of economic activity on CO2 emissions in the United States, shifting the existing focus away from the long-run Environmental Kuznets Curve (EKC). Our novel methodological approach combines discrete wavelet transforms with dynamic factor models. This allows us to (i) estimate economic and emissions cycles at different frequencies, and (ii) estimate economic activity from many different economic variables, rather than focussing on a small number as in existing studies. From our results, one might at first conclude that emissions are not linked to economic activity in the short run. However, when looking at the cycles uncovered at timescales of one to three years, we see that there are indeed strong linkages. Policymakers therefore cannot be exclusively long-termist when evaluating the impact of economic policy on the environment.
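A simplified sketch of the general idea, assuming simulated data, PyWavelets and scikit-learn: each series is decomposed with a stationary wavelet transform and a principal component is extracted at each scale as a crude frequency-specific activity cycle. The paper's dynamic factor models are replaced here by static PCA.

```python
# Simplified illustration (static PCA per scale rather than the paper's dynamic
# factor models): decompose each simulated series with a stationary wavelet
# transform, then extract one principal component from the detail coefficients
# at each scale as a crude "activity cycle" for that frequency band.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
T, N, level = 256, 30, 3
t = np.arange(T)
common = np.sin(2 * np.pi * t / 32)            # a common medium-frequency cycle
X = np.outer(common, rng.standard_normal(N)) + rng.standard_normal((T, N))

for lev in range(1, level + 1):
    # Detail coefficients at this level for every series (time-aligned with swt)
    details = np.column_stack(
        [pywt.swt(X[:, i], "db4", level=level)[level - lev][1] for i in range(N)]
    )
    cycle = PCA(n_components=1).fit_transform(details)[:, 0]
    print(f"scale {lev}: cycle std = {cycle.std():.2f}")
```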
Topics in forecasting with factor-augmented models.
This thesis makes three distinct contributions to the literature on factor-augmented models for forecasting economic time series using big datasets.
The first chapter extends Diebold-Mariano-West type tests of forecast accuracy to factor-augmented models where both the factors and the model coefficients are estimated in a rolling out-of-sample estimation procedure. This set-up poses new challenges, as neither the sign of the factors nor that of the factor-augmented model parameters is identified across different rolling windows. We propose a novel identification strategy which removes arbitrary sign-changing in the sequence of out-of-sample parameter estimates and allows us to establish the asymptotic normality of the Diebold-Mariano test statistic. We also propose a new bootstrap procedure for rolling factor estimates, as existing bootstrap methods cannot deal with the generated regressor structure of the factors.
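The sketch below illustrates the sign problem with one simple alignment rule (flip each rolling window's PCA factor to agree with the previous window's overlap) and a naive Diebold-Mariano statistic; it is not the chapter's identification strategy or bootstrap, and all data are simulated.

```python
# Sketch of one simple sign-alignment rule for rolling-window PCA factors plus a
# naive Diebold-Mariano statistic with no long-run variance correction (this is
# not the chapter's identification strategy or bootstrap). Simulated data.
import numpy as np

rng = np.random.default_rng(6)
T, N, window = 240, 50, 120
f = rng.standard_normal(T)
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))

prev_tail = None
aligned_last_obs = []                          # factor value at each window's end
for start in range(T - window):
    Xw = X[start:start + window]
    U, s, Vt = np.linalg.svd(Xw - Xw.mean(0), full_matrices=False)
    f_hat = np.sqrt(window) * U[:, 0]          # PCA factor, sign is arbitrary
    if prev_tail is not None and np.dot(f_hat[:-1], prev_tail) < 0:
        f_hat = -f_hat                         # flip to match the previous window
    prev_tail = f_hat[1:]
    aligned_last_obs.append(f_hat[-1])

# Naive Diebold-Mariano statistic on a simulated loss differential (loss1 - loss2)
d = rng.standard_normal(len(aligned_last_obs)) * 0.5
dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
print(f"{len(aligned_last_obs)} rolling windows aligned; DM statistic = {dm:.2f}")
```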
The second chapter provides consistent information criteria for the selection of forecasting models which use both the idiosyncratic and common factor components of a big dataset. This procedure differs from existing factor-augmented model selection techniques in that it depends on estimates of both the factors and the idiosyncratic components. We show that the combined estimation error vanishes at a slower rate than in the case of pure factor-augmented models in most standard economic forecasting scenarios, which makes existing information criteria inconsistent. We solve this problem by proposing modified information criteria which account for the additional source of estimation error.
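As a rough illustration, the sketch below compares candidate models that mix an estimated factor with estimated idiosyncratic components using plain BIC; the chapter's modified criteria are not reproduced, and the data and candidate set are illustrative assumptions.

```python
# Minimal sketch of information-criterion selection over models that use both
# estimated factors and estimated idiosyncratic components (the chapter's
# modified penalty is not reproduced; this is a plain BIC comparison on
# simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T, N = 200, 60
f = rng.standard_normal(T)
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
f_hat = np.sqrt(T) * U[:, 0]                   # PCA factor estimate
e_hat = X - np.outer(f_hat, X.T @ f_hat / T)   # estimated idiosyncratic components
y = 0.6 * f + 0.5 * e_hat[:, 2] + rng.standard_normal(T)

candidates = {
    "factor only": np.column_stack([f_hat]),
    "factor + e_2": np.column_stack([f_hat, e_hat[:, 2]]),
    "factor + e_2 + e_7": np.column_stack([f_hat, e_hat[:, 2], e_hat[:, 7]]),
}
for name, Z in candidates.items():
    bic = sm.OLS(y, sm.add_constant(Z)).fit().bic
    print(f"{name}: BIC = {bic:.1f}")
```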
The final chapter aims to improve factor-based forecasts by 'targeting' factor estimates with two objectives: (i) so that they are more relevant for a specific target variable, and (ii) so that variables with high levels of idiosyncratic noise are down-weighted prior to factor estimation. Existing targeted factor methodologies are only capable of estimating factors with one of these two objectives in mind. We suggest new Weighted Principal Components Analysis (WPCA) and Targeted Generalized PCA (TGPCA) procedures, both of which use LASSO-type pre-selection.
Nowcasting from Cross-Sectionally Dependent Panels
This paper builds a mixed-frequency panel data model for nowcasting economic variables across many countries. The model extends the mixed-frequency panel vector autoregression (MF-PVAR) to allow for heterogeneous coefficients and a multifactor error structure to model cross-sectional dependence. We propose a modified common correlated effects (CCE) estimation technique which performs well in simulations. The model is applied in two distinct settings: nowcasting gross domestic product (GDP) growth for a pool of advanced and emerging economies, and nowcasting inflation across many European countries. Our method is capable of beating standard benchmark models and can produce updated nowcasts whenever data releases occur in any country in the panel.
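A simplified single-frequency sketch of the CCE idea, assuming simulated data: each country regression is augmented with cross-sectional averages of the variables and unit-level slopes are averaged (CCE mean group). The paper's mixed-frequency MF-PVAR structure and its modified CCE estimator are not reproduced.

```python
# Simplified common correlated effects mean-group (CCE-MG) sketch at a single
# frequency. Each country regression is augmented with cross-sectional averages
# of y and x to absorb the unobserved common factor. All data are simulated and
# the slope is homogeneous here purely for simplicity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T, N_countries, beta = 120, 20, 0.6
f = rng.standard_normal(T)                     # unobserved common factor
x = rng.standard_normal((T, N_countries)) + np.outer(f, rng.standard_normal(N_countries))
y = (beta * x + np.outer(f, rng.standard_normal(N_countries))
     + 0.5 * rng.standard_normal((T, N_countries)))

y_bar, x_bar = y.mean(axis=1), x.mean(axis=1)  # cross-sectional averages
betas = []
for i in range(N_countries):
    Z = sm.add_constant(np.column_stack([x[:, i], y_bar, x_bar]))
    betas.append(sm.OLS(y[:, i], Z).fit().params[1])
print(f"CCE mean-group estimate of beta: {np.mean(betas):.2f} (true value {beta})")
```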
Dynamic persistence in the unemployment rate of OECD countries
Previous studies use a variety of increasingly advanced unit root tests to determine whether the Blanchard and Summers (1986) hysteresis theory of unemployment or the classical 'natural' rate theory of Friedman (1968) and Phelps (1967, 1968) is most relevant for a given country. However, these tests all specify a unit root under the null hypothesis against a stationary alternative, such as in the paper by Lee and Chang (2008), making the two theories of unemployment mutually exclusive over the sample period. This paper moves away from this dichotomy by allowing for switches between hysteresis and the natural rate theory using the recently developed test of Leybourne, Kim and Taylor (2007). We find that in countries like the United Kingdom, the natural rate theory is detected in the post-World War Two period of stabilisation: the time leading up to the seminal works of Friedman and Phelps. Hysteresis is found over the First World War and Great Depression periods, and in the period from the 1970s onwards, a time characterised by rising trade union power. We also compute numerical measures of persistence using grid-bootstrap estimates of the autoregressive parameter, following Hansen (1999).
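Neither the Leybourne, Kim and Taylor (2007) switching test nor Hansen's (1999) grid bootstrap is available in standard Python libraries, so the sketch below only runs a conventional ADF test and reports the sum of AR coefficients as a crude persistence measure for a simulated unemployment-rate series.

```python
# Basic illustration only: a conventional ADF unit root test plus the sum of AR
# coefficients as a crude persistence measure for a simulated, highly persistent
# "unemployment rate" series (the switching test and grid bootstrap used in the
# paper are not reproduced).
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(9)
T = 300
u = np.empty(T)
u[0] = 5.0
for t in range(1, T):                          # persistent AR(1) around 5%
    u[t] = 0.5 + 0.9 * u[t - 1] + 0.3 * rng.standard_normal()

adf_stat, pvalue, *_ = adfuller(u, maxlag=4, autolag=None)
ar_fit = AutoReg(u, lags=2, trend="c").fit()
persistence = ar_fit.params[1:].sum()          # sum of AR coefficients
print(f"ADF statistic = {adf_stat:.2f}, p-value = {pvalue:.3f}")
print(f"sum of AR coefficients = {persistence:.2f}")
```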