Confidence Intervals in High-Dimensional Regression Based on Regularized Pseudoinverses
In modern data sets, the number of available variables can greatly exceed the number of observations. In this paper we show how valid confidence intervals can be constructed by approximating the inverse covariance matrix by a scaled Moore-Penrose pseudoinverse, and using the lasso to perform a bias correction. In addition, we propose random least squares, a new regularization technique which yields narrower confidence intervals with the same theoretical validity. Random least squares estimates the inverse covariance matrix using multiple low-dimensional random projections of the data. This is shown to be equivalent to a generalized form of ridge regularization. The methods are illustrated in Monte Carlo experiments and an empirical example using quarterly data from the FRED-QD database, where gross domestic product is explained by a large number of macroeconomic and financial indicators.
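The random least squares construction lends itself to a compact numerical sketch. The snippet below illustrates the idea as described in the abstract; the function name, the projection dimension `k`, and the number of draws are illustrative choices, not the authors' implementation:

```python
import numpy as np

def random_least_squares_precision(X, n_draws=100, k=10, seed=0):
    """Approximate the inverse covariance of X by averaging inverses of
    low-dimensional random Gaussian projections (sketch from the abstract;
    assumes k does not exceed the rank of the sample covariance)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    S = X.T @ X / n                      # sample covariance (p x p)
    Theta = np.zeros((p, p))
    for _ in range(n_draws):
        R = rng.standard_normal((k, p))  # random projection matrix
        M = R @ S @ R.T                  # projected covariance (k x k)
        Theta += R.T @ np.linalg.inv(M) @ R  # lift the inverse back to p x p
    return Theta / n_draws               # average over projections
```

Averaging over many draws smooths the projection noise; per the abstract, the resulting estimator is equivalent to a generalized form of ridge regularization.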
Forecasting using Random Subspace Methods
Random subspace methods are a new approach to obtaining accurate forecasts in high-dimensional regression settings. Forecasts are constructed by averaging over forecasts from many submodels generated by random selection or random Gaussian weighting of predictors. This paper derives upper bounds on the asymptotic mean squared forecast error of these strategies, which show that the methods are particularly suitable for macroeconomic forecasting. An empirical application to the FRED-MD data confirms the theoretical findings, and shows random subspace methods to outperform competing methods on key macroeconomic indicators.
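A minimal sketch of the random-selection variant follows, assuming plain OLS submodels; the equal-weight average, the subset size `k`, and the number of submodels are illustrative:

```python
import numpy as np

def random_subspace_forecast(X, y, x_new, n_models=500, k=5, seed=0):
    """Average forecasts from OLS submodels fit on random subsets of the
    predictors. The Gaussian-weighting variant mentioned in the abstract
    would replace the column draw with a random projection of X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    forecasts = np.empty(n_models)
    for m in range(n_models):
        cols = rng.choice(p, size=k, replace=False)  # random predictor subset
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        forecasts[m] = x_new[cols] @ beta
    return forecasts.mean()                          # equal-weight average
```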
Inference on LATEs with covariates
In theory, two-stage least squares (TSLS) identifies a weighted average of covariate-specific local average treatment effects (LATEs) from a saturated specification without making parametric assumptions on how available covariates enter the model. In practice, TSLS is severely biased when saturation leads to a number of control dummies that is of the same order of magnitude as the sample size, and to the use of many, arguably weak, instruments. This paper derives asymptotically valid tests and confidence intervals for an estimand that identifies the weighted average of LATEs targeted by saturated TSLS, even when the number of control dummies and instrument interactions is large. The proposed inference procedure is robust against four key features of saturated economic data: treatment effect heterogeneity, covariates with rich support, weak identification strength, and conditional heteroskedasticity.
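For reference, the baseline estimator whose bias motivates the paper is textbook TSLS with controls W (e.g. saturated covariate dummies) and instruments Z. The sketch below shows that benchmark, not the paper's robust inference procedure:

```python
import numpy as np

def tsls(y, d, Z, W):
    """Textbook two-stage least squares: instrument the treatment d with Z,
    controlling for W. The pseudoinverse tolerates collinear saturated
    dummies; with many controls and weak instruments this estimator is
    exactly the biased benchmark the paper corrects."""
    X = np.column_stack([d, W])              # endogenous regressor + controls
    Q = np.column_stack([Z, W])              # instruments + controls
    P = Q @ np.linalg.pinv(Q.T @ Q) @ Q.T    # projection onto instrument space
    beta = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)
    return beta[0]                           # coefficient on the treatment
```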
Panel Forecasting with Asymmetric Grouping
This paper proposes an asymmetric grouping estimator for panel data forecasting. The estimator relies on the observation that the bias-variance trade-off in potentially heterogeneous panel data may differ across individuals. Hence, the group of individuals used for parameter estimation that is optimal in terms of forecast accuracy may be different for each individual. For a specific individual, the estimator uses cross-validation to evaluate the bias-variance trade-off of all candidate groupings, and uses the parameter estimates of the optimal grouping to produce the individual-specific forecast. Integer programming and screening methods deal with the combinatorial problem of a large number of individuals. A simulation study and an application to market leverage forecasts of U.S. firms demonstrate the promising performance of our new estimator.
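The core selection step can be sketched as follows, with a simple holdout standing in for full cross-validation and exhaustive search standing in for the integer-programming and screening machinery; all names are illustrative:

```python
import numpy as np
from itertools import combinations

def asymmetric_group_forecast(panels, target, x_new):
    """Sketch of asymmetric grouping for one target individual: hold out the
    target's last observation, pool OLS over every candidate group containing
    the target, and forecast with the group that best predicts the held-out
    point. Exhaustive search over 2^(N-1) groups is only viable for small N;
    panels maps each individual id to a (y, X) pair."""
    y_t, X_t = panels[target]
    train = dict(panels)
    train[target] = (y_t[:-1], X_t[:-1])     # hold out last target observation
    y_val, x_val = y_t[-1], X_t[-1]
    others = [i for i in panels if i != target]
    best_err, best_beta = np.inf, None
    for size in range(len(others) + 1):
        for extra in combinations(others, size):
            group = (target,) + extra
            X = np.vstack([train[i][1] for i in group])
            y = np.concatenate([train[i][0] for i in group])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            err = (y_val - x_val @ beta) ** 2   # holdout forecast error
            if err < best_err:
                best_err, best_beta = err, beta
    return x_new @ best_beta
```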
Bayesian Forecasting in Economics and Finance: A Modern Review
The Bayesian statistical paradigm provides a principled and coherent approach to probabilistic forecasting. Uncertainty about all unknowns that characterize any forecasting problem -- model, parameters, latent states -- can be quantified explicitly, and factored into the forecast distribution via the process of integration or averaging. Allied with the elegance of the method, Bayesian forecasting is now underpinned by the burgeoning field of Bayesian computation, which enables Bayesian forecasts to be produced for virtually any problem, no matter how large or complex. The current state of play in Bayesian forecasting in economics and finance is the subject of this review. The aim is to provide the reader with an overview of modern approaches to the field, set in some historical context, and with sufficient computational detail to assist the reader with implementation.
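As a toy illustration of the forecast averaging the review describes, consider a zero-mean AR(1) with known innovation variance and a conjugate normal prior on the AR coefficient; the sketch simulates the one-step forecast distribution by drawing parameters from the posterior. It is a textbook example, not code from the review:

```python
import numpy as np

def bayesian_ar1_forecast(y, prior_var=1.0, sigma2=1.0, n_draws=10_000, seed=0):
    """One-step posterior-predictive draws for y[t] = phi * y[t-1] + eps,
    eps ~ N(0, sigma2) known, prior phi ~ N(0, prior_var): the forecast
    distribution integrates over the (conjugate normal) posterior of phi
    by Monte Carlo."""
    rng = np.random.default_rng(seed)
    x, z = y[:-1], y[1:]                             # lagged and current values
    post_var = 1.0 / (1.0 / prior_var + x @ x / sigma2)
    post_mean = post_var * (x @ z / sigma2)
    phi = rng.normal(post_mean, np.sqrt(post_var), n_draws)  # posterior draws
    eps = rng.normal(0.0, np.sqrt(sigma2), n_draws)          # future shocks
    return phi * y[-1] + eps           # draws from the forecast distribution
```

Summaries of the returned draws (mean, quantiles) give point forecasts and forecast intervals that reflect both parameter and shock uncertainty.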
What Do Professional Forecasters Actually Predict?
In this paper we study which components of economic time series professional forecasters actually predict. We use spectral analysis and state space modeling to decompose economic time series into a trend, a business-cycle, and an irregular component. To examine which components are captured by professional forecasters, we regress their forecasts on the estimated components extracted from both the spectral analysis and the state space model. For both decomposition methods we find that the Survey of Professional Forecasters can predict almost all variation in the time series due to the trend and the business cycle, but that the forecasts contain little information about the variation in the irregular component.
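A minimal version of the spectral step can be sketched with an FFT band-pass, using the conventional 6-32 quarter business-cycle band; the paper's exact filter and its state space decomposition differ:

```python
import numpy as np

def spectral_components(y, low=6, high=32):
    """Split a quarterly series into trend (periods > high quarters),
    business-cycle (periods between low and high quarters), and irregular
    (periods < low quarters) components by masking frequency bands of the
    FFT; the three components sum back to the original series."""
    n = len(y)
    freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per quarter
    fy = np.fft.rfft(y)
    periods = np.full_like(freqs, np.inf)    # zero frequency -> infinite period
    periods[1:] = 1.0 / freqs[1:]            # period in quarters
    def band(mask):
        return np.fft.irfft(np.where(mask, fy, 0.0), n)
    trend = band(periods > high)
    cycle = band((periods >= low) & (periods <= high))
    irregular = band(periods < low)
    return trend, cycle, irregular

# The paper's second step then regresses survey forecasts on the components:
# beta, *_ = np.linalg.lstsq(np.column_stack([trend, cycle, irregular]),
#                            forecasts, rcond=None)
```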