
    Improved inference in financial factor models

    Conditional heteroskedasticity of the error terms is a common occurrence in financial factor models, such as the CAPM and the Fama-French factor models. This feature necessitates the use of heteroskedasticity-consistent (HC) standard errors to make valid inference for regression coefficients. In this paper, we show that using weighted least squares (WLS) or adaptive least squares (ALS) to estimate model parameters generally leads to smaller HC standard errors than ordinary least squares (OLS), which translates into improved inference in the form of shorter confidence intervals and more powerful hypothesis tests. In an extensive empirical analysis based on historical stock returns and commonly used factors, we find that conditional heteroskedasticity is pronounced and that WLS and ALS can dramatically shorten confidence intervals compared to OLS, especially during times of financial turmoil.
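
    As a rough illustration of the mechanism described above, the sketch below fits the same one-factor regression by OLS and by feasible WLS and compares their HC standard errors. The simulated data, the HC3 flavor, and the log-squared-residual variance model are illustrative assumptions; the paper's ALS estimator is not reproduced here.

```python
# Minimal sketch: HC standard errors under OLS vs. feasible WLS in a
# one-factor model with conditional heteroskedasticity (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)                      # factor return
sigma = 0.5 + 1.5 * np.abs(x)                   # error sd depends on the factor
y = 0.1 + 1.2 * x + sigma * rng.standard_normal(n)
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit(cov_type="HC3")          # OLS with HC standard errors

# Feasible WLS: regress log squared residuals on |x| to estimate the
# skedastic function, then weight by the inverse of the fitted variance.
logr2 = np.log(ols.resid ** 2)
aux = sm.OLS(logr2, sm.add_constant(np.abs(x))).fit()
w = 1.0 / np.exp(aux.fittedvalues)
wls = sm.WLS(y, X, weights=w).fit(cov_type="HC3")

print("HC3 se (OLS):", ols.bse)                 # WLS se's are typically smaller
print("HC3 se (WLS):", wls.bse)
```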

    Factor-mimicking portfolios for climate risk

    We propose and implement a procedure to optimally hedge climate change risk. First, we construct climate risk indices through textual analysis of newspapers. Second, we present a new approach to computing factor-mimicking portfolios that builds climate risk hedge portfolios. The new mimicking-portfolio approach is much more efficient than traditional sorting or maximum-correlation approaches because it exploits recent methodologies for estimating large-dimensional covariance matrices in short samples. In an extensive empirical out-of-sample performance test, we demonstrate its superior all-around performance, delivering markedly higher and statistically significant alphas and betas with respect to the climate risk indices.
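
    A minimal sketch of the covariance-based mimicking idea, assuming hypothetical inputs R (a T x N matrix of asset returns) and f (a climate index series): the weights minimize the tracking-error variance, with Ledoit-Wolf linear shrinkage standing in for the large-dimensional covariance estimators used in the paper.

```python
# Sketch of a covariance-based factor-mimicking portfolio with a shrunk
# covariance matrix. Inputs R and f below are simulated placeholders.
import numpy as np
from sklearn.covariance import LedoitWolf

def mimicking_weights(R, f):
    """Weights w minimizing Var(f - R @ w), up to scaling."""
    Sigma = LedoitWolf().fit(R).covariance_     # shrunk N x N covariance
    cov_Rf = (R - R.mean(0)).T @ (f - f.mean()) / (len(f) - 1)
    w = np.linalg.solve(Sigma, cov_Rf)
    return w / np.abs(w).sum()                  # normalize gross exposure to 1

rng = np.random.default_rng(1)
R = rng.standard_normal((250, 50)) * 0.02       # 250 days, 50 assets
f = R[:, :5].mean(1) + 0.01 * rng.standard_normal(250)
w = mimicking_weights(R, f)
print(np.corrcoef(R @ w, f)[0, 1])              # hedge portfolio tracks the index
```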

    Large dynamic covariance matrices: Enhancements based on intraday data

    Multivariate GARCH models do not perform well in large dimensions due to the so-called curse of dimensionality. The recent DCC-NL model of Engle et al. (2019) is able to overcome this curse via nonlinear shrinkage estimation of the unconditional correlation matrix. In this paper, we show how performance can be increased further by using open/high/low/close (OHLC) price data instead of daily returns alone. A key innovation, which improves the modeling not only of dynamic variances but also of dynamic correlations, is the concept of a regularized return, obtained from a volatility proxy in conjunction with a smoothed sign of the observed return.
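
    The following sketch illustrates one plausible reading of a regularized return: an OHLC volatility proxy paired with a smoothed sign of the observed return. Both the Garman-Klass proxy and the tanh smoother (with bandwidth h) are illustrative choices; the paper's exact construction may differ.

```python
# Sketch of a "regularized return": OHLC volatility proxy times a smoothed
# sign of the daily return. Garman-Klass and tanh are illustrative choices.
import numpy as np

def garman_klass(o, h, l, c):
    """Daily variance proxy from open/high/low/close prices."""
    return 0.5 * np.log(h / l) ** 2 - (2 * np.log(2) - 1) * np.log(c / o) ** 2

def regularized_return(o, h, l, c, bandwidth=0.005):
    r = np.log(c / o)                            # observed daily return
    vol = np.sqrt(garman_klass(o, h, l, c))      # OHLC volatility proxy
    return vol * np.tanh(r / bandwidth)          # proxy x smoothed sign
```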

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
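
    In this framing, an NSE can be summarized as the dispersion of point estimates across teams analyzing the same data. A minimal sketch, using a hypothetical array of per-team effect-size estimates:

```python
# Sketch: a non-standard error as across-team dispersion of estimates.
# The `estimates` array of per-team effect sizes is hypothetical.
import numpy as np

estimates = np.array([0.12, 0.08, 0.15, -0.02, 0.10, 0.09, 0.21, 0.05])
nse = estimates.std(ddof=1)                      # across-team dispersion
print(f"median estimate {np.median(estimates):.3f}, NSE {nse:.3f}")
```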

    Oops! I Shrunk the Sample Covariance Matrix Again: Blockbuster Meets Shrinkage

    Existing shrinkage techniques struggle to model the covariance matrix of asset returns in the presence of multiple asset classes. Therefore, we introduce a Blockbuster shrinkage estimator that clusters the covariance matrix accordingly. Besides defining and deriving a new asymptotically optimal linear shrinkage estimator, we propose an adaptive Blockbuster algorithm that clusters the covariance matrix even if the asset classes, and their number, are unknown and change over time. It displays superior all-around performance on historical data against a variety of state-of-the-art linear shrinkage competitors. Additionally, we find that for small- and medium-sized investment universes the proposed estimator outperforms even recent nonlinear shrinkage techniques. Hence, this new estimator can be used to deliver more efficient portfolio selection and detection of anomalies in the cross-section of asset returns. Furthermore, due to its general structure, the Blockbuster shrinkage estimator is not restricted to financial applications.
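
    A toy sketch of block-structured shrinkage in this spirit: cluster assets on their correlations, then shrink the sample correlation matrix toward a target that is constant within and between blocks. The hierarchical clustering and the fixed intensity delta are illustrative simplifications, not the paper's adaptive algorithm or optimal intensity.

```python
# Sketch: shrink a sample covariance toward a block-constant correlation
# target obtained by clustering assets. Illustrative simplification only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def block_shrink(R, n_blocks=3, delta=0.5):
    S = np.cov(R, rowvar=False)
    sd = np.sqrt(np.diag(S))
    C = S / np.outer(sd, sd)                     # sample correlation matrix
    dist = squareform(1 - C, checks=False)       # correlation distance
    labels = fcluster(linkage(dist, "average"), n_blocks, criterion="maxclust")
    T = np.empty_like(C)
    for i in np.unique(labels):
        for j in np.unique(labels):
            mask = np.outer(labels == i, labels == j)
            T[mask] = C[mask].mean()             # block-average correlation
    np.fill_diagonal(T, 1.0)
    C_shrunk = delta * T + (1 - delta) * C       # linear shrinkage toward target
    return C_shrunk * np.outer(sd, sd)
```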

    Subsampled Factor Models for Asset Pricing: The Rise of Vasa

    We propose a new method, VASA, based on variable subsample aggregation of model predictions for equity returns, using a large-dimensional set of factors. To demonstrate the effectiveness, robustness, and dimension-reduction power of VASA, we perform a comparative analysis against state-of-the-art machine learning algorithms. As a performance measure, we explore not only the global predictive R2 but also the stock-specific R2's and their distribution. While the global R2 indicates average forecasting accuracy, we find that high variability in the stock-specific R2's can be detrimental to portfolio performance, due to the higher prediction risk. Since VASA shows minimal variability, portfolios formed on this method outperform portfolios based on more complicated methods like random forests and neural nets.
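
    A minimal sketch of variable subsample aggregation as described: fit many simple base models, each on a random subset of the factors, and average their predictions. The ridge base learners and the subset size k are illustrative assumptions; the paper's exact VASA configuration may differ.

```python
# Sketch of variable-subsample aggregation: average predictions of many
# simple models, each trained on a random subset of factors (columns).
import numpy as np
from sklearn.linear_model import Ridge

def vasa_predict(X_train, y_train, X_test, n_models=100, k=10, seed=0):
    rng = np.random.default_rng(seed)
    p = X_train.shape[1]
    preds = np.zeros(len(X_test))
    for _ in range(n_models):
        cols = rng.choice(p, size=k, replace=False)  # random factor subset
        model = Ridge().fit(X_train[:, cols], y_train)
        preds += model.predict(X_test[:, cols])
    return preds / n_models                      # aggregate by averaging
```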