
    Diversification and the Optimal Construction of Basis Portfolios

    Nontrivial diversification possibilities arise when a factor model describes security returns. In this paper, we provide a comprehensive examination of the merits of various strategies for constructing basis portfolios that are, in principle, highly correlated with the common factors underlying security returns. Three main conclusions emerge from our study. First, increasing the number of securities included in the analysis dramatically improves basis portfolio performance. Our results indicate that factor models involving 750 securities provide markedly superior performance to those involving 30 or 250 securities. Second, comparatively efficient estimation procedures such as maximum likelihood and restricted maximum likelihood factor analysis (which imposes the APT mean restriction) significantly outperform the less efficient instrumental variables and principal components procedures that have been proposed in the literature. Third, a variant of the usual Fama-MacBeth portfolio formation procedure, which we call the minimum idiosyncratic risk portfolio formation procedure, outperformed the Fama-MacBeth procedure and proved equal to or better than more expensive quadratic programming procedures.
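    The weighting scheme behind such basis portfolios can be made concrete with a small sketch. The code below is an illustrative assumption, not the exact estimators compared in the paper: it extracts k factor loadings by principal components from a returns matrix R and then forms portfolio weights by cross-sectional GLS with inverse idiosyncratic-variance weighting, in the spirit of the minimum idiosyncratic risk procedure described above. The function name and the choice of principal components are hypothetical.

```python
# Illustrative sketch: basis portfolios from an estimated factor model.
# R is a T x N matrix of security returns; k is the number of factors.
import numpy as np

def basis_portfolio_weights(R, k):
    """Return an N x k matrix of weights, one basis portfolio per factor."""
    T, N = R.shape
    X = R - R.mean(axis=0)                       # de-mean returns
    cov = X.T @ X / T
    eigval, eigvec = np.linalg.eigh(cov)
    B = eigvec[:, -k:]                           # N x k loadings (top-k eigenvectors)
    F = X @ B                                    # factor scores, T x k
    resid = X - F @ np.linalg.lstsq(F, X, rcond=None)[0]
    D = np.maximum(resid.var(axis=0), 1e-8)      # idiosyncratic variances
    Dinv = 1.0 / D
    # Cross-sectional GLS weights W = D^{-1} B (B' D^{-1} B)^{-1}, so W'B = I_k:
    # each basis portfolio has unit exposure to its own factor and zero to the rest,
    # with exposure bought at minimum idiosyncratic risk.
    W = (Dinv[:, None] * B) @ np.linalg.inv((B.T * Dinv) @ B)
    return W
```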

    On the Computational Complexity of MCMC-based Estimators in Large Samples

    In this paper we examine the implications of statistical large sample theory for the computational complexity of Bayesian and quasi-Bayesian estimation carried out using Metropolis random walks. Our analysis is motivated by the Laplace-Bernstein-Von Mises central limit theorem, which states that in large samples the posterior or quasi-posterior approaches a normal density. Using the conditions required for the central limit theorem to hold, we establish polynomial bounds on the computational complexity of general Metropolis random walk methods in large samples. Our analysis covers cases where the underlying log-likelihood or extremum criterion function is possibly non-concave, discontinuous, and of increasing parameter dimension. However, the central limit theorem restricts the deviations from continuity and log-concavity of the log-likelihood or extremum criterion function in a very specific manner. Under the minimal assumptions required for the central limit theorem to hold with increasing parameter dimension, we show that the Metropolis algorithm is theoretically efficient even for the canonical Gaussian walk, which is studied in detail. Specifically, we show that the running time of the algorithm in large samples is bounded in probability by a polynomial in the parameter dimension d and, in particular, is of stochastic order d^2 in the leading cases after the burn-in period. We then give applications to exponential families, curved exponential families, and Z-estimation of increasing dimension. Comment: 36 pages, 2 figures
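    For readers unfamiliar with the sampler analyzed in the abstract, the following is a minimal sketch of a canonical Gaussian random-walk Metropolis algorithm. The target log-density, step size, and burn-in length are illustrative assumptions, not quantities taken from the paper.

```python
# Minimal sketch of Gaussian random-walk Metropolis sampling (illustrative only).
import numpy as np

def gaussian_rw_metropolis(log_post, x0, n_steps, step=0.1, rng=None):
    """Sample from a density proportional to exp(log_post) using a
    spherical Gaussian random-walk proposal with scale `step`."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)   # Gaussian walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Usage: an approximately normal (quasi-)posterior in d dimensions, matching the
# large-sample regime discussed above; step scale 2.4/sqrt(d) is a common choice.
d = 10
chain = gaussian_rw_metropolis(lambda x: -0.5 * np.sum(x**2), np.zeros(d),
                               n_steps=20_000, step=2.4 / np.sqrt(d))
posterior_mean = chain[5_000:].mean(axis=0)              # discard burn-in draws
```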