
    Bounds on Portfolio Quality

    The signal-noise ratio of a portfolio of p assets, its expected return divided by its risk, is couched as an estimation problem on the sphere. When the portfolio is built using noisy data, the expected value of the signal-noise ratio is bounded from above via a Cramér–Rao bound, for the case of Gaussian returns. The bound holds for `biased' estimators, thus there appears to be no bias-variance tradeoff for the problem of maximizing the signal-noise ratio. An approximate distribution of the signal-noise ratio for the Markowitz portfolio is given, and shown to be fairly accurate via Monte Carlo simulations, for Gaussian returns as well as more exotic returns distributions. These findings imply that if the maximal population signal-noise ratio grows slower than the universe size to the 1/4 power, there may be no diversification benefit; rather, the expected signal-noise ratio can decrease with additional assets. As a practical matter, this may explain why the Markowitz portfolio is typically applied to small asset universes. Finally, the theorem is expanded to cover more general models of returns and trading schemes, including the conditional expectation case where mean returns are linear in some observable features, subspace constraints (i.e., dimensionality reduction), and hedging constraints.
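    The Monte Carlo experiment described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: the universe size, sample length, mean, and covariance are all invented for the example. The sample Markowitz weights are computed from noisy data, and the achieved signal-noise ratio is evaluated under the population law, where it is necessarily below the population maximum:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: p assets, n daily observations of Gaussian returns.
    p, n, n_trials = 10, 252, 2000
    mu = np.full(p, 1e-3)            # assumed population mean returns
    Sigma = (2e-2) ** 2 * np.eye(p)  # assumed population covariance

    # Maximal population signal-noise ratio: sqrt(mu' Sigma^{-1} mu).
    snr_max = np.sqrt(mu @ np.linalg.solve(Sigma, mu))

    achieved = []
    for _ in range(n_trials):
        X = rng.multivariate_normal(mu, Sigma, size=n)
        # Sample Markowitz weights from the noisy data.
        w = np.linalg.solve(np.cov(X, rowvar=False), X.mean(axis=0))
        # Signal-noise ratio of the estimated portfolio under the population law.
        achieved.append((w @ mu) / np.sqrt(w @ Sigma @ w))

    print(f"population SNR: {snr_max:.4f}, "
          f"mean achieved SNR: {np.mean(achieved):.4f}")
    ```

    Increasing p while holding the population SNR fixed widens the gap between the two printed numbers, which is the diversification effect the abstract warns about.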

    A Fresh Look at the Bayesian Bounds of the Weiss-Weinstein Family

    Lower bounds on the mean square error (MSE) are generally used in order to predict the best achievable performance of an estimator for a given observation model. In this paper, we are interested in the Bayesian bounds of the Weiss–Weinstein family. This family includes the Bayesian Cramér–Rao bound, the Bobrovsky–Mayer-Wolf–Zakaï bound, the Bayesian Bhattacharyya bound, the Bobrovsky–Zakaï bound, the Reuven–Messer bound, and the Weiss–Weinstein bound. We present a unification of all these lower bounds based on a rewriting of the minimum mean square error (MMSE) estimator and on a constrained optimization problem. With this approach, we obtain a useful theoretical framework to derive new Bayesian bounds. For that purpose, we propose two bounds. First, we propose a generalization of the Bayesian Bhattacharyya bound extending the works of Bobrovsky, Mayer-Wolf, and Zakaï. Second, we propose a bound based on the Bayesian Bhattacharyya bound and on the Reuven–Messer bound, representing a generalization of these bounds. The proposed bound is the Bayesian extension of the deterministic Abel bound and is found to be tighter than the Bayesian Bhattacharyya bound, the Reuven–Messer bound, the Bobrovsky–Zakaï bound, and the Bayesian Cramér–Rao bound. We propose some closed-form expressions of these bounds for a general Gaussian observation model with parameterized mean. In order to illustrate our results, we present simulation results in the context of a spectral analysis problem.
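    The role of a Bayesian MSE bound can be illustrated with a toy sketch, not taken from the paper: a scalar linear-Gaussian model (invented parameters) in which theta has a Gaussian prior and each observation is theta plus Gaussian noise. Here the Bayesian Cramér–Rao bound is the inverse of the total (prior plus data) Fisher information, and the posterior mean, which is the MMSE estimator, attains it exactly, so a Monte Carlo MSE estimate should match the bound:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical scalar model: theta ~ N(0, s0^2), x_i | theta ~ N(theta, s^2).
    s0, s, n, n_trials = 1.0, 0.5, 10, 50_000

    # Bayesian Cramér–Rao bound: inverse of prior-plus-data Fisher information.
    bcrb = 1.0 / (n / s**2 + 1.0 / s0**2)

    # Monte Carlo MSE of the posterior mean (the MMSE estimator in this model).
    theta = rng.normal(0.0, s0, n_trials)
    x = rng.normal(theta[:, None], s, (n_trials, n))
    shrink = (n / s**2) / (n / s**2 + 1.0 / s0**2)  # shrinkage toward prior mean 0
    theta_hat = shrink * x.mean(axis=1)
    mse = np.mean((theta_hat - theta) ** 2)

    print(f"BCRB: {bcrb:.5f}, Monte Carlo MSE of posterior mean: {mse:.5f}")
    ```

    For nonlinear or non-Gaussian models the bound is generally not attained, which is why the tighter members of the Weiss–Weinstein family discussed in the abstract are of interest.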