
    Robust Modeling Using Non-Elliptically Contoured Multivariate t Distributions

    Models based on multivariate t distributions are widely applied to analyze data with heavy tails. However, all the marginal distributions of the multivariate t distribution are restricted to have the same degrees of freedom, so these models cannot describe different marginal heavy-tailedness. We generalize the traditional multivariate t distributions to non-elliptically contoured multivariate t distributions, allowing for different marginal degrees of freedom. We apply the non-elliptically contoured multivariate t distributions to three widely used models: the Heckman selection model with different degrees of freedom for the selection and outcome equations, the multivariate Robit model with different degrees of freedom for the marginal responses, and the linear mixed-effects model with different degrees of freedom for the random effects and within-subject errors. Based on the normal mixture representation of our t distribution, we propose efficient Bayesian inferential procedures for the model parameters using data augmentation and parameter expansion. We show via simulation studies and real examples that the conclusions are sensitive to the presence of different marginal heavy-tailedness.
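
    The normal mixture representation that the abstract relies on suggests a direct way to simulate such vectors: draw a Gaussian vector and divide each coordinate by its own chi-square mixing variable. The sketch below is a minimal illustration under that assumption, not the authors' code; the function name sample_noncontoured_t, the independent per-coordinate mixing variables, and the example Sigma and nu values are all assumptions.

```python
# Minimal sketch (assumption: independent per-coordinate chi-square mixing
# variables applied to a common Gaussian vector, giving marginals with
# different degrees of freedom and a non-elliptical joint distribution).
import numpy as np

def sample_noncontoured_t(n, Sigma, nu, seed=None):
    """Draw n vectors whose j-th marginal is t with nu[j] degrees of freedom."""
    rng = np.random.default_rng(seed)
    Sigma = np.asarray(Sigma, dtype=float)
    nu = np.asarray(nu, dtype=float)
    p = Sigma.shape[0]
    z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)  # Gaussian part
    w = rng.chisquare(nu, size=(n, p)) / nu                   # per-coordinate mixing
    return z / np.sqrt(w)                                     # coordinate-wise scaling

# Example: two heavy-tailed margins (nu = 3) and one near-Gaussian margin (nu = 30).
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
X = sample_noncontoured_t(10_000, Sigma, nu=[3, 3, 30], seed=0)
print(X.std(axis=0))   # margins with smaller nu show visibly larger sample spread
```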

    Optimal Linear Shrinkage Estimator for Large Dimensional Precision Matrix

    In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics in which the number of variables p → ∞ and the sample size n → ∞ so that p/n → c ∈ (0, +∞). The precision matrix is estimated directly, without inverting the corresponding estimator of the covariance matrix. Recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and to estimate them consistently. The resulting distribution-free estimator almost surely attains the minimum Frobenius loss. Additionally, we prove that the Frobenius norms of the inverse and of the pseudo-inverse sample covariance matrices tend almost surely to deterministic quantities and estimate them consistently. Finally, a simulation is provided in which the suggested estimator is compared with estimators for the precision matrix proposed in the literature. The optimal shrinkage estimator shows significant improvement and robustness even for non-normally distributed data. Comment: 26 pages, 5 figures. This version includes the case c > 1 with the generalized inverse of the sample covariance matrix. The abstract was updated accordingly.
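
    As a toy illustration of the estimator's general form, the sketch below combines the (pseudo-)inverse sample covariance with the identity, α·pinv(S) + β·I, and picks the intensities that minimize the Frobenius loss against a known precision matrix. This oracle version is my own illustration, not the paper's estimator; the paper's consistent, data-driven estimates of the optimal intensities are not reproduced here, and oracle_linear_shrinkage and the simulation settings are assumptions.

```python
# Oracle linear shrinkage for the precision matrix: least squares in the
# two-dimensional span {pinv(S), I} under the Frobenius inner product.
import numpy as np

def oracle_linear_shrinkage(S, Pi_true):
    """Oracle (alpha, beta) minimizing ||alpha*pinv(S) + beta*I - Pi_true||_F."""
    p = S.shape[0]
    Sinv = np.linalg.pinv(S)                      # works for p > n as well (c > 1)
    G = np.array([[np.sum(Sinv * Sinv), np.trace(Sinv)],
                  [np.trace(Sinv),      float(p)]])
    b = np.array([np.sum(Sinv * Pi_true), np.trace(Pi_true)])
    alpha, beta = np.linalg.solve(G, b)           # normal equations for (alpha, beta)
    return alpha * Sinv + beta * np.eye(p), (alpha, beta)

rng = np.random.default_rng(0)
p, n = 100, 80                                    # p/n = 1.25, so S is singular
Pi_true = np.eye(p)                               # true precision matrix (known here)
X = rng.standard_normal((n, p))
S = X.T @ X / n                                   # sample covariance matrix
Pi_hat, (alpha, beta) = oracle_linear_shrinkage(S, Pi_true)
print(alpha, beta, np.linalg.norm(Pi_hat - Pi_true))
```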

    Stein Estimation for Spherically Symmetric Distributions: Recent Developments

    This paper reviews advances in Stein-type shrinkage estimation for spherically symmetric distributions. Some emphasis is placed on developing intuition as to why shrinkage should work in location problems whether the underlying population is normal or not. Considerable attention is devoted to generalizing the "Stein lemma" which underlies much of the theoretical development of improved minimax estimation for spherically symmetric distributions. A main focus is on distributional robustness results in cases where a residual vector is available to estimate an unknown scale parameter and, in particular, on finding estimators which are simultaneously generalized Bayes and minimax over large classes of spherically symmetric distributions. Some attention is also given to the problem of estimating a location vector restricted to lie in a polyhedral cone. Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/10-STS323.
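
    A minimal sketch of the kind of estimator this literature studies: the James-Stein form with the scale estimated from a residual vector, (1 - a*||U||^2/||X||^2)*X. The shrinkage constant a = (p - 2)/(k + 2) and the simulation settings below are assumptions taken from the classical robust Stein results, not code from the paper; a positive-part variant would additionally clip the shrinkage factor at zero.

```python
# Stein-type shrinkage toward the origin with a residual-based scale estimate.
import numpy as np

def stein_estimator(x, u):
    """Shrink the location estimate x toward 0 using the residual vector u.

    x : observed location vector (dimension p >= 3)
    u : residual vector of dimension k, standing in for a known scale
    """
    p, k = x.size, u.size
    a = (p - 2) / (k + 2)                      # classical shrinkage constant
    shrink = 1.0 - a * (u @ u) / (x @ x)       # data-dependent shrinkage factor
    return shrink * x

rng = np.random.default_rng(1)
theta = np.full(10, 0.5)                       # true mean, p = 10
sigma = 2.0
x = theta + sigma * rng.standard_normal(10)    # observation
u = sigma * rng.standard_normal(15)            # residual vector, k = 15
# Squared errors for one draw; comparing risks requires averaging over replications.
print(np.sum((stein_estimator(x, u) - theta) ** 2),
      np.sum((x - theta) ** 2))
```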

    Reversing the Stein Effect

    The Reverse Stein Effect is identified and illustrated: a statistician who shrinks the data toward a point chosen not from reliable knowledge about the underlying value of the parameter but from the observed data themselves is not protected by the minimax property of shrinkage estimators such as that of James and Stein, and will likely incur a greater error than if no shrinkage were used. Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/09-STS278.
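
    The sketch below is a toy simulation of my own making, not from the paper, illustrating the phenomenon described: shrinking toward a fixed target preserves the James-Stein guarantee, while shrinking toward a target extracted from the same data (here, the observation rounded to the nearest integers, an assumed example of a data-chosen target) can inflate the risk far beyond that of the unshrunken estimator.

```python
# Compare the empirical risk of the MLE, James-Stein shrinkage toward a fixed
# target, and James-Stein shrinkage toward a data-chosen target.
import numpy as np

def js_toward(x, b, sigma2=1.0):
    """James-Stein estimator of a p-vector mean, shrinking x toward target b."""
    p = x.size
    factor = 1.0 - (p - 2) * sigma2 / np.sum((x - b) ** 2)
    return b + factor * (x - b)

rng = np.random.default_rng(2)
p, reps = 10, 20_000
theta = np.full(p, 3.0)                                  # true mean
sse = {"mle": 0.0, "fixed target 0": 0.0, "data-chosen target": 0.0}
for _ in range(reps):
    x = theta + rng.standard_normal(p)
    sse["mle"] += np.sum((x - theta) ** 2)
    sse["fixed target 0"] += np.sum((js_toward(x, np.zeros(p)) - theta) ** 2)
    sse["data-chosen target"] += np.sum((js_toward(x, np.round(x)) - theta) ** 2)
for name, s in sse.items():
    print(f"{name}: empirical risk ~ {s / reps:.2f}")
```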

    An extended class of minimax generalized Bayes estimators of regression coefficients

    We derive minimax generalized Bayes estimators of regression coefficients in the general linear model with spherically symmetric errors under invariant quadratic loss, for the case of unknown scale. The class of estimators generalizes the class considered in Maruyama and Strawderman (2005) to include non-monotone shrinkage functions.
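
    For context, the sketch below implements only the classical monotone special case of Stein-type shrinkage of the least-squares coefficients under unknown scale, with shrinkage constant c = (p - 2)/(n - p + 2); the constant, the function shrunken_ols, and the simulation settings are assumptions, and the paper's non-monotone generalized Bayes class is not reproduced here.

```python
# Classical Stein-type shrinkage of least-squares regression coefficients,
# using the residual sum of squares in place of the unknown scale.
import numpy as np

def shrunken_ols(X, y):
    """Least-squares coefficients shrunk toward zero using the residual sum of squares."""
    n, p = X.shape
    beta_ls, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(res[0]) if res.size else float(np.sum((y - X @ beta_ls) ** 2))
    quad = beta_ls @ (X.T @ X) @ beta_ls       # beta' X'X beta
    c = (p - 2) / (n - p + 2)                  # assumed classical shrinkage constant
    return (1.0 - c * rss / quad) * beta_ls, beta_ls

rng = np.random.default_rng(3)
n, p = 50, 8
X = rng.standard_normal((n, p))
beta = np.full(p, 0.3)
y = X @ beta + 2.0 * rng.standard_normal(n)
beta_shrunk, beta_ls = shrunken_ols(X, y)
# Squared coefficient errors for one draw (informal check, not the invariant loss).
print(np.sum((beta_shrunk - beta) ** 2), np.sum((beta_ls - beta) ** 2))
```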