
    Improved estimation of the mean vector for Student-t model.

    Improved James-Stein type estimation of the mean vector $\boldsymbol{\mu}$ of a multivariate Student-t population of dimension $p$ with $\nu$ degrees of freedom is considered. In addition to the sample data, uncertain prior information on the value of the mean vector, in the form of a null hypothesis, is used for the estimation. The usual maximum likelihood estimator (MLE) of $\boldsymbol{\mu}$ is obtained and a test statistic for testing $H_0: \boldsymbol{\mu} = \boldsymbol{\mu}_0$ is derived. Based on the MLE of $\boldsymbol{\mu}$ and the test statistic, the preliminary test estimator (PTE), the Stein-type shrinkage estimator (SE) and the positive-rule shrinkage estimator (PRSE) are defined. The bias and the quadratic risk of the estimators are evaluated. The relative performances of the estimators are investigated by analyzing their risks under different conditions. It is observed that the PRSE dominates the other three estimators, regardless of the validity of the null hypothesis and the value of $\nu$.
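
    For orientation, the classical forms of these estimators (a standard parameterization; the paper's exact constants may differ) can be written in terms of the MLE $\tilde{\boldsymbol{\mu}}$ and the test statistic $T$ for $H_0$:

    $$\hat{\boldsymbol{\mu}}^{PT} = \boldsymbol{\mu}_0\, I(T \le t_\alpha) + \tilde{\boldsymbol{\mu}}\, I(T > t_\alpha), \qquad \hat{\boldsymbol{\mu}}^{S} = \boldsymbol{\mu}_0 + \Bigl(1 - \tfrac{c}{T}\Bigr)(\tilde{\boldsymbol{\mu}} - \boldsymbol{\mu}_0), \qquad \hat{\boldsymbol{\mu}}^{S+} = \boldsymbol{\mu}_0 + \Bigl(1 - \tfrac{c}{T}\Bigr)^{+}(\tilde{\boldsymbol{\mu}} - \boldsymbol{\mu}_0),$$

    where $t_\alpha$ is the critical value of the preliminary test, $c$ is a shrinkage constant (of order $p-2$ in the classical normal case), and $(x)^{+} = \max(x, 0)$. The positive part keeps the PRSE from overshooting past $\boldsymbol{\mu}_0$ when $T < c$, which is why it improves on the plain SE.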

    A flexible shrinkage operator for fussy grouped variable selection

    Existing grouped variable selection methods rely heavily on prior group information, so they may not be reliable if an incorrect group assignment is used. In this paper, we propose a family of shrinkage variable selection operators built on controlling the k-th largest norm (KAN). The proposed KAN method naturally exhibits flexible group-wise variable selection even when no correct prior group information is available. We also construct a group KAN shrinkage operator using a composite of KAN constraints. Neither ignoring nor relying completely on prior group information, the group KAN method has the flexibility to control within-group strength and can therefore reduce the effect of incorrect group information. Finally, we investigate an unbiased estimator of the degrees of freedom for (group) KAN estimates in the framework of Stein's unbiased risk estimation. Extensive simulation studies and real data analysis demonstrate the advantage of KAN and group KAN over the LASSO and group LASSO, respectively.
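
    The degrees-of-freedom result parallels a well-known fact for the LASSO: the number of nonzero coefficients is an unbiased estimator of the degrees of freedom (Zou, Hastie and Tibshirani, 2007), which plugs directly into a SURE-type estimate of prediction error. A minimal Python sketch of that simpler LASSO analogue (using scikit-learn; the (group) KAN df estimator itself is derived in the paper and is not reproduced here):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, sigma = 100, 20, 1.0
    X = rng.standard_normal((n, p))
    beta = np.r_[np.ones(5), np.zeros(p - 5)]      # sparse truth
    y = X @ beta + sigma * rng.standard_normal(n)

    fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
    rss = np.sum((y - X @ fit.coef_) ** 2)

    # For the LASSO, the nonzero count is an unbiased df estimate
    # (Zou, Hastie & Tibshirani, 2007); KAN requires its own df formula.
    df_hat = np.count_nonzero(fit.coef_)

    # SURE-type unbiased estimate of the prediction error.
    risk_hat = rss - n * sigma**2 + 2 * sigma**2 * df_hat
    print(df_hat, risk_hat)
    ```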

    Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?

    Nearly all estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the estimator's prediction error; we focus on Stein's unbiased risk estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general it is natural to ask: what is the prediction error of the SURE-tuned estimator? An obvious strategy would be to simply use the apparent error estimate reported by SURE, i.e., the value of the SURE criterion at its minimum, as an estimate of the prediction error of the SURE-tuned estimator. But this is no longer unbiased; in fact, we would expect the minimum of the SURE criterion to be systematically biased downwards for the true prediction error. In this paper, we formally describe and study this bias. Comment: 39 pages, 3 figures
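
    The bias is easy to see in a small simulation. A minimal sketch, in a standard SURE setting (soft-thresholding of a sparse normal mean with $\sigma = 1$; the grid, sparsity level, and constants are illustrative choices, not the paper's): compare the average of the SURE minimum with the average true error of the SURE-tuned estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 200, 2000
    mu = np.r_[3 * np.ones(20), np.zeros(n - 20)]  # sparse mean, sigma = 1

    def soft(y, t):
        return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

    def sure(y, t):
        # Unbiased estimate of E||soft(y, t) - mu||^2 for y ~ N(mu, I)
        # (Donoho & Johnstone, 1995).
        return n - 2 * np.sum(np.abs(y) <= t) + np.sum(np.minimum(y**2, t**2))

    grid = np.linspace(0.0, 4.0, 81)
    min_sure, true_err = [], []
    for _ in range(reps):
        y = mu + rng.standard_normal(n)
        vals = np.array([sure(y, t) for t in grid])
        t_hat = grid[np.argmin(vals)]         # SURE-tuned threshold
        min_sure.append(vals.min())           # apparent error at the minimum
        true_err.append(np.sum((soft(y, t_hat) - mu) ** 2))

    print(f"mean min-SURE:   {np.mean(min_sure):.1f}")
    print(f"mean true error: {np.mean(true_err):.1f}")  # typically larger: downward bias
    ```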

    Approximation-assisted estimation of eigenvectors under quadratic loss

    Improved estimation of the eigenvectors of a covariance matrix is considered under uncertain prior information (UPI) regarding the parameter vector. Like the statistical models underlying the inferences to be made, such prior information is susceptible to uncertainty, and practitioners may be reluctant to impose additional information about the parameters in the estimation process. Yet a very large gain in precision may be achieved by judiciously exploiting the information about the parameters that, in practice, is available in any realistic problem. Several estimators based on the preliminary test and Stein-type shrinkage rules are constructed. Expressions for the bias and risk of the proposed estimators are derived and compared with those of the usual estimators. We demonstrate how the classical large-sample theory of the conventional estimator can be extended to shrinkage and preliminary test estimators for the eigenvector of a covariance matrix. It is established that the shrinkage estimators are asymptotically superior to the usual sample estimators. For illustration, the method is applied to three datasets.
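
    As intuition for how UPI enters, here is a hypothetical numerical sketch of a positive-rule-style shrinkage of the sample leading eigenvector toward a prior direction; the discrepancy statistic d and the weight rule below are illustrative inventions, not the paper's construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 50, 4
    X = rng.multivariate_normal(np.zeros(p), np.diag([4.0, 1.0, 1.0, 1.0]), n)

    S = np.cov(X, rowvar=False)                  # sample covariance
    w, V = np.linalg.eigh(S)
    e_hat = V[:, -1]                             # sample leading eigenvector

    e0 = np.array([1.0, 0.0, 0.0, 0.0])          # hypothesized eigenvector (UPI)
    e0 = e0 / np.linalg.norm(e0)
    if e_hat @ e0 < 0:                           # resolve sign ambiguity
        e_hat = -e_hat

    # Illustrative shrinkage: pull the sample eigenvector toward the prior
    # direction, with weight decreasing in a discrepancy statistic d.
    d = n * (1.0 - (e_hat @ e0) ** 2)            # small when e_hat is close to e0
    c = 1.0                                      # shrinkage constant (tuning)
    k = min(c / max(d, 1e-12), 1.0)              # positive-rule style weight
    e_shrunk = k * e0 + (1 - k) * e_hat
    e_shrunk /= np.linalg.norm(e_shrunk)         # renormalize to unit length
    print(e_shrunk)
    ```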