
    Optimal Linear Shrinkage Estimator for Large Dimensional Precision Matrix

    In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics in which the number of variables $p \rightarrow \infty$ and the sample size $n \rightarrow \infty$ so that $p/n \rightarrow c \in (0, +\infty)$. The precision matrix is estimated directly, without inverting the corresponding estimator for the covariance matrix. Recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and to estimate them consistently. The resulting distribution-free estimator almost surely attains the minimum Frobenius loss. Additionally, we prove that the Frobenius norms of the inverse and of the pseudo-inverse sample covariance matrices tend almost surely to deterministic quantities, and we estimate them consistently. Finally, a simulation study compares the suggested estimator with precision-matrix estimators proposed in the literature. The optimal shrinkage estimator shows significant improvement and robustness even for non-normally distributed data.
    Comment: 26 pages, 5 figures. This version includes the case $c > 1$ with the generalized inverse of the sample covariance matrix. The abstract was updated accordingly.
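
    In its simplest form, the construction above is a two-parameter combination $\hat{\Pi} = \alpha\,S^{+} + \beta\,I$ of the (pseudo-)inverse sample covariance and the identity, with the intensities chosen to minimize the Frobenius loss. The NumPy sketch below is only an oracle illustration of that idea, not the paper's method: it computes the optimal intensities against a known true precision matrix, whereas the paper's contribution is to replace such oracle quantities with consistent data-driven estimates. All names are illustrative.

```python
import numpy as np

def oracle_linear_shrinkage_precision(X, Pi_true):
    """Oracle linear shrinkage Pi_hat = a * pinv(S) + b * I.

    Chooses (a, b) to minimize the Frobenius loss
    ||a * pinv(S) + b * I - Pi_true||_F.  An oracle needs the true
    precision matrix, which the paper's estimator does not.
    """
    n, p = X.shape
    S = X.T @ X / n                    # sample covariance (data assumed centered)
    S_inv = np.linalg.pinv(S)          # pseudo-inverse covers the c > 1 case
    # Normal equations of the 2-parameter Frobenius least-squares fit.
    G = np.array([[np.sum(S_inv * S_inv), np.trace(S_inv)],
                  [np.trace(S_inv),       float(p)]])
    rhs = np.array([np.sum(Pi_true * S_inv), np.trace(Pi_true)])
    a, b = np.linalg.solve(G, rhs)
    return a * S_inv + b * np.eye(p)

# Hypothetical usage with p/n = 2 (c > 1) and true precision = identity:
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 200))
Pi_hat = oracle_linear_shrinkage_precision(X, np.eye(200))
```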

    Improved estimation of the mean vector for the Student-t model

    Improved James-Stein type estimation of the mean vector $\boldsymbol{\mu}$ of a multivariate Student-t population of dimension $p$ with $\nu$ degrees of freedom is considered. In addition to the sample data, uncertain prior information on the value of the mean vector, in the form of a null hypothesis, is used for the estimation. The usual maximum likelihood estimator (MLE) of $\boldsymbol{\mu}$ is obtained, and a test statistic for testing $H_0: \boldsymbol{\mu} = \boldsymbol{\mu}_0$ is derived. Based on the MLE of $\boldsymbol{\mu}$ and the test statistic, the preliminary test estimator (PTE), the Stein-type shrinkage estimator (SE) and the positive-rule shrinkage estimator (PRSE) are defined. The bias and the quadratic risk of the estimators are evaluated. The relative performances of the estimators are investigated by analyzing the risks under different conditions. It is observed that the PRSE dominates the other three estimators, regardless of the validity of the null hypothesis and the value of $\nu$.
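
    For concreteness, a positive-rule shrinkage estimator of a mean vector has the generic form $\hat{\boldsymbol{\mu}} = \boldsymbol{\mu}_0 + \max(0,\, 1 - c/T^2)(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)$, where $T^2$ tests $H_0$. The sketch below uses a Hotelling-type statistic and the classical Stein constant; the paper instead works under the multivariate Student-t model and derives exact bias and risk expressions, which this generic version does not reproduce.

```python
import numpy as np

def positive_rule_shrinkage_mean(X, mu0, c=None):
    """Positive-rule Stein shrinkage of the sample mean toward mu0.

    Generic sketch: Hotelling-type statistic and classical Stein
    constant c = p - 2 (requires p >= 3 and n > p); the paper's PRSE
    is derived under a multivariate Student-t model instead.
    """
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                 # sample covariance
    d = xbar - mu0
    T2 = n * d @ np.linalg.solve(S, d)          # statistic for H0: mu = mu0
    c = p - 2 if c is None else c
    shrink = max(0.0, 1.0 - c / T2)             # positive rule: never past mu0
    return mu0 + shrink * d
```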

    The generalized shrinkage estimator for the analysis of functional connectivity of brain signals

    We develop a new statistical method for estimating functional connectivity between neurophysiological signals represented by a multivariate time series. We use partial coherence as the measure of functional connectivity. Partial coherence identifies the frequency bands that drive the direct linear association between any pair of channels. To estimate partial coherence, one first needs an estimate of the spectral density matrix of the multivariate time series. Parametric estimators of the spectral density matrix provide good frequency resolution but can be sensitive to misspecification of the parametric model. Smoothing-based nonparametric estimators are robust to model misspecification and are consistent, but may have poor frequency resolution. In this work, we develop the generalized shrinkage estimator, a weighted average of a parametric estimator and a nonparametric estimator. The optimal weights are frequency-specific and derived under the quadratic risk criterion, so that whichever estimator performs better at a particular frequency receives the heavier weight. We validate the proposed estimator in a simulation study and apply it to electroencephalogram recordings from a visual-motor experiment.
    Comment: Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/10-AOAS396) by the Institute of Mathematical Statistics (http://www.imstat.org).
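
    In code, the combination step is a frequency-by-frequency convex mixture of two spectral matrix estimates, followed by partial coherence computed from the inverse spectral matrix. The sketch below assumes the parametric and nonparametric estimates and the weights are already in hand; the paper's optimal, risk-derived weights are not reproduced here.

```python
import numpy as np

def combine_and_partial_coherence(f_par, f_np, W):
    """Frequency-specific convex combination of two spectral matrix
    estimates, then squared partial coherence.

    f_par, f_np : (n_freq, p, p) parametric / nonparametric estimates.
    W           : (n_freq,) weights in [0, 1], assumed given (the
                  paper derives them under quadratic risk).
    """
    f_hat = W[:, None, None] * f_par + (1 - W[:, None, None]) * f_np
    pcoh = np.empty(f_hat.shape)
    for k, f in enumerate(f_hat):
        g = np.linalg.inv(f)                      # inverse spectral matrix
        d = np.sqrt(np.real(np.diag(g)))
        pcoh[k] = np.abs(g / np.outer(d, d))**2   # |g_ij|^2 / (g_ii * g_jj)
    return f_hat, pcoh
```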

    Nonlinear shrinkage estimation of large-dimensional covariance matrices

    Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature on improved estimators for such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix toward a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues toward their grand mean while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can yield sizeable improvements over the sample covariance matrix and also over linear shrinkage.
    Comment: Published in the Annals of Statistics (http://dx.doi.org/10.1214/12-AOS989) by the Institute of Mathematical Statistics (http://www.imstat.org).
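
    The class of estimators in question is rotation-equivariant: keep the sample eigenvectors and push the sample eigenvalues through some map. Below is a minimal sketch of that structure, with linear shrinkage toward the grand mean as the special case the paper generalizes; the nonlinear, data-driven map derived in the paper via random matrix theory is not reproduced here.

```python
import numpy as np

def rotation_equivariant_shrinkage(S, eig_map):
    """Keep sample eigenvectors, transform sample eigenvalues.

    Linear shrinkage is an affine eig_map toward the grand mean;
    the paper's estimator supplies a nonlinear, data-driven map.
    """
    lam, U = np.linalg.eigh(S)           # sample spectral decomposition
    return (U * eig_map(lam)) @ U.T      # U diag(eig_map(lam)) U^T

# Special case: linear shrinkage of the eigenvalues toward their mean.
def linear_map(lam, rho=0.5):
    return (1 - rho) * lam + rho * lam.mean()

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 100))
Sigma_hat = rotation_equivariant_shrinkage(X.T @ X / 60, linear_map)
```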

    Generalized robust shrinkage estimator and its application to STAP detection problem

    Recently, in the context of covariance matrix estimation, a "shrinkage" fixed-point estimator was introduced in [3] in order to improve and regularize the performance of Tyler's estimator [1], also called the Fixed-Point Estimator (FPE) [2]. First, this work extends the results of [3,4] by giving the general solution of the "shrinkage" fixed-point algorithm. Secondly, by analyzing this solution, called the generalized robust shrinkage estimator, we prove that it converges to a unique solution when the shrinkage parameter $\beta$ (loading factor) tends to 0. This limit is exactly the FPE with the trace of its inverse equal to the dimension of the problem. This general result yields another interpretation of the FPE and, more generally, of the maximum likelihood approach to covariance matrix estimation when constraints are added. Simulations then illustrate our theoretical results as well as the way to choose an optimal shrinkage factor. Finally, this work is applied to a Space-Time Adaptive Processing (STAP) detection problem on real STAP data.
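
    A sketch of a shrinkage fixed-point iteration of the kind referenced above, written for real-valued data (actual STAP data are complex, which would replace transposes with conjugate transposes). The trace normalization mirrors the constraint mentioned in the abstract, but the update is a generic form, not taken verbatim from [3].

```python
import numpy as np

def shrinkage_fixed_point(X, beta, n_iter=100, tol=1e-8):
    """Shrinkage fixed-point (regularized Tyler) covariance estimate.

    Iterates
      Sigma <- (1 - beta) * (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i)
               + beta * I,
    renormalized to trace p, for beta in (0, 1].
    """
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        # Quadratic forms x_i^T Sigma^{-1} x_i for all samples at once.
        q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
        M = (p / n) * (X / q[:, None]).T @ X
        Sigma_new = (1 - beta) * M + beta * np.eye(p)
        Sigma_new *= p / np.trace(Sigma_new)      # trace constraint
        if np.linalg.norm(Sigma_new - Sigma, 'fro') < tol:
            return Sigma_new
        Sigma = Sigma_new
    return Sigma
```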

    Kernel Mean Shrinkage Estimators

    A mean function in a reproducing kernel Hilbert space (RKHS), or kernel mean, is central to kernel methods: it is used by many classical algorithms such as kernel principal component analysis, and it forms the core inference step of modern kernel methods that rely on embedding probability distributions in RKHSs. Given a finite sample, the empirical average is commonly used as the standard estimator of the true kernel mean. Despite the widespread use of this estimator, we show that it can be improved thanks to the well-known Stein phenomenon. We propose a new family of estimators called kernel mean shrinkage estimators (KMSEs), which benefit from both theoretical justification and good empirical performance. The results demonstrate that the proposed estimators outperform the standard one, especially in a "large d, small n" paradigm.
    Comment: 41 pages.
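
    The simplest member of such a family multiplies the empirical kernel mean $\frac{1}{n}\sum_i k(x_i, \cdot)$ by a factor $(1 - \lambda)$, shrinking it toward the zero function in the RKHS. The toy sketch below follows that assumption; the paper's KMSEs also cover richer shrinkage targets and data-driven choices of the parameter, none of which are reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def shrunk_kernel_mean(X_sample, X_eval, lam=0.1, gamma=1.0):
    """Evaluate (1 - lam) * (1/n) * sum_i k(x_i, .) at the points X_eval.

    Shrinks the empirical kernel mean embedding toward zero; lam would
    be chosen in a data-driven way in the KMSE framework.
    """
    n = X_sample.shape[0]
    K = rbf_kernel(X_eval, X_sample, gamma)   # (n_eval, n) kernel evaluations
    return K @ np.full(n, (1.0 - lam) / n)    # shrunk, uniform coefficients
```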