
    Estimation of sums of random variables: Examples and information bounds

    This paper concerns the estimation of sums of functions of observable and unobservable variables. Lower bounds for the asymptotic variance and a convolution theorem are derived in general finite- and infinite-dimensional models. An explicit relationship is established between efficient influence functions for the estimation of sums of variables and the estimation of their means. Certain ``plug-in'' estimators are proved to be asymptotically efficient in finite-dimensional models, while the ``u,v'' estimators of Robbins are proved to be efficient in infinite-dimensional mixture models. Examples include certain species, network and data confidentiality problems.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000390 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
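
    The flavor of the ``u,v'' construction can be seen in a classical Poisson-mixture identity. The derivation below is a standard textbook fact, sketched here for orientation rather than quoted from the paper:

```latex
% Model: X | theta ~ Poisson(theta), with theta drawn from an unknown
% mixing distribution G. For any function v,
\[
\mathbb{E}\bigl[\theta\, v(X)\bigr]
  = \mathbb{E}\bigl[X\, v(X-1)\bigr],
\qquad\text{since}\quad
\mathbb{E}\bigl[X\, v(X-1)\mid\theta\bigr]
  = \sum_{k\ge 1} k\, v(k-1)\, \frac{e^{-\theta}\theta^{k}}{k!}
  = \theta \sum_{j\ge 0} v(j)\, \frac{e^{-\theta}\theta^{j}}{j!}
  = \theta\, \mathbb{E}\bigl[v(X)\mid\theta\bigr].
\]
% Hence n^{-1} \sum_i X_i v(X_i - 1) is an unbiased estimator of
% E[theta v(X)] built only from the observable X_i -- the prototype
% of a ``u,v'' estimator for a sum involving the unobservable theta.
```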

    General empirical Bayes wavelet methods and exactly adaptive minimax estimation

    In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of the true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our estimators are uniformly adaptive to the minimum risk of separable estimators and the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly superefficient in convergence rates in all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our estimators dominate threshold and James-Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both white noise with drift and nonparametric regression are considered.
    Comment: Published at http://dx.doi.org/10.1214/009053604000000995 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
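
    For orientation, the simplest block separable rule of the kind these estimators refine is blockwise James-Stein shrinkage in the Gaussian sequence model. The sketch below is a minimal numpy illustration of that baseline, not the paper's empirical Bayes estimator; the function name, block size, and noise level are illustrative assumptions:

```python
import numpy as np

def block_james_stein(y, block_size=32, sigma=1.0):
    """Blockwise James-Stein shrinkage for y_i = theta_i + sigma * z_i.

    Each block of coefficients is shrunk toward zero by a factor
    estimated from the block's own energy -- a simple separable block
    rule, used here only as a stand-in for the paper's EB estimators.
    """
    y = np.asarray(y, dtype=float)
    theta_hat = np.empty_like(y)
    for start in range(0, len(y), block_size):
        block = y[start:start + block_size]
        m = len(block)
        energy = np.sum(block ** 2)
        # Positive-part James-Stein shrinkage factor for the block.
        shrink = max(0.0, 1.0 - (m - 2) * sigma ** 2 / energy) if energy > 0 else 0.0
        theta_hat[start:start + block_size] = shrink * block
    return theta_hat

# Toy usage: a sparse coefficient sequence observed in white noise.
rng = np.random.default_rng(0)
n = 1024
theta = np.zeros(n)
theta[:32] = 3.0                      # a few large coefficients
y = theta + rng.standard_normal(n)    # sigma = 1
print(np.mean((block_james_stein(y) - theta) ** 2))  # MSE well below 1.0
```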

    Discussion: One-step sparse estimates in nonconcave penalized likelihood models

    Discussion of ``One-step sparse estimates in nonconcave penalized likelihood models'' [arXiv:0808.1012].
    Comment: Published at http://dx.doi.org/10.1214/07-AOS0316C in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    A General Framework of Dual Certificate Analysis for Structured Sparse Recovery Problems

    This paper develops a general theoretical framework to analyze structured sparse recovery problems using the notion of a dual certificate. Although certain aspects of the dual certificate idea have already been used in some previous work, due to the lack of a general and coherent theory, the analysis has so far only been carried out in limited scopes for specific problems. In this context the current paper makes two contributions. First, we introduce a general definition of dual certificate, which we then use to develop a unified theory of sparse recovery analysis for convex programming. Second, we present a class of structured sparsity regularizations called structured Lasso for which calculations can be readily performed under our theoretical framework. This new theory includes many seemingly loosely related previous results as special cases; it also implies new results that improve existing ones even for standard formulations such as L1 regularization.
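
    The textbook L1 special case gives a sense of what a dual certificate is. The statement below is standard material (Fuchs-type exact recovery condition), given for orientation; the paper's definition is more general:

```latex
% Basis pursuit:  min ||b||_1  subject to  X b = X b*,  S = supp(b*).
% b* is the unique solution whenever X_S has full column rank and
% there exists u such that
\[
v = X^{\top} u \quad\text{satisfies}\quad
v_S = \operatorname{sign}(b^*_S),
\qquad \|v_{S^c}\|_{\infty} < 1 .
\]
% Such a v certifies optimality of b* through the KKT conditions;
% the paper's framework extends this construction from the L1 norm
% to structured regularizers.
```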

    General maximum likelihood empirical Bayes estimation of normal means

    We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic estimating function on individual observations, provided that the risk is of greater order than $(\log n)^5/n$. We also prove that the GMLEB is uniformly approximately minimax in regular and weak $\ell_p$ balls when the order of the length-normalized norm of the unknown means is between $(\log n)^{\kappa_1}/n^{1/(p\wedge 2)}$ and $n/(\log n)^{\kappa_2}$. Simulation experiments demonstrate that the GMLEB outperforms the James--Stein and several state-of-the-art threshold estimators in a wide range of settings without much downside.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS638 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
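
    The two-step structure of the method (fit a nonparametric MLE of the prior, then plug it into the Bayes rule) can be sketched on a grid. This is a simplified numpy illustration assuming unit noise variance, not the paper's exact computational scheme; the function name, grid size, and EM iteration count are illustrative choices:

```python
import numpy as np

def gmleb(x, grid_size=200, n_iter=500):
    """Grid-based sketch of GMLEB for x_i = theta_i + N(0, 1) noise.

    Step 1: approximate the nonparametric MLE of the prior G by a
    discrete distribution on a fixed grid, fitted with EM.
    Step 2: plug the fitted prior into the posterior mean rule.
    """
    x = np.asarray(x, dtype=float)
    u = np.linspace(x.min(), x.max(), grid_size)        # support grid for G
    w = np.full(grid_size, 1.0 / grid_size)             # initial grid weights
    # Gaussian likelihood matrix phi(x_i - u_j), noise variance 1.
    lik = np.exp(-0.5 * (x[:, None] - u[None, :]) ** 2) / np.sqrt(2 * np.pi)
    for _ in range(n_iter):                             # EM for the NPMLE
        post = lik * w                                  # unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                           # updated mixing weights
    marg = lik @ w                                      # fitted marginal density
    return (lik * w) @ u / marg                         # posterior mean estimate

# Toy usage: two-point prior on {0, 4}, n = 2000.
rng = np.random.default_rng(1)
theta = rng.choice([0.0, 4.0], size=2000, p=[0.8, 0.2])
x = theta + rng.standard_normal(2000)
print(np.mean((gmleb(x) - theta) ** 2))  # well below the MLE's MSE of 1.0
```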

    Calibrated Elastic Regularization in Matrix Completion

    This paper concerns the problem of matrix completion, which is to estimate a matrix from observations in a small subset of indices. We propose a calibrated spectrum elastic net method with a sum of the nuclear and Frobenius penalties and develop an iterative algorithm to solve the convex minimization problem. The iterative algorithm alternates between imputing the missing entries in the incomplete matrix by the current guess and estimating the matrix by a scaled soft-thresholding singular value decomposition of the imputed matrix until the resulting matrix converges. A calibration step follows to correct the bias caused by the Frobenius penalty. Under proper coherence conditions and for suitable penalty levels, we prove that the proposed estimator achieves an error bound of nearly optimal order and in proportion to the noise level. This provides a unified analysis of the noisy and noiseless matrix completion problems. Simulation results are presented to compare our proposal with previous ones.
    Comment: 9 pages; Advances in Neural Information Processing Systems, NIPS 2012.
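
    The impute-then-shrink iteration described above can be sketched in a few lines of numpy. This is a generic sketch in the spirit of the abstract, not the paper's algorithm: the penalty levels are arbitrary, and the final rescaling is one assumed form of the calibration step, which may differ from the paper's:

```python
import numpy as np

def impute_and_shrink(Y, mask, lam_nuc=2.0, lam_fro=0.1, n_iter=200, tol=1e-6):
    """Sketch of an impute-then-shrink iteration for matrix completion.

    Y    : observed matrix (values outside mask are ignored)
    mask : boolean array, True where an entry is observed

    Alternates between (a) filling unobserved entries with the current
    estimate and (b) applying the prox of the elastic penalty
    lam_nuc * ||M||_* + (lam_fro / 2) * ||M||_F^2, i.e. a singular-value
    soft-threshold followed by a 1 / (1 + lam_fro) rescaling.
    """
    M = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        Z = np.where(mask, Y, M)                      # impute missing entries
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam_nuc, 0.0) / (1.0 + lam_fro)
        M_new = (U * s) @ Vt                          # scaled soft-threshold SVD
        if np.linalg.norm(M_new - M) <= tol * max(1.0, np.linalg.norm(M)):
            M = M_new
            break
        M = M_new
    # Calibration (assumed form): undo the uniform Frobenius shrinkage.
    return (1.0 + lam_fro) * M

# Toy usage: rank-2 matrix, about half the entries observed with noise.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random(A.shape) < 0.5
Y = np.where(mask, A + 0.1 * rng.standard_normal(A.shape), 0.0)
M_hat = impute_and_shrink(Y, mask)
print(np.linalg.norm(M_hat - A) / np.linalg.norm(A))  # relative error
```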