
    Verifiable conditions of $\ell_1$-recovery of sparse signals with sign restrictions

    We propose necessary and sufficient conditions for a sensing matrix to be "$s$-semigood" -- to allow for exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries under sign restrictions on part of the entries. We express the error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. Furthermore, we demonstrate that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-semigood. We concentrate on the properties of the proposed verifiable sufficient conditions of $s$-semigoodness and describe their limits of performance.
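
    For concreteness, the underlying recovery problem is the sign-restricted $\ell_1$ program: minimize $\|x\|_1$ subject to $Ax = y$ with $x_i \geq 0$ on a prescribed index set. The sketch below solves this program as a linear program; it illustrates the recovery problem only, not the paper's verifiable conditions, and the matrix, measurements, and index set are illustrative placeholders.

        # Minimal sketch: sign-restricted l1-recovery as a linear program.
        import numpy as np
        from scipy.optimize import linprog

        def l1_recover(A, y, nonneg=()):
            """Solve min ||x||_1 s.t. A x = y, with x_i >= 0 for i in `nonneg`."""
            m, n = A.shape
            # Variables z = [x, t] with |x_i| <= t_i; minimize sum(t).
            c = np.concatenate([np.zeros(n), np.ones(n)])
            A_eq = np.hstack([A, np.zeros((m, n))])
            I = np.eye(n)
            A_ub = np.vstack([np.hstack([ I, -I]),    #  x - t <= 0
                              np.hstack([-I, -I])])   # -x - t <= 0
            b_ub = np.zeros(2 * n)
            bounds = [(0, None) if i in set(nonneg) else (None, None) for i in range(n)]
            bounds += [(0, None)] * n                 # t >= 0
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
            return res.x[:n]

        # Toy example: a 3-sparse nonnegative signal measured by a Gaussian matrix.
        rng = np.random.default_rng(0)
        n, m, s = 60, 30, 3
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.uniform(1, 2, s)
        A = rng.standard_normal((m, n))
        x_hat = l1_recover(A, A @ x_true, nonneg=range(n))
        print(np.max(np.abs(x_hat - x_true)))  # typically small when recovery succeeds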

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted $\ell^p$-penalties on the coefficients of such expansions, with $1 \leq p \leq 2$, still regularizes the problem. If $p < 2$, regularized solutions of such $\ell^p$-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
    Comment: 30 pages, 3 figures; this is version 2 - changes with respect to v1: small correction in proof (but not statement of) lemma 3.15; description of Besov spaces in intro and app A clarified (and corrected); smaller pointsize (making 30 instead of 38 pages)
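
    The iteration described in the abstract, specialized to $p = 1$, is soft thresholding composed with a Landweber step: $x^{(k+1)} = S_\tau\bigl(x^{(k)} + K^*(y - Kx^{(k)})\bigr)$. Below is a minimal sketch of that iteration, assuming the operator has been rescaled so that $\|K\| < 1$ (the setting of the convergence proof); the operator, data, and penalty weight $\tau$ are illustrative.

        # Minimal sketch: Landweber iteration with soft thresholding (p = 1 case).
        import numpy as np

        def soft_threshold(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def ista(K, y, tau, n_iter=500):
            """Iterate x <- S_tau(x + K^T (y - K x)) starting from x = 0."""
            x = np.zeros(K.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x + K.T @ (y - K @ x), tau)
            return x

        # Toy example with a sparse coefficient vector.
        rng = np.random.default_rng(1)
        K = rng.standard_normal((40, 100))
        K /= np.linalg.norm(K, 2) * 1.01           # enforce ||K|| < 1
        x_true = np.zeros(100)
        x_true[:5] = 3.0
        y = K @ x_true + 0.01 * rng.standard_normal(40)
        x_hat = ista(K, y, tau=0.05)
        print(np.count_nonzero(x_hat))             # the recovered expansion is sparse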

    Asymptotic minimaxity of False Discovery Rate thresholding for sparse exponential data

    We apply FDR thresholding to a non-Gaussian vector whose coordinates $X_i$, $i=1,\ldots,n$, are independent exponential with individual means $\mu_i$. The vector $\mu = (\mu_i)$ is thought to be sparse, with most coordinates 1 but a small fraction significantly larger than 1; roughly, most coordinates are simply 'noise,' but a small fraction contain 'signal.' We measure risk by per-coordinate mean-squared error in recovering $\log(\mu_i)$, and study minimax estimation over parameter spaces defined by constraints on the per-coordinate $p$-norm of $\log(\mu_i)$: $\frac{1}{n}\sum_{i=1}^n \log^p(\mu_i) \leq \eta^p$. We show for large $n$ and small $\eta$ that FDR thresholding can be nearly minimax. The FDR control parameter $0 < q < 1$ plays an important role: when $q \leq 1/2$, the FDR estimator is nearly minimax, while choosing a fixed $q > 1/2$ prevents near minimaxity. These conclusions mirror those found in the Gaussian case in Abramovich et al. [Ann. Statist. 34 (2006) 584--653]. The techniques developed here seem applicable to a wide range of other distributional assumptions, other loss measures and non-i.i.d. dependency structures.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000920 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
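
    A rough sketch of the thresholding idea, not the paper's exact estimator: compute p-values under the Exp(1) null, apply the Benjamini-Hochberg step at level $q$, and estimate $\log(\mu_i)$ only for the surviving coordinates. The level $q$, the naive per-coordinate estimate, and the simulated data are illustrative assumptions.

        # Rough sketch: Benjamini-Hochberg (FDR) thresholding of exponential data.
        import numpy as np

        def fdr_threshold_exponential(x, q=0.25):
            """Estimate log(mu) by FDR thresholding of X_i ~ Exp(mean mu_i)."""
            n = len(x)
            pvals = np.exp(-x)                    # P(Exp(1) > x) = e^{-x}
            order = np.argsort(pvals)
            passed = pvals[order] <= q * np.arange(1, n + 1) / n   # BH comparison
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            est = np.zeros(n)                     # log(mu_i) = 0 for 'noise' coordinates
            kept = order[:k]
            est[kept] = np.log(x[kept])           # naive estimate log(mu_i) ~ log(X_i)
            return est

        # Toy example: mostly mean-1 'noise' with a few large means.
        rng = np.random.default_rng(2)
        mu = np.ones(1000)
        mu[:20] = 50.0
        x = rng.exponential(mu)
        print(np.count_nonzero(fdr_threshold_exponential(x)))  # roughly the 20 signals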

    Higher criticism for detecting sparse heterogeneous mixtures

    Higher criticism, or second-level significance testing, is a multiple-comparisons concept mentioned in passing by Tukey. It concerns a situation where there are many independent tests of significance and one is interested in rejecting the joint null hypothesis. Tukey suggested comparing the fraction of observed significances at a given $\alpha$-level to the expected fraction under the joint null. In fact, he suggested standardizing the difference of the two quantities and forming a z-score; the resulting z-score tests the significance of the body of significance tests. We consider a generalization, where we maximize this z-score over a range of significance levels $0 < \alpha \leq \alpha_0$. We are able to show that the resulting higher criticism statistic is effective at resolving a very subtle testing problem: testing whether $n$ normal means are all zero versus the alternative that a small fraction is nonzero. The subtlety of this "sparse normal means" testing problem can be seen from work of Ingster and Jin, who studied such problems in great detail. In their studies, they identified an interesting range of cases where the small fraction of nonzero means is so small that the alternative hypothesis exhibits little noticeable effect on the distribution of the p-values either for the bulk of the tests or for the few most highly significant tests. In this range, when the amplitude of nonzero means is calibrated with the fraction of nonzero means, the likelihood ratio test for a precisely specified alternative would still succeed in separating the two hypotheses.
    Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000026
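
    A minimal sketch of the statistic itself: convert the $n$ test statistics to p-values, sort them, and maximize the standardized excess of observed over expected significances over $0 < \alpha \leq \alpha_0$. The choice $\alpha_0 = 1/2$, the two-sided p-values, and the simulated "sparse normal means" data are illustrative.

        # Minimal sketch: the higher criticism statistic for n independent tests.
        import numpy as np
        from scipy.stats import norm

        def higher_criticism(z, alpha0=0.5):
            n = len(z)
            p = np.sort(2 * norm.sf(np.abs(z)))          # two-sided p-values, sorted
            i = np.arange(1, n + 1)
            hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
            return np.max(hc[p <= alpha0])               # maximize over p-values below alpha0

        # Sparse normal means: a small fraction of the means is nonzero.
        rng = np.random.default_rng(3)
        z_null = rng.standard_normal(10_000)
        z_alt = z_null.copy()
        z_alt[:100] += 3.0                               # 1% weak signals
        print(higher_criticism(z_null), higher_criticism(z_alt))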

    Counting faces of randomly-projected polytopes when the projection radically lowers dimension

    This paper develops asymptotic methods to count faces of random high-dimensional polytopes. Beyond its intrinsic interest, our conclusions have surprising implications - in statistics, probability, information theory, and signal processing - with potential impacts in practical subjects like medical imaging and digital communications. Three such implications concern: convex hulls of Gaussian point clouds, signal recovery from random projections, and how many gross errors can be efficiently corrected from Gaussian error-correcting codes.
    Comment: 56 pages