16,864 research outputs found

    Sparse recovery and Fourier sampling

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 155-160).
    In the last decade a broad literature has arisen studying sparse recovery, the estimation of sparse vectors from low-dimensional linear projections. Sparse recovery has a wide variety of applications, such as streaming algorithms, image acquisition, and disease testing. A particularly important subclass of sparse recovery is the sparse Fourier transform, which considers the computation of a discrete Fourier transform when the output is sparse. Applications of the sparse Fourier transform include medical imaging, spectrum sensing, and purely computational tasks involving convolution. This thesis describes a coherent set of techniques that achieve optimal or near-optimal upper and lower bounds for a variety of sparse recovery problems. We give the following state-of-the-art algorithms for recovery of an approximately k-sparse vector in n dimensions:
    -- Two sparse Fourier transform algorithms, respectively taking ... time and ... samples. The latter is within $\log^c \log n$ of the optimal sample complexity when ...
    -- An algorithm for adaptive sparse recovery using ... measurements, showing that adaptivity can give substantial improvements when k is small.
    -- An algorithm for C-approximate sparse recovery with ... measurements, which matches our lower bound up to the $\log^* k$ factor and gives the first improvement for ...
    In the second part of this thesis, we give lower bounds for the above problems and more.
    by Eric Price. Ph.D.
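    The core primitive behind many sparse Fourier transform algorithms is locating a single frequency from very few samples. The numpy sketch below is an illustration only, not one of the thesis's algorithms, and all sizes are chosen arbitrarily: for an exactly 1-sparse, noiseless spectrum, two time-domain samples pin down the frequency via the phase of their ratio.

```python
import numpy as np

# Toy illustration of the 1-sparse case that sparse-FFT algorithms reduce to after
# hashing frequencies into buckets: if x[t] = a * exp(2*pi*i*f*t/n), then
# x[1]/x[0] = exp(2*pi*i*f/n), so the phase of the ratio reveals f.

n = 1024                      # signal length (arbitrary toy value)
f_true = 137                  # hidden frequency (arbitrary toy value)
a = 3.0 - 2.0j                # its complex amplitude

t = np.arange(n)
x = a * np.exp(2j * np.pi * f_true * t / n)

# Recover the frequency and amplitude from just two time-domain samples.
ratio = x[1] / x[0]
f_hat = int(round(np.angle(ratio) * n / (2 * np.pi))) % n
a_hat = x[0]                  # at t = 0 the sample equals the amplitude

print(f_hat == f_true, np.allclose(a_hat, a))   # True True
```

    Practical sparse Fourier transform algorithms wrap this idea in random permutations and filters so that each of the k significant frequencies is isolated into its own bucket and the estimate survives noise; the two-sample trick alone only handles the exact, noiseless 1-sparse case.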

    Discovery of low-dimensional structure in high-dimensional inference problems

    Many learning and inference problems involve high-dimensional data such as images, video or genomic data, which cannot be processed efficiently using conventional methods due to their dimensionality. However, high-dimensional data often exhibit an inherent low-dimensional structure; for instance, they can often be represented sparsely in some basis or domain. The discovery of an underlying low-dimensional structure is important for developing more robust and efficient analysis and processing algorithms. The first part of the dissertation investigates the statistical complexity of sparse recovery problems, including sparse linear and nonlinear regression models, feature selection and graph estimation. We present a framework that unifies sparse recovery problems and construct an analogy to channel coding in classical information theory. We perform an information-theoretic analysis to derive bounds on the number of samples required to reliably recover sparsity patterns independent of any specific recovery algorithm. In particular, we show that the sample complexity can be tightly characterized using a mutual information formula similar to channel coding results. Next, we derive major extensions to this framework, including dependent input variables and a lower bound for sequential adaptive recovery schemes, which helps determine whether adaptivity provides performance gains. We compute statistical complexity bounds for various sparse recovery problems, showing that our analysis improves upon the existing bounds and leads to intuitive results for new applications. In the second part, we investigate methods for improving the computational complexity of subgraph detection in graph-structured data, where we aim to discover anomalous patterns present in a connected subgraph of a given graph. This problem arises in many applications such as detection of network intrusions, community detection, and detection of anomalous events in surveillance videos or disease outbreaks. Since optimization over connected subgraphs is a combinatorial and computationally difficult problem, we propose a convex relaxation that offers a principled approach to incorporating connectivity and conductance constraints on candidate subgraphs. We develop a novel nearly-linear time algorithm to solve the relaxed problem, establish convergence and consistency guarantees, and demonstrate its feasibility and performance with experiments on real networks.
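    As a concrete instance of the sparse linear regression / support recovery problems whose sample complexity the first part analyzes, the sketch below recovers the support of a k-sparse coefficient vector from noisy random projections. It uses a generic orthogonal matching pursuit baseline with arbitrary toy sizes; it is not the dissertation's information-theoretic machinery.

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal matching pursuit: greedily select k columns of X to explain y."""
    support, residual = [], y.copy()
    for _ in range(k):
        scores = np.abs(X.T @ residual)
        scores[support] = -np.inf                   # never reselect a column
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)  # refit on chosen columns
        residual = y - X[:, support] @ coef
    w_hat = np.zeros(X.shape[1])
    w_hat[support] = coef
    return w_hat, sorted(support)

rng = np.random.default_rng(0)
n, d, k = 200, 500, 5                               # samples, dimension, sparsity (toy sizes)
true_support = rng.choice(d, size=k, replace=False)
w = np.zeros(d)
w[true_support] = rng.choice([-1.0, 1.0], k) * (1.0 + rng.random(k))
X = rng.normal(size=(n, d)) / np.sqrt(n)            # roughly unit-norm columns
y = X @ w + 0.01 * rng.normal(size=n)

w_hat, est_support = omp(X, y, k)
print(est_support == sorted(map(int, true_support)))  # typically True: support recovered
```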

    Adaptive Compressed Sensing for Support Recovery of Structured Sparse Sets

    This paper investigates the problem of recovering the support of structured signals via adaptive compressive sensing. We examine several classes of structured support sets, and characterize the fundamental limits of accurately recovering such sets through compressive measurements, while simultaneously providing adaptive support recovery protocols that perform near-optimally for these classes. We show that by adaptively designing the sensing matrix we can attain significant performance gains over non-adaptive protocols. These gains arise from the fact that adaptive sensing can: (i) better mitigate the effects of noise, and (ii) better capitalize on the structure of the support sets. Comment: to appear in IEEE Transactions on Information Theory.
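    The flavor of gain described here can be seen in a toy two-pass scheme of the distilled-sensing type, sketched below under a coordinate-wise observation model with arbitrary toy parameters; it is not one of the paper's protocols for structured supports. A cheap first pass discards coordinates that look like pure noise, and the saved budget is reinvested in the survivors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, amp = 10_000, 20, 4.0                 # dimension, support size, amplitude (toy values)
support = rng.choice(n, size=s, replace=False)
x = np.zeros(n)
x[support] = amp

def noisy_look(values, reps):
    """Observe each entry in additive N(0,1) noise, averaged over `reps` repeated looks."""
    return values + rng.normal(size=values.size) / np.sqrt(reps)

# Pass 1: one cheap look at every coordinate; discard anything that looks like pure noise.
y1 = noisy_look(x, reps=1)
survivors = np.flatnonzero(y1 > 0)          # crude one-sided threshold at zero

# Pass 2: reinvest the saved budget in the (much smaller) survivor set.
y2 = noisy_look(x[survivors], reps=8)
est_support = survivors[y2 > amp / 2]       # assumes the amplitude scale is roughly known

print(set(map(int, support)) <= set(map(int, est_support)), len(est_support))  # typically: True 20
```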

    Adaptive sensing performance lower bounds for sparse signal detection and support estimation

    This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in (IEEE Trans. Inform. Theory 57 (2011) 6222-6235) and show necessary conditions on the minimum signal magnitude for both detection and estimation: if $\mathbf{x} \in \mathbb{R}^n$ is a sparse vector with $s$ non-zero components, then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds $\sqrt{2/s}$. Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds $\sqrt{2\log s}$. Notably, there is no dependence on $n$, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal and cannot be substantially improved. In addition, these results provide further insights on the limits of adaptive compressive sensing. Comment: Published at http://dx.doi.org/10.3150/13-BEJ555 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
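    A quick numeric comparison makes the "no dependence on n" point tangible. The snippet below prints the abstract's adaptive thresholds next to the classical non-adaptive sqrt(2 log n) support-recovery benchmark, which is quoted here as a standard reference point rather than taken from the paper.

```python
import numpy as np

# Adaptive thresholds from the abstract depend only on the sparsity s, never on the
# ambient dimension n. The non-adaptive sqrt(2*log n) scaling is assumed here purely
# for contrast with the classical coordinate-wise benchmark.
s = 100
print(f"adaptive detection threshold        ~ sqrt(2/s)     = {np.sqrt(2 / s):.3f}")
print(f"adaptive support-recovery threshold ~ sqrt(2*log s) = {np.sqrt(2 * np.log(s)):.3f}")
for n in (10**4, 10**6, 10**8):
    print(f"non-adaptive benchmark at n={n:>9}: sqrt(2*log n) = {np.sqrt(2 * np.log(n)):.3f}")
```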

    On Finding a Subset of Healthy Individuals from a Large Population

    In this paper, we derive mutual information based upper and lower bounds on the number of nonadaptive group tests required to identify a given number of "non-defective" items from a large population containing a small number of "defective" items. We show that a reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. In the asymptotic regime with the population size $N \rightarrow \infty$, to identify $L$ non-defective items out of a population containing $K$ defective items, when the tests are reliable, our results show that $\frac{C_s K}{1-o(1)} (\Phi(\alpha_0, \beta_0) + o(1))$ measurements are sufficient, where $C_s$ is a constant independent of $N$, $K$ and $L$, and $\Phi(\alpha_0, \beta_0)$ is a bounded function of $\alpha_0 \triangleq \lim_{N\rightarrow \infty} \frac{L}{N-K}$ and $\beta_0 \triangleq \lim_{N\rightarrow \infty} \frac{K}{N-K}$. Further, in the nonadaptive group testing setup, we obtain rigorous upper and lower bounds on the number of tests under both dilution and additive noise models. Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse signal based applications such as compressive sensing. Comment: 32 pages, 2 figures, 3 tables; revised version of a paper submitted to IEEE Trans. Inf. Theory.
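    The combinatorial idea that makes identifying non-defective items cheaper is simple in the noiseless case: every item that appears in at least one negative pool is certainly healthy. The sketch below uses toy sizes, random pooling and no noise, and is not the paper's mutual-information analysis; it just certifies far more than L healthy items from a handful of tests.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, L = 1000, 10, 100                     # population, defectives, healthy items wanted (toy sizes)
defective = np.zeros(N, dtype=bool)
defective[rng.choice(N, size=K, replace=False)] = True

T = 60                                      # number of nonadaptive pooled tests (toy choice)
pools = rng.random((T, N)) < 1.0 / K        # each item joins each pool independently with prob ~1/K
outcomes = (pools & defective).any(axis=1)  # a noiseless test is positive iff its pool has a defective

# Any item that appears in at least one negative pool is certainly non-defective.
in_negative_pool = pools[~outcomes].any(axis=0)
certified_healthy = np.flatnonzero(in_negative_pool)

print(len(certified_healthy) >= L)                  # typically True: plenty of certified-healthy items
print(bool(defective[certified_healthy].any()))     # always False: noiseless certification cannot err
```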

    Improved Bounds for Universal One-Bit Compressive Sensing

    Unlike compressive sensing, where the measurement outputs are assumed to be real-valued and have infinite precision, in "one-bit compressive sensing" measurements are quantized to one bit, their signs. In this work, we show how to recover the support of sparse high-dimensional vectors in the one-bit compressive sensing framework with an asymptotically near-optimal number of measurements. We also improve the bounds on the number of measurements for approximately recovering vectors from one-bit compressive sensing measurements. Our results are universal, namely the same measurement scheme works simultaneously for all sparse vectors. Our proof of optimality for support recovery is obtained by showing an equivalence between the task of support recovery using 1-bit compressive sensing and a well-studied combinatorial object known as Union Free Families. Comment: 14 pages.
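    The measurement model is easy to state in a few lines. The sketch below uses a plain correlation estimator with arbitrary toy sizes, not the paper's universal scheme: even after keeping only the signs of Gaussian projections, the largest entries of A^T y already point at the support.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, m = 5000, 5, 1500                  # dimension, sparsity, number of one-bit measurements (toy sizes)
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.choice([-1.0, 1.0], k)
x /= np.linalg.norm(x)                   # the signs destroy the scale, so only the direction matters

A = rng.normal(size=(m, n))
y = np.sign(A @ x)                       # one-bit compressive sensing: keep only the signs

# Simple (non-universal) estimator: correlate the sign pattern back through A,
# then keep the k largest entries as the estimated support.
proxy = A.T @ y / m
est_support = np.argsort(np.abs(proxy))[-k:]

print(set(map(int, est_support)) == set(map(int, support)))   # typically True at these sizes
```

    With more structure (e.g., convex programs over the unit ball), the same sign measurements also yield approximate recovery of the vector itself, which is the regime the paper's improved bounds address.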