    Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal Sequences

    Compressive sensing (CS) has recently emerged as a framework for efficiently capturing signals that are sparse or compressible in an appropriate basis. While often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional CS framework and the problem of acquiring a continuous-time signal. In this paper, we attempt to bridge this gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSS's), a collection of functions that trace back to the seminal work by Slepian, Landau, and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This multiband modulated DPSS dictionary can be readily incorporated into the CS framework. We provide theoretical guarantees and practical insight into the use of this dictionary for recovery of sampled multiband signals from compressive measurements.
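
    The construction of the modulated DPSS dictionary described above lends itself to a short sketch. The following is a minimal NumPy/SciPy illustration, not the paper's own code; the signal length, band half-width, and band centers are made-up values, and scipy.signal.windows.dpss supplies the baseband DPSS vectors.

        import numpy as np
        from scipy.signal.windows import dpss

        # Illustrative parameters (not from the paper).
        N = 1024                      # number of samples in the window
        W = 1.0 / 64                  # half-bandwidth of each band (cycles/sample)
        centers = [0.10, 0.25, 0.40]  # normalized center frequencies of the bands

        # Baseband DPSS block: roughly 2*N*W vectors capture most of the energy
        # of sampled signals bandlimited to [-W, W]; a few extra are kept.
        K = int(np.ceil(2 * N * W)) + 4
        base = dpss(N, N * W, Kmax=K).T          # N x K matrix of DPSS vectors

        # Modulate the baseband block to each band center and merge the blocks.
        n = np.arange(N)
        blocks = [np.exp(2j * np.pi * fc * n)[:, None] * base for fc in centers]
        Psi = np.hstack(blocks)                  # N x (K * num_bands) dictionary
        print(Psi.shape)

    A sampled multiband signal is then approximately sparse in Psi, so compressive measurements y = A x can be handled by any standard sparse solver applied to the effective matrix A @ Psi.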

    Uniform Recovery from Subgaussian Multi-Sensor Measurements

    Parallel acquisition systems are employed successfully in a variety of different sensing applications when a single sensor cannot provide enough measurements for a high-quality reconstruction. In this paper, we consider compressed sensing (CS) for parallel acquisition systems when the individual sensors use subgaussian random sampling. Our main results are a series of uniform recovery guarantees which relate the number of measurements required to the basis in which the solution is sparse and certain characteristics of the multi-sensor system, known as sensor profile matrices. In particular, we derive sufficient conditions for optimal recovery, in the sense that the number of measurements required per sensor decreases linearly with the total number of sensors, and demonstrate explicit examples of multi-sensor systems for which this holds. We establish these results by proving the so-called Asymmetric Restricted Isometry Property (ARIP) for the sensing system and use this to derive both nonuniversal and universal recovery guarantees. Compared to existing work, our results not only lead to better stability and robustness estimates but also provide simpler and sharper constants in the measurement conditions. Finally, we show how the problem of CS with block-diagonal sensing matrices can be viewed as a particular case of our multi-sensor framework. Specializing our results to this setting leads to a recovery guarantee that is at least as good as existing results.
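
    To make the multi-sensor measurement model concrete, here is a minimal NumPy sketch of the setup described above: C sensors each view the signal through their own sensor profile matrix and take m subgaussian (here Gaussian) measurements, and the per-sensor measurements are stacked into one system. All dimensions and the diagonal sensor profiles are illustrative assumptions, not the paper's examples.

        import numpy as np

        rng = np.random.default_rng(0)
        N, C, m, s = 256, 4, 32, 10   # signal length, sensors, meas./sensor, sparsity

        # s-sparse test signal.
        x = np.zeros(N)
        x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

        # Sensor profile matrices H_c (here simple diagonal gain patterns).
        H = [np.diag(1.0 + 0.5 * np.cos(2 * np.pi * c * np.arange(N) / N))
             for c in range(C)]

        # Subgaussian (Gaussian) measurement matrices, one per sensor.
        A = [rng.standard_normal((m, N)) / np.sqrt(C * m) for _ in range(C)]

        # Stacked system: y = [A_1 H_1; ...; A_C H_C] x.
        G = np.vstack([A[c] @ H[c] for c in range(C)])
        y = G @ x
        print(G.shape, y.shape)

    Recovery would then apply a standard sparse solver to the stacked matrix G; the guarantees in the paper relate the required m to the sparsity basis and the sensor profiles.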

    Statistical Compressive Sensing of Gaussian Mixture Models

    A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and at achieving accurate reconstruction on average. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements (considerably fewer than the O(k log(N/k)) required by conventional CS, where N is the signal dimension) and with an optimal decoder implemented by linear filtering (significantly faster than the pursuit decoders used in conventional CS), the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation error with probability one, and the bound constant can be efficiently calculated. For signals following Gaussian mixture models, SCS with a piecewise linear decoder is introduced and shown to produce better results for real images than conventional CS based on sparse models.
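
    For the single-Gaussian case, the "optimal decoder implemented by linear filtering" mentioned above is the linear MMSE/MAP estimate. Below is a minimal NumPy sketch under assumed, illustrative choices (the covariance Sigma, the dimensions, and the noiseless measurement model are ours, not values from the paper).

        import numpy as np

        rng = np.random.default_rng(1)
        N, k, M = 128, 8, 24          # dimension, effective rank, measurements

        # Illustrative Gaussian signal model: low-rank covariance plus a small ridge.
        U, _ = np.linalg.qr(rng.standard_normal((N, k)))
        Sigma = U @ np.diag(np.linspace(10.0, 1.0, k)) @ U.T + 1e-3 * np.eye(N)

        x = rng.multivariate_normal(np.zeros(N), Sigma)   # signal drawn from the model
        A = rng.standard_normal((M, N)) / np.sqrt(M)      # Gaussian sensing matrix
        y = A @ x                                         # compressive measurements

        # Linear decoder: x_hat = Sigma A^T (A Sigma A^T)^{-1} y.
        x_hat = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, y)
        print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))

    For a Gaussian mixture, the piecewise linear decoder roughly amounts to running this linear filter once per mixture component and keeping the best-fitting candidate.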

    Sketching for Large-Scale Learning of Mixture Models

    Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set, and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing, which aims at drastically reducing the dimension of high-dimensional signals while preserving the ability to reconstruct them. To perform the estimation task, we derive an iterative algorithm analogous to sparse reconstruction algorithms in the context of linear inverse problems. We exemplify our framework with the compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics on the choice of the sketching procedure and theoretical guarantees of reconstruction. We experimentally show on synthetic data that the proposed algorithm yields results comparable to the classical Expectation-Maximization (EM) technique while requiring significantly less memory and fewer computations when the number of database elements is large. We further demonstrate the potential of the approach on real large-scale data (over 10^8 training samples) for the task of model-based speaker verification. Finally, we draw some connections between the proposed framework and approximate Hilbert space embedding of probability distributions using random features. We show that the proposed sketching operator can be seen as an innovative method to design translation-invariant kernels adapted to the analysis of GMMs. We also use this theoretical framework to derive information preservation guarantees, in the spirit of infinite-dimensional compressive sensing.
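
    Under the random Fourier feature interpretation mentioned above, the sketch is a vector of empirical characteristic-function samples, computed in one pass over the data. The following minimal NumPy sketch uses made-up dimensions and a made-up frequency distribution; it illustrates the idea, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        d, n, m = 10, 20_000, 512         # data dimension, #samples, sketch size

        X = rng.standard_normal((n, d))   # placeholder for the training set
        W = rng.standard_normal((m, d))   # random frequencies w_1, ..., w_m

        # Empirical sketch z_j = (1/n) * sum_i exp(i <w_j, x_i>), accumulated in
        # chunks: a single pass over the data, trivially mergeable across machines.
        z = np.zeros(m, dtype=complex)
        for chunk in np.array_split(X, 100):
            z += np.exp(1j * chunk @ W.T).sum(axis=0)
        z /= n
        print(z.shape)

    Parameter estimation then searches for a GMM whose expected sketch matches z, via the iterative algorithm analogous to sparse reconstruction described in the paper.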