
    Quantization and Compressive Sensing

    Quantization is an essential step in digitizing signals and, therefore, an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
    Comment: 35 pages, 20 figures, to appear in the Springer book "Compressed Sensing and Its Applications", 201
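    As a rough illustration of the scalar quantizers surveyed in this chapter, the sketch below applies a uniform quantizer and a 1-bit (sign) quantizer to compressive measurements. The Gaussian sensing matrix, dimensions, and step size delta are illustrative assumptions, not the chapter's specific designs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and a Gaussian sensing matrix (assumptions, not the chapter's design).
n, m, s = 256, 80, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# s-sparse signal and its compressive measurements.
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x

# Uniform scalar quantizer with step size delta: round each measurement to the nearest level.
delta = 0.05
y_uniform = delta * np.round(y / delta)

# 1-bit scalar quantizer: keep only the sign of each measurement (amplitude information is lost).
y_onebit = np.sign(y)

print("max uniform quantization error:", np.max(np.abs(y - y_uniform)))  # bounded by delta/2
```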

    Compressed sensing performance bounds under Poisson noise

    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2-ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
    Comment: 12 pages, 3 PDF figures; accepted for publication in IEEE Transactions on Signal Processing
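    The sketch below illustrates the two ingredients described above: a nonnegative, flux-preserving sensing matrix and a penalized objective combining a negative Poisson log-likelihood with a sparsity penalty. The column-normalized random matrix and the ℓ1 penalty are simple stand-ins chosen for illustration, not necessarily the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 64, 256

# One simple nonnegative, flux-preserving sensing matrix: nonnegative entries with each
# column summing to one, so the expected total photon count is not amplified.
# (An illustrative construction, not necessarily the one analyzed in the paper.)
A = rng.random((m, n))
A /= A.sum(axis=0, keepdims=True)

# Nonnegative, sparse intensity vector and Poisson-corrupted measurements.
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.uniform(50, 200, 8)
y = rng.poisson(A @ x_true)

def objective(x, tau=1.0, eps=1e-12):
    """Negative Poisson log-likelihood (up to a constant) plus an l1 sparsity penalty."""
    Ax = A @ x
    return np.sum(Ax) - np.sum(y * np.log(Ax + eps)) + tau * np.sum(np.abs(x))

print(objective(x_true))
```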

    Modulated Unit-Norm Tight Frames for Compressed Sensing

    In this paper, we propose a compressed sensing (CS) framework that consists of three parts: a unit-norm tight frame (UTF), a random diagonal matrix, and a column-wise orthonormal matrix. We prove that this structure satisfies the restricted isometry property (RIP) with high probability if the number of measurements is m = O(s log²s log²n) for s-sparse signals of length n and if the column-wise orthonormal matrix is bounded. Some existing structured sensing models can be studied under this framework, which then gives tighter bounds on the required number of measurements to satisfy the RIP. More importantly, we propose several structured sensing models by appealing to this unified framework, such as a general sensing model with arbitrary/deterministic subsamplers, a fast and efficient block compressed sensing scheme, and structured sensing matrices with deterministic phase modulations, all of which can lead to improvements in practical applications. In particular, one of the constructions is applied to simplify the transceiver design of CS-based channel estimation for orthogonal frequency division multiplexing (OFDM) systems.
    Comment: submitted to IEEE Transactions on Signal Processing
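    A minimal numerical sketch of two of the three ingredients is given below: a harmonic unit-norm tight frame and a random ±1 diagonal modulation. The harmonic construction, the dimensions, and the omission of the bounded column-wise orthonormal stage are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
p, N = 32, 128   # illustrative frame dimensions (assumptions)

# Harmonic unit-norm tight frame: keep p rows of the unitary DFT and rescale
# so that every column (frame vector) has unit norm.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # unitary N x N DFT
T = np.sqrt(N / p) * F[:p, :]                   # p x N unit-norm tight frame

# Check the UTF properties: unit-norm columns and frame operator (N/p) * I.
col_norms = np.linalg.norm(T, axis=0)
frame_op = T @ T.conj().T
print(np.allclose(col_norms, 1.0), np.allclose(frame_op, (N / p) * np.eye(p)))

# Random diagonal modulation with +/-1 signs, the second ingredient of the framework.
D = np.diag(rng.choice([-1.0, 1.0], size=N))
modulated = T @ D    # modulated UTF; composing with a bounded column-wise orthonormal
                     # matrix (e.g., a partial Hadamard) would complete the construction
```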

    Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal Sequences

    Compressive sensing (CS) has recently emerged as a framework for efficiently capturing signals that are sparse or compressible in an appropriate basis. While often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional CS framework and the problem of acquiring a continuous-time signal. In this paper, we attempt to bridge this gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSS's), a collection of functions that trace back to the seminal work by Slepian, Landau, and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This multiband modulated DPSS dictionary can be readily incorporated into the CS framework. We provide theoretical guarantees and practical insight into the use of this dictionary for the recovery of sampled multiband signals from compressive measurements.
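    A minimal sketch of building such a multiband modulated DPSS dictionary is shown below, using SciPy's DPSS routine. The signal length, per-band bandwidth, number of DPSS vectors kept per band, and band centers are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

# Illustrative parameters (assumptions): signal length, half-bandwidth of each band,
# number of DPSS vectors kept per band, and the band centers of the multiband signal.
M = 256                      # number of samples
W = 1 / 64                   # digital half-bandwidth of each narrow band
K = int(2 * M * W) + 4       # DPSS vectors kept per band (slightly more than 2MW)
band_centers = [-0.30, 0.05, 0.25]   # normalized band-center frequencies

# Baseband DPSS vectors: an efficient basis for sampled signals bandlimited to [-W, W].
S = dpss(M, M * W, Kmax=K)           # shape (K, M)

# Modulate the baseband DPSS basis to each band center and merge into one dictionary.
n_idx = np.arange(M)
blocks = [S.T * np.exp(2j * np.pi * fc * n_idx)[:, None] for fc in band_centers]
Psi = np.hstack(blocks)              # M x (K * number of bands) multiband DPSS dictionary

print(Psi.shape)  # most sampled multiband signals are well approximated by few columns
```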

    Sharp Time–Data Tradeoffs for Linear Inverse Problems

    In this paper we characterize sharp time-data tradeoffs for optimization problems used for solving linear inverse problems. We focus on the minimization of a least-squares objective subject to a constraint defined as the sub-level set of a penalty function. We present a unified convergence analysis of the gradient projection algorithm applied to such problems. We sharply characterize the convergence rate associated with a wide variety of random measurement ensembles in terms of the number of measurements and structural complexity of the signal with respect to the chosen penalty function. The results apply to both convex and nonconvex constraints, demonstrating that a linear convergence rate is attainable even though the least squares objective is not strongly convex in these settings. When specialized to Gaussian measurements our results show that such linear convergence occurs when the number of measurements is merely 4 times the minimal number required to recover the desired signal at all (a.k.a. the phase transition). We also achieve a slower but geometric rate of convergence precisely above the phase transition point. Extensive numerical results suggest that the derived rates exactly match the empirical performance.
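    The sketch below runs gradient projection on a least-squares objective with a nonconvex sparsity constraint (projection onto s-sparse vectors) and Gaussian measurements. The problem sizes, step size, and iteration count are illustrative assumptions rather than the settings analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, m = 400, 10, 120        # illustrative sizes (m a few times the nominal phase transition)

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

def project_sparse(z, s):
    """Euclidean projection onto the (nonconvex) set of s-sparse vectors: keep the s largest entries."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

# Gradient projection on the least-squares objective with a sparsity constraint.
step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative fixed step size (an assumption)
x = np.zeros(n)
errors = []
for _ in range(200):
    x = project_sparse(x - step * A.T @ (A @ x - y), s)
    errors.append(np.linalg.norm(x - x_true))

print(errors[-1])   # the error typically decays geometrically toward zero
```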

    Sampling of graph signals via randomized local aggregations

    Sampling of signals defined over the nodes of a graph is one of the crucial problems in graph signal processing. While sampling is a well-defined operation in classical signal processing, many new challenges arise when we consider a graph signal, and defining an efficient sampling strategy is not straightforward. Recently, several works have addressed this problem. The most common techniques select a subset of nodes to reconstruct the entire signal. However, such methods often require knowledge of the signal support and the computation of the sparsity basis before sampling. Instead, in this paper we propose a new approach to this issue. We introduce a novel technique that combines localized sampling with compressed sensing. We first choose a subset of nodes and then, for each node of the subset, we compute random linear combinations of signal coefficients localized at the node itself and its neighborhood. The proposed method provides theoretical guarantees in terms of reconstruction and stability to noise for any graph and any orthonormal basis, even when the support is not known.
    Comment: IEEE Transactions on Signal and Information Processing over Networks, 201
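    The sketch below assembles the measurement operator described above: a subset of nodes is chosen and each selected node aggregates a random linear combination of the signal on its closed neighborhood. The random graph model, subset size, and graph Fourier sparsity basis are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 60                                   # number of graph nodes (illustrative)

# Random undirected graph (Erdos-Renyi, an assumption) and its adjacency matrix.
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Graph Fourier basis: eigenvectors of the combinatorial Laplacian (one possible orthonormal basis).
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)

# Randomized local aggregations: pick a subset of nodes and, for each, take a random
# linear combination of the signal restricted to the node itself and its neighbors.
sampled = rng.choice(N, size=20, replace=False)
M = np.zeros((len(sampled), N))
for row, v in enumerate(sampled):
    neigh = np.flatnonzero(A[v]).tolist() + [v]      # closed neighborhood of node v
    M[row, neigh] = rng.standard_normal(len(neigh))  # random local combination weights

# Signal that is sparse in the graph Fourier basis, and its localized measurements.
coeffs = np.zeros(N)
coeffs[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)
x = U @ coeffs
y = M @ x           # recovery then uses any standard sparse solver with the matrix M @ U
```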