
    Operational Rate-Distortion Performance of Single-source and Distributed Compressed Sensing

    We consider correlated and distributed sources without cooperation at the encoder. For these sources, we derive the best achievable performance, in the rate-distortion sense, of any distributed compressed sensing scheme under the constraint of high-rate quantization. Moreover, under this model we derive a closed-form expression for the rate gain achieved by taking into account the correlation of the sources at the receiver, and a closed-form expression for the average performance of the oracle receiver for independent and joint reconstruction. Finally, we show experimentally that exploiting the correlation between the sources performs close to optimal, and that the only penalty is due to the missing knowledge of the sparsity support, as in (non-distributed) compressed sensing. Although the derivation is performed in the large-system regime, where signal and system parameters tend to infinity, numerical results show that the equations match simulations for parameter values of practical interest.
    Comment: To appear in IEEE Transactions on Communications
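    As a rough illustration of the oracle receiver mentioned above (a sketch only, not the paper's exact model: the dimensions, Gaussian sensing matrix, and noiseless measurements are assumptions made for this example), the snippet below reconstructs a sparse source from compressive measurements by least squares restricted to the known support.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8                            # signal length, measurements, sparsity

    support = rng.choice(n, size=k, replace=False)  # the oracle knows this support
    x = np.zeros(n)
    x[support] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian sensing matrix
    y = A @ x                                       # noiseless linear measurements

    # Oracle reconstruction: least squares on the columns indexed by the known support.
    x_hat = np.zeros(n)
    x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)

    print("oracle reconstruction error:", np.linalg.norm(x - x_hat))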

    Quantization and Compressive Sensing

    Quantization is an essential step in digitizing signals, and, therefore, an indispensable component of any modern acquisition system. This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, properly accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
    Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing and Its Applications", 201
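    For readers unfamiliar with the quantizer families discussed above, the following minimal Python sketch applies a uniform scalar quantizer and a 1-bit (sign) quantizer to compressive measurements; the step size and problem dimensions are arbitrary choices made for illustration, not values from the chapter.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, k = 64, 256, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x                                   # unquantized compressive measurements

    def uniform_quantize(v, step):
        """Mid-rise uniform scalar quantizer with the given step size."""
        return step * (np.floor(v / step) + 0.5)

    y_uniform = uniform_quantize(y, step=0.05)  # multi-bit scalar quantization
    y_onebit = np.sign(y)                       # 1-bit quantization keeps only the signs

    print("uniform quantization MSE:", np.mean((y - y_uniform) ** 2))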

    Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements

    This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the positioning of the vision sensors. It concentrates on the problem where images are encoded with compressed linear measurements. We propose a geometry-based correlation model in order to describe the common information in pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. We first identify prominent visual features by computing a sparse approximation of a reference image with a dictionary of geometric basis functions. We then pose a regularized optimization problem to estimate the corresponding features in correlated images given by quantized linear measurements. The estimated features have to comply with the compressed information and to represent consistent transformations between images. The correlation model is given by the relative geometric transformations between corresponding features. We then propose an efficient joint decoding algorithm that estimates the compressed images such that they stay consistent with both the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in multi-view datasets. In addition, the proposed algorithm provides effective decoding performance that compares advantageously to independent coding solutions, as well as to state-of-the-art distributed coding schemes based on disparity learning.
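    The sparse approximation step described above can be pictured with a simple matching pursuit over a dictionary; the sketch below uses a random unit-norm dictionary as a stand-in for the geometric basis functions of the paper, so it illustrates the greedy atom selection only, not the actual feature model.

    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        """Greedy sparse approximation: repeatedly pick the atom most correlated
        with the residual and subtract its contribution (atoms must be unit norm)."""
        residual = signal.copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual
            j = np.argmax(np.abs(correlations))
            coeffs[j] += correlations[j]
            residual -= correlations[j] * dictionary[:, j]
        return coeffs, residual

    rng = np.random.default_rng(2)
    D = rng.standard_normal((128, 512))
    D /= np.linalg.norm(D, axis=0)                            # unit-norm atoms
    patch = D[:, [3, 40, 200]] @ np.array([1.0, -0.5, 2.0])   # synthetic "image patch"
    coeffs, residual = matching_pursuit(patch, D, n_atoms=3)
    print("residual norm after 3 atoms:", np.linalg.norm(residual))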

    "Compressed" Compressed Sensing

    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g., discrete-valued vectors or large distortions) the number of samples can be decreased. Interestingly, though, it is also shown that in many cases no reduction is possible.
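    A generic example of the kind of computationally simple thresholding estimator referred to above (not necessarily the exact estimator analyzed in the paper): correlate the measurements with the columns of the sensing matrix, keep the k largest entries, and fit by least squares on that support.

    import numpy as np

    def thresholding_estimate(y, A, k):
        proxy = A.T @ y                               # correlate with each column
        support = np.argsort(np.abs(proxy))[-k:]      # keep the k strongest positions
        x_hat = np.zeros(A.shape[1])
        x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return x_hat

    rng = np.random.default_rng(3)
    n, m, k = 256, 80, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_hat = thresholding_estimate(A @ x, A, k)
    print("recovery error:", np.linalg.norm(x - x_hat))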

    Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds

    Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors is established by comparison with existing achievable bounds. Near-optimality is shown for a wide variety of practically motivated signal models.
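    As a concrete reading of the error metric above, the helper below computes the fraction of positions on which an estimated sparsity pattern disagrees with the true one; normalizing by the true support size is one common convention and may differ from the paper's exact definition.

    def support_error_fraction(true_support, est_support):
        true_set, est_set = set(true_support), set(est_support)
        missed = len(true_set - est_set)          # true positions not detected
        false_alarms = len(est_set - true_set)    # detected positions that are wrong
        return (missed + false_alarms) / len(true_set)

    print(support_error_fraction([1, 5, 9, 12], [1, 5, 7, 12]))  # -> 0.5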

    Blind Sensor Calibration using Approximate Message Passing

    The ubiquity of approximately sparse data has led a variety of communities to take great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them to real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, sensors are decalibrated and each one introduces a different multiplicative gain to the measurements. Cal-AMP shares the scalability of approximate message passing, allowing it to handle large problem instances, and experimentally exhibits a phase transition between domains of success and failure.
    Comment: 27 pages, 9 figures
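    To give a flavor of the message-passing machinery that Cal-AMP builds on, here is a minimal sketch of the plain AMP iteration with soft thresholding for sparse recovery; Cal-AMP additionally estimates per-sensor calibration variables, which is not reproduced here, and the fixed threshold and dimensions are arbitrary illustrative choices.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def amp(y, A, n_iter=30, threshold=0.1):
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(n_iter):
            pseudo_data = x + A.T @ z              # current estimate plus effective noise
            x_new = soft_threshold(pseudo_data, threshold)
            # Onsager correction term keeps the effective noise approximately Gaussian.
            onsager = (np.count_nonzero(x_new) / m) * z
            z = y - A @ x_new + onsager
            x = x_new
        return x

    rng = np.random.default_rng(4)
    n, m, k = 500, 250, 25
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_hat = amp(A @ x_true, A)
    print("relative AMP error:", np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true))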