
    Syndrome-Based Encoding of Compressible Sources for M2M Communication

Data originating from many devices and sensors can be modeled as sparse signals. Efficient compression of such data is therefore essential to reduce bandwidth and transmission power, especially for energy-constrained devices in machine-to-machine communication scenarios. This paper provides an accurate analysis of the operational distortion-rate (ODR) function for syndrome-based source encoders of noisy sparse sources. We derive the probability density function of the error due to both quantization and pre-quantization noise for a class of mixed-distribution sources combining a Bernoulli component with an arbitrary continuous distribution, e.g., Bernoulli-uniform sources. We then derive the ODR for two encoding schemes based on the syndromes of Reed-Solomon (RS) and Bose-Chaudhuri-Hocquenghem (BCH) codes. The presented analysis allows designing a quantizer such that a target average distortion is achieved. As confirmed by numerical results, the closed-form expression for the ODR coincides exactly with simulation, and the performance loss compared to an entropy-based encoder is tolerable.
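The core mechanism, compressing a sparse vector down to its syndrome under a linear code, can be illustrated without the RS/BCH machinery analyzed in the paper. Below is a minimal Python sketch (an illustration chosen here, not the authors' construction) using the binary (7,4) Hamming code: any length-7 binary vector of Hamming weight at most one is recovered exactly from its 3-bit syndrome.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the
# 3-bit binary expansion of j + 1.
H = np.array([[int(b) for b in format(c, "03b")] for c in range(1, 8)]).T

def encode(x):
    """Compress a length-7 binary vector of weight <= 1 to its 3-bit syndrome."""
    return (H @ x) % 2

def decode(s):
    """Recover the weight-<=1 vector from its syndrome by matching columns of H."""
    x = np.zeros(7, dtype=int)
    if s.any():
        pos = int("".join(map(str, s)), 2) - 1  # syndrome value = position + 1
        x[pos] = 1
    return x

x = np.zeros(7, dtype=int)
x[4] = 1                        # a 1-sparse source realization
s = encode(x)                   # 7 bits compressed to 3 bits
assert (decode(s) == x).all()
```

The same principle scales up: with a t-error-correcting BCH or RS code, any vector of weight at most t is determined by its syndrome, which is what makes syndromes usable as compressed descriptions of sparse data.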

    Compression-Based Compressed Sensing

Modern compression algorithms exploit complex structures present in signals to describe them very efficiently. On the other hand, the field of compressed sensing is built upon the observation that "structured" signals can be recovered from an under-determined set of their linear projections. Currently, there is a large gap between the complexity of the structures studied in compressed sensing and those employed by state-of-the-art compression codes. Recent results in the literature on deterministic signals aim at bridging this gap by devising compressed sensing decoders that employ compression codes. This paper focuses on structured stochastic processes and studies the application of rate-distortion codes to compressed sensing of such signals. The performance of the formerly proposed compressible signal pursuit (CSP) algorithm is studied in this stochastic setting. It is proved that, in the very low distortion regime, as the blocklength grows to infinity, the CSP algorithm reliably and robustly recovers n instances of a stationary process from random linear projections, as long as their count is slightly more than n times the rate-distortion dimension (RDD) of the source. It is also shown that, under some regularity conditions, the RDD of a stationary process equals its information dimension (ID). This connection establishes the optimality of the CSP algorithm at least for memoryless stationary sources, for which the fundamental limits are known. Finally, it is shown that the CSP algorithm combined with a family of universal variable-length fixed-distortion compression codes yields a family of universal compressed sensing recovery algorithms.
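CSP has a compact statement: given measurements y = Ax, return the compression codeword whose projection best explains y. The toy Python sketch below uses assumed ingredients (a hand-built codebook of 1-sparse signed vectors standing in for a real rate-distortion code) to show the search in its simplest exhaustive form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                      # ambient dimension, number of measurements

# Toy "compression code": all 1-sparse signals with entries in {-1, +1}.
codebook = [np.eye(n)[i] * s for i in range(n) for s in (-1.0, 1.0)]

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random linear projections
x = codebook[5]                                # true signal is a codeword
y = A @ x

# CSP as exhaustive search: the codeword whose projection best matches y.
x_hat = min(codebook, key=lambda c: np.linalg.norm(y - A @ c))
print("exact recovery:", np.allclose(x_hat, x))
```

In the paper's regime the codebook is an efficient rate-distortion code rather than an enumerable list, and the result quantifies how many measurements (slightly more than the RDD per source symbol) make this minimization succeed reliably.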

Remote Source Coding under Gaussian Noise: Dueling Roles of Power and Entropy Power

The distributed remote source coding (so-called CEO) problem is studied in the case where the underlying source, not necessarily Gaussian, has finite differential entropy and the observation noise is Gaussian. The main result is a new lower bound on the sum-rate-distortion function under arbitrary distortion measures. When specialized to the case of mean-squared error, the bound is shown to exactly mirror a corresponding upper bound, except that the upper bound involves the source power (variance) whereas the lower bound involves the source entropy power. Bounds exhibiting this pleasing duality of power and entropy power have been well known for direct and centralized source coding since Shannon's work. While the bounds hold generally, their value is most pronounced when interpreted as a function of the number of agents in the CEO problem.
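The power/entropy-power duality referenced here already appears in the classical direct source coding bounds. For a source X with variance σ², differential entropy h(X), and MSE distortion D, the standard facts (a reminder for orientation, not the paper's new CEO bound) read:

```latex
N(X) \;=\; \frac{e^{2h(X)}}{2\pi e},
\qquad
\underbrace{\tfrac{1}{2}\log\frac{N(X)}{D}}_{\text{Shannon lower bound}}
\;\le\; R(D) \;\le\;
\underbrace{\tfrac{1}{2}\log\frac{\sigma^2}{D}}_{\text{Gaussian upper bound}}
```

The two bounds coincide exactly when X is Gaussian, since then N(X) = σ²; the paper's contribution is to establish the analogous mirrored pair in the distributed CEO setting.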

    Rate Distortion Behavior of Sparse Sources

This paper studies the rate distortion behavior of sparse memoryless sources that serve as models of sparse signal representations. For the Hamming distortion criterion, R(D) is shown to be essentially linear. For the mean squared error measure, two models are analyzed: the mixed discrete/continuous spike processes and Gaussian mixtures. The latter are shown to be a better model for "natural" data such as sparse wavelet coefficients. Finally, the geometric mean of a continuous random variable is introduced as a sparseness measure. It yields upper and lower bounds on the entropy and thus characterizes high-rate R(D).
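For the discrete side of such models, the Hamming-distortion behavior can be explored through the classical Bernoulli rate-distortion function R(D) = h(p) − h(D) for D ≤ min(p, 1−p), a standard formula used below as a toy stand-in for the spike process (Python sketch, illustrative only):

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p = 0.05                        # sparsity: P(X != 0)
D = np.linspace(1e-3, p, 5)     # Hamming distortion, valid for D <= min(p, 1 - p)
R = h(p) - h(D)                 # classical R(D) of a Bernoulli(p) source
for d, r in zip(D, R):
    print(f"D = {d:.3f}  ->  R(D) = {r:.3f} bits")
```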

    Optimal Phase Transitions in Compressed Sensing

Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear, and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity, gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime.
Comment: to appear in IEEE Transactions on Information Theory.
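The functional of the input distribution at work here is the (Rényi) information dimension; for the canonical sparse prior, a mixture of a point mass at zero and an absolutely continuous component, it reduces to the sparsity level (standard definitions, stated for orientation):

```latex
d(X) \;=\; \lim_{m\to\infty}\frac{H\!\left(\lfloor mX \rfloor / m\right)}{\log m},
\qquad
X \sim (1-\varepsilon)\,\delta_0 + \varepsilon\,P_{\mathrm{ac}}
\;\Longrightarrow\; d(X) = \varepsilon
```

In the noiseless setting this means that for ε-sparse i.i.d. inputs the optimal threshold sits at ε measurements per source symbol, which is the benchmark against which the suboptimal algorithmic thresholds are compared.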

    Distributed Scalar Quantization for Computing: High-Resolution Analysis and Extensions

Communication of quantized information is frequently followed by a computation. We consider situations of distributed functional scalar quantization: distributed scalar quantization of (possibly correlated) sources followed by centralized computation of a function. Under smoothness conditions on the sources and the function, companding scalar quantizer designs are developed to minimize the mean-squared error (MSE) of the computed function as the quantizer resolution is allowed to grow. Striking improvements over quantizers designed without consideration of the function are possible and are larger in the entropy-constrained setting than in the fixed-rate setting. As extensions to the basic analysis, we characterize a large class of functions for which regular quantization suffices, consider certain functions for which asymptotic optimality is achieved without arbitrarily fine quantization, and allow limited collaboration between source encoders. In the entropy-constrained setting, a single bit per sample communicated between encoders can have an arbitrarily large effect on functional distortion. In contrast, such communication has very little effect in the fixed-rate setting.
Comment: 36 pages, 10 figures.
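A compander-based quantizer is straightforward to prototype. The Python sketch below makes several illustrative assumptions (Gaussian source, g(x) = x³, and a fixed-rate functional point density taken proportional to (f(x) g′(x)²)^{1/3}, the form suggested by high-resolution analysis) and compares a function-aware compander against an ordinary MSE-optimized one on the MSE of the computed function:

```python
import numpy as np

def compander_quantize(x, density, grid, K):
    """K-level companding quantizer with the given point density on grid."""
    dens = density(grid) + 1e-12
    W = np.cumsum(dens)
    W /= W[-1]                                 # compressor: CDF of point density
    w = np.interp(x, grid, W)                  # compress to [0, 1]
    w_q = (np.floor(w * K) + 0.5) / K          # uniform quantizer in compressed domain
    return np.interp(np.clip(w_q, 0, 1), W, grid)  # expand back

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)               # source ~ N(0, 1)
g = lambda t: t**3                             # function computed centrally
grid = np.linspace(-6, 6, 4001)
f = lambda t: np.exp(-t**2 / 2)                # unnormalized source density

lam_fun = lambda t: (f(t) * (3 * t**2) ** 2) ** (1 / 3)  # functional: (f g'^2)^(1/3)
lam_ord = lambda t: f(t) ** (1 / 3)                      # ordinary MSE density

for name, lam in [("functional", lam_fun), ("ordinary", lam_ord)]:
    xq = compander_quantize(x, lam, grid, K=64)
    print(name, "MSE of g:", np.mean((g(x) - g(xq)) ** 2))
```

The functional design spends resolution where |g′| is large (here, the tails), which is exactly the source of the improvements the abstract describes.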

    On the Rate-Distortion Function of Random Vectors and Stationary Sources with Mixed Distributions

The asymptotic (small distortion) behavior of the rate-distortion function of an n-dimensional source vector with mixed distribution is derived. The source distribution is a finite mixture of components such that, under each component distribution, a certain subset of the coordinates has a discrete distribution while the remaining coordinates have a joint density. The expected number of coordinates with a joint density is shown to equal the rate-distortion dimension of the source vector. The exact small-distortion asymptotic behavior of the rate-distortion function of a special but interesting class of stationary information sources is also determined.
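In symbols, the abstract's claim is the small-distortion expansion below (notation assumed here: d̄ denotes the expected number of coordinates with a joint density):

```latex
R(D) \;=\; \frac{\bar{d}}{2}\,\log\frac{1}{D} \;+\; o\!\left(\log\frac{1}{D}\right),
\qquad
\dim_{\mathrm{RD}} \;=\; \lim_{D\to 0}\frac{R(D)}{\tfrac{1}{2}\log\tfrac{1}{D}} \;=\; \bar{d}
```

Only the continuous coordinates drive the leading log(1/D) growth, since discrete coordinates can be described at finite rate with vanishing distortion.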