
    Lossy Compression of Exponential and Laplacian Sources using Expansion Coding

    A general method of source coding over expansions is proposed in this paper, which reduces the problem of compressing an analog (continuous-valued) source to a set of much simpler problems: compressing discrete sources. The focus is on lossy compression of exponential and Laplacian sources, which are expanded over a finite alphabet prior to quantization. Due to the decomposability property of such sources, the random variables obtained after expansion are independent and discrete. Thus, each expanded level corresponds to an independent discrete source coding problem, and the original problem reduces to coding over these parallel sources under a total distortion constraint. Any feasible solution to the resulting optimization problem is an achievable rate-distortion pair for the original continuous-valued source compression problem. Although solving this optimization problem at every distortion level is hard, we show that our expansion coding scheme provides a good solution in the low-distortion regime. Further, by adopting low-complexity codes designed for discrete source coding, the total coding complexity can be made tractable in practice.
    Comment: 8 pages, 3 figures
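    As an illustration of the decomposability property the scheme relies on, here is a minimal numerical sketch (not from the paper; the rate lam, the sample size, and the bit-level range are arbitrary choices). It checks empirically that the bits in the binary expansion of an Exponential(lam) random variable are independent Bernoulli variables with P(bit at level i = 1) = 1/(1 + exp(lam * 2^i)), which is the property that turns each expansion level into its own discrete source.

        # Sketch: check the decomposability of an exponential source by Monte Carlo.
        # Assumptions: lam, the sample size, and the level range are illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        lam = 1.0
        x = rng.exponential(scale=1.0 / lam, size=200_000)

        levels = list(range(-4, 5))  # bit weights 2**i for i in [-4, 4]
        bits = {i: (np.floor(x / 2.0**i).astype(np.int64) & 1) for i in levels}

        for i in levels:
            empirical = bits[i].mean()
            predicted = 1.0 / (1.0 + np.exp(lam * 2.0**i))
            print(f"level {i:+d}: empirical P(bit=1) = {empirical:.4f}, predicted = {predicted:.4f}")

        # Independence across levels shows up as near-zero pairwise correlation.
        corr = np.corrcoef(np.stack([bits[i] for i in levels]))
        print("max |off-diagonal correlation|:", np.abs(corr - np.eye(len(levels))).max())

    A Laplacian source decomposes in the same way by handling its sign bit and its exponentially distributed magnitude separately.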

    Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation

    We study the compressed sensing (CS) signal estimation problem, in which an input signal is measured via linear matrix multiplication under additive noise. While this setup usually assumes sparsity or compressibility of the input signal during recovery, the signal structure that can be leveraged is often not known a priori. In this paper, we consider universal CS recovery, where the statistics of a stationary ergodic signal source are estimated simultaneously with the signal itself. Inspired by Kolmogorov complexity and minimum description length, we focus on a maximum a posteriori (MAP) estimation framework that leverages universal priors to match the complexity of the source. Our framework can also be applied to general linear inverse problems, where more measurements than in CS might be needed. We provide theoretical results that support the algorithmic feasibility of universal MAP estimation via a Markov chain Monte Carlo implementation, which is computationally challenging. We incorporate techniques to accelerate the algorithm while providing reconstruction quality that is comparable to, and in many cases better than, that of existing algorithms. Experimental results show the promise of universality in CS, particularly for low-complexity sources that do not exhibit standard sparsity or compressibility.
    Comment: 29 pages, 8 figures
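    To make the MAP framework concrete, here is a toy sketch in the spirit of the approach, not the paper's implementation: a single-site Metropolis sampler over a quantized signal, with the universal prior replaced by the zero-order empirical coding length of the candidate signal. The alphabet, noise level, matrix sizes, and acceptance rule are all illustrative assumptions.

        # Toy sketch of complexity-matching MAP estimation via MCMC.
        # The "universal prior" is approximated by a zero-order empirical coding
        # length; the paper's prior and proposal scheme are more sophisticated.
        import numpy as np

        def coding_length_bits(w, alphabet):
            # Zero-order empirical coding length (stand-in for a universal code).
            counts = np.array([(w == a).sum() for a in alphabet], dtype=float) + 1e-9
            p = counts / counts.sum()
            return -(counts * np.log2(p)).sum()

        def universal_map_mcmc(y, A, alphabet, sigma, iters=20_000, seed=0):
            rng = np.random.default_rng(seed)
            n = A.shape[1]
            w = rng.choice(alphabet, size=n)  # random initial estimate

            def energy(w):
                r = y - A @ w  # Gaussian misfit (nats) + prior length (bits -> nats)
                return r @ r / (2 * sigma**2) + np.log(2) * coding_length_bits(w, alphabet)

            e = energy(w)
            best_w, best_e = w.copy(), e
            for _ in range(iters):
                proposal = w.copy()
                proposal[rng.integers(n)] = rng.choice(alphabet)  # change one entry
                e_new = energy(proposal)
                if e_new <= e or rng.random() < np.exp(e - e_new):  # Metropolis rule
                    w, e = proposal, e_new
                    if e < best_e:
                        best_w, best_e = w.copy(), e
            return best_w

        # Usage on a synthetic two-valued signal measured by a random matrix.
        rng = np.random.default_rng(1)
        n, m = 100, 60
        alphabet = np.array([0.0, 1.0])
        x = rng.choice(alphabet, size=n, p=[0.8, 0.2])
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x + 0.05 * rng.standard_normal(m)
        x_hat = universal_map_mcmc(y, A, alphabet, sigma=0.05)
        print("symbol error rate:", (x_hat != x).mean())

    Annealing the acceptance temperature, as is common for MAP-style MCMC, typically improves the final estimate; it is omitted here for brevity.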