Lossy Compression of Exponential and Laplacian Sources using Expansion Coding
A general method of source coding over expansion is proposed in this paper,
which reduces the problem of compressing an analog (continuous-valued) source
to a set of much simpler problems: compressing discrete sources. Specifically,
the focus is on lossy compression of exponential and Laplacian sources, which
are expanded over a finite alphabet prior to quantization. Due to the
decomposability property of such sources, the random variables resulting from
the expansion are independent and discrete. Thus, each expanded level
corresponds to an independent discrete source coding problem, and the original
problem reduces to coding over these parallel sources with a total distortion
constraint. Any feasible solution to the resulting optimization problem is an
achievable rate-distortion pair for the original continuous-valued source
compression problem. Although solving this optimization problem at every
distortion level is hard, we show that our expansion coding scheme provides a
good solution in the low-distortion regime. Further, by adopting
low-complexity codes designed for discrete source coding, the total coding
complexity remains tractable in practice.
Comment: 8 pages, 3 figures
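The decomposability this abstract relies on can be checked numerically. As an illustrative sketch (not the authors' code): the binary-expansion bit of an exponential random variable at level i (weight 2^i) is Bernoulli with parameter q/(1+q), where q = exp(-lambda * 2^i), and bits at different levels are independent.

```python
import numpy as np

def expansion_bit(x, level):
    """Bit of the binary expansion of x at the given level (weight 2**level)."""
    return np.floor(x / 2.0**level).astype(np.int64) & 1

rng = np.random.default_rng(0)
lam = 1.0
x = rng.exponential(scale=1.0 / lam, size=500_000)

for level in (-2, -1, 0, 1):
    b = expansion_bit(x, level)
    q = np.exp(-lam * 2.0**level)
    predicted = q / (1.0 + q)  # Bernoulli parameter for this level
    print(f"level {level:2d}: empirical {b.mean():.4f}, predicted {predicted:.4f}")

# Independence across levels: empirical correlation should be near zero.
b0, b1 = expansion_bit(x, 0), expansion_bit(x, 1)
print("correlation between levels 0 and 1:", np.corrcoef(b0, b1)[0, 1])
```

Each expanded level can then be treated as a separate memoryless binary source, which is exactly what makes the parallel discrete coding formulation possible.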
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate several techniques to accelerate the algorithm
while achieving reconstruction quality comparable to, and in many cases better
than, that of existing algorithms. Experimental results show the promise of
universality in CS, particularly for low-complexity sources that do not
exhibit standard sparsity or compressibility.
Comment: 29 pages, 8 figures
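The MAP-over-a-complexity-penalized-posterior idea can be sketched in miniature. The following is an illustrative stand-in, not the paper's algorithm: a Metropolis-style coordinate sampler over a discrete-alphabet signal, with the universal prior replaced by a crude empirical-entropy complexity term; all names and parameters here are assumptions for the toy setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (assumed, for illustration): a length-n binary signal measured
# as y = A x + noise, with fewer measurements than unknowns (m < n).
n, m = 60, 30
alphabet = np.array([0.0, 1.0])
x_true = alphabet[rng.integers(len(alphabet), size=n)]
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

def objective(x, sigma2=0.01**2):
    # Negative log-posterior up to a constant: Gaussian data misfit plus an
    # empirical-entropy "complexity" term standing in for a universal prior.
    misfit = np.sum((y - A @ x) ** 2) / (2 * sigma2)
    _, counts = np.unique(x, return_counts=True)
    p = counts / n
    complexity = -n * (p * np.log2(p)).sum()
    return misfit + complexity

x = alphabet[rng.integers(len(alphabet), size=n)]  # random start
f = objective(x)
best_x, best_f = x.copy(), f
T = 5.0  # temperature, annealed every sweep
for sweep in range(200):
    for i in range(n):
        proposal = x.copy()
        proposal[i] = rng.choice(alphabet)  # resample one coordinate
        fp = objective(proposal)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if fp < f or rng.random() < np.exp((f - fp) / T):
            x, f = proposal, fp
            if f < best_f:
                best_x, best_f = x.copy(), f
    T *= 0.98

print("symbol error rate:", np.mean(best_x != x_true))
```

The paper's actual implementation uses a universal prior and acceleration techniques; this sketch only shows the shape of the sampler.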
Fourth Workshop on Information Theoretic Methods in Science and Engineering : Proceedings
Peer reviewed
Coding mechanisms for communication and compression : analysis of wireless channels and DNA sequencing
This thesis comprises two related but distinct components: coding arguments for communication channels, and an information-theoretic analysis of haplotype assembly. The common thread for both problems is the use of information- and coding-theoretic principles to understand their underlying mechanisms.

For the first class of problems, I study two practical challenges that prevent optimal discrete codes from being used in real communication and compression systems: coding over analog alphabets, and fading. In particular, I use an expansion coding scheme to convert the original analog channel coding and source coding problems into a set of independent discrete subproblems. By adopting optimal discrete codes over the expanded levels, this low-complexity coding scheme can approach the Shannon limit, either exactly or to within a constant ratio. In addition, I design a polar coding scheme to handle the unstable state of fading channels. This novel coding mechanism, which hierarchically utilizes different types of polar codes, is proven to achieve the ergodic capacity of several fading systems without channel state information at the transmitter.

For the second class of problems, I develop an information-theoretic view of haplotype assembly. More precisely, recovery of the target pair of haplotype sequences from short reads is rephrased as a joint source-channel coding problem: two binary messages, representing the haplotypes and the chromosome memberships of the reads, are encoded and transmitted over a channel with erasures and errors, where the channel model reflects salient features of high-throughput sequencing. The focus is on determining the number of reads required for reliable haplotype reconstruction.
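As a minimal sketch of the polar-code building block mentioned above (not code from the thesis): the polar transform maps a bit vector u of length N = 2^n to a codeword x = u F^(⊗n) over GF(2), where F = [[1,0],[1,1]] is Arikan's kernel, and an equivalent butterfly implementation runs in O(N log N).

```python
import numpy as np

F = np.array([[1, 0], [1, 1]], dtype=np.int64)  # Arikan's 2x2 kernel

def polar_transform_matrix(n):
    """n-fold Kronecker power of F, the N x N polar transform for N = 2**n."""
    G = np.array([[1]], dtype=np.int64)
    for _ in range(n):
        G = np.kron(G, F)
    return G % 2

def polar_encode(u):
    """Butterfly (FFT-like) polar encoding of a length-2**n bit vector.

    The stage-h operation XORs the second half of each 2h-block into the
    first half; the stages commute, so any order gives x = u F^(kron n).
    """
    x = np.array(u, dtype=np.int64)
    N = len(x)
    h = 1
    while h < N:
        for start in range(0, N, 2 * h):
            x[start:start + h] ^= x[start + h:start + 2 * h]
        h *= 2
    return x

u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
x_fast = polar_encode(u)
x_slow = (u @ polar_transform_matrix(3)) % 2
print(x_fast, np.array_equal(x_fast, x_slow))
```

The fading-channel scheme in the thesis builds on this transform by assigning different polar codes across a hierarchy of channel states; that construction is beyond this sketch.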