A Universal Scheme for Wyner–Ziv Coding of Discrete Sources
We consider the Wyner–Ziv (WZ) problem of lossy compression in which the decompressor observes a noisy version of the source and the source statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
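The compressor side of the pipeline described above, sliding-window processing followed by LZ compression, can be sketched in a few lines of Python. Everything below is an illustrative stand-in, not the paper's scheme: the window rule is a simple majority vote, and `zlib` plays the role of the LZ stage.

```python
import zlib

def sliding_window_quantize(source_bits, k=2):
    """Map each bit to the majority vote of its length-(2k+1) window.

    A crude stand-in for the paper's sliding-window stage (illustrative
    choice, not the actual scheme): smoothing the sequence makes the
    LZ stage that follows compress it more effectively.
    """
    n = len(source_bits)
    out = []
    for i in range(n):
        window = source_bits[max(0, i - k): i + k + 1]
        out.append(1 if 2 * sum(window) > len(window) else 0)
    return out

def wz_style_compress(source_bits, k=2):
    """Sliding-window processing followed by LZ compression (zlib here)."""
    processed = sliding_window_quantize(source_bits, k)
    payload = bytes(processed)  # one byte per bit; fine for a sketch
    return zlib.compress(payload)

# A mostly-constant binary source compresses far below its raw length.
bits = [0] * 900 + [1] * 100
blob = wz_style_compress(bits)
```

A real decoder would additionally run a DUDE-style denoiser over the decompressed sequence together with the side information; that stage is omitted here.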
Generative Compression
Traditional image and video compression algorithms rely on hand-crafted
encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the
data being compressed. Here we describe the concept of generative compression,
the compression of data using generative models, and suggest that it is a
direction worth pursuing to produce more accurate and visually pleasing
reconstructions at much deeper compression levels for both image and video
data. We also demonstrate that generative compression is orders-of-magnitude
more resilient to bit error rates (e.g. from noisy wireless channels) than
traditional variable-length coding schemes.
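The resilience claim can be illustrated with a toy experiment (entirely our own construction, not from the paper): in a prefix (variable-length) code, a single flipped bit can shift every subsequent symbol boundary, whereas a fixed-length code, loosely analogous to a fixed-size latent representation, confines the damage to one symbol.

```python
# Toy prefix (variable-length) code: one early bit flip desynchronizes
# the decoder, so symbol boundaries shift for the rest of the stream.
VLC = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
INV = {v: k for k, v in VLC.items()}

def vlc_decode(bitstring):
    out, buf = [], ''
    for b in bitstring:
        buf += b
        if buf in INV:
            out.append(INV[buf])
            buf = ''
    return out

msg = ['a', 'b', 'c', 'd'] * 25                    # 100 symbols
bits = ''.join(VLC[s] for s in msg)
flipped = ('1' if bits[0] == '0' else '0') + bits[1:]  # flip one bit
decoded = vlc_decode(flipped)
# Count symbol errors, including any length mismatch from desync.
vlc_errors = sum(x != y for x, y in zip(decoded, msg)) + abs(len(decoded) - len(msg))

# Fixed-length code: the same single bit flip corrupts exactly one symbol.
FLC = {'a': '00', 'b': '01', 'c': '10', 'd': '11'}
INV_F = {v: k for k, v in FLC.items()}
fbits = ''.join(FLC[s] for s in msg)
fflipped = ('1' if fbits[0] == '0' else '0') + fbits[1:]
fdecoded = [INV_F[fflipped[i:i + 2]] for i in range(0, len(fflipped), 2)]
flc_errors = sum(x != y for x, y in zip(fdecoded, msg))
```

Here the variable-length decode ends up misaligned for the whole message while the fixed-length decode loses a single symbol; a generative decoder working from a fixed-size latent degrades similarly gracefully under bit errors.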
Optimizing Image Compression via Joint Learning with Denoising
High levels of noise usually exist in today's captured images due to the
relatively small sensors equipped in the smartphone cameras, where the noise
brings extra challenges to lossy image compression algorithms. Without the
capacity to tell the difference between image details and noise, general image
compression methods allocate additional bits to explicitly store the undesired
image noise during compression and restore the unpleasant noisy image during
decompression. Based on these observations, we optimize the image compression
algorithm to be noise-aware, performing joint denoising and compression to
resolve the bit-misallocation problem. The key is to transform the original noisy images
to noise-free bits by eliminating the undesired noise during compression, where
the bits are later decompressed as clean images. Specifically, we propose a
novel two-branch, weight-sharing architecture with plug-in feature denoisers to
allow a simple and effective realization of the goal with little computational
cost. Experimental results show that our method gains a significant improvement
over the existing baseline methods on both the synthetic and real-world
datasets. Our source code is available at
https://github.com/felixcheng97/DenoiseCompression.
Comment: Accepted to ECCV 2022
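A minimal sketch of the two-branch, weight-sharing idea: both branches project through the same shared weights, and the plug-in denoiser on the noisy branch keeps noise out of the latent code. All weights, shapes, and the clamp-based "denoiser" below are invented for illustration and are not the paper's architecture.

```python
def plug_in_denoiser(features):
    # Crude stand-in for a learned feature denoiser: clamp each feature
    # to the expected clean range [0.9, 1.1] (illustrative only).
    return [min(max(v, 0.9), 1.1) for v in features]

W_SHARED = [0.3, -0.2, 0.5, 0.1]  # toy shared encoder weights (illustrative)

def encode(features, denoise=False):
    # Weight-sharing: both branches project through the same W_SHARED,
    # so they differ only in the plug-in denoiser.
    if denoise:
        features = plug_in_denoiser(features)
    return sum(w * v for w, v in zip(W_SHARED, features))

clean = [1.0, 1.0, 1.0, 1.0]
noise = [0.5, -0.5, 0.5, -0.5]
noisy = [c + n for c, n in zip(clean, noise)]

z_clean = encode(clean)
z_plain = encode(noisy)                    # noise leaks into the latent code
z_denoised = encode(noisy, denoise=True)   # denoised branch
```

The denoised branch produces a latent much closer to the clean one, which is the sense in which joint denoising stops bits from being misallocated to noise.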