Generative Compression
Traditional image and video compression algorithms rely on hand-crafted
encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the
data being compressed. Here we describe the concept of generative compression,
the compression of data using generative models, and suggest that it is a
direction worth pursuing to produce more accurate and visually pleasing
reconstructions at much deeper compression levels for both image and video
data. We also demonstrate that generative compression is orders-of-magnitude
more resilient to bit error rates (e.g. from noisy wireless channels) than
traditional variable-length coding schemes.
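The core idea above — replace a hand-crafted codec with a model learned from the data itself, and transmit only a compact latent code — can be sketched with a toy linear example. This uses PCA as a stand-in for the deep generative models the abstract refers to; the data, dimensions, and function names are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy sketch of "generative compression": learn a model of the data and
# transmit only a small latent code, decoding with the learned model.
# PCA here is a stand-in for a deep generative decoder (assumption:
# this is an illustration only, not the actual approach in the paper).

rng = np.random.default_rng(0)

# Fake "dataset" of 500 flattened 8x8 patches with low-rank structure.
basis = rng.normal(size=(4, 64))
data = rng.normal(size=(500, 4)) @ basis

# "Train" on the data itself, so the codec adapts to its statistics
# instead of being agnostic to what it compresses.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:4]  # 4-dim latent: 16x fewer numbers than 64 pixels

def encode(x):
    return (x - mean) @ components.T   # compact latent code

def decode(z):
    return z @ components + mean       # generative reconstruction

patch = data[0]
recon = decode(encode(patch))
print(np.allclose(patch, recon, atol=1e-6))  # True: rank-4 data is recovered
```

Because the latent is a fixed-length vector rather than a variable-length bitstream, a corrupted coefficient perturbs the reconstruction smoothly instead of desynchronizing the decoder, which is one intuition behind the bit-error resilience claim.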
High-Perceptual Quality JPEG Decoding via Posterior Sampling
JPEG is arguably the most popular image coding format, achieving high
compression ratios via lossy quantization that may create visual artifacts.
Numerous attempts to remove these artifacts have been made over
the years, and common to most of these is the use of deterministic
post-processing algorithms that optimize some distortion measure (e.g., PSNR,
SSIM). In this paper we propose a different paradigm for JPEG artifact
correction: Our method is stochastic, and the objective we target is high
perceptual quality -- striving to obtain sharp, detailed and visually pleasing
reconstructed images, while being consistent with the compressed input. These
goals are achieved by training a stochastic conditional generator (conditioned
on the compressed input), accompanied by a theoretically well-founded loss
term, resulting in a sampler from the posterior distribution. Our solution
offers a diverse set of plausible and fast reconstructions for a given input
with perfect consistency. We demonstrate our scheme's unique properties and its
superiority to a variety of alternative methods on the FFHQ and ImageNet
datasets.
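The two properties claimed above — diverse stochastic reconstructions that remain perfectly consistent with the compressed input — can be illustrated with a scalar toy. Uniform dequantization stands in for JPEG's quantized DCT coefficients; the step size, flat-prior assumption, and function names are illustrative, not the paper's trained conditional generator.

```python
import numpy as np

# Toy posterior sampling for dequantization (assumption: a scalar
# stand-in for JPEG's quantized DCT coefficients; the paper trains a
# deep stochastic conditional generator, which this sketch does not).
# A deterministic decoder returns one value per bin; a posterior
# sampler draws many reconstructions, all consistent with the input.

step = 8.0  # quantization step, like one JPEG quantization-table entry

def quantize(x):
    return np.round(x / step) * step

def posterior_sample(y, rng, n=5):
    # Under a flat prior, the posterior over x given quantize(x) == y
    # is uniform over the bin [y - step/2, y + step/2).
    return rng.uniform(y - step / 2, y + step / 2, size=n)

rng = np.random.default_rng(1)
y = quantize(13.7)                 # compressed measurement
samples = posterior_sample(y, rng) # diverse plausible reconstructions

# Consistency: re-compressing any sample returns the same code.
print(np.all(quantize(samples) == y))  # True
```

The same logic scales up in the paper's setting: the generator is conditioned on the compressed input, and the loss term steers its samples toward the posterior over images whose JPEG encoding matches that input exactly.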