23 research outputs found

    Fidelity-Controllable Extreme Image Compression with Generative Adversarial Networks

    We propose a GAN-based image compression method that works at extremely low bitrates, below 0.1 bpp. Most existing learned image compression methods suffer from blur at such bitrates. Although GANs can help reconstruct sharp images, they have two drawbacks: GAN training is unstable, and the reconstructions often contain unpleasant noise or artifacts. To address both drawbacks, our method adopts two-stage training and network interpolation. Two-stage training stabilizes the training, while network interpolation combines the models from both stages to reduce undesirable noise and artifacts while preserving important edges. Hence, we can control the trade-off between perceptual quality and fidelity without re-training models. The experimental results show that our model can reconstruct high-quality images, and our user study confirms that our reconstructions are preferred over those of a state-of-the-art GAN-based image compression model. The code will be made available. (8 pages, 11 figures)
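
    The network interpolation step described here can be sketched in a few lines: two models with identical architecture (the stage-1, fidelity-oriented model and the stage-2, GAN-fine-tuned model) are blended weight-by-weight, and the blending coefficient controls the perception-fidelity trade-off without retraining. The snippet below is a minimal PyTorch sketch under that reading; the function name, checkpoint files, and the coefficient value are illustrative placeholders, not the authors' code.

```python
import torch

def interpolate_networks(state_stage1, state_stage2, alpha):
    """Blend two checkpoints that share the same architecture.

    alpha = 0.0 keeps the stage-1 (fidelity-oriented) weights,
    alpha = 1.0 keeps the stage-2 (GAN-fine-tuned) weights.
    """
    return {
        key: (1.0 - alpha) * state_stage1[key] + alpha * state_stage2[key]
        for key in state_stage1
    }

# Hypothetical usage; `Decoder` and the checkpoint paths are placeholders.
# decoder = Decoder()
# blended = interpolate_networks(
#     torch.load("decoder_stage1.pt"), torch.load("decoder_stage2.pt"), alpha=0.8
# )
# decoder.load_state_dict(blended)
```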

    Practical Lossless Compression with Latent Variables using Bits Back Coding

    Deep latent variable models have seen recent success in many data domains. Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner. We present 'Bits Back with ANS' (BB-ANS), a scheme to perform lossless compression with latent variable models at a near-optimal rate. We demonstrate this scheme by using it to compress the MNIST dataset with a variational autoencoder (VAE), achieving compression rates superior to standard methods with only a simple VAE. Given that the scheme is highly amenable to parallelization, we conclude that, with a sufficiently high-quality generative model, this scheme could be used to achieve substantial improvements in compression rate with acceptable running time. We make our implementation available as open source at https://github.com/bits-back/bits-back
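
    The "bits back" bookkeeping behind BB-ANS can be illustrated with a toy calculation: to encode a symbol x, the encoder first pops a latent z off the ANS stack using the inference distribution q(z|x) (reclaiming bits), then pushes x with p(x|z) and z with p(z); the net cost is -log2 p(x|z) - log2 p(z) + log2 q(z|x), whose expectation is the model's negative ELBO in bits. The snippet below only reproduces this accounting with made-up probabilities; it is not the repository's ANS implementation.

```python
import math

def net_bits(p_x_given_z: float, p_z: float, q_z_given_x: float) -> float:
    """Net bits added to the ANS stack when encoding one (x, z) pair."""
    spent = -math.log2(p_x_given_z) - math.log2(p_z)   # pushing x, then z
    reclaimed = -math.log2(q_z_given_x)                # popping z "for free"
    return spent - reclaimed

if __name__ == "__main__":
    # Hypothetical single-symbol probabilities, not values from the paper.
    rate = net_bits(p_x_given_z=0.7, p_z=0.25, q_z_given_x=0.4)
    print(f"net rate for this symbol: {rate:.3f} bits")
```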

    Lossy Image Compression with Conditional Diffusion Models

    Denoising diffusion models have recently marked a milestone in high-quality image generation. One may thus wonder whether they are suitable for neural image compression. This paper outlines an end-to-end optimized image compression framework based on a conditional diffusion model, drawing on the transform-coding paradigm. Besides the latent variables inherent to the diffusion process, it introduces an additional discrete "content" latent variable on which the denoising process is conditioned. This variable is equipped with a hierarchical prior for entropy coding. The remaining "texture" latent variables characterizing the diffusion process are synthesized (either stochastically or deterministically) at decoding time. We furthermore show that the performance can be tuned toward perceptual metrics of interest. Our extensive experiments, involving five datasets and 16 image perceptual quality assessment metrics, show that our approach not only compares favorably in terms of rate and perceptual distortion trade-offs but also performs robustly under all metrics, while other baselines behave less consistently. (Accepted at the ECCV 2022 Workshop on Uncertainty Quantification for Computer Vision)
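
    At decoding time, this amounts to an entropy-decoded content latent steering a reverse diffusion loop whose noise draws play the role of the "texture" variables. The sketch below is a generic DDPM-style sampler conditioned by channel concatenation; the tiny stand-in network, the noise schedule, and the conditioning mechanism are assumptions for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

T = 50  # assumed number of diffusion steps for this sketch
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class TinyDenoiser(nn.Module):
    """Stand-in eps-predictor conditioned on the decoded content latent."""
    def __init__(self, channels=1, latent_channels=4):
        super().__init__()
        self.net = nn.Conv2d(channels + latent_channels, channels, 3, padding=1)

    def forward(self, x_t, t, z_content):
        # Condition by concatenating the content latent with the noisy image
        # (one simple option; this toy network ignores the timestep t).
        return self.net(torch.cat([x_t, z_content], dim=1))

@torch.no_grad()
def decode(z_content, denoiser, shape):
    """Ancestral sampling conditioned on the entropy-decoded content latent."""
    x = torch.randn(shape)  # "texture" latents drawn at decoding time
    for t in reversed(range(T)):
        eps = denoiser(x, t, z_content)
        mean = (x - (betas[t] / torch.sqrt(1 - alpha_bars[t])) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Usage with random placeholders (z_content would come from the entropy decoder):
# x_hat = decode(torch.randn(1, 4, 32, 32), TinyDenoiser(), shape=(1, 1, 32, 32))
```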

    A Deep Wavelet AutoEncoder Scheme for Image Compression

    For many years and since its appearance, the Discrete Wavelet Transform (DWT) has been used with great success in a wide range of applications, especially image compression and signal denoising. Combined with various approaches, this powerful mathematical tool has shown its strength in compressing images with high compression ratios and good visual quality. This paper attempts to demonstrate that it is not necessary to follow the classical three-stage compression process of pixel transformation, quantization, and binary coding used in the baseline method. Instead, we propose a new image compression scheme based on an unsupervised convolutional autoencoder (CAE) that reconstructs the approximation sub-band obtained by decomposing the image with the DWT. To evaluate the model's performance, we use the Kodak dataset, a set of 24 images that have never been compressed with a lossy algorithm, and apply the approach to each of them. We compare our results with those obtained using a standard compression method, in terms of four performance metrics: Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Compression Ratio (CR). The proposed scheme offers a significant improvement in distortion metrics over the traditional image compression method when evaluated for perceptual quality; moreover, it produces images of better visual quality, with clearer details and textures, which demonstrates its effectiveness and robustness.
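
    A minimal version of such a pipeline, under the assumption that only the approximation sub-band passes through the autoencoder while the detail sub-bands are carried through unchanged, could look like the sketch below (using PyWavelets and PyTorch). The wavelet choice ('haar'), the network size, and the PSNR helper are illustrative assumptions, not the authors' configuration, which is evaluated on the Kodak images with SSIM, PSNR, MSE, and CR.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

class TinyCAE(nn.Module):
    """Small convolutional autoencoder acting on the approximation sub-band."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two arrays with the same shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def compress_reconstruct(image, cae):
    """image: 2-D grayscale array with dimensions divisible by 8."""
    cA, details = pywt.dwt2(image.astype(np.float32), "haar")
    x = torch.from_numpy(cA.astype(np.float32))[None, None]  # (1, 1, H/2, W/2)
    with torch.no_grad():
        cA_hat = cae(x)[0, 0].numpy()          # CAE round-trip of cA only
    return pywt.idwt2((cA_hat, details), "haar")  # detail bands kept as-is here

# Usage on a random stand-in image (a Kodak image would be used in practice):
# img = np.random.randint(0, 256, (256, 256)).astype(np.float32)
# rec = compress_reconstruct(img, TinyCAE())
# print(f"PSNR: {psnr(img, rec):.2f} dB")
```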