Variable Rate Deep Image Compression with Modulated Autoencoder
Variable rate is a requirement for flexible and adaptable image and video
compression. However, deep image compression methods are optimized for a single
fixed rate-distortion tradeoff. While this can be addressed by training
multiple models for different tradeoffs, the memory requirements increase
proportionally to the number of models. Scaling the bottleneck representation
of a shared autoencoder can provide variable rate compression with a single
shared autoencoder. However, the R-D performance of this simple mechanism
degrades at low bitrates and shrinks the effective range of bitrates.
Addressing these limitations, we formulate the problem of variable
rate-distortion optimization for deep image compression, and propose modulated
autoencoders (MAEs), where the representations of a shared autoencoder are
adapted to the specific rate-distortion tradeoff via a modulation network.
Jointly training this modulated autoencoder and modulation network provides an
effective way to navigate the R-D operational curve. Our experiments show that
the proposed method can achieve almost the same R-D performance as independent
models with significantly fewer parameters.

Comment: Published as a journal paper in IEEE Signal Processing Letters
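The modulation idea above can be sketched in a few lines: a small network maps the rate-distortion tradeoff λ to per-channel scaling factors that adapt the shared encoder's features. The tiny one-layer `modulation` network, its `tanh`/`exp` activations, and all parameter shapes below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def modulation(lmbda, w, b):
    """Hypothetical modulation network: maps a rate-distortion
    tradeoff lambda to positive per-channel scaling factors."""
    h = np.tanh(w @ np.array([lmbda]) + b)  # tiny one-layer MLP (assumed)
    return np.exp(h)                        # exp keeps the scales positive

def modulated_encode(features, lmbda, w, b):
    """Scale each channel of the shared encoder's feature maps
    (channels-first layout) by the lambda-dependent modulation vector."""
    m = modulation(lmbda, w, b)
    return features * m[:, None, None]

rng = np.random.default_rng(0)
C = 8                                   # number of feature channels (assumed)
w, b = rng.normal(size=(C, 1)), rng.normal(size=C)
feats = rng.normal(size=(C, 4, 4))      # stand-in for encoder features
low_rate = modulated_encode(feats, 0.01, w, b)
high_rate = modulated_encode(feats, 1.0, w, b)
print(low_rate.shape)  # (8, 4, 4)
```

The same shared autoencoder weights serve every tradeoff; only the cheap modulation network sees λ, which is how the parameter count stays far below that of independent per-rate models.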
G-VAE: A Continuously Variable Rate Deep Image Compression Framework
Rate adaptation within a single deep image compression model will become one of
the decisive factors in competing with classical image compression codecs.
However, until now there has been no solution that neither increases
computation nor degrades compression performance. In this paper, we propose
a novel image compression framework G-VAE (Gained Variational Autoencoder),
which could achieve continuously variable rate in a single model. Unlike the
previous solutions that encode progressively or change the internal unit of the
network, G-VAE only adds a pair of gain units at the output of the encoder and
the input of the decoder. The design is simple enough that G-VAE can be applied
to almost all existing image compression methods, achieving continuously variable rate with
negligible additional parameters and computation. We also propose a new deep
image compression framework, which outperforms all published results on the
Kodak dataset in PSNR and MS-SSIM metrics. Experimental results show that
adding a pair of gain units will not affect the performance of the basic models
while endowing them with continuously variable rate.
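A minimal sketch of the gain-unit mechanism: a channel-wise gain vector scales the latent before quantization, and a matching inverse gain rescales it at the decoder input. Interpolating between two trained gain vectors to reach intermediate rates is shown here as geometric interpolation, which is an assumption of this sketch rather than a claim about G-VAE's exact scheme:

```python
import numpy as np

def apply_gain(y, gain):
    # Encoder side: channel-wise gain before rounding quantization.
    return np.round(y * gain[:, None, None])

def apply_inverse_gain(y_hat, inv_gain):
    # Decoder side: matching inverse gain on the quantized latent.
    return y_hat * inv_gain[:, None, None]

def interp_gain(g_low, g_high, t):
    # Continuous rates between two trained gain vectors
    # (geometric interpolation, assumed for illustration).
    return g_low ** (1 - t) * g_high ** t

rng = np.random.default_rng(1)
C = 4
y = rng.normal(size=(C, 2, 2)) * 10      # stand-in for an encoder latent
g_low, g_high = np.full(C, 0.5), np.full(C, 4.0)
g = interp_gain(g_low, g_high, 0.5)       # a midpoint rate level
y_hat = apply_gain(y, g)
recon = apply_inverse_gain(y_hat, 1.0 / g)
print(np.abs(recon - y).max())            # bounded by 0.5 / g per element
```

Larger gains quantize the latent more finely (higher rate, lower distortion); since the gain vectors are the only added parameters, the overhead over the base model is negligible, as the abstract states.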
QVRF: A Quantization-error-aware Variable Rate Framework for Learned Image Compression
Learned image compression has exhibited promising compression performance,
but variable bitrates over a wide range remain a challenge. State-of-the-art
variable rate methods sacrifice model performance and require
numerous additional parameters. In this paper, we present a
Quantization-error-aware Variable Rate Framework (QVRF) that utilizes a
univariate quantization regulator to achieve wide-range variable rates within
a single model. Specifically, QVRF defines a quantization regulator vector
coupled with predefined Lagrange multipliers to control the quantization error
of all latent representations for discrete variable rates. Additionally, the
reparameterization method makes QVRF compatible with a round quantizer.
Exhaustive experiments demonstrate that existing fixed-rate VAE-based methods
equipped with QVRF can achieve wide-range continuous variable rates within a
single model without significant performance degradation. Furthermore, QVRF
outperforms contemporary variable-rate methods in rate-distortion performance
with minimal additional parameters.

Comment: 7 pages, 6 figures
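The core of a quantization regulator can be sketched directly from the description above: scale the latent by a scalar `a` before rounding and divide it back afterwards, so larger `a` means finer quantization and a higher rate. The function name and the bare-numpy setting are assumptions for illustration; QVRF applies this inside a trained VAE-based codec:

```python
import numpy as np

def qvrf_quantize(y, a):
    """Quantization with a univariate regulator a: scale, round, rescale.
    Larger a -> finer steps -> lower quantization error, higher bitrate."""
    return np.round(y * a) / a

rng = np.random.default_rng(2)
y = rng.normal(size=1000)  # stand-in for a latent representation
for a in (0.5, 2.0, 8.0):
    err = np.mean((qvrf_quantize(y, a) - y) ** 2)
    print(f"a={a:>4}: quantization MSE {err:.5f}")
```

The MSE shrinks roughly as 1/(12 a²), which is why pairing each regulator value with a predefined Lagrange multiplier traces out distinct points on the rate-distortion curve of a single model.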
Image storage on synthetic DNA using compressive autoencoders and DNA-adapted entropy coders
Over the past years, the ever-growing demand for data storage, most notably
for "cold" data (rarely accessed data), has motivated research into
alternative data storage systems. Because of their biochemical
characteristics, synthetic DNA molecules are now considered serious
candidates for this new kind of storage. This paper presents some results on
lossy image compression methods based on convolutional autoencoders adapted to
DNA data storage, with synthetic DNA-adapted entropic and fixed-length codes.
The model architectures presented here have been designed to efficiently
compress images, encode them into a quaternary code, and finally store them
into synthetic DNA molecules. This work also aims to make the compression
models better suited to the challenges of storing data in DNA, namely that
the DNA writing, storage, and reading processes are error-prone. The main
takeaways of this compressive autoencoder are our latent-space quantization
and the different DNA-adapted entropy coders used to encode the quantized
latent space, which improve on the fixed-length DNA-adapted coders used
previously.

Comment: arXiv admin note: substantial text overlap with arXiv:2203.0998
A Deep Wavelet AutoEncoder Scheme for Image Compression
For many years, and since its introduction, the Discrete Wavelet Transform (DWT) has been used with great success in a wide range of applications, especially image compression and signal denoising. Combined with various approaches, this powerful mathematical tool has shown its strength in compressing images with high compression ratios and good visual quality. This paper attempts to demonstrate that it is needless to follow the classical three-stage compression process of pixel transformation, quantization, and binary coding when compressing images with the baseline method. Instead, we propose a new image compression scheme based on an unsupervised convolutional autoencoder (CAE) that reconstructs the approximation sub-band produced by decomposing the image with the DWT. To evaluate the model's performance, we use the Kodak dataset, containing a set of 24 images never compressed with a lossy algorithm, and apply the approach to each of them. We compare our results with those obtained using a standard compression method, in terms of four performance parameters: Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Compression Ratio (CR). The proposed scheme offers significant improvement in distortion metrics over the traditional image compression method when evaluated for perceptual quality; moreover, it produces images of better visual quality, with clearer details and textures, demonstrating its effectiveness and robustness.
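The wavelet decomposition underlying the scheme can be sketched with a one-level 2-D Haar transform, the simplest DWT; the paper does not specify its wavelet, so Haar is an assumption here. The transform splits the image into an approximation (LL) sub-band, which the proposed CAE would compress and reconstruct, plus three detail sub-bands:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: averages/differences over rows,
    then over columns, giving LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2      # approximation sub-band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
LL, LH, HL, HH = haar_dwt2(img)
# In the proposed scheme, a convolutional autoencoder would compress and
# reconstruct LL; here we only verify the transform round-trips.
print(LL.shape, np.allclose(haar_idwt2(LL, LH, HL, HH), img))
```

Since LL is a quarter of the image size, compressing it with a CAE instead of running the full transform-quantize-code chain is what lets the scheme skip the classical three-stage process.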
Substitutional neural image compression
First author draft