35 research outputs found

    A Temporal Dependency Model for Rate-Distortion Optimization in Video Coding

    Many video codecs use motion compensated prediction to achieve compression efficiency. Motion compensated prediction may produce temporal dependencies across frames. For example, quantization distortion in a block may propagate through motion compensated prediction and affect the coding efficiency of blocks in subsequent frames. Identifying these temporal dependencies may improve rate-distortion optimization and produce coding performance gains. Block-based motion trajectories, and correlations between source pixel blocks along a motion trajectory, may be used to estimate a distortion propagation model, which may represent the correlation between the distortion propagation and the effect of quantization. A temporal dependency model that accounts for both block correlation and the quantization effect may provide compression gains over the use of a distortion propagation model alone.
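    The idea above can be sketched in a few lines: if a block's distortion leaks into later frames along its motion trajectory, the encoder can lower that block's R-D Lagrange multiplier so it receives more bits. The function names and the simple product-sum propagation formula below are illustrative assumptions, not the model from the abstract.

    ```python
    def propagation_factor(correlations):
        """Rough estimate of how much a block's quantization distortion
        propagates to future frames, given per-hop correlations between
        source blocks along its motion trajectory (assumed in [0, 1])."""
        factor = 0.0
        leak = 1.0
        for rho in correlations:
            leak *= rho       # distortion surviving after this hop
            factor += leak    # accumulate leakage over the trajectory
        return factor

    def adjusted_lambda(base_lambda, correlations):
        """Scale the R-D Lagrange multiplier down for blocks whose
        distortion propagates strongly, so they are coded more finely."""
        return base_lambda / (1.0 + propagation_factor(correlations))
    ```

    A block referenced by no later frame keeps its base multiplier, while a block on a long, highly correlated trajectory is quantized more gently.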

    Learned Variable-Rate Image Compression with Residual Divisive Normalization

    Recently, deep learning-based image compression has shown the potential to outperform traditional codecs. However, most existing methods train multiple networks for multiple bit rates, which increases the implementation complexity. In this paper, we propose a variable-rate image compression framework, which employs more Generalized Divisive Normalization (GDN) layers than previous GDN-based methods. Novel GDN-based residual sub-networks are also developed in the encoder and decoder networks. Our scheme also uses a stochastic rounding-based scalable quantization. To further improve the performance, we encode the residual between the input and the reconstructed image from the decoder network as an enhancement layer. To enable a single model to operate with different bit rates and to learn multi-rate image features, a new objective function is introduced. Experimental results show that the proposed framework trained with the variable-rate objective function outperforms all standard codecs such as H.265/HEVC-based BPG and state-of-the-art learning-based variable-rate methods. Comment: 6 pages, 5 figures
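    Stochastic rounding, mentioned above as the basis of the scalable quantizer, is a standard trick worth making concrete: round up with probability equal to the fractional part, so the rounding is unbiased in expectation and gradients are meaningful during training. This is a minimal sketch of generic stochastic rounding, not the paper's full scalable quantization scheme.

    ```python
    import math
    import random

    def stochastic_round(x):
        """Round x to an integer at random: up with probability equal to
        the fractional part, down otherwise, so E[stochastic_round(x)] = x."""
        lo = math.floor(x)
        return lo + (1 if random.random() < (x - lo) else 0)
    ```

    For example, `stochastic_round(2.4)` returns 3 about 40% of the time and 2 otherwise, so its long-run average is 2.4.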

    Luma/Chroma Component Wise Weighted Pixel Inter Prediction

    After a prediction for a current block is obtained using inter prediction, block adaptive local weighted prediction (BAWP) can be used to adjust the prediction based on luminance (i.e., brightness) and/or chroma (i.e., color) differences between the current block and its reference block, thereby improving the prediction quality and reducing the differences, which in turn can reduce the prediction residuals that are encoded. Techniques that reduce the signaling costs associated with signaling BAWP for the luminance and chrominance components are described.
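    The adjustment described above amounts to an affine correction of the predictor, adjusted = w · pred + o. One common way such parameters can be obtained is a least-squares fit over already-decoded neighboring pixels; the sketch below assumes that derivation and flat pixel lists, and is not the signaling scheme from this disclosure.

    ```python
    def derive_weight_offset(ref_neighbors, cur_neighbors):
        """Least-squares fit cur ≈ w * ref + o over already-decoded
        neighboring pixels (an assumed, illustrative derivation)."""
        n = len(ref_neighbors)
        mean_r = sum(ref_neighbors) / n
        mean_c = sum(cur_neighbors) / n
        cov = sum((r - mean_r) * (c - mean_c)
                  for r, c in zip(ref_neighbors, cur_neighbors))
        var = sum((r - mean_r) ** 2 for r in ref_neighbors)
        w = cov / var if var else 1.0   # fall back to identity weight
        o = mean_c - w * mean_r
        return w, o

    def apply_weighted_prediction(pred_block, w, o):
        """Apply the affine correction to each predicted sample."""
        return [w * p + o for p in pred_block]
    ```

    Fitting on neighbors rather than the current block itself is what lets the decoder derive the same parameters without extra signaling; the signaling-cost techniques in the disclosure address the cases where the parameters are sent explicitly per component.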

    Fast and High-Performance Learned Image Compression With Improved Checkerboard Context Model, Deformable Residual Module, and Knowledge Distillation

    Deep learning-based image compression has made great progress recently. However, many leading schemes use a serial context-adaptive entropy model to improve the rate-distortion (R-D) performance, which is very slow. In addition, the complexities of the encoding and decoding networks are quite high and not suitable for many practical applications. In this paper, we introduce four techniques to balance the trade-off between complexity and performance. First, we are the first to introduce a deformable convolutional module in a compression framework, which can remove more redundancies in the input image, thereby enhancing compression performance. Second, we design a checkerboard context model with two separate distribution parameter estimation networks and different probability models, which enables parallel decoding without sacrificing performance compared to the sequential context-adaptive model. Third, we develop an improved three-step knowledge distillation and training scheme to achieve different trade-offs between the complexity and the performance of the decoder network, which transfers both the final and intermediate results of the teacher network to the student network to help its training. Fourth, we introduce L1 regularization to make the numerical values of the latent representation more sparse. We then encode only the non-zero channels in the encoding and decoding process, which can greatly reduce the encoding and decoding time. Experiments show that compared to the state-of-the-art learned image coding scheme, our method can be about 20 times faster in encoding and 70-90 times faster in decoding, and our R-D performance is also 2.3% higher. Our method outperforms the traditional approach in H.266/VVC-intra (4:4:4) and some leading learned schemes in terms of PSNR and MS-SSIM metrics when tested on the Kodak and Tecnick-40 datasets. Comment: Submitted to Trans. Journal
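    The checkerboard context model mentioned above owes its speed to a simple spatial split: "anchor" latent positions are decoded first, all in parallel, and the remaining positions are then decoded in parallel conditioned on their decoded anchor neighbors, replacing a fully sequential raster scan with two passes. The sketch below only builds the two masks; the function name and list-of-lists representation are assumptions, not this paper's implementation.

    ```python
    def checkerboard_masks(h, w):
        """Split an h x w latent grid into complementary checkerboard
        masks: anchors (decoded first, in parallel) and non-anchors
        (decoded second, conditioned on their anchor neighbors)."""
        anchor = [[(i + j) % 2 == 0 for j in range(w)] for i in range(h)]
        non_anchor = [[not a for a in row] for row in anchor]
        return anchor, non_anchor
    ```

    Because every non-anchor position's four spatial neighbors are anchors, the second pass sees the same local context a sequential model would, which is why the abstract can claim parallel decoding without a performance sacrifice.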