Learned Block-based Hybrid Image Compression
Recent works on learned image compression perform encoding and decoding
in a full-resolution manner, which causes two problems in practical
deployment. First, the autoregressive entropy model cannot be accelerated in
parallel because decoding is serial. Second, full-resolution inference often
triggers the out-of-memory (OOM) problem on limited GPU resources, especially
for high-resolution images. Block partition
is a good design choice to handle the above issues, but it brings about new
challenges in reducing the redundancy between blocks and eliminating block
effects. To tackle the above challenges, this paper provides a learned
block-based hybrid image compression (LBHIC) framework. Specifically, we
introduce explicit intra prediction into a learned image compression framework
to utilize the relation among adjacent blocks. Superior to context modeling by
linear weighting of neighbor pixels in traditional codecs, we propose a
contextual prediction module (CPM) to better capture long-range correlations by
utilizing the strip pooling to extract the most relevant information in
neighboring latent space, thus achieving effective information prediction.
Moreover, to alleviate blocking artifacts, we further propose a boundary-aware
postprocessing module (BPM) that takes edge importance into account.
Extensive experiments demonstrate that the proposed LBHIC codec outperforms
VVC with a bit-rate saving of 4.1% and reduces decoding time by approximately
86.7% compared with state-of-the-art learned image compression methods.

Comment: 13 pages, 13 figures, accepted by IEEE Trans. on Circuits and Systems
for Video Technology
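
The strip pooling used in the CPM can be illustrated with a minimal NumPy sketch. The actual CPM is a learned module operating on latent features of neighboring blocks; the function below (a hypothetical helper, not from the paper) only shows the core pooling operation: averaging a feature map along one spatial axis at a time and broadcasting the result back, so each position aggregates context from its entire row and column rather than a small local window.

```python
import numpy as np

def strip_pool(x):
    """Plain strip pooling over a feature map of shape (C, H, W):
    average along rows and columns separately, then broadcast each
    1-D context back to the full spatial grid and fuse by summation.
    This captures long-range dependencies along both axes."""
    # Horizontal strip: average each row -> (C, H, 1)
    row_ctx = x.mean(axis=2, keepdims=True)
    # Vertical strip: average each column -> (C, 1, W)
    col_ctx = x.mean(axis=1, keepdims=True)
    # Broadcasting expands both contexts to (C, H, W) before adding.
    return row_ctx + col_ctx

# Toy feature map: 2 channels, 3 rows, 4 columns.
x = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
ctx = strip_pool(x)
# ctx[c, i, j] equals mean of row i plus mean of column j in channel c.
```

In the learned setting, the pooled contexts would typically pass through convolutions and a sigmoid gate before modulating the features; the sketch keeps only the pooling itself.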