17 research outputs found
Quality-Gated Convolutional LSTM for Enhancing Compressed Video
The past decade has witnessed great success in applying deep learning to
enhance the quality of compressed video. However, existing approaches aim at
quality enhancement of a single frame, or use only a fixed set of neighboring
frames, and thus fail to take full advantage of the inter-frame correlation in
the video. This paper proposes the Quality-Gated Convolutional Long Short-Term
Memory (QG-ConvLSTM) network with bi-directional recurrent structure to fully
exploit the advantageous information in a large range of frames. More
importantly, because quality fluctuates considerably among compressed frames,
higher-quality frames can provide more useful information for other frames to
enhance quality. Therefore, we propose learning the "forget" and "input" gates
in the ConvLSTM cell from quality-related features. As such, frames of varying
quality contribute to the ConvLSTM memory with different importance, so that
the information in each frame is used reasonably and adequately. Finally,
the experiments validate the effectiveness of our QG-ConvLSTM approach in
advancing the state-of-the-art quality enhancement of compressed video, and the
ablation study shows that our QG-ConvLSTM approach learns to trade off
between quality and correlation when leveraging multi-frame
information. The project page: https://github.com/ryangchn/QG-ConvLSTM.git.
Comment: Accepted to IEEE International Conference on Multimedia and Expo (ICME) 201
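The quality-gated idea above can be illustrated with a minimal sketch. This is a scalar, pure-Python toy (the weights `w_f`, `w_i`, `b` and the `quality` input are hypothetical stand-ins; the paper learns the gates from quality-related feature maps inside a full ConvLSTM cell), showing only the key mechanism: the forget and input gates depend on frame quality, so higher-quality frames write more into the memory.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quality_gated_step(cell, frame_feat, quality, w_f=2.0, w_i=2.0, b=-1.0):
    """One scalar QG-ConvLSTM-style memory update.

    Both gates are computed from the frame's quality score, so a
    high-quality frame contributes more to the memory than a low-quality
    one.  All weights here are hypothetical illustrations, not learned.
    """
    f_gate = sigmoid(w_f * quality + b)   # "forget" gate from quality
    i_gate = sigmoid(w_i * quality + b)   # "input" gate from quality
    return f_gate * cell + i_gate * frame_feat

# A high-quality frame (quality=1.0) writes more of its feature into the
# memory than a low-quality frame (quality=0.1) carrying the same feature.
high = quality_gated_step(cell=0.0, frame_feat=1.0, quality=1.0)
low = quality_gated_step(cell=0.0, frame_feat=1.0, quality=0.1)
```

In the actual network the gates are convolutional and bi-directional, but the asymmetry shown here (`high > low`) is the core of the quality gating.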
Deep learning-based switchable network for in-loop filtering in high efficiency video coding
Video codecs are undergoing a smart transition in this era, and the effect of deep learning on video compression is an area of research that has not yet been fully investigated. The paper’s goal is to reduce the ringing and other artifacts that arise in in-loop filtering when high-efficiency video compression is used. Although much research has been done to lessen this effect, there is still considerable room for improvement. In this paper, we focus on an intelligent solution for improving in-loop filtering in high efficiency video coding (HEVC) using a deep convolutional neural network (CNN). The paper proposes the design and implementation of deep CNN-based loop filtering using a series of 15 CNN networks followed by a combine-and-squeeze network that improves feature extraction. The resulting output is free from double enhancement, and the peak signal-to-noise ratio is improved by 0.5 dB compared to existing techniques. The experiments then demonstrate that pipelining this network into the existing codec and applying it at higher quantization parameters (QP) is more effective than using it separately. Coding efficiency is improved by an average of 8.3% with the switching-based deep CNN in-loop filtering.
Multi-Frame Quality Enhancement for Compressed Video
The past few years have witnessed great success in applying deep learning to
enhance the quality of compressed image/video. The existing approaches mainly
focus on enhancing the quality of a single frame, ignoring the similarity
between consecutive frames. In this paper, we observe that quality fluctuates
heavily across compressed video frames, and thus low-quality frames can be
enhanced using neighboring high-quality frames, a scheme we call Multi-Frame
Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach
for compressed video, as a first attempt in this direction. In our approach, we
firstly develop a Support Vector Machine (SVM) based detector to locate Peak
Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame
Convolutional Neural Network (MF-CNN) is designed to enhance the quality of
compressed video, in which the non-PQF and its nearest two PQFs serve as the
input. The MF-CNN compensates motion between the non-PQF and PQFs through the
Motion Compensation subnet (MC-subnet). Subsequently, the Quality Enhancement
subnet (QE-subnet) reduces compression artifacts of the non-PQF with the help
of its nearest PQFs. Finally, the experiments validate the effectiveness and
generality of our MFQE approach in advancing the state-of-the-art quality
enhancement of compressed video. The code of our MFQE approach is available at
https://github.com/ryangBUAA/MFQE.git
Comment: to appear in CVPR 201
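The PQF-based pipeline above can be sketched in a few lines. This is a simplified stand-in: the detector below just marks local PSNR peaks from a given quality curve, whereas the paper trains an SVM on no-reference quality features, and the `psnr` values in the usage example are invented for illustration.

```python
def find_pqfs(psnr):
    """Stand-in PQF detector: mark frames whose quality is a local peak.
    (The paper uses an SVM-based detector on no-reference features.)"""
    return [t for t in range(1, len(psnr) - 1)
            if psnr[t] > psnr[t - 1] and psnr[t] > psnr[t + 1]]

def nearest_pqfs(t, pqfs):
    """Nearest preceding and following PQFs for a non-PQF at index t;
    these two frames plus the non-PQF form the MF-CNN input."""
    prev = max((p for p in pqfs if p < t), default=None)
    nxt = min((p for p in pqfs if p > t), default=None)
    return prev, nxt

# Hypothetical per-frame PSNR curve: frames 1 and 4 are quality peaks.
psnr = [30.0, 33.0, 31.0, 30.0, 34.0, 32.0]
pqfs = find_pqfs(psnr)
inputs = nearest_pqfs(2, pqfs)  # non-PQF at t=2 pairs with its two PQFs
```

In the full approach, the MC-subnet then warps the two selected PQFs toward the non-PQF before the QE-subnet fuses them to reduce compression artifacts.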