Multi-Frame Quality Enhancement for Compressed Video
The past few years have witnessed great success in applying deep learning to
enhance the quality of compressed image/video. The existing approaches mainly
focus on enhancing the quality of a single frame, ignoring the similarity
between consecutive frames. In this paper, we observe that heavy quality
fluctuation exists across compressed video frames, and thus low-quality frames
can be enhanced using neighboring high-quality frames, which we refer to as
Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an
MFQE approach for compressed video, as a first attempt in this direction. In
our approach, we first develop a Support Vector Machine (SVM)-based detector
to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel
Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the
quality of compressed video, taking a non-PQF and its two nearest PQFs as
input. The MF-CNN compensates motion between the non-PQF and the PQFs through
the
Motion Compensation subnet (MC-subnet). Subsequently, the Quality Enhancement
subnet (QE-subnet) reduces compression artifacts of the non-PQF with the help
of its nearest PQFs. Finally, the experiments validate the effectiveness and
generality of our MFQE approach in advancing the state-of-the-art quality
enhancement of compressed video. The code of our MFQE approach is available at
https://github.com/ryangBUAA/MFQE.git
Comment: to appear in CVPR 201
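The described pipeline can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: the local-quality-maximum rule stands in for the SVM-based PQF detector, and `compensate_fn` / `enhance_fn` stand in for the MC-subnet and QE-subnet.

```python
def enhance_video(frames, quality_scores, enhance_fn, compensate_fn):
    """Sketch of the MFQE idea: locate peak-quality frames (PQFs), then
    enhance each non-PQF using its nearest preceding and following PQFs."""
    n = len(frames)
    # Stand-in PQF detector: a frame whose quality is a local maximum.
    pqf = [i for i in range(n)
           if quality_scores[i] >= max(quality_scores[max(0, i - 1)],
                                       quality_scores[min(n - 1, i + 1)])]
    out = list(frames)
    for i in range(n):
        if i in pqf:
            continue  # PQFs are left unchanged
        prev = max((p for p in pqf if p < i), default=None)
        nxt = min((p for p in pqf if p > i), default=None)
        # Motion-compensate each available PQF toward the non-PQF,
        # then enhance the non-PQF using the aligned references.
        refs = [compensate_fn(frames[p], frames[i])
                for p in (prev, nxt) if p is not None]
        out[i] = enhance_fn(frames[i], refs)
    return out
```

With trained subnets in place of the stand-in callables, the same control flow applies per frame of a decoded sequence.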
Two-Stream Action Recognition-Oriented Video Super-Resolution
We study the video super-resolution (SR) problem for facilitating video
analytics tasks, e.g. action recognition, instead of for visual quality. The
popular action recognition methods based on convolutional networks, exemplified
by two-stream networks, are not directly applicable on video of low spatial
resolution. This can be remedied by performing video SR prior to recognition,
which motivates us to improve the SR procedure for recognition accuracy.
Tailored for two-stream action recognition networks, we propose two video SR
methods for the spatial and temporal streams respectively. On the one hand, we
observe that regions with action are more important to recognition, and we
propose an optical-flow guided weighted mean-squared-error loss for our
spatial-oriented SR (SoSR) network to emphasize the reconstruction of moving
objects. On the other hand, we observe that existing video SR methods incur
temporal discontinuity between frames, which also worsens the recognition
accuracy, and we propose a siamese network for our temporal-oriented SR (ToSR)
training that emphasizes the temporal continuity between consecutive frames. We
perform experiments using two state-of-the-art action recognition networks and
two well-known datasets--UCF101 and HMDB51. Results demonstrate the
effectiveness of our proposed SoSR and ToSR in improving recognition accuracy.
Comment: Accepted to ICCV 2019. Code:
https://github.com/AlanZhang1995/TwoStreamS
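The optical-flow guided weighted loss can be illustrated as below. The weighting scheme and the `alpha` parameter are assumptions for illustration, not the paper's exact formulation; the idea is only that pixels with larger flow magnitude (moving objects) contribute more to the reconstruction error.

```python
import numpy as np

def flow_weighted_mse(sr, hr, flow, alpha=1.0):
    """Illustrative flow-weighted MSE: up-weight the reconstruction error
    at pixels with large optical-flow magnitude (moving regions).
    sr, hr: (H, W) images; flow: (H, W, 2) optical-flow field."""
    mag = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)  # per-pixel motion
    w = 1.0 + alpha * mag / (mag.mean() + 1e-8)  # emphasize moving pixels
    return float(np.mean(w * (sr - hr) ** 2))
```

With zero flow everywhere, this reduces to plain MSE; as motion concentrates in a region, errors there dominate the loss.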
Depth Superresolution using Motion Adaptive Regularization
Spatial resolution of depth sensors is often significantly lower compared to
that of conventional optical cameras. Recent work has explored the idea of
improving the resolution of depth using higher-resolution intensity as side
information. In this paper, we demonstrate that further incorporating temporal
information in videos can significantly improve the results. In particular, we
propose a novel approach that improves depth resolution, exploiting the
space-time redundancy in the depth and intensity using motion-adaptive low-rank
regularization. Experiments confirm that the proposed approach substantially
improves the quality of the estimated high-resolution depth. Our approach can
serve as a first component in vision systems that rely on high-resolution
depth information.
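The low-rank step can be roughly illustrated as follows: motion-aligned depth patches are stacked as columns of a matrix, whose singular values are then soft-thresholded to promote the low-rank (temporally redundant) structure. The function and threshold are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def low_rank_shrink(patch_group, tau):
    """Illustrative low-rank regularization: soft-threshold the singular
    values of a matrix of motion-aligned patches (singular-value
    thresholding), suppressing small components such as noise."""
    U, s, Vt = np.linalg.svd(patch_group, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # shrink each singular value by tau
    return U @ np.diag(s) @ Vt
```

In a full reconstruction, a step like this would alternate with a data-fidelity term tying the estimate to the observed low-resolution depth and intensity.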