
    Advanced methods and deep learning for video and satellite data compression

    The abstract is provided in the attachment.

    Deep motion‐compensation enhancement in video compression

    This work introduces the multiframe motion-compensation enhancement network (MMCE-Net), a deep-learning tool aimed at improving the performance of current motion-compensation-based video coding standards such as H.265/HEVC. The proposed method improves inter-prediction coding efficiency by enhancing the accuracy of the motion-compensated frame, thereby improving rate-distortion performance. MMCE-Net is a neural network that jointly exploits the predicted coding unit and two co-located coding units from previous reference frames to improve the estimation of the temporal evolution of the scene. This letter describes the architecture of MMCE-Net, how it is integrated into H.265/HEVC, and the corresponding performance.
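
    A minimal PyTorch sketch of such a multiframe enhancement block is given below: it fuses the motion-compensated prediction with two co-located units from earlier reference frames and predicts a residual correction. The layer widths, luma-only inputs, and block size are illustrative assumptions, not the published MMCE-Net architecture.

    import torch
    import torch.nn as nn

    class MMCEBlock(nn.Module):
        """Sketch of a multiframe motion-compensation enhancement block."""
        def __init__(self, channels: int = 64):
            super().__init__()
            # Fuse the three stacked inputs: prediction + two co-located units.
            self.fuse = nn.Conv2d(3, channels, kernel_size=3, padding=1)
            self.body = nn.Sequential(
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, 3, padding=1),
            )

        def forward(self, pred, ref_t1, ref_t2):
            # pred, ref_t1, ref_t2: (N, 1, H, W) luma blocks.
            x = torch.cat([pred, ref_t1, ref_t2], dim=1)
            residual = self.body(self.fuse(x))
            # Enhanced prediction = original prediction + learned correction.
            return pred + residual

    if __name__ == "__main__":
        net = MMCEBlock()
        pred = torch.rand(1, 1, 64, 64)     # motion-compensated coding unit
        ref1 = torch.rand(1, 1, 64, 64)     # co-located unit, frame t-1
        ref2 = torch.rand(1, 1, 64, 64)     # co-located unit, frame t-2
        print(net(pred, ref1, ref2).shape)  # torch.Size([1, 1, 64, 64])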

    Offline and Online Optical Flow Enhancement for Deep Video Compression

    Video compression relies heavily on exploiting the temporal redundancy between video frames, which is usually achieved by estimating and using motion information. In most existing deep video compression networks, the motion information is represented as optical flows, and these networks often adopt pre-trained optical flow estimation networks for motion estimation. The optical flows, however, may be less suitable for video compression for two reasons. First, the optical flow estimation networks were trained to perform inter-frame prediction as accurately as possible, but the optical flows themselves may cost too many bits to encode. Second, the optical flow estimation networks were trained on synthetic data and may not generalize well to real-world videos. We address this twofold limitation by enhancing the optical flows in two stages: offline and online. In the offline stage, we fine-tune a trained optical flow estimation network with the motion information provided by a traditional (non-deep) video compression scheme, e.g., H.266/VVC, as we believe the motion information of H.266/VVC achieves a better rate-distortion trade-off. In the online stage, we further optimize the latent features of the optical flows with a gradient-descent-based algorithm for the video to be compressed, so as to enhance the adaptivity of the optical flows. We conduct experiments on a state-of-the-art deep video compression scheme, DCVC. Experimental results demonstrate that the proposed offline and online enhancement together achieve an average bitrate saving of 12.8% on the tested videos, without increasing the model or computational complexity on the decoder side. Comment: 9 pages, 6 figures.
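
    The online stage amounts to per-video gradient descent on a rate-distortion loss over the flow latent. The sketch below illustrates the idea under stated assumptions: flow_decoder, warp, and rate_estimate are hypothetical stand-ins for the corresponding codec components, not the DCVC API.

    import torch

    def refine_flow_latent(latent, ref_frame, cur_frame,
                           flow_decoder, warp, rate_estimate,
                           lam=0.01, steps=20, lr=1e-3):
        """Optimize a flow latent so the warped reference matches the
        current frame while keeping the estimated bitrate low."""
        latent = latent.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([latent], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            flow = flow_decoder(latent)      # latent -> dense optical flow
            pred = warp(ref_frame, flow)     # motion-compensated prediction
            distortion = torch.mean((pred - cur_frame) ** 2)
            rate = rate_estimate(latent)     # differentiable bit-cost proxy
            loss = distortion + lam * rate   # rate-distortion trade-off
            loss.backward()
            opt.step()
        return latent.detach()

    if __name__ == "__main__":
        # Toy stand-ins just to exercise the loop; a real codec supplies these.
        dec = torch.nn.Conv2d(4, 2, 3, padding=1)                  # latent -> 2-ch flow
        warp = lambda ref, flow: ref + flow.mean(1, keepdim=True)  # toy "warp"
        rate = lambda z: z.abs().mean()                            # toy bit proxy
        z = torch.randn(1, 4, 16, 16)
        ref, cur = torch.rand(1, 1, 16, 16), torch.rand(1, 1, 16, 16)
        z_star = refine_flow_latent(z, ref, cur, dec, warp, rate)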

    CANF-VC++: Enhancing Conditional Augmented Normalizing Flows for Video Compression with Advanced Techniques

    Video has become the predominant medium for information dissemination, driving the need for efficient video codecs. Recent advancements in learned video compression have shown promising results, surpassing traditional codecs in terms of coding efficiency. However, challenges remain in integrating fragmented techniques and incorporating new tools into existing codecs. In this paper, we comprehensively review the state-of-the-art CANF-VC codec and propose CANF-VC++, an enhanced version that addresses these challenges. We systematically explore architecture design, reference frame type, training procedure, and entropy coding efficiency, leading to substantial coding improvements. CANF-VC++ achieves significant Bjøntegaard-Delta rate savings on the conventional datasets UVG, HEVC Class B, and MCL-JCV, outperforming the baseline CANF-VC and even the H.266 reference software VTM. Our work demonstrates the potential of integrating advancements in video compression and serves as inspiration for future research in the field.
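
    Conditional augmented normalizing flows are built from invertible coupling layers whose transform is conditioned on side information such as the motion-compensated prediction. The sketch below shows one conditional affine coupling step; it illustrates the general technique, with assumed channel counts, and is not the CANF-VC++ code.

    import torch
    import torch.nn as nn

    class ConditionalCoupling(nn.Module):
        """One conditional affine coupling step of a normalizing flow."""
        def __init__(self, channels: int = 8, cond_channels: int = 8):
            super().__init__()
            half = channels // 2
            # Predict per-pixel log-scale and shift from (kept half, condition).
            self.net = nn.Sequential(
                nn.Conv2d(half + cond_channels, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 2 * half, 3, padding=1),
            )

        def forward(self, x, cond):
            x_a, x_b = x.chunk(2, dim=1)
            log_s, t = self.net(torch.cat([x_a, cond], dim=1)).chunk(2, dim=1)
            y_b = x_b * torch.exp(log_s) + t       # affine transform
            logdet = log_s.flatten(1).sum(dim=1)   # log|det J| of the step
            return torch.cat([x_a, y_b], dim=1), logdet

        def inverse(self, y, cond):
            y_a, y_b = y.chunk(2, dim=1)
            log_s, t = self.net(torch.cat([y_a, cond], dim=1)).chunk(2, dim=1)
            x_b = (y_b - t) * torch.exp(-log_s)
            return torch.cat([y_a, x_b], dim=1)

    if __name__ == "__main__":
        layer = ConditionalCoupling()
        x, cond = torch.randn(2, 8, 16, 16), torch.randn(2, 8, 16, 16)
        y, logdet = layer(x, cond)
        print(torch.allclose(x, layer.inverse(y, cond), atol=1e-5))  # True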