
    An MDC-based video streaming architecture for mobile networks


    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Towards that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; and new prediction schemes. Although the initial goal was to develop a single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together accommodate all tools developed during the course of the project.
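    As a rough, hedged illustration of the hybrid DCT-based side of such a framework (this sketch is not taken from the MASCOT deliverables; the 8x8 block size and the quantization step are assumptions chosen for demonstration), the core transform-and-quantize stage of a hybrid codec can be written in a few lines of Python:

        # Minimal sketch of a hybrid codec's transform/quantization stage.
        # Block size and quantization step are illustrative assumptions.
        import numpy as np
        from scipy.fft import dctn, idctn

        def encode_block(block: np.ndarray, q_step: float = 16.0) -> np.ndarray:
            """Forward 8x8 DCT followed by uniform quantization."""
            coeffs = dctn(block, norm="ortho")
            return np.round(coeffs / q_step).astype(np.int32)

        def decode_block(levels: np.ndarray, q_step: float = 16.0) -> np.ndarray:
            """Dequantization followed by inverse DCT."""
            return idctn(levels * q_step, norm="ortho")

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
            levels = encode_block(block)
            recon = decode_block(levels)
            print("max reconstruction error:", np.abs(block - recon).max())

    The prediction and metadata tools described above would sit around such a stage; only the transform and quantizer are shown here.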

    Deliverable D4.2 of the PERSEE project: 3D representation and coding - Intermediate report - Definition of software and architecture

    Deliverable D4.2 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.2 of the project. Its title: 3D representation and coding - Intermediate report - Definition of software and architecture.

    Deep Learning Based Point Cloud Processing and Compression

    Title from PDF of title page, viewed August 24, 2022. Dissertation advisors: Zhu Li and Sejun Song. Vita. Includes bibliographical references (pages 116-137). Dissertation (Ph.D.)--Department of Computer Science & Electrical Engineering, University of Missouri--Kansas City, 2022.
    A point cloud is a 3D data representation that is becoming increasingly popular. Recent significant advances in 3D sensors and capturing techniques have led to a surge in the usage of 3D point clouds in virtual reality/augmented reality (VR/AR) content creation, as well as in 3D sensing for robotics, smart cities, telepresence, and automated driving applications. With the increase in point cloud applications and improved capturing technologies, we now have high-resolution point clouds with millions of points per frame. However, due to the large size of a point cloud, efficient techniques for the transmission, compression, and processing of point cloud content are still widely sought. This thesis addresses multiple issues in the transmission, compression, and processing pipeline for point cloud data. We employ deep learning solutions to process dense as well as sparse 3D point cloud data, for both static and dynamic content. Employing deep learning on point cloud data, which is inherently sparse, is a challenging task. We propose deep learning-based frameworks that address each of the following problems:
    - Point Cloud Compression Artifact Removal. V-PCC is the current state of the art for dynamic point cloud compression, but at lower bitrates it introduces unpleasant artifacts. We propose a deep learning solution for V-PCC artifact removal that leverages the direction-of-projection property in V-PCC to remove quantization noise.
    - Point Cloud Geometry Prediction. Current lossy point cloud compression and processing techniques suffer from quantization loss, which results in a coarser, sub-sampled representation of the point cloud. We address the points lost during voxelization by performing geometry prediction across spatial scales using a deep learning architecture.
    - Point Cloud Geometry Upsampling. Loss of detail and irregularities in point cloud geometry can occur during the capturing, processing, and compression pipeline. We present a novel geometry upsampling technique, PU-Dense, which can process a diverse set of point clouds, including synthetic mesh-based point clouds, real-world high-resolution point clouds, real-world indoor LiDAR-scanned objects, and outdoor dynamically acquired LiDAR-based point clouds.
    - Dynamic Point Cloud Interpolation. Dense photorealistic point clouds can depict real-world dynamic objects in high resolution and at a high frame rate. Frame interpolation of such dynamic point clouds would ease the distribution, processing, and compression of this content. We propose the first point cloud interpolation framework for photorealistic dynamic point clouds.
    - Inter-frame Compression for Dynamic Point Clouds. Efficient point cloud compression is essential for applications like virtual and mixed reality, autonomous driving, and cultural heritage. We propose a deep learning-based inter-frame encoding scheme for dynamic point cloud geometry compression.
    In each case, our methods achieve state-of-the-art results with significant improvements over current technologies.
    Contents: Introduction -- Point cloud compression artifact removal -- Point cloud geometry prediction -- PU-Dense: sparse tensor-based point cloud geometry upsampling -- Dynamic point cloud interpolation -- Inter-frame compression for dynamic point cloud geometry coding.
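    Several of these problems trace back to voxelization: quantizing point coordinates onto a finite grid merges nearby points and discards fine detail, which is exactly the loss the geometry prediction and upsampling work targets. A minimal, hedged sketch of that effect (the function below and the 64-cell resolution are illustrative assumptions, not code from the dissertation):

        # Voxelize a point cloud at a fixed resolution; nearby points collapse
        # into a single occupied voxel, illustrating quantization loss.
        import numpy as np

        def voxelize(points: np.ndarray, resolution: int = 64) -> np.ndarray:
            """Quantize float xyz coordinates to a cubic voxel grid and deduplicate."""
            mins = points.min(axis=0)
            extent = (points.max(axis=0) - mins).max() + 1e-9
            voxels = np.floor((points - mins) / extent * (resolution - 1)).astype(np.int64)
            return np.unique(voxels, axis=0)  # duplicates merge into one voxel

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            cloud = rng.random((100_000, 3))  # synthetic dense cloud
            occupied = voxelize(cloud, resolution=64)
            print(f"{len(cloud)} input points -> {len(occupied)} occupied voxels")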

    Robust P2P Live Streaming

    Project carried out in collaboration with the i2CAT Foundation. The provisioning of robust real-time communication services (voice, video, etc.) or media content over the Internet in a distributed manner is an important challenge that will strongly influence current and future Internet evolution. Aware of this, we are developing a project named Trilogy, led by the i2CAT Foundation, whose main pillar is the study, development, and evaluation of Peer-to-Peer (P2P) live streaming architectures for the distribution of high-quality media content. In this context, this work covers media coding aspects and proposes the use of Multiple Description Coding (MDC) as a flexible solution for providing robust and scalable live streaming over P2P networks. This work describes the current state of the art in media coding techniques and P2P streaming architectures, and presents the implemented prototype as well as its simulation and validation results.
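    To make the MDC idea concrete, here is a minimal sketch (not the Trilogy prototype; the odd/even temporal split is just the simplest possible description scheme, chosen as an assumption) of splitting a frame sequence into two independently decodable descriptions, so that losing one description degrades frame rate rather than interrupting playback:

        # Two-description MDC by temporal subsampling; frames are placeholders.
        from typing import List, Optional, Sequence, Tuple

        def split_descriptions(frames: Sequence[str]) -> Tuple[List[str], List[str]]:
            """Even-indexed frames go to description 0, odd-indexed to description 1."""
            return list(frames[0::2]), list(frames[1::2])

        def merge_descriptions(d0: Optional[List[str]], d1: Optional[List[str]]) -> List[str]:
            """Reassemble whatever descriptions arrived; one alone still plays."""
            received = []
            if d0:
                received += [(2 * i, f) for i, f in enumerate(d0)]
            if d1:
                received += [(2 * i + 1, f) for i, f in enumerate(d1)]
            return [f for _, f in sorted(received)]

        if __name__ == "__main__":
            frames = [f"frame{i}" for i in range(8)]
            d0, d1 = split_descriptions(frames)
            print(merge_descriptions(d0, d1))    # both received: full frame rate
            print(merge_descriptions(d0, None))  # one lost: half frame rate, still decodable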

    Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

    The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts to extend them to dynamic scenes. Current techniques that use neural rendering to facilitate free-view videos (FVVs) are restricted to offline rendering, or can process only brief sequences with minimal motion. In this paper, we present a novel technique, the Residual Radiance Field (ReRF), as a highly compact neural representation that achieves real-time FVV rendering of long-duration dynamic scenes. ReRF explicitly models the residual information between adjacent timestamps in the spatial-temporal feature space, with a global coordinate-based tiny MLP as the feature decoder. Specifically, ReRF employs a compact motion grid along with a residual feature grid to exploit inter-frame feature similarities. We show that this strategy can handle large motions without sacrificing quality. We further present a sequential training scheme to maintain the smoothness and sparsity of the motion/residual grids. Based on ReRF, we design a special FVV codec that achieves a compression rate of three orders of magnitude, and we provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes. Extensive experiments demonstrate the effectiveness of ReRF for compactly representing dynamic radiance fields, enabling an unprecedented free-viewpoint viewing experience in speed and quality.
    Comment: Accepted by CVPR 2023. Project page: https://aoliao12138.github.io/ReRF
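    A minimal sketch of the inter-frame residual idea behind such a representation (this is not the authors' code; the grid shape, the threshold, and the omission of the motion grid are simplifying assumptions) stores only the sparse per-voxel feature changes between adjacent timestamps:

        # Inter-frame residual coding of a feature grid: keep only residuals
        # whose magnitude exceeds a threshold, then reconstruct additively.
        import numpy as np

        def encode_residual(prev_grid: np.ndarray, cur_grid: np.ndarray,
                            threshold: float = 1e-2) -> np.ndarray:
            """Sparse residual update between consecutive feature grids."""
            residual = cur_grid - prev_grid
            residual[np.abs(residual) < threshold] = 0.0
            return residual

        def decode_frame(prev_grid: np.ndarray, residual: np.ndarray) -> np.ndarray:
            """Reconstruct the current feature grid from the previous one plus residual."""
            return prev_grid + residual

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            g0 = rng.normal(size=(32, 32, 32, 8)).astype(np.float32)             # grid at t
            g1 = g0 + 0.05 * rng.normal(size=g0.shape).astype(np.float32)        # grid at t+1
            res = encode_residual(g0, g1)
            sparsity = 1.0 - np.count_nonzero(res) / res.size
            print(f"residual sparsity: {sparsity:.2%}")
            print("max reconstruction error:", np.abs(decode_frame(g0, res) - g1).max())

    In the full method, the residual grid is combined with motion compensation and entropy coding; only the additive residual step is illustrated here.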

    Scalable and efficient video coding using 3D modeling

    In this document we present a 3D model-based video coding scheme for streaming static-scene video in a compact way, while also enabling temporal and spatial scalability according to network or terminal capability and providing 3D functionalities. The proposed format is based on encoding the sequence of reconstructed models using second-generation wavelets, and on efficiently multiplexing the resulting geometric, topological, texture, and camera-motion binary representations. The wavelet decomposition can be adaptive in order to fit the image and scene contents. To ensure temporal scalability, this representation is based on a common connectivity for all 3D models, which also allows straightforward morphing between successive models, ensuring visual continuity at no additional cost. The method proves to be better than previous methods for video encoding of static scenes, even better than state-of-the-art video coders such as H.264 (also known as MPEG AVC). Another application of our approach is the fast transmission and real-time visualization of virtual environments obtained by video capture, for virtual or augmented reality, free walk-through in photo-realistic 3D environments, and numerous other image-based applications.
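    Second-generation wavelets are typically built with the lifting scheme, which adapts easily to irregular mesh connectivity and guarantees perfect reconstruction. A minimal, hedged sketch of one lifting level on a 1D signal (the linear predict/update filters below are illustrative assumptions, not the decomposition used in this work):

        # One level of a lifting wavelet transform: even/odd split, predict, update.
        import numpy as np

        def lifting_forward(signal: np.ndarray):
            """Split into even/odd samples, predict odd from even, update even."""
            even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
            detail = odd - 0.5 * (even + np.roll(even, -1))      # predict step
            coarse = even + 0.25 * (detail + np.roll(detail, 1)) # update step
            return coarse, detail

        def lifting_inverse(coarse: np.ndarray, detail: np.ndarray) -> np.ndarray:
            """Undo the lifting steps in reverse order (lifting is exactly invertible)."""
            even = coarse - 0.25 * (detail + np.roll(detail, 1))
            odd = detail + 0.5 * (even + np.roll(even, -1))
            out = np.empty(even.size + odd.size)
            out[0::2], out[1::2] = even, odd
            return out

        if __name__ == "__main__":
            x = np.sin(np.linspace(0, 4 * np.pi, 64))
            c, d = lifting_forward(x)
            print("perfect reconstruction:", np.allclose(lifting_inverse(c, d), x))

    Coarse coefficients carry the temporally or spatially scalable base layer, while the detail coefficients can be dropped or quantized to trade quality for bitrate.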