
    Navigation domain representation for interactive multiview imaging

    Enabling users to interactively navigate through different viewpoints of a static scene is an interesting new functionality in 3D streaming systems. While it opens exciting perspectives towards rich multimedia applications, it requires the design of novel representations and coding techniques in order to solve the new challenges imposed by interactive navigation. Interactivity clearly brings new design constraints: the encoder is unaware of the exact decoding process, while the decoder has to reconstruct information from incomplete subsets of data, since the server generally cannot transmit images for all possible viewpoints due to resource constraints. In this paper, we propose a novel multiview data representation that satisfies bandwidth and storage constraints in an interactive multiview streaming system. In particular, we partition the multiview navigation domain into segments, each of which is described by a reference image and some auxiliary information. The auxiliary information enables the client to recreate any viewpoint in the navigation segment via view synthesis. The decoder is then able to navigate freely within the segment without further data requests to the server; it requests additional data only when it moves to a different segment. We discuss the benefits of this novel representation in interactive navigation systems and further propose a method to optimize the partitioning of the navigation domain into independent segments under bandwidth and storage constraints. Experimental results confirm the potential of the proposed representation: our system achieves compression performance similar to classical inter-view coding while providing the high level of flexibility required for interactive streaming. Hence, our new framework represents a promising solution for 3D data representation in novel interactive multimedia services.
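
    The following is a minimal sketch of the segment-based client behavior described above, assuming a one-dimensional viewpoint parameter; NavigationSegment, request_segment and the supplied synthesize_view callable are illustrative placeholders, not the paper's actual interfaces.

        from dataclasses import dataclass
        from typing import Any, Callable, Optional

        @dataclass
        class NavigationSegment:
            # One partition of the navigation domain: a reference image plus
            # auxiliary information (e.g. depth data) sufficient for view synthesis.
            view_min: float
            view_max: float
            reference_image: Any
            auxiliary_info: Any

            def contains(self, viewpoint: float) -> bool:
                return self.view_min <= viewpoint <= self.view_max

        class InteractiveClient:
            def __init__(self, server: Any, synthesize_view: Callable[[Any, Any, float], Any]):
                self.server = server                    # exposes request_segment(viewpoint)
                self.synthesize_view = synthesize_view  # DIBR-style view synthesis routine
                self.segment: Optional[NavigationSegment] = None

            def render(self, viewpoint: float):
                # Additional data is requested only when the user leaves the cached segment.
                if self.segment is None or not self.segment.contains(viewpoint):
                    self.segment = self.server.request_segment(viewpoint)
                # Every viewpoint inside the segment is synthesized locally,
                # with no further communication with the server.
                return self.synthesize_view(self.segment.reference_image,
                                            self.segment.auxiliary_info,
                                            viewpoint)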

    Reliable fusion of ToF and stereo depth driven by confidence measures

    In this paper we propose a framework for the fusion of depth data produced by a Time-of-Flight (ToF) camera and a stereo vision system. Initially, depth data acquired by the ToF camera are upsampled by an ad-hoc algorithm based on image segmentation and bilateral filtering. In parallel, a dense disparity map is obtained using the Semi-Global Matching stereo algorithm. Reliable confidence measures are extracted for both the ToF and stereo depth data. In particular, the ToF confidence also accounts for the mixed-pixel effect, and the stereo confidence accounts for the relationship between the pointwise matching costs and the cost obtained by the semi-global optimization. Finally, the two depth maps are synergistically fused by enforcing the local consistency of the depth data, accounting for the confidence of the two data sources at each location. Experimental results clearly show that the proposed method produces accurate high-resolution depth maps and outperforms the compared fusion algorithms.
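
    As a rough illustration of confidence-driven fusion, the sketch below takes a per-pixel confidence-weighted average of the two depth maps and falls back to the upsampled ToF depth where both confidences are low; the actual method additionally enforces local consistency of the depth data, so this is only a simplified stand-in with hypothetical array inputs.

        import numpy as np

        def fuse_depth(tof_depth, stereo_depth, tof_conf, stereo_conf, eps=1e-6):
            # All inputs are HxW float arrays; confidences are assumed to lie in [0, 1].
            w_tof = np.clip(tof_conf, 0.0, 1.0)
            w_stereo = np.clip(stereo_conf, 0.0, 1.0)
            # Confidence-weighted average of the two depth hypotheses at each pixel.
            fused = (w_tof * tof_depth + w_stereo * stereo_depth) / (w_tof + w_stereo + eps)
            # Where both sources are deemed unreliable, keep the upsampled ToF estimate.
            unreliable = (w_tof + w_stereo) < 1e-3
            fused[unreliable] = tof_depth[unreliable]
            return fused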

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized via depth-image-based rendering (DIBR), using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints. To maintain high quality of the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels that are visible in both views are more error-resiliently coded in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivity of synthesized-view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
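
    A small sketch of the receiver-side adaptive blending step, assuming per-pixel reliability maps (e.g. tracking which regions were affected by packet loss) are available for each DIBR-warped view; the reliability model and the function names are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def blend_warped_views(warped_left, warped_right, rel_left, rel_right, eps=1e-6):
            # warped_left / warped_right: candidate pixel values for the virtual view,
            # obtained by DIBR-warping the left and right captured views.
            # rel_left / rel_right: per-pixel reliability weights in [0, 1].
            w_left = rel_left / (rel_left + rel_right + eps)
            # Pixels from the more reliably received view are weighted more heavily.
            return w_left * warped_left + (1.0 - w_left) * warped_right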

    Representation and coding of 3D video data

    Deliverable D4.1 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.1 of the project.

    Overview of 3D Video: Coding Algorithms, Implementations and Standardization

    Final degree project carried out in collaboration with the Linköping Institute of Technology. 3D technologies have aroused great interest around the world in recent years. Television, cinema and video games are introducing 3D technologies into the mass market little by little. This comes as a result of the research done in the 3D field, which has solved many of its limitations, such as quality, content creation and 3D displays. This thesis focuses on 3D video, considering concepts that concern coding issues and video formats. The aim is to provide an overview of the current state of 3D video, including standardization and some interesting implementations and alternatives that exist. The report presents the necessary background information for understanding the concepts developed: compression techniques, the different video formats, their standardization, and some advances or alternatives to the processes previously explained. Finally, a comparison between the different concepts is presented to complete the overview, ending with some conclusions and proposed ideas for future work.

    Depth image based rendering with inverse mapping


    Mixed-Resolution HEVC based multiview video codec for low bitrate transmission
