
    Overview of MV-HEVC prediction structures for light field video

    Light field video is a promising technology for delivering the required six degrees of freedom for natural content in virtual reality. Already existing multi-view coding (MVC) and multi-view plus depth (MVD) formats, such as MV-HEVC and 3D-HEVC, are the most conventional light field video coding solutions since they can compress video sequences captured simultaneously from multiple camera angles. 3D-HEVC treats a single view as a video sequence and the other sub-aperture views as gray-scale disparity (depth) maps. On the other hand, MV-HEVC treats each view as a separate video sequence, which allows the use of motion-compensated algorithms similar to HEVC. While MV-HEVC and 3D-HEVC provide similar results, MV-HEVC does not require any disparity maps to be readily available, and it has a more straightforward implementation since it only uses syntax elements rather than additional prediction tools for inter-view prediction. However, there are many degrees of freedom in choosing an appropriate prediction structure, and it is still unknown which one is optimal for a given set of application requirements. In this work, various prediction structures for MV-HEVC are implemented and tested. The findings reveal the trade-off between compression gains, distortion and random access capabilities in MV-HEVC light field video coding. The results give an overview of the best-performing solutions developed in the context of this work and of prediction structure algorithms proposed in the state-of-the-art literature. This overview provides a useful benchmark for future development of light field video coding solutions.
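    The trade-off between coding gain and random access described above can be made concrete by modelling a prediction structure as a dependency graph over (view, time) frames. The Python sketch below is a minimal illustration of that idea, not MV-HEVC syntax: the IPPP temporal chain and the centre-view inter-view anchor are assumptions chosen only for demonstration, and decode_cost simply counts how many frames must be decoded before a requested frame can be displayed.

    ```python
    # Minimal sketch: model an MV-HEVC-like prediction structure as a dependency
    # graph and measure the random-access cost of any (view, time) frame.
    # The IPPP temporal chain and centre-view inter-view anchoring are assumed
    # for illustration only; they are not the structures evaluated in the paper.

    from collections import deque

    def build_structure(num_views, gop_size):
        """Return {frame: [reference frames]} for a toy prediction structure."""
        deps = {}
        centre = num_views // 2
        for t in range(gop_size):
            for v in range(num_views):
                refs = []
                if t > 0:                      # temporal prediction from the previous frame
                    refs.append((v, t - 1))
                if v != centre:                # inter-view prediction from the centre view
                    refs.append((centre, t))
                deps[(v, t)] = refs
        return deps

    def decode_cost(deps, target):
        """Number of frames that must be decoded before `target` can be displayed."""
        seen, queue = set(), deque([target])
        while queue:
            frame = queue.popleft()
            if frame not in seen:
                seen.add(frame)
                queue.extend(deps[frame])
        return len(seen)

    deps = build_structure(num_views=5, gop_size=8)
    print(decode_cost(deps, target=(0, 7)))   # frames needed to show view 0 at time 7
    ```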

    Depth map compression via 3D region-based representation

    In 3D video, view synthesis is used to create new virtual views between encoded camera views. Errors in the coding of the depth maps introduce geometry inconsistencies in the synthesized views. In this paper, a new 3D plane representation of the scene is presented which improves the performance of current standard video codecs in the view synthesis domain. Two image segmentation algorithms are proposed for generating a color and a depth segmentation. Using both partitions, depth maps are segmented into regions free of sharp discontinuities, without having to explicitly signal all depth edges. The resulting regions are represented using a planar model in the 3D world scene. This 3D representation allows an efficient encoding while preserving the 3D characteristics of the scene. The 3D planes also open up the possibility of coding multiview images with a unique representation.
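    The planar region model described above can be illustrated with a small least-squares fit: given the pixel coordinates and depth values of one segmented region, a plane z = a·x + b·y + c is estimated and then used to regenerate the region's depth from just three parameters. This is only a sketch of the idea under simplified assumptions (the fit is done in image-plus-depth space rather than the full 3D world scene), not the codec proposed in the paper.

    ```python
    # Minimal sketch: approximate one segmented depth region with a plane
    # z = a*x + b*y + c fitted by least squares, then reconstruct the depth.
    # This illustrates region-based planar modelling, not the paper's codec.

    import numpy as np

    def fit_plane(xs, ys, zs):
        """Least-squares plane coefficients (a, b, c) for z = a*x + b*y + c."""
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
        return coeffs

    def reconstruct(xs, ys, coeffs):
        a, b, c = coeffs
        return a * xs + b * ys + c

    # Toy region: pixel coordinates and (noisy) depth values of one segment.
    rng = np.random.default_rng(0)
    xs, ys = np.meshgrid(np.arange(16), np.arange(16))
    xs, ys = xs.ravel().astype(float), ys.ravel().astype(float)
    zs = 0.5 * xs - 0.2 * ys + 40.0 + rng.normal(0, 0.1, xs.size)

    coeffs = fit_plane(xs, ys, zs)
    err = np.abs(reconstruct(xs, ys, coeffs) - zs).mean()
    print(coeffs, err)   # three plane parameters replace 256 depth samples
    ```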

    Random access prediction structures for light field video coding with MV-HEVC

    Computational imaging and light field technology promise to deliver the required six degrees of freedom for natural scenes in virtual reality. Already existing extensions of standardized video coding formats, such as multi-view coding and multi-view plus depth, are the most conventional light field video coding solutions at the moment. The latest multi-view coding format, a direct extension of the high efficiency video coding (HEVC) standard, is called multi-view HEVC (MV-HEVC). MV-HEVC treats each light field view as a separate video sequence and uses syntax elements similar to standard HEVC for exploiting redundancies between neighboring views. To achieve this, inter-view and temporal prediction schemes are deployed with the aim of finding the best trade-off between coding performance and reconstruction quality. The number of possible prediction structures is unlimited, and many of them have been proposed in the literature. Although some of them are efficient in terms of compression ratio, they complicate random access due to their dependencies on previously decoded pixels or frames. Random access is an important feature in video delivery and a crucial requirement in multi-view video coding. In this work, we propose and compare different prediction structures for coding light field video using MV-HEVC, with a focus on both compression efficiency and random accessibility. Experiments on three different short-baseline light field video sequences show the trade-off between bit-rate and distortion, as well as the average number of decoded views/frames necessary for displaying any random frame at any time instance. The findings of this work indicate the most appropriate prediction structure depending on the available bandwidth and the required degree of random access.
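    The random-access metric named above, the average number of frames that must be decoded to display an arbitrary frame, can be estimated directly from a prediction-structure dependency graph. The sketch below compares two toy structures, one with and one without inter-view prediction; both structures and their parameters are illustrative assumptions, not the configurations evaluated in this work.

    ```python
    # Minimal sketch: estimate the average random-access cost (frames decoded per
    # requested frame) for two toy multi-view prediction structures. The concrete
    # structures are assumptions for illustration, not the paper's configurations.

    def chain_structure(num_views, gop, inter_view):
        """Temporal IPPP chain per view; optional inter-view reference to the previous view."""
        deps = {}
        for t in range(gop):
            for v in range(num_views):
                refs = []
                if t > 0:
                    refs.append((v, t - 1))
                if inter_view and v > 0:
                    refs.append((v - 1, t))
                deps[(v, t)] = refs
        return deps

    def avg_access_cost(deps):
        def collect(frame, seen):
            if frame in seen:
                return
            seen.add(frame)
            for ref in deps[frame]:
                collect(ref, seen)
        total = 0
        for frame in deps:
            seen = set()
            collect(frame, seen)
            total += len(seen)
        return total / len(deps)

    with_iv = chain_structure(num_views=5, gop=8, inter_view=True)
    without_iv = chain_structure(num_views=5, gop=8, inter_view=False)
    # Inter-view prediction removes redundancy between views but raises the
    # average number of frames that must be decoded for a random access request.
    print(avg_access_cost(with_iv), avg_access_cost(without_iv))
    ```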

    FVV Live: A real-time free-viewpoint video system with consumer electronics hardware

    FVV Live is a novel end-to-end free-viewpoint video system, designed for low cost and real-time operation and built from off-the-shelf components. The system has been designed to yield high-quality free-viewpoint video using consumer-grade cameras and hardware, which enables low deployment costs and easy installation for immersive event broadcasting or videoconferencing. The paper describes the architecture of the system, including acquisition and encoding of multiview-plus-depth data in several capture servers and virtual view synthesis on an edge server. All the blocks of the system have been designed to overcome the limitations imposed by the hardware and the network, which directly impact the accuracy of the depth data and thus the quality of virtual view synthesis. The design of FVV Live allows for an arbitrary number of cameras and capture servers, and the results presented in this paper correspond to an implementation with nine stereo-based depth cameras. FVV Live presents low motion-to-photon and end-to-end delays, which enables seamless free-viewpoint navigation and bilateral immersive communication. Moreover, the visual quality of FVV Live has been evaluated through subjective assessment with satisfactory results, and additional comparative tests show that it is preferred over state-of-the-art DIBR alternatives.
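    The dependence of virtual view quality on depth accuracy noted above comes from the depth-based warping step at the core of DIBR-style view synthesis: each source pixel is back-projected with its depth and re-projected into the virtual camera. The sketch below shows that single step only; the camera intrinsics, baseline and pinhole model are illustrative assumptions and do not reflect FVV Live's actual calibration or synthesis pipeline.

    ```python
    # Minimal sketch of the depth-based warping at the heart of DIBR view synthesis:
    # a pixel with known depth is back-projected from a source camera and
    # re-projected into a virtual camera. All camera parameters below are
    # illustrative assumptions, not FVV Live's calibration.

    import numpy as np

    def warp_pixel(u, v, depth, K_src, K_virt, R, t):
        """Map pixel (u, v) with metric depth from the source view to the virtual view."""
        # Back-project to a 3D point in the source camera frame.
        p_src = depth * np.linalg.inv(K_src) @ np.array([u, v, 1.0])
        # Rigid transform into the virtual camera frame, then project.
        p_virt = R @ p_src + t
        uvw = K_virt @ p_virt
        return uvw[:2] / uvw[2]

    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    R = np.eye(3)                       # virtual camera parallel to the source
    t = np.array([-0.1, 0.0, 0.0])      # 10 cm horizontal baseline
    print(warp_pixel(640, 360, depth=2.0, K_src=K, K_virt=K, R=R, t=t))
    ```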

    Depth-based Multi-View 3D Video Coding


    Overview of 3D Video: Coding Algorithms, Implementations and Standardization

    Final degree project carried out in collaboration with Linköping Institute of Technology. 3D technologies have aroused great interest around the world in the last years. Television, cinema and videogames are introducing, little by little, 3D technologies into the mass market. This comes as a result of the research done in the 3D field, solving many of its limitations such as quality, content creation or 3D displays. This thesis focuses on 3D video, covering concepts that concern coding issues and video formats. The aim is to provide an overview of the current state of 3D video, including standardization and some interesting implementations and alternatives that exist. The report presents the necessary background information to understand the concepts developed: compression techniques, the different video formats, their standardization, and some advances or alternatives to the processes previously explained. Finally, a comparison between the different concepts is presented to complete the overview, ending with some conclusions and proposed ideas for future work.

    3D video coding and transmission

    The capture, transmission, and display of 3D content has gained a lot of attention in the last few years. 3D multimedia content is no longer confined to cinema theatres but is being transmitted using stereoscopic video over satellite, shared on Blu-ray discs, or sent over Internet technologies. Stereoscopic displays are needed at the receiving end, and the viewer needs to wear special glasses to present the two versions of the video to the human visual system, which then generates the 3D illusion. To be more effective and improve the immersive experience, more views are acquired from a larger number of cameras and presented on different displays, such as autostereoscopic and light field displays. These multiple views, combined with depth data, also allow enhanced user experiences and new forms of interaction with the 3D content from virtual viewpoints. This type of audiovisual information is represented by a huge amount of data that needs to be compressed and transmitted over bandwidth-limited channels. Part of the COST Action IC1105 "3D Content Creation, Coding and Transmission over Future Media Networks" (3DConTourNet) focuses on this research challenge.

    3D coding tools final report

    Deliverable D4.3 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.3 of the project. Its title: 3D coding tools final report.

    Dynamic rate allocation for view-switch prediction in interactive multi-view video

    In Interactive Multi-View Video (IMVV), the video is captured by a number of cameras positioned in an array, and those camera views are transmitted to users. The user can interact with the transmitted video content by choosing viewpoints (views from different cameras in the array), with the expectation of minimum transmission delay when changing between views. View switching delay is one of the primary concerns addressed in this thesis; the contribution is to minimize the transmission delay of the new view-switch frame through a novel process of predicted-view selection and compression that takes transmission efficiency into account. The work mainly considers real-time IMVV streaming; the view switch is modelled as a discrete Markov chain whose transition probabilities are derived from a Zipf distribution, providing the information used for view-switch prediction. To eliminate the round-trip-time (RTT) transmission delay, quantization parameters (QP) are adaptively allocated to the remaining redundant transmitted frames, keeping the view-switching time minimal while trading off video quality during the RTT time span. The experimental results of the proposed method show superior PSNR and view-switching delay, and thus better viewing quality, compared with existing methods.
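    The Zipf-based Markov model mentioned above can be sketched as a small transition matrix: from the current view, candidate views are ranked (here, by distance in the camera array) and assigned Zipf-shaped probabilities, and the highest-probability candidates are the predicted view switches. The distance-based ranking and the exponent s = 1.2 are assumptions made for illustration; the thesis derives its transition probabilities in its own way. In the scheme described above, the frames of the predicted views would then receive finer quantization, while the remaining redundant frames get coarser QP values.

    ```python
    # Minimal sketch: a Zipf-shaped Markov transition matrix for view switching.
    # Views closer to the current one are ranked higher and therefore more probable.
    # Ranking by camera distance and the exponent s=1.2 are illustrative assumptions.

    import numpy as np

    def zipf_transition_matrix(num_views, s=1.2):
        P = np.zeros((num_views, num_views))
        for cur in range(num_views):
            # Rank candidate views by distance from the current view (closest first).
            order = sorted(range(num_views), key=lambda v: (abs(v - cur), v))
            weights = np.array([1.0 / (rank + 1) ** s for rank in range(num_views)])
            P[cur, order] = weights / weights.sum()
        return P

    P = zipf_transition_matrix(num_views=7)
    cur = 3
    predicted = int(np.argmax(P[cur]))   # most probable next view (staying put in this toy model)
    print(P[cur].round(3), predicted)
    ```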