8 research outputs found

    Low cost multi-view video system for wireless channel

    With advances in display technology, 3DTV will provide a new viewing experience in which 3D scenes can be watched without the need to wear special glasses. One of the key elements of 3DTV is multi-view video coding, in which a set of synchronized cameras captures the same scene from different viewpoints. The video streams are synchronized and subsequently used to exploit the redundancy among the video sources. A multi-view video system consists of components for data acquisition, compression, transmission and display. This paper outlines the design and implementation of a multi-view video system for transmission over a wireless channel. Synchronized video sequences are acquired from four separate cameras and coded with H.264/AVC. The video data is then transmitted over a simulated Rayleigh channel through a Digital Video Broadcasting – Terrestrial (DVB-T) system with Orthogonal Frequency Division Multiplexing (OFDM).
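
    The transmission chain outlined above (OFDM modulation of the coded video data over a Rayleigh fading channel, as in DVB-T) can be illustrated with a short simulation sketch. The Python code below is a minimal, hypothetical example using NumPy; the subcarrier count, cyclic-prefix length, QPSK mapping and SNR are illustrative assumptions, not the DVB-T parameters used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        N_SC = 64      # OFDM subcarriers (illustrative, not the DVB-T value)
        CP = 16        # cyclic prefix length (illustrative)
        SNR_DB = 20    # channel signal-to-noise ratio in dB (illustrative)

        # Random bits mapped to QPSK symbols, one symbol per subcarrier.
        bits = rng.integers(0, 2, 2 * N_SC)
        symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

        # OFDM modulation: IFFT plus cyclic prefix.
        time_sig = np.fft.ifft(symbols) * np.sqrt(N_SC)
        tx = np.concatenate([time_sig[-CP:], time_sig])

        # Flat Rayleigh fading over the OFDM symbol plus additive white Gaussian noise.
        h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        noise_var = 10 ** (-SNR_DB / 10)
        noise = np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape)
                                          + 1j * rng.standard_normal(tx.shape))
        rx = h * tx + noise

        # Receiver: strip the cyclic prefix, FFT back to subcarriers, equalise with the known channel.
        eq = (np.fft.fft(rx[CP:]) / np.sqrt(N_SC)) / h

        # Hard-decision QPSK demapping and bit error count.
        bits_hat = np.empty_like(bits)
        bits_hat[0::2] = (eq.real < 0).astype(int)
        bits_hat[1::2] = (eq.imag < 0).astype(int)
        print("bit errors:", int(np.sum(bits != bits_hat)))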

    Multi-view Video Coding for Wireless Channel

    In this paper, a multi-view video system for wireless applications is presented. The system consists of components for data acquisition, compression, transmission and display. Its main feature is a wireless video transmission system for up to four cameras, by which videos can be acquired, encoded and transmitted wirelessly to a receiving station. The video streams can be displayed on a single 3D display or on multiple 2D displays. Encoding the multi-view video by exploiting inter-view and temporal redundancies increases the compression rate. The H.264/AVC multi-view compression techniques have been exploited and tested during the implementation process. The video data is then transmitted over a simulated Rayleigh channel through a Digital Video Broadcasting – Terrestrial (DVB-T) system with Orthogonal Frequency Division Multiplexing (OFDM). One of the highlights of this paper is the low-cost implementation of a multi-view video system, which uses only typical web cameras attached to a single PC.
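
    The compression gain mentioned above comes from letting each block choose between a temporal reference (the previous frame of the same camera) and an inter-view reference (the same instant from a neighbouring camera), coding only the smaller residual. The sketch below is a simplified, hypothetical mode decision in Python, not the actual H.264/AVC multi-view prediction logic.

        import numpy as np

        def best_prediction(block, temporal_ref, interview_ref):
            # Pick whichever reference leaves the smaller residual energy,
            # mimicking the temporal vs. inter-view mode decision.
            err_t = np.sum((block - temporal_ref) ** 2)
            err_v = np.sum((block - interview_ref) ** 2)
            if err_t <= err_v:
                return "temporal", block - temporal_ref
            return "inter-view", block - interview_ref

        # Toy 8x8 blocks standing in for co-located blocks in the reference pictures.
        rng = np.random.default_rng(1)
        cur = rng.integers(0, 256, (8, 8)).astype(float)
        temporal_ref = cur + rng.normal(0, 2, (8, 8))     # strong temporal correlation
        interview_ref = cur + rng.normal(0, 10, (8, 8))   # weaker inter-view correlation
        mode, residual = best_prediction(cur, temporal_ref, interview_ref)
        print(mode, float(np.sum(residual ** 2)))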

    3D video compression based on high efficiency video coding

    With the advent of autostereoscopic displays, questions arise about how to efficiently compress the video information needed by such displays. Additionally, for gradual market acceptance of this new technology it is valuable to have a solution offering forward compatibility with stereo 3D video as it is used nowadays. In this paper, a multiview compression scheme making use of the efficient single-view coding tools of High Efficiency Video Coding (HEVC) is provided. Although efficient single-view compression can be obtained with HEVC, a multiview adaptation of this standard under development is proposed, offering additional coding gains. On average, for the texture information, the total bitrate can be reduced by 37.2% compared to simulcast HEVC. For depth map compression, gains largely depend on the quality of the captured content. Additionally, a forward-compatible solution is proposed, offering the possibility of a gradual upgrade from H.264/AVC-based stereoscopic 3D systems to an HEVC-based autostereoscopic environment. With the proposed system, significant rate savings compared to Multiview Video Coding (MVC) are presented.
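
    The bitrate reduction quoted above is a relative saving against simulcast coding of the individual views. As a quick illustration, the helper below computes such a percentage from two hypothetical rates; the numbers are made up and are not from the paper's test sequences.

        def bitrate_saving(simulcast_kbps, multiview_kbps):
            # Relative bitrate reduction of multiview coding versus simulcast, in percent.
            return 100.0 * (simulcast_kbps - multiview_kbps) / simulcast_kbps

        # Hypothetical numbers: a 37.2% saving means the multiview bitstream needs
        # only 62.8% of the simulcast rate for comparable texture quality.
        print(bitrate_saving(1000.0, 628.0))  # -> 37.2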

    Coding and replication co-design for interactive multiview video streaming


    Balanced Distributed Coding of Omnidirectional Images

    This paper presents a distributed coding scheme for the representation of 3D scenes captured by a network of omnidirectional cameras. We consider a scenario where images captured at different viewpoints are encoded independently, with a balanced rate distribution among the different cameras. The distributed coding is built on a multiresolution representation and partitioning of the visual information in each camera. The encoder then transmits one partition after entropy coding, as well as the syndrome bits resulting from the channel encoding of the other partition. The joint decoder exploits the intra-view correlation by predicting the missing source information with the help of the syndrome bits. At the same time, it exploits the inter-view correlation by using motion estimation between images from different cameras. Experiments demonstrate that the distributed coding solution performs better than a scheme where images are handled independently, while the coding rate advantageously stays balanced between the encoders.
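
    The syndrome-based decoding step can be illustrated with a toy example: the encoder sends only the syndrome of the withheld partition under a short block code, and the joint decoder corrects its prediction of that partition (the side information derived from intra- and inter-view correlation) until the syndromes agree. The Python sketch below uses a (7,4) Hamming code purely for illustration; the channel code, partitioning and prediction used in the paper are different.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def syndrome(word):
            return H @ word % 2

        def syndrome_decode(side_info, tx_syndrome):
            # Correct the predicted word (side information) so that its syndrome
            # matches the transmitted one, assuming at most one prediction error
            # per 7-bit block.
            diff = (syndrome(side_info) + tx_syndrome) % 2
            estimate = side_info.copy()
            if diff.any():
                # The syndrome difference equals the column of H at the error position.
                pos = int(np.argmax((H == diff[:, None]).all(axis=0)))
                estimate[pos] ^= 1
            return estimate

        rng = np.random.default_rng(2)
        x = rng.integers(0, 2, 7)        # partition withheld at the encoder
        y = x.copy(); y[3] ^= 1          # decoder-side prediction with one wrong bit
        s = syndrome(x)                  # only these 3 syndrome bits are transmitted
        print(np.array_equal(syndrome_decode(y, s), x))   # -> True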