
    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources that are available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow based occlusion reasoning for determining depth ordering, ii) object segmentation using improved region growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-ordering based regularization. Comprehensive experiments have validated the effectiveness of the proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
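
    The sketch below illustrates only the general depth-from-motion idea mentioned in the abstract, not the authors' object-based pipeline: it derives a crude depth proxy from optical-flow magnitude and renders a stereo pair by depth-proportional horizontal shifting. The OpenCV calls are real; the function names, parameters and the flow-magnitude-as-depth assumption are illustrative.

```python
import cv2
import numpy as np

def motion_based_depth(prev_gray, curr_gray):
    # Dense optical flow; larger motion is treated as closer to the camera
    # (a crude proxy for depth, not a metric measurement).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return cv2.normalize(mag, None, 0.0, 1.0, cv2.NORM_MINMAX)  # 0 = far, 1 = near

def render_stereo(frame, depth, max_disparity=16):
    # Sample each output pixel from a horizontally shifted source position,
    # shifting in proportion to depth; occlusion handling is deliberately omitted.
    h, w = depth.shape
    xs = np.tile(np.arange(w), (h, 1))
    shift = (depth * max_disparity).astype(np.int32)
    rows = np.arange(h)[:, None]
    left = frame[rows, np.clip(xs - shift // 2, 0, w - 1)]
    right = frame[rows, np.clip(xs + shift // 2, 0, w - 1)]
    return left, right
```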

    Image Completion for View Synthesis Using Markov Random Fields and Efficient Belief Propagation

    View synthesis is a process for generating novel views of a scene that has been recorded with a 3-D camera setup. It has important applications in 3-D post-production and 2-D to 3-D conversion. However, a central problem in the generation of novel views lies in the handling of disocclusions. Background content, which was occluded in the original view, may become unveiled in the synthesized view. This leads to missing information in the generated view, which has to be filled in a visually plausible manner. We present an inpainting algorithm for disocclusion filling in synthesized views based on Markov random fields and efficient belief propagation. We compare the results to two state-of-the-art algorithms and demonstrate a significant improvement in image quality. Published version: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=673843
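
    As a simpler point of reference for the disocclusion-filling step, and explicitly not the paper's MRF/belief-propagation method, the sketch below marks the holes left by view synthesis and fills them with OpenCV's diffusion-based inpainting; the function name and mask convention are assumptions.

```python
import cv2
import numpy as np

def fill_disocclusions(synth_bgr, hole_mask):
    """synth_bgr: uint8 HxWx3 synthesized view; hole_mask: HxW, non-zero where
    no source pixel was mapped (i.e., a disocclusion)."""
    mask = (hole_mask > 0).astype(np.uint8) * 255
    # Navier-Stokes inpainting propagates surrounding structure into the hole;
    # the paper instead optimises a Markov random field with belief propagation.
    return cv2.inpaint(synth_bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_NS)
```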

    Quality-aware adaptive delivery of multi-view video

    Advances in video coding and networking technologies have paved the way for Multi-View Video (MVV) streaming. However, large amounts of data and dynamic network conditions result in frequent network congestion, which may prevent video packets from being delivered on time. As a consequence, the 3D viewing experience may be degraded significantly unless quality-aware adaptation methods are deployed. No existing research work discusses MVV adaptation decision strategies or provides a detailed analysis of dynamic network environments. This work addresses these issues for MVV streaming over HTTP for emerging multi-view displays. The effects of various adaptation decision strategies are evaluated and, as a result, a new quality-aware adaptation method is designed. The proposed method benefits from layer-based video coding in such a way that high Quality of Experience (QoE) is maintained in a cost-effective manner. Experimental results on MVV streaming using the proposed strategy show that the perceptual 3D video quality under adverse network conditions is enhanced significantly as a result of the proposed quality-aware adaptation.
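
    A minimal sketch of a layer-based adaptation decision, under assumptions of my own rather than the paper's algorithm: fetch the base layer for every view first, then spend any remaining throughput on enhancement layers for the views nearest the viewer's current position.

```python
def select_layers(throughput_kbps, views, base_kbps, enh_kbps, focus_view):
    """views: list of view ids. Returns {view_id: layer_count}, 1 meaning base only.
    All rates and the prioritisation rule are hypothetical."""
    plan = {v: 1 for v in views}                       # base layer for every view first
    budget = throughput_kbps - base_kbps * len(views)  # what is left for enhancements
    # Spend the remaining budget on enhancement layers, nearest views first.
    for v in sorted(views, key=lambda v: abs(v - focus_view)):
        if budget >= enh_kbps:
            plan[v] += 1
            budget -= enh_kbps
    return plan

# Example: five views at 400 kbps base / 600 kbps enhancement, 3.5 Mbps available.
print(select_layers(3500, [0, 1, 2, 3, 4], 400, 600, focus_view=2))
# -> {0: 1, 1: 2, 2: 2, 3: 1, 4: 1}
```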

    Omnidirectional view and multi-modal streaming in 3D tele-immersion system

    3D Tele-immersion (3DTI) technology allows full-body, multi-modal content delivery among geographically dispersed users. In 3DTI, the user's 3D model is captured by multiple RGB-D (color plus depth) cameras surrounding the user's body. In addition, various sensors (e.g., motion sensors, medical sensors, wearable gaming consoles, etc.) specified by the application can be included to deliver a multi-modal experience. In a traditional 2D live video streaming system, end users' interactivity in choosing a viewpoint is limited: they can only view the physical scene as captured by one of the physical cameras, not from positions between two cameras. A 3DTI system, in contrast, makes it possible to render a 3D space in which viewers can observe the physical scene from an arbitrary viewpoint. In this thesis, we present systematic solutions for omnidirectional view in a 3D tele-immersion system, in a real-time manner and in an on-demand streaming manner, called FreeViewer and OmniViewer, respectively. We also provide a complete multi-modal 3D video streaming/rendering solution, which achieves the feature of omnidirectional view in monoscopic 3D systems.
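
    A minimal free-viewpoint illustration under assumed details (not the FreeViewer/OmniViewer implementation): back-project one RGB-D frame into a point cloud using the camera intrinsics, then rotate it about the vertical axis to a user-chosen virtual viewpoint.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """depth_m: HxW depth in metres; returns Nx3 points in camera space."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def rotate_to_viewpoint(points, yaw_deg):
    # Rotate the cloud about the vertical (Y) axis to simulate walking around the user.
    a = np.radians(yaw_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return points @ rot.T
```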

    Reduced reference image and video quality assessments: review of methods

    With the growing demand for image and video-based applications, the requirements for consistent quality assessment metrics of image and video have increased. Different approaches have been proposed in the literature to estimate the perceptual quality of images and videos. These approaches can be divided into three main categories: full reference (FR), reduced reference (RR) and no-reference (NR). In RR methods, instead of providing the original image or video as a reference, certain features (e.g., texture, edges, etc.) of the original image or video are provided for quality assessment. During the last decade, RR-based quality assessment has been a popular research area for a variety of applications such as social media, online games, and video streaming. In this paper, we present a review and classification of the latest research work on RR-based image and video quality assessment. We also summarize the different databases used in the field of 2D and 3D image and video quality assessment. This paper should help specialists and researchers stay well informed about recent progress in RR-based image and video quality assessment. The review and classification presented here will also be useful for gaining an understanding of multimedia quality assessment and the state-of-the-art approaches used for its analysis. In addition, it will help the reader select appropriate quality assessment methods and parameters for their respective applications.
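
    To make the reduced-reference idea concrete, here is a toy RR metric (not any specific published method): the sender transmits only a small edge-strength histogram of the original frame, and the receiver compares it against the same histogram computed on the decoded frame.

```python
import cv2
import numpy as np

def edge_histogram(gray, bins=32):
    # The compact feature the sender would transmit instead of the full reference.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, 1024), density=True)
    return hist

def rr_quality_score(ref_hist, received_gray):
    # Higher score = the received frame's edge statistics are closer to the reference's.
    dist = np.abs(ref_hist - edge_histogram(received_gray)).sum()
    return 1.0 / (1.0 + dist)
```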

    Towards an LTE hybrid unicast broadcast content delivery framework

    The era of ubiquitous access to a rich selection of interactive and high quality multimedia has begun; with it, significant challenges in data demand have been placed on mobile network technologies. Content creators and broadcasters alike have embraced the additional capabilities offered by network delivery, diversifying content offerings and providing viewers with far greater choice. Mobile broadcast services introduced as part of the Long Term Evolution (LTE) standard, which are to be further enhanced with the release of 5G, do aid in spectrally efficient delivery of popular live multimedia to many mobile devices, but ultimately rely on all users expressing interest in the same single stream. The research presented herein explores the development of a standards-aligned, multi-stream-aware framework, allowing mobile network operators the efficiency gains of broadcast whilst continuing to offer personalised experiences to subscribers. An open-source, system-level simulation platform is extended to support broadcast, characterised and validated. This is followed by the implementation of a Hybrid Unicast Broadcast Synchronisation (HUBS) framework able to dynamically vary broadcast resource allocation. The HUBS framework is then further expanded to make use of scalable video content.
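
    A rough illustration of a hybrid unicast/broadcast decision, based on general eMBMS behaviour rather than the HUBS implementation: streams with enough interested users are broadcast, the rest stay unicast, and the broadcast subframe share is sized to the broadcast traffic fraction (LTE permits at most 6 of 10 subframes per radio frame for MBSFN, hence the 60% cap). The threshold and rates are assumptions.

```python
def plan_delivery(stream_demand, broadcast_threshold=3, max_mbsfn_share=0.6):
    """stream_demand: {stream_id: number_of_interested_users}; thresholds are assumed."""
    broadcast = {s for s, users in stream_demand.items() if users >= broadcast_threshold}
    unicast = set(stream_demand) - broadcast
    total = sum(stream_demand.values()) or 1
    bcast_fraction = sum(stream_demand[s] for s in broadcast) / total
    # Clamp to the MBSFN subframe limit (at most 6 of 10 subframes per frame).
    mbsfn_share = min(max_mbsfn_share, round(bcast_fraction, 2))
    return {"broadcast": broadcast, "unicast": unicast, "mbsfn_share": mbsfn_share}

print(plan_delivery({"news": 40, "sport": 25, "vod_a": 1, "vod_b": 2}))
```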

    Automatic 2D-to-3D video conversion technique based on depth-from-motion and color segmentation

    Most TV manufacturers released 3DTVs in the summer of 2010 using shutter-glasses technology. 3D video applications are becoming popular in our daily life, especially in home entertainment. Although more and more 3D movies are being made, 3D video content is still not rich enough to satisfy the future 3D video market. There is a rising demand for new techniques that automatically convert 2D video content for stereoscopic 3D video displays. In this paper, an automatic monoscopic-to-stereoscopic 3D video conversion scheme is presented using block-based depth from motion estimation and color segmentation for depth map enhancement. The color-based region segmentation provides good region boundary information, which is fused with the block-based depth map to eliminate the staircase effect and assign a good depth value to each segmented region. The experimental results show that this scheme can achieve relatively high-quality 3D stereoscopic video output. © 2010 IEEE.
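
    The fusion step described above can be sketched as follows, with assumed details (colour k-means clustering stands in for the paper's region segmentation): given a blocky motion-based depth map, average the depth inside each colour segment so that block boundaries no longer cut across objects.

```python
import cv2
import numpy as np

def segment_average_depth(frame_bgr, block_depth, k=8):
    """block_depth: HxW blocky depth map (e.g., per-block motion magnitude).
    Colour k-means clustering is a crude stand-in for region segmentation."""
    data = frame_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(data, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(frame_bgr.shape[:2])
    fused = np.zeros_like(block_depth, dtype=np.float32)
    for seg in range(k):
        mask = labels == seg
        if mask.any():
            # One depth value per colour segment removes the block-boundary staircase.
            fused[mask] = float(block_depth[mask].mean())
    return fused
```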