8,255 research outputs found

    Hierarchical Hole-Filling for Depth-Based View Synthesis in FTV and 3D Video

    Methods for hierarchical hole-filling, depth adaptive hierarchical hole-filling, and error correction in 2D images, 3D images, and 3D warped images are provided. Hierarchical hole-filling can comprise reducing an image that contains holes, expanding the reduced image, and filling the holes in the image with data obtained from the expanded image. Depth adaptive hierarchical hole-filling can comprise preprocessing the depth map of a 3D warped image that contains holes, reducing the preprocessed image, expanding the reduced image, and filling the holes in the 3D warped image with data obtained from the expanded image. These methods can efficiently reduce errors in images and produce 3D images from 2D images and/or depth map information. Georgia Tech Research Corporation
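
    A minimal sketch of the hierarchical reduce/expand/fill idea, assuming OpenCV-style pyramid operations and a boolean hole mask; the function name, the single-pass pyramid scheme, and the "zero means hole" convention are illustrative assumptions, not the patented method itself:

```python
import cv2
import numpy as np

def hierarchical_hole_fill(image, hole_mask, levels=4):
    """Fill disocclusion holes from progressively lower-resolution copies.

    image: uint8 warped view; hole_mask: bool array, True at hole pixels.
    (Simplified sketch: the actual method uses a pseudo averaging filter
    that ignores hole pixels during reduction.)
    """
    filled = image.astype(np.float32).copy()
    filled[hole_mask] = 0.0  # mark holes; legitimate black pixels are not handled here

    # Reduce: build a pyramid; downsampling blends valid neighbours into the holes.
    pyramid = [filled]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # Expand: walk back up; at each level keep the valid pixels of that level and
    # take hole pixels from the expanded lower-resolution estimate.
    estimate = pyramid[-1]
    for level in range(levels - 1, -1, -1):
        target = pyramid[level]
        estimate = cv2.pyrUp(estimate, dstsize=(target.shape[1], target.shape[0]))
        valid = target > 0
        estimate[valid] = target[valid]

    out = image.copy()
    out[hole_mask] = np.clip(estimate[hole_mask], 0, 255).astype(np.uint8)
    return out
```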

    Peripatetic electronic teachers in higher education

    This paper explores the idea of information and communications technology providing a medium that enables higher education teachers to act as freelance agents. The notion of a ‘Peripatetic Electronic Teacher’ (PET) is introduced to encapsulate this idea. PETs would exist as multiple telepresences (pedagogical, professional, managerial and commercial) in PET‐worlds: global networked environments which support advanced multimedia features. The central defining rationale of a pedagogical presence is described in detail and some implications for the adoption of the PET‐world paradigm are discussed. The ideas described in this paper were developed by the author during a recently completed Short‐Term British Telecom Research Fellowship based at the BT Adastral Park.

    Color Correction and Depth Based Hierarchical Hole Filling in Free Viewpoint Generation


    Advanced Free Viewpoint Video Streaming Techniques

    Free-viewpoint video is a new type of interactive multimedia service that allows users to control their viewpoint and generate new views of a dynamic scene from any perspective. The uniquely generated and displayed views are composed from two or more high-bitrate camera streams that must be delivered to the users according to their continuously changing perspective. Due to the significant network and computational resource requirements, we proposed scalable viewpoint generation and delivery schemes based on multicast forwarding and a distributed approach. Our aim was to find the optimal deployment locations of the distributed viewpoint synthesis processes in the network topology by allowing network nodes to act as proxy servers with caching and viewpoint synthesis functionalities. Moreover, a predictive multicast group management scheme was introduced in order to provide all camera views that may be requested in the near future and to prevent the viewpoint synthesizer algorithm from being left without camera streams. The results showed that a traffic decrease of up to 42% can be realized using distributed viewpoint synthesis, and that the probability of viewpoint synthesis starvation can also be significantly reduced in future free-viewpoint video services.
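
    As an illustration of the predictive group-management idea, the hedged sketch below selects which camera multicast groups a proxy might join ahead of time, based on the current viewpoint and a view-switch probability model; the function name, the probability threshold, and the neighbourhood radius are assumptions for illustration, not parameters from the paper:

```python
def predictive_multicast_groups(current_view, num_views, switch_prob,
                                threshold=0.05, radius=2):
    """Camera indices a proxy node should subscribe to in advance.

    switch_prob(i, j): estimated probability of switching from view i to view j.
    threshold and radius are illustrative tuning knobs, not values from the paper.
    """
    groups = {current_view}
    lo = max(0, current_view - radius)
    hi = min(num_views, current_view + radius + 1)
    for v in range(lo, hi):
        if v != current_view and switch_prob(current_view, v) >= threshold:
            groups.add(v)  # join early so view synthesis never starves for this camera
    return groups
```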

    Providing 3D video services: the challenge from 2D to 3DTV quality of experience

    Recently, three-dimensional (3D) video has decisively burst onto the entertainment industry scene, arriving in households even before the standardization process has been completed. 3D television (3DTV) adoption and deployment can be seen as a major leap in television history, similar to previous transitions from black and white (B&W) to color, from analog to digital television (TV), and from standard definition to high definition. In this paper, we analyze current 3D video technology trends in order to define a taxonomy of the availability and possible introduction of 3D-based services. We also propose an audiovisual network services architecture which provides a smooth transition from two-dimensional (2D) to 3DTV in an Internet Protocol (IP)-based scenario. Based on subjective assessment tests, we also analyze the factors that will influence the quality of experience in those 3D video services, focusing on the effects of both coding and transmission errors. In addition, examples of the application of the architecture and results of the assessment tests are provided.

    Dynamic rate allocation with view switch prediction in interactive multi-view video

    In Interactive Multi-View Video (IMVV), the video is captured by a number of cameras positioned in an array, and those camera views are transmitted to users. The user can interact with the transmitted video content by choosing viewpoints (views from different cameras in the array), with the expectation of minimal transmission delay when changing between views. View switching delay is one of the primary concerns addressed in this thesis, whose contribution is to minimize the transmission delay of the new view-switch frame through a novel process of selecting the predicted view and compressing it with transmission efficiency in mind. A real-time IMVV streaming scenario is considered, and the view switch is modeled as a discrete Markov chain whose transition probabilities are derived from a Zipf distribution, which provides the information needed for view switch prediction. To eliminate the Round-Trip Time (RTT) transmission delay, Quantization Parameters (QP) are adaptively allocated to the remaining redundant transmitted frames to keep the view switching time minimal, trading off video quality during the RTT time span. The experimental results of the proposed method show superior performance in PSNR and view switching delay, giving better viewing quality than existing methods.
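
    A minimal sketch of Zipf-based view switch prediction, assuming the probability of switching from view i to view j decays with the candidate's distance rank under a Zipf law; the ranking rule and the exponent s are illustrative assumptions rather than the thesis's exact model:

```python
import numpy as np

def zipf_transition_matrix(num_views, s=1.0):
    """Markov transition matrix P for view switching.

    P[i, j] is proportional to 1 / rank^s, where candidates closer to the
    current view i get lower ranks (staying on view i has rank 1).
    """
    P = np.zeros((num_views, num_views))
    for i in range(num_views):
        order = sorted(range(num_views), key=lambda j: abs(j - i))
        for rank, j in enumerate(order, start=1):
            P[i, j] = 1.0 / rank ** s
        P[i] /= P[i].sum()  # normalize the row into a probability distribution
    return P

def predict_next_view(P, current_view):
    """Most likely next view (possibly the current one) under the model."""
    return int(np.argmax(P[current_view]))
```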

    High-Level Synthesis Based VLSI Architectures for Video Coding

    High Efficiency Video Coding (HEVC) is the state-of-the-art video coding standard. Emerging applications such as free-viewpoint video, 360-degree video, augmented reality, and 3D movies require standardized extensions of HEVC. The standardized extensions of HEVC include HEVC Scalable Video Coding (SHVC), HEVC Multiview Video Coding (MV-HEVC), MV-HEVC + Depth (3D-HEVC), and HEVC Screen Content Coding. 3D-HEVC is used for applications such as view synthesis and free-viewpoint video. The coding and transmission of depth maps in 3D-HEVC support virtual view synthesis by algorithms such as Depth Image Based Rendering (DIBR). As a first step, we profiled the 3D-HEVC standard and identified its computationally intensive parts for efficient hardware implementation. One of the computationally intensive parts of 3D-HEVC, HEVC, and H.264/AVC is the interpolation filtering used for Fractional Motion Estimation (FME). The hardware implementation of the interpolation filtering is carried out using High-Level Synthesis (HLS) tools; the Xilinx Vivado Design Suite is used for the HLS implementation of the interpolation filters of HEVC and H.264/AVC. As the complexity of digital systems has greatly increased, HLS offers significant benefits: late architectural or functional changes without time-consuming rewriting of RTL code, early testing and evaluation of algorithms in the design cycle, and the development of accurate models against which the final hardware can be verified.
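
    For context, HEVC half-sample luma interpolation uses a fixed 8-tap filter with coefficients (-1, 4, -11, 40, 40, -11, 4, -1), normalized by 64. The reference-style scalar sketch below applies it along one row; the rounding and clipping are simplified relative to the standard's intermediate bit depths, and this is not the HLS code from this work:

```python
import numpy as np

# HEVC 8-tap luma filter for the half-sample position (sum of taps = 64).
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int32)

def half_pel_row(row):
    """Interpolate half-sample positions along one padded row of 8-bit luma.

    `row` must carry 3 samples of padding on the left and 4 on the right.
    Simplified scalar reference; HEVC keeps higher intermediate precision.
    """
    row = np.asarray(row, dtype=np.int32)
    out = []
    for x in range(3, len(row) - 4):
        acc = int(np.dot(HALF_PEL_TAPS, row[x - 3:x + 5]))
        out.append(int(np.clip((acc + 32) >> 6, 0, 255)))  # round, shift by 6, clip
    return np.array(out, dtype=np.uint8)
```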

    Estimation of signal distortion using effective sampling density for light field-based free viewpoint video

    In a light field-based free viewpoint video (LF-based FVV) system, effective sampling density (ESD) is defined as the number of rays per unit area of the scene that have been acquired and are selected in the rendering process for reconstructing an unknown ray. This paper extends the concept of ESD and shows that ESD is a tractable metric that quantifies the joint impact of imperfections in LF acquisition and rendering. By deriving and analyzing ESD for the commonly used LF acquisition and rendering methods, it is shown that ESD is an effective indicator determined by system parameters and can be used to directly estimate output video distortion without access to the ground truth. This claim is verified by extensive numerical simulations and comparison to PSNR. Furthermore, an empirical relationship between the output distortion (in PSNR) and the calculated ESD is established to allow direct assessment of the overall video distortion without an actual implementation of the system. A small-scale subjective user study is also conducted, which indicates a correlation of 0.91 between ESD and perceived quality.
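
    The exact form of the paper's empirical PSNR-ESD relationship is not given in this abstract; as a hedged illustration only, the sketch below fits an assumed logarithmic model by least squares and then predicts distortion for a new configuration from its ESD alone:

```python
import numpy as np

def fit_psnr_vs_esd(esd_samples, psnr_samples):
    """Fit PSNR ~= a * log10(ESD) + b by least squares (assumed model form)."""
    esd = np.asarray(esd_samples, dtype=float)
    A = np.column_stack([np.log10(esd), np.ones(len(esd))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(psnr_samples, dtype=float), rcond=None)
    return a, b

def predict_psnr(esd, a, b):
    """Estimate output distortion (PSNR, dB) for a configuration from its ESD."""
    return a * np.log10(esd) + b
```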