
    Free Viewpoint Video Based on Stitching Technique

    Image stitching is a technique for creating one panoramic scene from multiple images. It is used in panoramic photography and video, where the viewer can only scroll horizontally and vertically across the scene. However, stitching has not been used for creating free-viewpoint video (FVV), where viewers can change their viewpoint freely and smoothly during playback. The current research implements an FVV playing system based on image stitching that allows users to move their viewpoint freely and smoothly. To build the system, the user captures multi-view video (MVV) from different viewpoints, with an appropriate overlap region for each pair of cameras; the system then stitches the overlapping videos into one or more stitched videos and displays them in the FVV player, which supports free and smooth viewpoint switching and interpolation during playback. The research evaluated the playing system in terms of system concept, accuracy, smoothness, and user satisfaction, and the evaluation results were very positive in most respects.
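
    A minimal sketch of the core stitching step, using OpenCV's high-level Stitcher on synchronized frames from two overlapping cameras; the input file names, frame rate, and the choice of PANORAMA mode are illustrative assumptions rather than details from the paper.

        import cv2

        left = cv2.VideoCapture("cam_left.mp4")      # hypothetical synchronized inputs
        right = cv2.VideoCapture("cam_right.mp4")
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        writer, size = None, None

        while True:
            ok_l, frame_l = left.read()
            ok_r, frame_r = right.read()
            if not (ok_l and ok_r):
                break
            status, pano = stitcher.stitch([frame_l, frame_r])
            if status != cv2.Stitcher_OK:
                continue                             # skip frames where matching fails
            if writer is None:
                size = (pano.shape[1], pano.shape[0])
                writer = cv2.VideoWriter("stitched.mp4",
                                         cv2.VideoWriter_fourcc(*"mp4v"), 30.0, size)
            writer.write(cv2.resize(pano, size))     # keep a constant output size

        left.release(); right.release()
        if writer is not None:
            writer.release()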

    A Measurement Study of Live 360 Video Streaming Systems

    360-degree live video streaming is becoming increasingly popular. While providing viewers with an enriched experience, 360-degree live video streaming is challenging to achieve, since it requires significantly higher bandwidth and a powerful computation infrastructure. A deeper understanding of this emerging system would benefit both viewers and system designers. Although prior works have extensively studied regular video streaming and 360-degree video-on-demand streaming, we investigate, for the first time, the performance of 360-degree live video streaming. We conduct a systematic measurement of YouTube’s 360-degree live video streaming using various metrics in multiple practical settings. Our research insights will help build a clear understanding of today’s 360-degree live video streaming and lay a foundation for future research on this emerging yet relatively unexplored area. To further understand the delay measured in YouTube’s 360-degree live video streaming, we conduct a second measurement study on a 360-degree live video streaming platform. While live 360-degree video streaming provides an enriched viewing experience, it is challenging to guarantee the user experience against the negative effects introduced by start-up delay, event-to-eye delay, and low frame rate. It is therefore imperative to understand how the different computing tasks of a live 360-degree streaming system contribute to these three delay metrics. Our measurements provide insights for future research directions towards improving the user experience of live 360-degree video streaming. Based on our measurement results, we propose a motion-based trajectory transmission method for 360-degree video streaming. First, we design a testbed for 360-degree video playback that can collect users’ viewing data in real time. Then we analyze the trajectories of the moving targets in the 360-degree videos. Specifically, we utilize optical flow algorithms and a Gaussian mixture model to pinpoint the trajectories, and we choose the trajectories to be delivered based on the size of the moving targets. The experimental results indicate that our method significantly reduces bandwidth consumption.
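
    A minimal sketch of the trajectory-pinpointing idea described above: moving targets are segmented with a Gaussian-mixture background model and their motion is estimated with dense optical flow. The input file, size threshold, and per-target motion summary are illustrative assumptions, not values from the study.

        import cv2

        cap = cv2.VideoCapture("equirect_360.mp4")   # hypothetical 360-degree video
        mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mask = mog.apply(frame)                  # foreground mask = moving targets
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            # Keep only targets large enough to matter, mirroring the
            # size-based trajectory selection described in the abstract.
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) < 500:         # assumed minimum target size
                    continue
                x, y, w, h = cv2.boundingRect(c)
                dx, dy = flow[y:y+h, x:x+w].mean(axis=(0, 1))
                print(f"target at ({x},{y}) size {w}x{h} moving ({dx:.1f},{dy:.1f})")
            prev_gray = gray
        cap.release()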

    Implementation of a distributed real-time video panorama pipeline for creating high quality virtual views

    Today, we are continuously looking for more immersive video systems. Such systems, however, require more content, which can be costly to produce. A full panorama covering the regions of interest can contain all the required information, but can be difficult to view in its entirety. In this thesis, we discuss a method for creating virtual views from a cylindrical panorama, allowing multiple users to create individual virtual cameras from the same panorama video. We discuss how this method can be used for video delivery, but focus on the creation of the initial panorama, which must be produced in real time and with very high quality. We design and implement a prototype recording pipeline, installed at a soccer stadium as part of the Bagadus project, capable of producing 4K panorama video from five HD cameras in real time, with possibilities for further upscaling. We explain how the cylindrical panorama can be created with minimal computational cost and without visible seams. The cameras of our prototype system record video in the incomplete Bayer format, so we also investigate which debayering algorithms are best suited for recording multiple high-resolution video streams in real time.
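
    A minimal sketch of the two per-frame steps the pipeline description implies, debayering and cylindrical warping: the projection lookup tables are precomputed once, so each frame costs only a color conversion and a remap. The focal length, frame size, Bayer pattern, and file name are illustrative assumptions, not the Bagadus parameters.

        import cv2
        import numpy as np

        def cylindrical_maps(w, h, f):
            # Map each output pixel back to its source pixel on the image plane:
            # a cylinder point (theta, y_c) projects to (f*tan(theta), y_c/cos(theta)).
            ys, xs = np.indices((h, w), dtype=np.float32)
            theta = (xs - w / 2) / f
            map_x = f * np.tan(theta) + w / 2
            map_y = (ys - h / 2) / np.cos(theta) + h / 2
            return map_x, map_y

        w, h, f = 1280, 720, 900.0                   # assumed camera parameters
        map_x, map_y = cylindrical_maps(w, h, f)     # computed once, reused per frame

        raw = cv2.imread("bayer_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical raw capture
        rgb = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)             # debayer (pattern assumed)
        warped = cv2.remap(rgb, map_x, map_y, cv2.INTER_LINEAR)    # per-frame cost: one remap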

    Mobile graphics: SIGGRAPH Asia 2017 course

    Peer reviewed. Postprint (published version).

    PanoDepth - Panoramic Monocular Depth Perception Model and Framework

    Depth perception has become a heavily researched area as companies and researchers strive towards the development of self-driving cars. Self-driving cars rely on perceiving the surrounding area, which heavily depends on technology capable of providing the system with depth perception capabilities. In this paper, we explore developing a single-camera (monocular) depth prediction model that is trained on panoramic depth images. Our model makes novel use of transfer learning with efficient encoder models, pre-training on a larger dataset of flat depth images, and optimization for use on a Jetson Nano. Additionally, we present a training and optimization framework that makes developing and testing new monocular depth perception models easier and faster. While the model failed to achieve a high frame rate, the framework and models developed are a promising starting point for future work.
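
    A minimal sketch of a transfer-learning depth model in the spirit described above, assuming PyTorch/torchvision: a pretrained EfficientNet-B0 encoder feeding a small upsampling decoder that predicts one depth value per pixel. The layer sizes and input resolution are illustrative assumptions, not PanoDepth's actual architecture.

        import torch
        import torch.nn as nn
        from torchvision import models

        class MonoDepth(nn.Module):
            def __init__(self):
                super().__init__()
                # Reuse ImageNet features; fine-tune on (panoramic) depth pairs.
                backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
                self.encoder = backbone.features    # 1280 channels, 1/32 resolution
                self.decoder = nn.Sequential(
                    nn.Conv2d(1280, 256, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                    nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
                    nn.Conv2d(64, 1, 3, padding=1),  # one depth value per pixel
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = MonoDepth()
        depth = model(torch.randn(1, 3, 256, 512))   # 2:1 panoramic aspect ratio
        print(depth.shape)                           # torch.Size([1, 1, 256, 512])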