    Implementation of a distributed real-time video panorama pipeline for creating high quality virtual views

    Today, we are continuously looking for more immersive video systems. Such systems, however, require more content, which can be costly to produce. A full panorama, covering regions of interest, can contain all the information required, but can be difficult to view in its entirety. In this thesis, we discuss a method for creating virtual views from a cylindrical panorama, allowing multiple users to create individual virtual cameras from the same panorama video. We discuss how this method can be used for video delivery, but emphasize the creation of the initial panorama. The panorama must be created in real time and with very high quality. We design and implement a prototype recording pipeline, installed at a soccer stadium, as part of the Bagadus project. We describe a pipeline capable of producing 4K panorama videos from five HD cameras, in real time, with possibilities for further upscaling. We explain how the cylindrical panorama can be created with minimal computational cost and without visible seams. The cameras of our prototype system record video in the Bayer format, which stores only one color sample per pixel, and we also investigate which debayering algorithms are best suited for recording multiple high-resolution video streams in real time.
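
    As a rough illustration of the cylindrical mapping step discussed above, the sketch below warps a single camera image onto a cylinder using NumPy and OpenCV. The focal length value, the helper name warp_to_cylinder and the use of cv2.remap are assumptions made for this sketch only; the actual Bagadus pipeline is a real-time GPU implementation described in the thesis itself.

        import numpy as np
        import cv2  # used only for bilinear resampling in this CPU sketch

        def warp_to_cylinder(img, f):
            """Map a pinhole-camera image onto a cylinder of radius f (in pixels).

            For every pixel of the cylindrical output we compute the source pixel
            it corresponds to, then sample the input with bilinear interpolation.
            After this warp, neighbouring cameras can be aligned by translation.
            """
            h, w = img.shape[:2]
            cx, cy = w / 2.0, h / 2.0

            # theta is the angle around the cylinder axis, v the height on it.
            xs, ys = np.meshgrid(np.arange(w), np.arange(h))
            theta = (xs - cx) / f
            v = ys - cy

            # Back-project each cylinder point onto the original image plane.
            x_src = (f * np.tan(theta) + cx).astype(np.float32)
            y_src = (v / np.cos(theta) + cy).astype(np.float32)
            return cv2.remap(img, x_src, y_src, cv2.INTER_LINEAR,
                             borderMode=cv2.BORDER_CONSTANT)

        # frame = cv2.imread("camera_2.png")            # hypothetical input frame
        # pano_strip = warp_to_cylinder(frame, 1200.0)  # 1200 px: assumed focal length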

    Capture module for IP Elphel 353 cameras. Evaluation of performance

    This project's main objective is to evaluate the capture performance of the Elphel 353 camera. The assessment will serve as a starting point for the integration of these cameras into the new Smart room built at the Department of Signal Theory and Communications, UPC (Universitat Politècnica de Catalunya). First, the most important properties of the camera and the tools it provides are described. Next, we study how to use these tools to capture images in two modes, online and offline. Once we know how to capture images, we define the methods used for evaluation. Finally, the capture is evaluated and the results and conclusions are presented.
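
    To give a flavour of the kind of measurement such an evaluation involves, the sketch below times frames pulled from a network camera stream with OpenCV and reports the effective frame rate. The stream URL, the frame count and the use of cv2.VideoCapture are placeholders chosen for this sketch, not the tools actually used in the project.

        import time
        import cv2

        STREAM_URL = "rtsp://192.168.0.9:554"  # placeholder address for an IP camera

        def measure_fps(url, n_frames=200):
            """Grab n_frames from the stream and return the effective frame rate."""
            cap = cv2.VideoCapture(url)
            if not cap.isOpened():
                raise RuntimeError("could not open stream " + url)

            # Discard a few frames so start-up buffering does not skew the result.
            for _ in range(10):
                cap.read()

            start = time.monotonic()
            received = 0
            while received < n_frames:
                ok, _frame = cap.read()
                if not ok:
                    break
                received += 1
            elapsed = time.monotonic() - start
            cap.release()
            return received / elapsed if elapsed > 0 else 0.0

        if __name__ == "__main__":
            print("measured %.1f frames per second" % measure_fps(STREAM_URL))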

    Real-Time Algorithms for High Dynamic Range Video

    A recurring problem in capturing video is the scene having a range of brightness values that exceeds the capabilities of the capturing device. An example would be a video camera in a bright outdoor area, directed at the entrance of a building. Because of the potentially large brightness difference, it may not be possible to capture details of the inside of the building and the outside simultaneously using just one shutter speed setting. This results in under- and overexposed pixels in the video footage. The approach we follow in this thesis to overcome this problem is temporal exposure bracketing, i.e., using a set of images captured in quick sequence at different shutter settings. Each image then captures one facet of the scene's brightness range. When fused together, a high dynamic range (HDR) video frame is created that reveals details in dark and bright regions simultaneously.

    The process of creating a frame in an HDR video can be thought of as a pipeline where the output of each step is the input to the subsequent one. It begins by capturing a set of regular images using varying shutter speeds. Next, the images are aligned with respect to each other to compensate for camera and scene motion during capture. The aligned images are then merged together to create a single HDR frame containing accurate brightness values of the entire scene. As a last step, the HDR frame is tone mapped in order to be displayable on a regular screen with a lower dynamic range. This thesis covers algorithms for these steps that allow the creation of HDR video in real time. When creating videos instead of still images, the focus lies on high capturing and processing speed and on assuring temporal consistency between the video frames. In order to achieve this goal, we take advantage of the knowledge gained from the processing of previous frames in the video.

    This work addresses the following aspects in particular. The image size parameters for the set of base images are chosen so that as little image data as possible is captured. We make use of the fact that it is not always necessary to capture full-size images when only small portions of the scene require HDR. Avoiding redundancy in the image material is an obvious approach to reducing the overall time taken to generate a frame. With the aid of the previous frames, we calculate brightness statistics of the scene. The exposure values are then chosen such that frequently occurring brightness values are well exposed in at least one of the images in the sequence. The base images from which the HDR frame is created are captured in quick succession. The effects of intermediate camera motion are thus less intense than in the still-image case, and a comparably simpler camera motion model can be used. At the same time, however, there is much less time available to estimate motion. For this reason, we use a fast heuristic that makes use of the motion information obtained in previous frames and is robust to the large brightness difference between the images of an exposure sequence.

    The range of luminance values of an HDR frame must be tone mapped to the displayable range of the output device. Most available tone mapping operators are designed for still images and scale the dynamic range of each frame independently. In situations where the scene's brightness statistics change quickly, these operators produce visible image flicker. We have developed an algorithm that detects such situations in an HDR video. Based on this detection, a temporal stability criterion for the tone mapping parameters then prevents image flicker. All methods for capture, creation and display of HDR video introduced in this work have been fully implemented, tested and integrated into a running HDR video system. The algorithms were analyzed for parallelizability and, where applicable, adjusted and implemented on a high-performance graphics chip.
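
    The thesis develops its own real-time algorithms for each of these stages; as a simplified stand-in, the sketch below shows a common baseline: a weighted merge of bracketed exposures assuming a linear camera response, followed by a Reinhard-style global tone mapper whose log-average luminance is smoothed over time to suppress flicker. The weighting function, the smoothing factor and the choice of operator are assumptions of this sketch, not the methods proposed in the thesis.

        import numpy as np

        def merge_exposures(images, exposure_times):
            """Merge differently exposed frames (float arrays in [0, 1]) into one
            radiance map, giving mid-range pixels the highest weight."""
            acc = np.zeros_like(images[0], dtype=np.float64)
            weights = np.zeros_like(acc)
            for img, t in zip(images, exposure_times):
                w = 1.0 - np.abs(2.0 * img - 1.0)   # low weight near under-/over-exposure
                acc += w * (img / t)                # assumes a linear camera response
                weights += w
            return acc / np.maximum(weights, 1e-6)

        class TemporalTonemapper:
            """Reinhard-style global operator whose key statistic (the log-average
            luminance) is exponentially smoothed across frames to avoid flicker."""

            def __init__(self, smoothing=0.9):
                self.smoothing = smoothing
                self.log_avg = None

            def __call__(self, hdr):
                lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
                frame_avg = np.exp(np.mean(np.log(lum + 1e-6)))
                if self.log_avg is None:
                    self.log_avg = frame_avg
                else:  # temporal stability: blend with the statistic of earlier frames
                    self.log_avg = (self.smoothing * self.log_avg
                                    + (1.0 - self.smoothing) * frame_avg)
                scaled = 0.18 * lum / self.log_avg
                mapped = scaled / (1.0 + scaled)    # compress luminance into [0, 1)
                return hdr * (mapped / np.maximum(lum, 1e-6))[..., None]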

    Bagadus App: Notational data capture and instant video analysis using mobile devices

    Enormous amounts of money and other resources are poured into professional soccer today. Teams will do anything to gain a competitive advantage, including investing heavily in new technology for player development and analysis. In this thesis, we investigate and implement an instant analytical system that captures sports notational data and combines it with high-quality virtual-view video from the Bagadus system, removing the manual labor of traditional video analysis. We present a multi-platform mobile application and a playback system, which together act as a state-of-the-art analytical tool providing soccer experts with the means of capturing annotations and immediately playing back zoomable and pannable video on stadium big screens, computers and mobile devices. By controlling remote playback and drawing on video through the app, sports professionals can provide instant, video-backed analysis of interesting situations on the pitch to players, analysts or even spectators. We investigate how to best design, implement and combine these components into an Instant Replay Analytical Subsystem for the Bagadus project, creating an automated way of viewing and controlling video based on annotations. We describe how the system is optimized in terms of performance, to achieve real-time video control and drawing; scalability, by minimizing network data and memory usage; and usability, through a user-tested interface optimized for accuracy and speed of notational data capture, as well as user customization based on roles and easy filtering of annotations. The system has been tested and adapted through real-life scenarios at Alfheim Stadium for Tromsø Idrettslag (TIL) and at Ullevaal Stadion for the Norway national football team.
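
    As a rough sketch of how small a notational-data event can be kept on the wire, the example below defines a compact annotation message serialized as JSON. All field names and values here are hypothetical; they are not the actual message format used by the Bagadus app.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class Annotation:
            """One notational-data event; every field name here is hypothetical."""
            event_type: str     # e.g. "shot", "tackle", "offside"
            timestamp: float    # seconds into the match video
            duration: float     # length of the replay clip, in seconds
            camera_pan: float   # virtual-camera pan for the replay, in degrees
            camera_zoom: float  # virtual-camera zoom factor
            author_role: str    # e.g. "coach", "analyst"

            def to_message(self) -> bytes:
                # Compact JSON keeps the per-event network traffic small.
                return json.dumps(asdict(self), separators=(",", ":")).encode()

        # An analyst tags a shot; the payload would be sent to the playback system.
        event = Annotation("shot", timestamp=1312.4, duration=12.0,
                           camera_pan=-8.5, camera_zoom=1.6, author_role="analyst")
        print(len(event.to_message()), "bytes")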

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions to, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.

    An Evaluation of Debayering Algorithms on GPU for Real-Time Panoramic Video Recording

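    No abstract is available for this entry. For context only, the sketch below shows the simplest member of the algorithm family such an evaluation covers: bilinear debayering of an RGGB Bayer mosaic, implemented on the CPU with NumPy and SciPy. The paper itself targets GPU implementations, and the RGGB layout is an assumption of this sketch.

        import numpy as np
        from scipy.signal import convolve2d

        def debayer_bilinear(raw):
            """Bilinear demosaicing of an RGGB Bayer mosaic (2-D float array).

            Each colour plane is reconstructed by convolving its sparsely
            sampled pixels with a small interpolation kernel.
            """
            h, w = raw.shape
            r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
            b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
            g_mask = 1.0 - r_mask - b_mask

            k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
            k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0

            r = convolve2d(raw * r_mask, k_rb, mode="same", boundary="symm")
            g = convolve2d(raw * g_mask, k_g, mode="same", boundary="symm")
            b = convolve2d(raw * b_mask, k_rb, mode="same", boundary="symm")
            return np.dstack([r, g, b])

    More elaborate edge-directed and frequency-based debayering algorithms trade extra arithmetic for fewer zipper and false-colour artifacts, which is the quality-versus-throughput trade-off such an evaluation weighs for real-time panoramic recording.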