4,053 research outputs found

    High dynamic range video merging, tone mapping, and real-time implementation

    Get PDF
    Although High Dynamic Range (HDR) imaging has been the subject of significant research over the past fifteen years, the goal of cinema-quality HDR video has not yet been achieved. This work builds on an optical method patented by Contrast Optical that captures sequences of Low Dynamic Range (LDR) images which can be merged into HDR images as the basis for HDR video. Because of the large difference in exposure spacing of the LDR images captured by this camera, existing methods of merging LDR images cannot produce cinema-quality HDR images and video without significant visible artifacts. The focus of the research presented is therefore twofold. The first contribution is a new method of combining LDR images with exposure differences of greater than 3 stops into an HDR image. The second contribution is a method of tone mapping HDR video that addresses HDR video flicker and automates parameter control of the tone mapping operator. A prototype of this HDR video capture technique, together with the combining and tone mapping algorithms, has been implemented in a high-definition HDR video system. Additionally, Field Programmable Gate Array (FPGA) hardware implementation details are given to support real-time HDR video. Still frames from the acquired HDR video, merged and tone mapped with the proposed techniques, are presented.
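
    For orientation, the sketch below shows a generic weighted merge of bracketed LDR exposures into a radiance map (Debevec-style triangular weighting, assuming a linear sensor response). It is not the artifact-free method the thesis proposes for exposures spaced more than 3 stops apart; all names and exposure values are hypothetical.

```python
# Minimal sketch of merging bracketed LDR exposures into an HDR radiance map
# with a triangular weighting function. Generic illustration only; exposure
# times and image names are hypothetical.
import numpy as np

def merge_ldr_to_hdr(images, exposure_times):
    """images: list of float32 arrays in [0, 1]; exposure_times in seconds."""
    eps = 1e-6
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Triangular weight: trust mid-range pixels, distrust clipped ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # radiance estimate from this exposure (linear response assumed)
        den += w
    return num / np.maximum(den, eps)

# Hypothetical usage with three exposures spaced several stops apart:
# hdr = merge_ldr_to_hdr([ldr_dark, ldr_mid, ldr_bright], [1/1000, 1/125, 1/15])
```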

    Multi-Camera Platform for Panoramic Real-Time HDR Video Construction and Rendering

    Get PDF
    High dynamic range (HDR) images are usually obtained by capturing several images of the scene at different exposures. Previous HDR video techniques adopted the same principle by stacking HDR frames in the time domain. We designed a new multi-camera platform which is able to construct and render HDR panoramic video in real time, with 1024 × 256 resolution and a frame rate of 25 fps. We exploit the overlapping fields of view between cameras with different exposures to create an HDR radiance map. We propose a method for HDR frame reconstruction which merges previous HDR imaging techniques with algorithms for panorama reconstruction. The developed FPGA-based processing system is able to reconstruct the HDR frame using the proposed method and tone map the resulting image using a hardware-adapted global operator. The measured throughput of the system is 245 MB/s, which is, to the best of our knowledge, among the fastest HDR video processing systems.
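
    As a rough software illustration of fusing overlapping fields of view captured at different exposures into an HDR panorama strip, the sketch below merges the registered overlap of two adjacent cameras. The camera geometry, overlap width, and exposure times are assumptions; the paper's FPGA pipeline and panorama reconstruction are not reproduced here.

```python
# Minimal sketch of fusing the overlap between two horizontally adjacent,
# already registered cameras with different exposure times into an HDR strip.
import numpy as np

def fuse_overlap(left, right, t_left, t_right, overlap):
    """left/right: float32 frames in [0, 1]; overlap: shared width in pixels."""
    a = left[:, -overlap:]
    b = right[:, :overlap]
    wa = 1.0 - np.abs(2.0 * a - 1.0)   # down-weight clipped pixels
    wb = 1.0 - np.abs(2.0 * b - 1.0)
    radiance = (wa * a / t_left + wb * b / t_right) / np.maximum(wa + wb, 1e-6)
    # Non-overlapping parts keep their single-exposure radiance estimates.
    return np.concatenate(
        [left[:, :-overlap] / t_left, radiance, right[:, overlap:] / t_right],
        axis=1)
```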

    Overview of ghost correction for HDR video stream generation

    No full text
    Most digital cameras use low dynamic range (LDR) image sensors, which can capture only a limited portion of the luminance dynamic range of a scene [1], about two orders of magnitude (roughly 256 to 1024 levels). However, the dynamic range of real-world scenes spans several orders of magnitude (on the order of 10,000 levels). To overcome this limitation, several methods exist for creating high dynamic range (HDR) images: expensive methods use a dedicated HDR image sensor, while low-cost solutions use a conventional LDR image sensor. A large number of low-cost solutions apply temporal exposure bracketing. The HDR image may be constructed with a standard HDR method (in which case an additional step called tone mapping is required to display the HDR image on a conventional monitor), or by directly fusing LDR images taken at different exposure times, providing HDR-like [2] images that can be handled directly by LDR monitors. Temporal exposure bracketing works for static scenes, but it cannot be applied directly to dynamic scenes or HDR video, since camera or object motion between bracketed exposures creates artifacts in the HDR image called ghosts [3]. Several techniques exist for detecting and removing ghost artifacts (variance-based, entropy-based, bitmap-based, and graph-cuts-based ghost detection, among others) [4]; nevertheless, most of these methods are computationally expensive and cannot be considered for real-time implementation. The originality and final goal of our work is to upgrade our current smart camera, which generates an HDR video stream at full sensor resolution (1280x1024) at 60 fps [5]. The HDR stream is produced using exposure bracketing (obtained with a conventional LDR image sensor) combined with a tone mapping algorithm. In this paper, we propose an overview of the ghost correction methods available in the state of the art. The selection of algorithms is driven by our final goal: real-time hardware implementation of the ghost detection and removal stages.
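
    As a minimal software sketch of one of the surveyed families, variance-based ghost detection, the snippet below flags pixels whose exposure-normalised values disagree across the bracketed stack. The threshold and variable names are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of variance-based ghost detection across a bracketed
# exposure stack: static pixels should agree after exposure normalisation,
# so high per-pixel variance indicates motion (ghosting).
import numpy as np

def ghost_mask(frames, exposure_times, threshold=0.01):
    """frames: list of float32 images in [0, 1]; returns a boolean ghost map."""
    normalised = np.stack([f / t for f, t in zip(frames, exposure_times)])
    normalised /= normalised.max() + 1e-6   # make the variance test scale independent
    variance = normalised.var(axis=0)
    return variance > threshold
```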

    Tone mapping in video conference systems

    Get PDF
    Normal sensors can capture only a limited dynamic range. In scenes with a large dynamic range, such as situations with both dark indoor and bright outdoor parts, the image will be either over- or under-exposed if the exposure is not perfect. Producing high dynamic range (HDR) images captures the full dynamic range of the scene. There are two main ways of producing HDR images. One combines multiple exposures from a low dynamic range (LDR) sensor. Another is to use sensors able to capture a higher dynamic range, so-called wide dynamic range sensors. Multiple exposures with a single low dynamic range sensor are not suitable for real-time video because this technique has large problems with movement. Wide dynamic range sensors require only one exposure, but they have difficulties in normal situations where LDR sensors are sufficient. A class of algorithms called tone mapping is used to reduce the high dynamic range image to fit the limitations of normal monitors. Simulations show that applying these algorithms to low dynamic range images changes the apparent illumination of the scene, solving the problem. Tone mapping algorithms presented in the literature are software algorithms. Two groups of algorithms exist: local and global tone mappers. Local algorithms are time consuming and require large amounts of memory; they are not suitable for real-time implementations since they rely on filtering operations for each pixel. Global algorithms do not rely on filtering and are less time consuming: a precomputed curve is used to map the pixels to new values. This makes global algorithms more suitable for video. A reduced tone mapping system is presented. This reduction results in a segmented curve, which drastically reduces the memory required for defining the curve and makes it feasible to control temporal changes. The reduced system has been successfully implemented, achieving sufficient frequencies to be part of a real-time system.
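
    The idea of a precomputed, segmented tone-mapping curve can be sketched as follows: the global curve is sampled at a small number of knots and each pixel is mapped by interpolating between segments, so only the knot table needs to be stored. The Reinhard-style curve and the knot count below are illustrative choices, not the exact operator of this work.

```python
# Minimal sketch of a global tone mapper driven by a precomputed, segmented
# curve: sample the operator at a few knots, then map pixels by interpolation.
import numpy as np

def build_segmented_curve(max_luminance, num_knots=16):
    knots = np.linspace(0.0, max_luminance, num_knots)
    values = knots / (1.0 + knots)   # simple global operator (illustrative)
    return knots, values

def tone_map(hdr_luminance, knots, values):
    # Per-pixel lookup by linear interpolation between curve segments.
    return np.interp(hdr_luminance, knots, values)

# Hypothetical usage:
# knots, values = build_segmented_curve(hdr.max())
# ldr = tone_map(hdr, knots, values)
```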

    Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

    Get PDF
    High Dynamic Range (HDR) displays can show images with higher color contrast levels and peak luminosities than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it needs to be up-scaled using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing techniques include the need for human intervention, the high computation time of the more advanced algorithms, limited peak brightness, and the lack of preservation of artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with peak brightness over 1000 nits while preserving the artistic intent inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pair-wise comparison experiment. We compared our results with those obtained with the most recent methods found in the literature. Experimental results demonstrate that our proposed method outperforms the current state of the art in simple inverse tone mapping methods and performs similarly to more complex and time-consuming advanced techniques.
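
    For illustration only, the snippet below shows a very simple global expansion from LDR code values to absolute HDR display luminance. The exponent and the 1000-nit peak are placeholders; this is not the proposed mid-level-based inverse tone mapping operator.

```python
# Minimal sketch of a global inverse tone mapping (expansion) operator.
# Parameters are illustrative, not those of the paper.
import numpy as np

def expand_ldr(ldr, peak_nits=1000.0, gamma=2.2, exponent=1.5):
    """ldr: float32 image with code values in [0, 1]."""
    linear = ldr ** gamma          # approximately undo the display gamma
    expanded = linear ** exponent  # stretch highlights more than shadows
    return peak_nits * expanded    # map to absolute display luminance (nits)
```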

    Live demonstration: Real-time high dynamic range video acquisition using in-pixel adaptive content-aware tone mapping compression

    Get PDF
    This demonstration targets the acquisition of real-time video sequences involving High Dynamic Range (HDR) scenes. Adaptation to different illumination conditions while preserving contrast is achieved by using a sensor chip which implements an adaptive content-aware tone mapping compression algorithm with in-pixel circuitry. Its response adapts to changing illumination conditions by using, at each frame, a statistical estimation of the light distribution derived from the HDR histogram calculated at the previous frame. This method allows adaptive HDR video while remaining capable of capturing very large dynamic range scenes, including moving objects. Office of Naval Research (USA) N000141410355; Ministerio de Economía y Competitividad IPT-2011-1625-430000; Junta de Andalucía TIC 2338-201
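
    The per-frame adaptation idea can be sketched in software: a compression curve is rebuilt each frame from the previous frame's luminance histogram. The snippet below uses histogram equalisation of log luminance as a stand-in; the chip's in-pixel circuitry and actual statistical estimator are not modelled, and the bin count is arbitrary.

```python
# Minimal sketch of content-aware adaptation: build a tone curve from the
# previous frame's histogram and apply it to the current frame.
import numpy as np

def adapt_curve(prev_frame_luminance, num_bins=64):
    log_lum = np.log2(prev_frame_luminance + 1e-6)
    hist, edges = np.histogram(log_lum, bins=num_bins)
    cdf = np.cumsum(hist).astype(np.float32)
    cdf /= cdf[-1]                 # normalised cumulative distribution
    return edges, cdf

def apply_curve(luminance, edges, cdf):
    log_lum = np.log2(luminance + 1e-6)
    # Each pixel is mapped through the CDF built from the previous frame.
    return np.interp(log_lum, edges[1:], cdf)
```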

    Geometric-based Line Segment Tracking for HDR Stereo Sequences

    Get PDF
    In this work, we propose a purely geometrical approach for the robust matching of line segments in challenging stereo streams with severe illumination changes or High Dynamic Range (HDR) environments. To that purpose, we exploit the univocal nature of the matching problem, i.e. every observation must be corresponded with a single feature or not corresponded at all. We state the problem as a sparse, convex, ℓ1-minimization of the matching vector regularized by the geometric constraints. This formulation allows for the robust tracking of line segments along sequences where traditional appearance-based matching techniques tend to fail due to dynamic changes in illumination conditions. Moreover, the proposed matching algorithm also results in a considerable speed-up over previous state-of-the-art techniques, making it suitable for real-time applications such as Visual Odometry (VO). This, of course, comes at the expense of a slightly lower number of matches in comparison with appearance-based methods, and also limits its application to continuous video sequences, as it is rather constrained to small pose increments between consecutive frames. We validate the claimed advantages by first evaluating the matching performance in challenging video sequences, and then testing the method in a benchmarked point and line based VO algorithm. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This work has been supported by the Spanish Government (project DPI2017-84827-R and grant BES-2015-071606) and by the Andalucian Government (project TEP2012-530)
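
    As a simplified stand-in for the sparse ℓ1 formulation, the sketch below enforces the same univocal (one-to-one) constraint with a Hungarian assignment on a purely geometric cost built from segment midpoints and orientations. The segment representation, weights, and gating threshold are hypothetical and this is not the paper's convex optimization.

```python
# Minimal sketch of univocal geometric matching of line segments between
# consecutive frames; a one-to-one assignment replaces the paper's
# l1-regularised formulation for illustration purposes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_segments(prev, curr, max_cost=30.0):
    """prev, curr: arrays of shape (N, 4) with segment endpoints (x1, y1, x2, y2)."""
    mid_p = (prev[:, :2] + prev[:, 2:]) / 2.0
    mid_c = (curr[:, :2] + curr[:, 2:]) / 2.0
    ang_p = np.arctan2(prev[:, 3] - prev[:, 1], prev[:, 2] - prev[:, 0])
    ang_c = np.arctan2(curr[:, 3] - curr[:, 1], curr[:, 2] - curr[:, 0])
    # Geometric cost: midpoint distance plus a penalty on orientation change.
    cost = (np.linalg.norm(mid_p[:, None] - mid_c[None, :], axis=2)
            + 20.0 * np.abs(np.sin(ang_p[:, None] - ang_c[None, :])))
    rows, cols = linear_sum_assignment(cost)   # enforces the one-to-one constraint
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```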