
    Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

    High Dynamic Range (HDR) displays can show images with higher color contrast levels and peak luminosities than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it needs to be expanded using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing inverse tone mapping techniques include the need for human intervention, the high computation time of more advanced algorithms, limited peak brightness, and the failure to preserve the original artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping that is capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with a peak brightness of over 1000 nits while preserving the artistic intent inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pair-wise comparison experiment. We compared our results with those obtained with the most recent methods found in the literature. Experimental results demonstrate that our proposed method outperforms the current state of the art in simple inverse tone mapping methods, and its performance is similar to that of more complex and time-consuming advanced techniques.
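    As a rough illustration of what a mid-level-anchored global expansion looks like, the sketch below (not the authors' operator, whose details are not given in the abstract) maps an sRGB-encoded LDR frame to absolute luminance, assuming a 1000-nit peak and a 100-nit mid-level anchor:

        # Minimal sketch of a mid-level-anchored global expansion (illustrative only).
        # peak_nits, mid_nits and the power-curve form are assumptions, not the paper's model.
        import numpy as np

        def expand_ldr_to_hdr(ldr_srgb, peak_nits=1000.0, mid_nits=100.0, gamma=2.4):
            """Expand an LDR image (H, W, 3) in [0, 1] to absolute luminance in nits."""
            linear = np.power(np.clip(ldr_srgb, 0.0, 1.0), gamma)        # undo display gamma
            lum = 0.2126 * linear[..., 0] + 0.7152 * linear[..., 1] + 0.0722 * linear[..., 2]
            mid = np.exp(np.mean(np.log(lum + 1e-6)))                    # log-average (mid-level) luminance
            # Power curve chosen so the mid-level maps to mid_nits and full white to peak_nits.
            exponent = np.log(peak_nits / mid_nits) / max(np.log(1.0 / mid), 1e-6)
            out_lum = peak_nits * np.power(np.maximum(lum, 1e-6), exponent)
            return linear * (out_lum / np.maximum(lum, 1e-6))[..., None] # rescale RGB, keep chromaticity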

    Non-Iterative Tone Mapping With High Efficiency and Robustness

    This paper proposes an efficient approach to tone mapping that provides high perceptual image quality for diverse scenes. Most existing methods that optimize images for a perceptual model use an iterative, time-consuming process. To solve this problem, we propose a new layer-based, non-iterative approach to finding an optimal detail layer for generating a tone-mapped image. The proposed method consists of the following three steps. First, an image is decomposed into a base layer and a detail layer to separate the illumination and detail components. Next, the base layer is globally compressed by applying a statistical naturalness model based on the statistics of luminance and contrast in natural scenes. The detail layer is locally optimized based on a structure fidelity measure, which represents the degree of local structural detail preservation. Finally, the proposed method constructs the final tone-mapped image by combining the resultant layers. The performance evaluation reveals that the proposed method outperforms the benchmarking methods for almost all the benchmark test images. Specifically, the proposed method improves the average tone-mapped image quality index-II (TMQI-II), feature similarity index for tone-mapped images (FSITM), and high dynamic range visible difference predictor (HDR-VDP-2.2) scores by up to 0.651 (223.4%), 0.088 (11.5%), and 10.371 (25.2%), respectively, compared with the benchmarking methods, while improving processing speed by over 2611 times. Furthermore, the proposed method decreases the standard deviations of TMQI-II, FSITM, HDR-VDP-2.2, and processing time by up to 81.4%, 18.9%, 12.6%, and 99.9%, respectively, compared with the benchmarking methods.
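    The base/detail pipeline described in the abstract can be sketched generically as follows; the edge-preserving filter, the statistical naturalness model, and the structure fidelity optimization are replaced by simple stand-ins (a Gaussian filter and a fixed compression factor), so this shows only the skeleton of the three steps, not the paper's method:

        # Generic base/detail tone mapping skeleton (decompose, compress base, keep detail, recombine).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def base_detail_tonemap(hdr_rgb, sigma=8.0, compression=0.4):
            """Tone-map a linear HDR image (H, W, 3) into [0, 1]."""
            lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
            log_lum = np.log10(np.maximum(lum, 1e-8))
            base = gaussian_filter(log_lum, sigma)                 # illumination (base) layer
            detail = log_lum - base                                # structural detail layer, preserved as-is
            base_compressed = compression * (base - base.max())    # global range compression, anchored at white
            out_lum = np.power(10.0, base_compressed + detail)
            ratio = (out_lum / np.maximum(lum, 1e-8))[..., None]
            return np.clip(hdr_rgb * ratio, 0.0, 1.0)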

    Mixing tone mapping operators on the GPU by differential zone mapping based on psychophysical experiments

    In this paper, we present a new technique for efficiently displaying High Dynamic Range (HDR) images on Low Dynamic Range (LDR) displays on the GPU. The described process has three stages. First, the input image is segmented into luminance zones. Second, the tone mapping operator (TMO) that performs best in each zone is automatically selected. Finally, the resulting tone mapping (TM) outputs for each zone are merged, generating the final LDR output image. To establish which TMO performs best in each luminance zone, we conducted a preliminary psychophysical experiment using a set of HDR images and six different TMOs. We validated our composite technique on several (new) HDR images and conducted a further psychophysical experiment, using an HDR display as the reference, which establishes the advantages of our hybrid three-stage approach over a traditional individual TMO. Finally, we present a GPU version, which is perceptually equal to the standard version but with much improved computational performance.
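    The three-stage idea (segment by luminance, tone-map each zone with its own operator, merge) can be sketched as below; the two operators and the soft zone mask are simple stand-ins rather than the six psychophysically selected TMOs or the GPU merging used in the paper:

        # Sketch of zone-based TMO mixing with two stand-in operators and a blurred zone mask.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def reinhard_global(lum):
            key = np.exp(np.mean(np.log(lum + 1e-6)))              # log-average luminance
            scaled = 0.18 * lum / key
            return scaled / (1.0 + scaled)

        def log_mapping(lum):
            return np.log1p(lum) / np.log1p(lum.max())

        def zone_mixed_tonemap(hdr_rgb):
            lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
            log_lum = np.log10(np.maximum(lum, 1e-8))
            # Soft mask for the bright zone; blurring hides the zone boundaries after merging.
            bright = gaussian_filter((log_lum > np.median(log_lum)).astype(np.float64), 16.0)
            out_lum = bright * reinhard_global(lum) + (1.0 - bright) * log_mapping(lum)
            ratio = (out_lum / np.maximum(lum, 1e-8))[..., None]
            return np.clip(hdr_rgb * ratio, 0.0, 1.0)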

    Contemplation of tone mapping operators in high dynamic range imaging

    The technique of tone mapping has become widely popular owing to its applications in digital imaging. A considerable number of tone mapping techniques have been developed so far, and which method is preferable depends on the requirements of the user. In this paper, some of the techniques for tone mapping/tone reproduction of high dynamic range images are reviewed, and a classification of tone mapping operators is given. However, these techniques have been found to fall short of providing high-quality visualization of high dynamic range images. This paper highlights the drawbacks of the existing traditional methods so that tone mapping techniques can be improved.

    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing, cutting-edge topic in the applied optics, image sensor, and image processing fields that aims to go beyond the limitations of traditional photography. The innovations of computational photography allow the photographer not merely to take an image but, more importantly, to perform computations on the captured image data. Good examples of these innovations include high dynamic range imaging, focus stacking, super-resolution, motion deblurring, and so on. Although extensive work has been done to explore image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm which combines focus stacking and high dynamic range (HDR) imaging in order to produce an image with a greater depth of field (DOF) and dynamic range than any of the input images. In this dissertation, I also investigate super-resolution image restoration from multiple images, which are possibly degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is estimated separately. I also do not place any restrictions on the motion fields among images; that is, I estimate dense motion fields without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to techniques for denoising or removing blur from a single captured image. In my dissertation, space-varying point spread function (PSF) estimation and image deblurring for a single image are also investigated. Regarding the PSF estimation, I do not place any restrictions on the type of blur or how the blur varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even for regions where the correct PSF is not initially clear. I also bring image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.
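    For the combined focus stacking and HDR idea, a minimal per-pixel weighted fusion can be sketched as follows; the well-exposedness and sharpness weights and the naive weighted average are assumptions standing in for the dissertation's algorithm, which would more realistically use a multiscale (e.g., Laplacian pyramid) blend:

        # Sketch of fusing a stack that varies in both focus and exposure, weighting each pixel
        # by how well exposed and how sharp it is in each input image.
        import numpy as np
        from scipy.ndimage import laplace

        def fuse_stack(images, sigma_exposure=0.2):
            """images: list of (H, W, 3) arrays in [0, 1] with varying focus and exposure."""
            weights = []
            for img in images:
                gray = img.mean(axis=-1)
                well_exposed = np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma_exposure ** 2))
                sharpness = np.abs(laplace(gray)) + 1e-6           # simple focus measure
                weights.append(well_exposed * sharpness)
            weights = np.stack(weights)
            weights /= weights.sum(axis=0, keepdims=True)          # normalize across the stack
            return np.sum(weights[..., None] * np.stack(images), axis=0)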

    High Dynamic Range Imaging by Perceptual Logarithmic Exposure Merging

    In this paper we emphasize a similarity between the Logarithmic-Type Image Processing (LTIP) model and the Naka-Rushton model of the Human Visual System (HVS). LTIP is a derivation of Logarithmic Image Processing (LIP) that replaces the logarithmic function with a ratio of polynomial functions. Based on this similarity, we show that it is possible to present a unifying framework for the High Dynamic Range (HDR) imaging problem, namely that performing exposure merging under the LTIP model is equivalent to standard irradiance map fusion. The resulting HDR algorithm is shown to provide high quality in both subjective and objective evaluations.
    Comment: 14 pages, 8 figures. Accepted at the AMCS journal.
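    A minimal sketch of the Naka-Rushton-style compressive response the paper relates to the LTIP generator, and of merging exposures after mapping them through it, is given below; the semisaturation constant and the plain average over exposures are illustrative assumptions, not the paper's exact fusion scheme:

        # Naka-Rushton-style compressive response and a toy exposure merge in that domain.
        import numpy as np

        def naka_rushton(intensity, sigma=0.18):
            """Compressive response r = I / (I + sigma), mapping [0, inf) into [0, 1)."""
            return intensity / (intensity + sigma)

        def inverse_naka_rushton(response, sigma=0.18):
            return sigma * response / np.maximum(1.0 - response, 1e-6)

        def merge_exposures(exposures, exposure_times, sigma=0.18):
            """Average per-exposure radiance estimates in the compressed domain, then map back."""
            responses = [naka_rushton(img / t, sigma) for img, t in zip(exposures, exposure_times)]
            return inverse_naka_rushton(np.mean(responses, axis=0), sigma)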