
    Testing HDR image rendering algorithms

    Eight high-dynamic-range image rendering algorithms were tested using ten high-dynamic-range pictorial images. A large-scale paired-comparison psychophysical experiment was developed containing two sections, comparing the overall rendering performance and the grayscale tone-mapping performance, respectively. An interval scale of preference was created to evaluate the rendering results. The results showed that tone-mapping performance was consistent with the overall rendering results, and that Durand and Dorsey's fast bilateral filtering technique and Reinhard's photographic tone reproduction have the best overall rendering performance. The goal of this experiment was to establish a sound testing and evaluation methodology, based on psychophysical experiment results, for future research on the accuracy of rendering algorithms.
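
    A paired-comparison experiment of this kind is commonly reduced to an interval scale with Thurstone's Case V scaling. The sketch below is a standard-library-only illustration on invented data; the `wins` matrix and observer count are hypothetical, and the abstract does not state which scaling method the experiment actually used.

```python
from statistics import NormalDist

# Hypothetical win counts (NOT data from the experiment):
# wins[i][j] = number of observers preferring algorithm i over j.
wins = [
    [0, 14, 18],
    [6, 0, 12],
    [2, 8, 0],
]
n_obs = 20  # hypothetical observers per pair

n = len(wins)
z = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            p = wins[i][j] / n_obs
            # Clip unanimous proportions so the inverse CDF stays finite
            p = min(max(p, 0.5 / n_obs), 1 - 0.5 / n_obs)
            z[i][j] = NormalDist().inv_cdf(p)

# Case V interval-scale value: mean z-score against all other algorithms
scale = [sum(row) / (n - 1) for row in z]
```

    Because preference proportions are antisymmetric, the resulting scale values are centered at zero; only their differences are meaningful, which is exactly the "interval scale of preference" property.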

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today's state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen sizes, higher luminance, and higher resolution than ever before. However, from a color science perspective, there are clearly opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the processes involved in a typical video processing chain for consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for the visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.

    Non-Iterative Tone Mapping With High Efficiency and Robustness

    This paper proposes an efficient approach for tone mapping, which provides high perceptual image quality for diverse scenes. Most existing methods optimize images against a perceptual model through an iterative, time-consuming process. To solve this problem, we propose a new layer-based, non-iterative approach that finds an optimal detail layer for generating a tone-mapped image. The proposed method consists of the following three steps. First, an image is decomposed into a base layer and a detail layer to separate the illumination and detail components. Next, the base layer is globally compressed by applying a statistical naturalness model based on the statistics of luminance and contrast in natural scenes. The detail layer is locally optimized based on the structure fidelity measure, which represents the degree of local structural detail preservation. Finally, the proposed method constructs the final tone-mapped image by combining the resultant layers. The performance evaluation reveals that the proposed method outperforms the benchmarking methods on almost all the benchmarking test images. Specifically, the proposed method improves the average tone mapping quality index-II (TMQI-II), feature similarity index for tone-mapped images (FSITM), and high-dynamic-range visible difference predictor (HDR-VDP-2.2) scores by up to 0.651 (223.4%), 0.088 (11.5%), and 10.371 (25.2%), respectively, compared with the benchmarking methods, while improving the processing speed by a factor of more than 2611. Furthermore, the proposed method decreases the standard deviations of TMQI-II, FSITM, HDR-VDP-2.2, and processing time by up to 81.4%, 18.9%, 12.6%, and 99.9%, respectively, when compared with the benchmarking methods.
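
    The three-step pipeline can be sketched in miniature. The code below is an illustrative stand-in, not the paper's method: a local average on a 1-D scanline replaces the base/detail decomposition, and a fixed compression factor replaces the statistical-naturalness and structure-fidelity optimizations.

```python
import math

# Toy HDR scanline spanning about four decades of luminance
hdr = [0.01, 0.02, 0.015, 5.0, 6.0, 5.5, 120.0, 110.0, 125.0]

log_l = [math.log10(v) for v in hdr]

# Step 1: decompose into base (local average of log-luminance,
# a stand-in for an edge-aware filter) and detail layers
r = 1
base = []
for i in range(len(log_l)):
    win = log_l[max(0, i - r): i + r + 1]
    base.append(sum(win) / len(win))
detail = [l - b for l, b in zip(log_l, base)]

# Step 2: globally compress the base layer only
compression = 0.4  # placeholder for the naturalness-driven choice

# Step 3: recombine compressed base with the (here unmodified) detail
tone_mapped = [10 ** (compression * b + d) for b, d in zip(base, detail)]
```

    Compressing only the base layer shrinks the overall dynamic range while the detail layer preserves local contrast, which is the core idea behind layer-based tone mapping.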

    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial-domain, wavelet-transform-domain, and multiple-image-fusion approaches are investigated in this dissertation research. In the spatial-domain category, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale, neighborhood-dependent contrast enhancement process is proposed to enhance the high-frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprising adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale, image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model-based technique thanks to its high-speed processing capability, while AINDANE produces higher-quality enhanced images owing to its multi-scale contrast enhancement. Both algorithms exhibit balanced luminance and contrast enhancement, higher robustness, and better color consistency compared with conventional techniques.
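
    The I-R pipeline can be sketched on a single channel. This is a minimal illustration, not the dissertation's algorithm: a local average stands in for the illuminance estimate, and a plain gamma-style curve stands in for the windowed inverse sigmoid transfer function, whose parameters are not given here.

```python
# Normalized single-channel intensities of a mostly dark scene (toy data)
intensity = [0.02, 0.03, 0.05, 0.4, 0.6, 0.9]

# 1. Estimate illuminance L with a local average (stand-in for a 2-D low-pass)
radius = 1
illum = []
for i in range(len(intensity)):
    win = intensity[max(0, i - radius): i + radius + 1]
    illum.append(sum(win) / len(win))

# 2. Extract reflectance: I = L * R, so R = I / L
refl = [v / max(l, 1e-6) for v, l in zip(intensity, illum)]

# 3. Compress the illuminance dynamic range with a nonlinear transfer
def transfer(l):
    return l ** 0.5  # gamma-style placeholder for the windowed inverse sigmoid

enh_illum = [transfer(l) for l in illum]

# 4. Recombine enhanced illuminance with the preserved reflectance
enhanced = [min(1.0, e * rf) for e, rf in zip(enh_illum, refl)]
```

    Because only the illuminance is compressed, the reflectance (and with it local detail) survives the brightening of dark regions.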
In the transform-domain approach, wavelet-transform-based image denoising and contrast enhancement algorithms are developed. The denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to exploit the interlevel dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the complex iterative computations needed to find a numerical solution. This relatively low-complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high-quality denoised images.
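
    The wavelet-shrinkage idea behind such denoisers can be shown with a much simpler stand-in: a one-level Haar transform with univariate soft thresholding, in place of the bivariate MAP estimator over DT-CWT coefficients described in the abstract.

```python
import math

def haar_fwd(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inv(approx, detail):
    """Exact inverse of haar_fwd."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def soft(c, t):
    """Soft threshold: shrink coefficient c toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

noisy = [1.0, 1.1, 0.9, 1.05, 5.0, 5.1, 4.9, 5.05]  # step edge + noise
approx, detail = haar_fwd(noisy)
detail = [soft(d, 0.1) for d in detail]  # kill small (noise) coefficients
denoised = haar_inv(approx, detail)
```

    Small detail coefficients (noise) are suppressed while the large step edge survives, which is the same smooth-but-edge-preserving behavior the MAP estimator achieves with a more accurate coefficient prior.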

    Toward a mathematical theory of perception

    A new technique for the modelling of perceptual systems, called formal modelling, is developed. This technique begins with qualitative observations about the perceptual system, the so-called perceptual symmetries, to obtain through mathematical analysis certain model structures which may then be calibrated by experiment. The analysis proceeds in two different ways depending upon the choice of linear or nonlinear models. For the linear case, the analysis proceeds through the methods of unitary representation theory: it begins with a unitary group representation on the image space and produces what we have called the fundamental structure theorem. For the nonlinear case, the analysis makes essential use of infinite-dimensional manifold theory: it begins with a Lie group action on an image manifold and produces the fundamental structure formula. These techniques will be used to study the brightness perception mechanism of the human visual system. Several visual groups are defined and their corresponding structures for visual system models are obtained. A new transform, called the Mandala transform, will be deduced from a certain visual group, and its implications for image processing will be discussed. Several new phenomena of brightness perception will be presented, including new facts about the Mach band illusion, new adaptation phenomena, and a new visual illusion. A visual model based on the above techniques will be presented. It will also be shown how statistical estimation theory can be used in the study of contrast adaptation. Furthermore, a mathematical interpretation of unconscious inference and a simple explanation of the Tolhurst effect without mutual channel inhibition will be given. Finally, image processing algorithms suggested by the model will be used to process a real-world image for enhancement and for "form" and texture extraction.

    Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement

    Low-light image enhancement strives to improve contrast, adjust visibility, and restore distortions in color and texture. Existing methods usually pay more attention to improving visibility and contrast by increasing the lightness of low-light images, while disregarding the significance of color and texture restoration for high-quality images. To address this issue, we propose a novel luminance and chrominance dual-branch network, termed LCDBNet, for low-light image enhancement, which divides the task into two sub-tasks: luminance adjustment and chrominance restoration. Specifically, LCDBNet is composed of two branches, a luminance adjustment network (LAN) and a chrominance restoration network (CRN). LAN takes responsibility for learning brightness-aware features, leveraging long-range dependency and local attention correlation, while CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition. Finally, a fusion network is designed to blend their learned features to produce visually impressive images. Extensive experiments conducted on seven benchmark datasets validate the effectiveness of the proposed LCDBNet, and the results show that LCDBNet achieves superior performance in terms of multiple reference/non-reference quality evaluators compared with other state-of-the-art competitors. Our code and pretrained model will be available.
    Comment: 14 pages, 16 figures
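
    The luminance/chrominance division itself can be shown in miniature. The sketch below only illustrates the split-process-fuse structure: a BT.601 color-space conversion stands in for the decomposition, and trivial hand-set gains stand in for the two learned branches (LAN, CRN) and the fusion network.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 luma and scaled color differences, values in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Exact inverse of rgb_to_ycbcr."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

low_light = (0.08, 0.06, 0.05)  # a dim, slightly warm pixel
y, cb, cr = rgb_to_ycbcr(*low_light)
y = min(1.0, y * 4.0)           # "LAN": brighten the luminance branch
cb, cr = cb * 1.1, cr * 1.1     # "CRN": mild chrominance restoration
enhanced = ycbcr_to_rgb(y, cb, cr)  # "fusion": recombine the branches
```

    Processing the two components separately is what lets brightness be raised aggressively without washing out the color cast, which is the motivation the abstract gives for the dual-branch design.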

    Photographic tone reproduction for digital images

    Technical report. A classic photographic task is the mapping of the potentially high dynamic range of real-world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners, who must map digital images to a low-dynamic-range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and is shown to produce good results for the wide variety of images that we have tested.
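
    The paper's global operator is compact enough to sketch directly (the local dodging-and-burning extension is omitted, and the luminance values below are illustrative): luminance is scaled by a "key" value a relative to the scene's log-average, then compressed so that the brightest value maps to white.

```python
import math

def reinhard_tonemap(luminances, a=0.18):
    """Global photographic operator: L_d = L(1 + L/L_white^2) / (1 + L)."""
    delta = 1e-6  # guard against log(0)
    log_avg = math.exp(
        sum(math.log(delta + lw) for lw in luminances) / len(luminances)
    )
    # Scale by the key value relative to the log-average luminance
    scaled = [a * lw / log_avg for lw in luminances]
    # L_white: smallest luminance mapped to pure white (here the maximum)
    l_white = max(scaled)
    return [l * (1 + l / l_white ** 2) / (1 + l) for l in scaled]

hdr_luminances = [0.05, 0.2, 1.0, 40.0, 900.0]
ldr = reinhard_tonemap(hdr_luminances)
```

    The operator is monotone, compresses highlights far more than shadows, and by construction sends the maximum scaled luminance exactly to 1.0.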

    Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction

    Photographs taken with less-than-ideal exposure settings often display poor visual quality. Since the required corrections vary significantly, it is difficult for a single neural network to handle all exposure problems. Moreover, the inherent limitations of convolutions hinder the model's ability to restore faithful color or details in extremely over-/under-exposed regions. To overcome these limitations, we propose a Macro-Micro-Hierarchical transformer, which consists of a macro attention to capture long-range dependencies, a micro attention to extract local features, and a hierarchical structure for coarse-to-fine correction. Specifically, the complementary macro-micro attention designs enhance locality while allowing global interactions. The hierarchical structure enables the network to correct exposure errors of different scales layer by layer. Furthermore, we propose a contrast constraint and couple it seamlessly into the loss function, where the corrected image is pulled towards the positive sample and pushed away from dynamically generated negative samples, so that the remaining color distortion and loss of detail can be removed. We also extend our method as an image enhancer for low-light face recognition and low-light semantic segmentation. Experiments demonstrate that our approach obtains more attractive results than state-of-the-art methods, quantitatively and qualitatively.
    Comment: Accepted by ACM MM 202
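
    The pull/push behavior of such a contrast constraint can be illustrated on toy vectors. This is a generic sketch, not the paper's loss: plain MSE stands in for whatever feature-space distance the method uses, and the samples are hand-made.

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def contrast_loss(anchor, positive, negatives, eps=1e-6):
    """Pull the anchor toward the positive, push it from the negatives."""
    pull = mse(anchor, positive)
    push = sum(mse(anchor, n) for n in negatives) / len(negatives)
    return pull / (push + eps)  # small when close to positive, far from negatives

reference = [0.5, 0.6, 0.55]                           # well-exposed target
good_fix = [0.48, 0.61, 0.54]                          # close to the reference
bad_fix = [0.1, 0.9, 0.2]                              # still distorted
negatives = [[0.05, 0.05, 0.05], [0.95, 0.95, 0.95]]   # under-/over-exposed
```

    A well-corrected image yields a much smaller loss than a distorted one, so gradient descent on this ratio drives the correction toward the positive sample while staying away from the exposure-degraded negatives.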

    Noise-Aware Image Enhancement for Underexposed Images

    A new contrast enhancement method for underexposed images, in which substantial noise is hidden, is proposed in this paper. Under low-light conditions, images taken by digital cameras have low contrast in dark or bright regions due to the limited dynamic range of imaging sensors. Various contrast enhancement methods have been proposed, but they suffer from two problems: (1) loss of detail in bright regions due to over-enhancement of contrast, and (2) amplification of noise in dark regions, because noisy images are not taken into account. The proposed method overcomes these problems by applying a shadow-up function to adaptive gamma correction with weighting distribution. A denoising filter is also used to avoid amplifying noise in dark regions. As a result, the proposed method not only enhances the contrast of dark regions but also avoids amplifying noise, even in strong-noise environments.
    Tokyo Metropolitan University, 2019-03-25, Master's thesis (Engineering)
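
    Adaptive gamma correction with weighting distribution (AGCWD-style) can be sketched as follows; the thesis's shadow-up modification and denoising filter are omitted, and the 8-level toy image is invented for brevity.

```python
levels = 8
pixels = [0, 0, 1, 1, 1, 2, 2, 3, 5, 7]  # mostly dark toy image

# Intensity histogram and probability density
hist = [pixels.count(l) for l in range(levels)]
n = len(pixels)
pdf = [h / n for h in hist]

# Weighting distribution: smooth the pdf to avoid over-enhancement
pdf_max, pdf_min = max(pdf), min(pdf)
alpha = 0.5
wpdf = [pdf_max * ((p - pdf_min) / (pdf_max - pdf_min)) ** alpha for p in pdf]

# Cumulative distribution of the weighted pdf
total = sum(wpdf)
cdf, acc = [], 0.0
for w in wpdf:
    acc += w
    cdf.append(acc / total)

lmax = levels - 1
def correct(l):
    gamma = 1.0 - cdf[l]  # dark-dominated histogram -> small gamma -> brightening
    return lmax * (l / lmax) ** gamma

enhanced = [correct(l) for l in pixels]
```

    Because the gamma for each level is driven by the (weighted) cumulative histogram, dark levels in an underexposed image get a strong lift while the brightest level is left fixed, which keeps highlights from clipping.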