510 research outputs found

    A comprehensive study on tone mapping of high dynamic range images with subjective tests

    A high dynamic range (HDR) image has a very wide range of luminance levels that traditional low dynamic range (LDR) displays cannot visualize. For this reason, HDR images are usually stored in 8-bit representations in which the alpha channel of each pixel carries a shared exponent, sometimes referred to as exponential notation [43]. Tone mapping operators (TMOs) transform images from the high dynamic range domain to the low dynamic range domain by compressing pixel values so that traditional LDR displays can visualize them. The purpose of this thesis is to identify and analyse differences and similarities between the wide range of tone mapping operators available in the literature. Each TMO has been analysed using subjective studies under different conditions, including environment, luminance, and colour. Several inverse tone mapping operators, HDR mappings with exposure fusion, histogram adjustment, and retinex have also been analysed in this study. Nineteen different TMOs have been examined using a variety of HDR images. The mean opinion score (MOS) of each selected TMO was calculated from the ratings of 25 independent observers, taking into account the candidates' age, vision, and colour blindness.
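
    As a rough illustration of the operations described above (not code from the thesis), the sketch below decodes the shared-exponent RGBE representation, applies a simple Reinhard-style global tone mapping curve, and computes a MOS; the function names and parameters are hypothetical.

```python
import numpy as np

def rgbe_to_float(rgbe):
    # Decode Radiance RGBE pixels (8-bit mantissas plus a shared exponent in
    # the fourth channel) into floating-point radiance, following the common
    # rgbe2float convention; a zero exponent marks a black pixel.
    rgbe = np.asarray(rgbe)
    e = rgbe[..., 3].astype(np.int32)
    scale = np.where(e > 0, np.ldexp(1.0, e - 136), 0.0)  # 2 ** (e - 128 - 8)
    return rgbe[..., :3].astype(np.float64) * scale[..., None]

def reinhard_global(hdr, key=0.18, eps=1e-6):
    # A simple global TMO: normalize by the log-average luminance, compress
    # with L / (1 + L), then gamma-encode to an 8-bit LDR image.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = (key / log_avg) * lum
    compressed = scaled / (1.0 + scaled)
    ldr = np.clip(hdr * (compressed / (lum + eps))[..., None], 0.0, 1.0)
    return np.round(255.0 * ldr ** (1.0 / 2.2)).astype(np.uint8)

def mean_opinion_score(ratings):
    # MOS for one TMO is simply the arithmetic mean of the observers' ratings.
    return float(np.mean(ratings))
```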

    Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

    Recovering a high dynamic range (HDR) image from a single low dynamic range (LDR) input image is challenging due to missing details in under-/over-exposed regions caused by quantization and saturation of camera sensors. In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model. We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization. We then propose to learn three specialized CNNs to reverse these steps. By decomposing the problem into specific sub-tasks, we impose effective physical constraints to facilitate the training of individual sub-networks. Finally, we jointly fine-tune the entire model end-to-end to reduce error accumulation. With extensive quantitative and qualitative experiments on diverse image datasets, we demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
    Comment: CVPR 2020. Project page: https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR Code: https://github.com/alex04072000/SingleHD
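
    The forward model the paper inverts can be written down in a few lines. The sketch below is a toy version of that pipeline, with a placeholder gamma curve standing in for the learned camera response function (the paper instead trains one CNN per stage to reverse each step); the name and defaults are illustrative only.

```python
import numpy as np

def hdr_to_ldr(hdr, crf=lambda x: np.power(x, 1.0 / 2.2), bits=8):
    # Toy forward model of LDR image formation: (1) dynamic range clipping,
    # (2) a non-linear camera response function, (3) quantization.
    # `hdr` is assumed to be exposure-scaled so that 1.0 is the clipping point.
    clipped = np.clip(hdr, 0.0, 1.0)            # (1) sensor saturation
    mapped = crf(clipped)                       # (2) non-linear CRF (placeholder gamma)
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels   # (3) quantization to 2**bits levels
```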

    The effect of image size on the color appearance of image reproductions

    Original and reproduced art are usually viewed under quite different viewing conditions. One of the interesting differences in viewing conditions is the difference in size. The main focus of this research was the investigation of the effect of image size on the color perception of rendered images. This research had several goals. The first goal was to develop an experimental paradigm for measuring the effect of image size on color appearance. The second goal was to identify the image attributes most affected by changes of image size. The final goal was to design and evaluate algorithms to compensate for the change of visual angle (size). To achieve the first goal, an exploratory experiment was performed using a colorimetrically characterized digital projector and LCD. The projector and LCD are both light-emitting devices and in this sense are similar soft-copy media, yet the physical sizes of the images reproduced on the LCD and the projector screen could be very different. Additionally, one could benefit from the flexibility of soft-copy reproduction devices, such as real-time image rendering, which is essential for adjustment experiments. The capability of the experimental paradigm to reveal the change of appearance for a change of visual angle (size) was demonstrated by conducting a paired-comparison experiment. Through contrast-matching experiments, achromatic contrast, chromatic contrast, and mean luminance of an image were identified as the attributes most affected by changes of image size. The extent and trend of the change in each attribute were measured using matching experiments. Algorithms to compensate for the image size effect were designed and evaluated. The correction algorithms were tested against traditional colorimetric image rendering using a paired-comparison technique. The paired-comparison results confirmed the superiority of the correction algorithms over traditional colorimetric image rendering for size-effect compensation.
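
    The abstract does not spell out the correction algorithms, so the sketch below is only a hypothetical illustration of the kind of compensation described: rescaling an image's achromatic contrast about its mean and adjusting mean luminance before display at a different size. In practice the gains would come from the matching experiments; the values here are placeholders.

```python
import numpy as np

def compensate_size_change(img, contrast_gain=1.1, luminance_gain=1.05):
    # Hypothetical size-effect compensation: boost contrast about the image
    # mean and raise mean luminance. The specific gains are placeholders,
    # not values measured in the thesis.
    img = img.astype(np.float64)
    mean = img.mean()
    adjusted = (img - mean) * contrast_gain + mean * luminance_gain
    return np.clip(adjusted, 0.0, 255.0).astype(np.uint8)
```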

    A new in-camera color imaging model for computer vision

    Ph.D. thesis (Doctor of Philosophy)