
    High dynamic range imaging for archaeological recording

    This paper notes the adoption of digital photography as a primary recording medium within archaeology and reviews some of the issues and problems this presents. Particular attention is given to the problem of recording high-contrast scenes, and High Dynamic Range (HDR) imaging from multiple exposures is suggested as a means of archiving such scenes so that they can later be tone-mapped into a variety of visualisations. Exposure fusion is also considered, although it is noted to have some disadvantages. Three case studies are then presented: (1) a very high contrast photograph taken from within a rock-cut tomb at Cala Morell, Menorca; (2) an archaeological test-pitting exercise requiring rapid acquisition of photographic records in challenging circumstances; and (3) legacy material consisting of three differently exposed colour positive (slide) photographs of the same scene. In each case, HDR methods are shown to significantly aid the generation of a high-quality illustrative record photograph, and it is concluded that HDR imaging could serve an effective role in archaeological photographic recording, although problems remain in archiving and distributing HDR radiance-map data.
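
The multiple-exposure workflow described above can be sketched in a few lines. The hat weighting and global Reinhard tone curve below are generic illustrative choices, not the paper's own pipeline, and a linear camera response is assumed (real cameras need response recovery first):

```python
import numpy as np

def merge_exposures(images, times):
    """Merge differently exposed shots of one scene into a radiance map.

    Assumes a linear camera response. `images` are floats in [0, 1];
    `times` are exposure times in seconds.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: trust mid-tones,
        acc += w * (img / t)               # ignore clipped pixels (w == 0)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

def reinhard_tonemap(radiance):
    """Global Reinhard operator: compress radiance into [0, 1)."""
    return radiance / (1.0 + radiance)

# Simulated bracket of the same scene at three shutter speeds.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, size=(8, 8))            # "true" radiance
times = [0.25, 1.0, 4.0]
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]  # sensor clips at 1

hdr = merge_exposures(shots, times)   # archive-ready radiance map
ldr = reinhard_tonemap(hdr)           # one possible visualisation
```

Because saturated pixels receive zero weight, the merged map recovers the full scene range even though each individual shot clips part of it.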

    A Retinex-based Image Enhancement Scheme with Noise Aware Shadow-up Function

    This paper proposes a novel image contrast enhancement method based on a noise-aware shadow-up function combined with Retinex (retina and cortex) decomposition. Under low-light conditions, images taken by digital cameras have low contrast in dark or bright regions, due to the limited dynamic range of imaging sensors, and various contrast enhancement methods have been proposed in response. The proposed method can enhance image contrast without either over-enhancement or noise amplification. An image is first decomposed into an illumination layer and a reflectance layer following Retinex theory, and the lightness of the illumination layer is then adjusted; a shadow-up function prevents over-enhancement. The proposed mapping function, designed using a noise-aware histogram, enhances contrast in dark regions while avoiding noise amplification, even in strongly noisy environments.
    Comment: To appear in IWAIT-IFMIA 201
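
The decomposition step can be illustrated with a minimal single-scale sketch: a Gaussian blur stands in for the illumination estimate, and a plain gamma lift stands in for the paper's noise-aware, histogram-derived mapping (both are placeholder assumptions, not the authors' method):

```python
import numpy as np

def blur(img, sigma=2.0):
    """Separable Gaussian blur (reflect padding) as a cheap
    illumination estimate."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # horizontal pass, then vertical pass
    h = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, h)

def retinex_enhance(img, gamma=0.6, eps=1e-6):
    """Split luminance (values in (0, 1]) into illumination x reflectance,
    lift the illumination with a gamma curve, then recombine."""
    illum = np.clip(blur(img), eps, 1.0)
    refl = img / illum                 # reflectance layer (kept as-is)
    illum_up = illum ** gamma          # gamma < 1 lifts shadows
    return np.clip(illum_up * refl, 0.0, 1.0)

img = np.linspace(0.05, 0.6, 64).reshape(8, 8)  # dark test gradient
out = retinex_enhance(img)
```

Adjusting only the illumination layer is what lets the method brighten shadows without amplifying high-frequency detail, which stays in the reflectance layer.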

    Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

    High Dynamic Range (HDR) displays can show images with higher contrast and peak luminance than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it must be up-scaled by a so-called inverse tone mapping algorithm. Several inverse tone mapping techniques have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing techniques include the need for human intervention, the high computation time of the more advanced algorithms, limited peak brightness, and failure to preserve artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping, capable of real-time video processing. The proposed algorithm expands LDR images into HDR images with a peak brightness of over 1000 nits while preserving the artistic intent inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pair-wise comparison experiment, comparing against the most recent methods in the literature. Experimental results demonstrate that the proposed method outperforms the current state of the art in simple inverse tone mapping, and performs similarly to more complex and time-consuming advanced techniques.
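
For reference, the simplest member of the global-operator family mentioned above can be sketched as follows. The fixed display gamma and 1000-nit peak are illustrative assumptions; the paper's dynamic mid-level mapping adapts the curve per frame, which this sketch does not:

```python
import numpy as np

def inverse_tonemap(ldr, peak_nits=1000.0, gamma=2.4):
    """Expand an LDR frame (values in [0, 1]) to absolute luminance.

    Minimal global operator: linearise with an assumed display gamma,
    then scale to the target peak brightness.
    """
    linear = np.clip(ldr, 0.0, 1.0) ** gamma  # undo display encoding
    return peak_nits * linear                 # map to nits

frame = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # LDR gradient
hdr_frame = inverse_tonemap(frame)               # luminance in nits
```

Because the expansion is monotone, relative tonal ordering of the LDR grade is preserved, which is the minimal form of the "artistic intent" constraint discussed above.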

    DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

    We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence; these weak hand-crafted representations are not robust to varying input conditions and perform poorly for extreme exposure image pairs. It is therefore desirable to have a method that is robust to varying input conditions and can handle extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings; however, the stumbling block in applying deep learning to MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we gathered a large dataset of multi-exposure image stacks for training and, to circumvent the need for ground-truth images, propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as its loss function. The approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image; the model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. Extensive quantitative and qualitative evaluation shows that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
    Comment: ICCV 201
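
The hand-crafted baseline that DeepFuse improves upon can be sketched with Mertens-style well-exposedness weights; the weight formula, sigma, and the mid-grey target of 0.5 are conventional illustrative choices, not DeepFuse itself (which replaces them with a learned CNN):

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Hand-crafted MEF baseline: weight each pixel by its
    well-exposedness (closeness to mid-grey), then blend the stack."""
    stack = np.stack(images)                         # (n, H, W) luminance
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True) + 1e-12        # normalise per pixel
    return (w * stack).sum(axis=0)                   # weighted blend

under = np.full((4, 4), 0.1)   # underexposed shot
over = np.full((4, 4), 0.9)    # overexposed shot
fused = exposure_fusion([under, over])
```

With an extreme under/over pair like this, both shots sit equally far from mid-grey, so the fixed weighting can only average them; it is exactly this failure mode on extreme exposure pairs that motivates the learned fusion above.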