
    Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion

    We propose a novel method for adjusting luminance for multi-exposure image fusion. For the adjustment, two novel scene segmentation approaches based on the luminance distribution are also proposed. Multi-exposure image fusion is a method for producing images that are expected to be more informative and perceptually appealing than any of the inputs, by directly fusing photos taken with different exposures. However, existing fusion methods often produce unclear fused images when the input images do not cover a sufficient number of different exposure levels. In this paper, we point out that adjusting the luminance of the input images makes it possible to improve the quality of the final fused images. This insight is the basis of the proposed method. The proposed method enables us to produce high-quality images even when undesirable inputs are given. Visual comparison results show that the proposed method can produce images that clearly represent a whole scene. In addition, multi-exposure image fusion with the proposed method outperforms state-of-the-art fusion methods in terms of MEF-SSIM, discrete entropy, the tone-mapped image quality index, and statistical naturalness. Comment: will be published in IEEE Transactions on Image Processing.
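    The core idea, segmenting the scene by its luminance distribution and then adjusting each input's luminance per region before fusing, can be illustrated with a short numpy sketch. The percentile-based segmentation, the mid-gray target, and the one-image-per-segment pairing are assumptions made for illustration, not the paper's actual rules.

    ```python
    import numpy as np

    def adjust_by_luminance_segments(images, n_segments=3, target=0.5):
        """Toy luminance adjustment before fusion: split the scene into
        regions by luminance percentiles, then scale each input so that
        'its' region's mean luminance moves toward a mid-gray target.
        `images`: list of float RGB arrays in [0, 1], one per exposure,
        assumed to have at least `n_segments` elements."""
        weights = np.array([0.2126, 0.7152, 0.0722])
        ref = np.mean(np.stack(images), axis=0)        # reference scene
        lum = ref @ weights
        # Segment the luminance distribution at equally spaced percentiles.
        edges = np.percentile(lum, np.linspace(0, 100, n_segments + 1))
        adjusted = []
        for k, img in enumerate(images[:n_segments]):
            mask = (lum >= edges[k]) & (lum <= edges[k + 1])
            mean_k = (img @ weights)[mask].mean()
            gain = target / max(mean_k, 1e-6)          # expose segment k well
            adjusted.append(np.clip(img * gain, 0.0, 1.0))
        return adjusted
    ```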

    Automatic Exposure Compensation for Multi-Exposure Image Fusion

    This paper proposes a novel luminance adjustment method based on automatic exposure compensation for multi-exposure image fusion. Multi-exposure image fusion is a method to produce images without saturated regions by using photos with different exposures. Previous work has pointed out that the quality of such multi-exposure images can be improved by adjusting their luminance; however, how to determine the degree of adjustment has never been discussed. This paper therefore proposes a way to determine the degree automatically on the basis of the luminance distribution of the input multi-exposure images. Moreover, new weights, called "simple weights", for image fusion are also designed for the proposed luminance adjustment method. Experimental results show that the multi-exposure images adjusted by the proposed method have better quality than the input ones in terms of well-exposedness. It is also confirmed that the proposed simple weights yield the highest statistical naturalness and discrete entropy scores among all fusion methods. Comment: To appear in Proc. ICIP 2018, October 07-10, 2018, Athens, Greece.
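    One way to make the degree of adjustment automatic is to derive a gain directly from the luminance distribution, as the abstract suggests. A minimal sketch follows; the geometric-mean statistic and the 0.18 mid-key target are assumptions, not necessarily the paper's rule.

    ```python
    import numpy as np

    def auto_compensate(img, key=0.18, eps=1e-6):
        """Scale the image so its geometric-mean luminance lands on a
        mid-key value, one plausible automatic compensation rule.
        `img`: float RGB array in [0, 1]."""
        lum = img @ np.array([0.2126, 0.7152, 0.0722])
        log_mean = np.exp(np.mean(np.log(lum + eps)))  # geometric mean
        gain = key / max(log_mean, eps)
        return np.clip(img * gain, 0.0, 1.0)
    ```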

    Multi-Exposure Image Fusion Based on Exposure Compensation

    This paper proposes a novel multi-exposure image fusion method based on exposure compensation. Multi-exposure image fusion is a method to produce images without color-saturated regions by using photos with different exposures. However, in conventional works it is unclear how to determine appropriate exposure values, and moreover, it is difficult to set appropriate exposure values at the time of photographing due to time constraints. In the proposed method, the luminance of the input multi-exposure images is adjusted on the basis of the relationship between exposure values and pixel values, where the relationship is obtained by assuming that a digital camera has a linear response function. The use of a local contrast enhancement method is also considered to improve the input multi-exposure images. The compensated images are finally combined by one of the existing multi-exposure image fusion methods. In experiments, the effectiveness of the proposed method is evaluated in terms of the tone-mapped image quality index, statistical naturalness, and discrete entropy, by comparing the proposed method with conventional ones. Comment: in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1388-1392, Calgary, Alberta, Canada, 19th April 2018. arXiv admin note: substantial text overlap with arXiv:1805.1121
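    The stated linear-response assumption gives the relationship between exposure values and pixel values a simple closed form: shifting the exposure by delta_ev stops multiplies linear pixel values by 2**delta_ev. A minimal sketch, assuming the input is already linear RGB in [0, 1]:

    ```python
    import numpy as np

    def compensate_exposure(img, delta_ev):
        """Under a linear camera response, pixel values scale with
        exposure, so a shift of `delta_ev` stops is a multiplication
        by 2**delta_ev (sRGB inputs would need linearizing first)."""
        return np.clip(img * (2.0 ** delta_ev), 0.0, 1.0)

    # e.g. brighten an under-exposed input by one stop before fusing:
    # brighter = compensate_exposure(under_exposed, +1.0)
    ```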

    A Pseudo Multi-Exposure Fusion Method Using Single Image

    This paper proposes a novel pseudo multi-exposure image fusion method based on a single image. Multi-exposure image fusion is used to produce images without saturated regions by using photos with different exposures. However, it is difficult to take photos suited for multi-exposure image fusion when shooting dynamic scenes or recording video. In addition, multi-exposure image fusion cannot be applied to existing single-exposure images or videos. The proposed method enables us to produce pseudo multi-exposure images from a single image. To produce them, the proposed method utilizes the relationship between exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function. Moreover, it is shown that the use of a local contrast enhancement method allows us to produce pseudo multi-exposure images with higher quality. Most conventional multi-exposure image fusion methods are also applicable to the proposed multi-exposure images. Experimental results show the effectiveness of the proposed method by comparing it with conventional ones. Comment: To appear in IEICE Trans. Fundamentals, vol. E101-A, no. 11, November 2018.
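    Under the same linear-response assumption, a pseudo multi-exposure stack can be synthesized from one image by virtually re-exposing it. In the sketch below, the 2.2 gamma used to linearize and re-encode is an assumption:

    ```python
    import numpy as np

    def pseudo_exposures(img, evs=(-1.0, 0.0, 1.0), gamma=2.2):
        """Produce a pseudo multi-exposure stack from one image:
        linearize, scale by 2**ev per virtual stop, re-encode.
        `img`: gamma-encoded float RGB array in [0, 1]."""
        linear = np.power(np.clip(img, 0.0, 1.0), gamma)
        stack = [np.clip(linear * (2.0 ** ev), 0.0, 1.0) for ev in evs]
        return [np.power(s, 1.0 / gamma) for s in stack]
    ```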

    Fast and Efficient Zero-Learning Image Fusion

    We propose a real-time image fusion method using pre-trained neural networks. Our method generates a single image containing features from multiple sources. We first decompose images into a base layer representing large-scale intensity variations and a detail layer containing small-scale changes. We use visual saliency to fuse the base layers, and deep feature maps extracted from a pre-trained neural network to fuse the detail layers. We conduct ablation studies to analyze our method's parameters, such as decomposition filters, weight construction methods, and network depth and architecture. We then validate its effectiveness and speed on thermal, medical, and multi-focus fusion, and also apply it to multiple image inputs such as multi-exposure sequences. The experimental results demonstrate that our technique achieves state-of-the-art performance in visual quality, objective assessment, and runtime efficiency. Comment: 13 pages, 10 figures.
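    The two-layer pipeline can be approximated in a few lines. In this grayscale sketch, a box filter produces the base layer, a blurred gradient magnitude stands in for the paper's visual-saliency weights, and a max-absolute rule stands in for the deep-feature weighting of the detail layers; all three substitutions are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def fuse_two_layer(images, base_size=31):
        """Base/detail fusion sketch over a list of float RGB arrays
        in [0, 1]; works on grayscale for brevity."""
        grays = [img.mean(axis=-1) for img in images]
        bases = [uniform_filter(g, size=base_size) for g in grays]
        details = [g - b for g, b in zip(grays, bases)]
        # Saliency proxy: blurred high-frequency energy per input.
        sal = [gaussian_filter(np.abs(g - gaussian_filter(g, 2)), 5)
               for g in grays]
        w = np.stack(sal)
        w = w / (w.sum(axis=0) + 1e-8)
        fused_base = sum(wi * bi for wi, bi in zip(w, bases))
        # Detail fusion: keep the strongest detail at each pixel.
        idx = np.argmax(np.stack([np.abs(d) for d in details]), axis=0)
        fused_detail = np.choose(idx, details)
        return fused_base + fused_detail
    ```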

    Robust Depth Estimation from Auto Bracketed Images

    As demand for advanced photographic applications on hand-held devices grows, these devices require the capture of high-quality depth. However, under low-light conditions, most devices still suffer from low imaging quality and inaccurate depth acquisition. To address the problem, we present a robust depth estimation method from a short burst shot with varied intensity (i.e., auto bracketing) or strong noise (i.e., high ISO). We introduce a geometric transformation between flow and depth tailored for burst images, enabling our learning-based multi-view stereo matching to be performed effectively. We then describe our depth estimation pipeline that incorporates the geometric transformation into our residual-flow network, allowing our framework to produce an accurate depth map even from a bracketed image sequence. We demonstrate that our method outperforms state-of-the-art methods on various datasets captured by a smartphone and a DSLR camera. Moreover, we show that the estimated depth is applicable to image quality enhancement and photographic editing. Comment: To appear in CVPR 2018. 9 pages total.
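    The flow-to-depth geometry has a textbook special case that conveys the idea: for a small camera translation of length B parallel to the image plane, flow magnitude u and depth Z satisfy u = f * B / Z. The sketch below implements only that special case; the paper's transformation is more general and sits inside a learned residual-flow network.

    ```python
    import numpy as np

    def depth_from_flow(flow, focal_px, baseline_m, eps=1e-6):
        """Depth from flow for a purely lateral translation:
        Z = f * B / u, with u the flow magnitude in pixels.
        `flow`: HxWx2 array; `focal_px` in pixels; `baseline_m` in meters."""
        mag = np.linalg.norm(flow, axis=-1)
        return focal_px * baseline_m / np.maximum(mag, eps)
    ```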

    Exposure Interpolation by Combining Model-driven and Data-driven Methods

    Deep learning based methods have penetrated many image processing problems and become the dominant solutions to them. A natural question is: "Is there any space left for conventional methods on these problems?" In this paper, exposure interpolation is taken as an example to answer this question, and the answer is "yes". A framework fusing conventional and deep learning methods is introduced to generate a medium-exposure image from two large-exposure-ratio images. Experimental results indicate that the quality of the medium-exposure image is increased significantly by using the deep learning method to refine the image interpolated by the conventional method. The conventional method can in turn be adopted to improve the convergence speed of the deep learning method and to reduce the number of training samples it requires. Comment: 10 pages.
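    As a concrete example of the model-driven half of such a framework, a medium-exposure image can be interpolated under a linear-response assumption as the geometric mean of the two linearized inputs, which corresponds to the midpoint exposure value. The gamma value and the geometric-mean rule are assumptions of this sketch, not necessarily the paper's conventional method:

    ```python
    import numpy as np

    def interpolate_exposure(under, over, gamma=2.2):
        """Model-driven exposure interpolation baseline: the geometric
        mean of the linearized inputs sits at the midpoint EV, since
        sqrt(2**ev1 * 2**ev2) = 2**((ev1 + ev2) / 2)."""
        lin_u = np.power(np.clip(under, 0.0, 1.0), gamma)
        lin_o = np.power(np.clip(over, 0.0, 1.0), gamma)
        mid = np.sqrt(lin_u * lin_o)      # midpoint in EV space
        return np.power(mid, 1.0 / gamma)
    ```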

    Temporal Image Fusion

    This paper introduces temporal image fusion. The proposed technique builds upon previous research in exposure fusion and extends it to deal with the limited temporal dynamic range of existing sensors and camera technologies. In particular, temporal image fusion enables the rendering of long-exposure effects on full frame-rate video, as well as the generation of arbitrarily long exposures from a sequence of images of the same scene taken over time. We explore the problem of temporal under-exposure and show how it can be addressed by selectively enhancing dynamic structure. Finally, we show that the use of temporal image fusion together with content-selective image filters can produce a range of striking visual effects on a given input sequence.
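    A long-exposure effect by itself is just a temporal average of aligned frames; the interesting part is counteracting the temporal under-exposure of moving content. The sketch below averages a frame stack and re-injects dynamic structure where the temporal deviation is large; the specific boost heuristic is an invented illustration, not the paper's operator.

    ```python
    import numpy as np

    def long_exposure(frames, boost=2.0):
        """Render a long-exposure effect from aligned frames in [0, 1].
        Moving regions (large temporal deviation) are selectively
        brightened toward their temporal maximum."""
        stack = np.stack(frames).astype(np.float64)   # T x H x W x C
        mean = stack.mean(axis=0)
        motion = np.abs(stack - mean).mean(axis=0)    # where the scene moved
        peak = stack.max(axis=0)
        return np.clip(mean + boost * motion * (peak - mean), 0.0, 1.0)
    ```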

    Removing Camera Shake via Weighted Fourier Burst Accumulation

    Numerous recent approaches attempt to remove image blur due to camera shake, with either one or multiple input images, by explicitly solving an inverse and inherently ill-posed deconvolution problem. If the photographer takes a burst of images, a modality available in virtually all modern digital cameras, we show that it is possible to combine them to get a clean, sharp version. This is done without explicitly solving any blur estimation and subsequent inverse problem. The proposed algorithm is strikingly simple: it performs a weighted average in the Fourier domain, with weights depending on the Fourier spectrum magnitude. The method can be seen as a generalization of the align-and-average procedure, with a weighted average, motivated by hand-shake physiology and theoretically supported, taking place in the Fourier domain. The method's rationale is that camera shake has a random nature, and therefore each image in the burst is generally blurred differently. Experiments with real camera data, and extensive comparisons, show that the proposed Fourier Burst Accumulation (FBA) algorithm achieves state-of-the-art results an order of magnitude faster, and is simple enough for on-board implementation on camera phones. Finally, we also present experiments on real high dynamic range (HDR) scenes, showing how the method can be straightforwardly extended to HDR photography. Comment: Errata with respect to the published version: Algorithm 1, lines 9 and 10: w_i is replaced by w^p_i (as was correctly stated in Eq. (9)).
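    The abstract describes the algorithm concretely enough to transcribe: average the burst in the Fourier domain, weighting each frame per frequency by its spectral magnitude raised to a power p. A compact numpy version follows; note that the published method also smooths the weights and registers the burst first, and the choice p = 11 here is an assumption (a typical FBA setting):

    ```python
    import numpy as np

    def fourier_burst_accumulation(burst, p=11):
        """Weighted Fourier-domain average of a registered burst.
        Larger p pushes the weighting toward a per-frequency argmax;
        each image is an HxW or HxWxC float array."""
        specs = [np.fft.fft2(img, axes=(0, 1)) for img in burst]
        mags = [np.abs(s) ** p for s in specs]
        total = sum(mags) + 1e-12
        acc = sum(m / total * s for m, s in zip(mags, specs))
        return np.real(np.fft.ifft2(acc, axes=(0, 1)))
    ```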

    Generation of High Dynamic Range Illumination from a Single Image for the Enhancement of Undesirably Illuminated Images

    This paper presents an algorithm that enhances undesirably illuminated images by generating and fusing multi-level illuminations from a single image. The input image is first decomposed into illumination and reflectance components by using an edge-preserving smoothing filter. The reflectance component is then scaled up to improve the image details in bright areas. The illumination component is scaled both up and down to generate several illumination images that correspond to camera exposure values different from the original. These virtual multi-exposure illuminations are blended into an enhanced illumination, where we also propose a method to generate appropriate weight maps for the tone fusion. Finally, an enhanced image is obtained by multiplying the enhanced illumination and the enhanced reflectance. Experiments show that the proposed algorithm produces visually pleasing output and also yields objective results comparable to conventional enhancement methods, while requiring modest computational loads.
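    A compressed sketch of the pipeline: estimate illumination with a smoothing filter, derive reflectance, virtually re-expose the illumination, blend the variants with weight maps, and recombine. The Gaussian filter (standing in for the paper's edge-preserving filter), the virtual exposure values, and the mid-tone-preferring Gaussian weights are all assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance_single_image(img, evs=(-1.0, 1.0), sigma=15):
        """Illumination/reflectance enhancement sketch for a float RGB
        image in [0, 1]."""
        lum = img @ np.array([0.2126, 0.7152, 0.0722])
        illum = gaussian_filter(lum, sigma) + 1e-6     # smooth illumination
        refl = img / illum[..., None]                  # reflectance
        # Virtual multi-exposure illuminations, original included.
        variants = [np.clip(illum * (2.0 ** ev), 0.0, 1.0)
                    for ev in (0.0,) + evs]
        # Weight maps favoring mid-tone illumination values.
        weights = [np.exp(-((v - 0.5) ** 2) / 0.08) for v in variants]
        wsum = sum(weights) + 1e-8
        fused = sum(w * v for w, v in zip(weights, variants)) / wsum
        return np.clip(refl * fused[..., None], 0.0, 1.0)
    ```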