
    Subjectively optimised multi-exposure and multi-focus image fusion with compensation for camera shake

    Multi-exposure image fusion algorithms are used for enhancing the perceptual quality of an image captured by sensors of limited dynamic range. This is achieved by rendering a single scene based on multiple images captured at different exposure times. Similarly, multi-focus image fusion is used when the limited depth of focus on a selected focus setting of a camera results in parts of an image being out of focus. The solution adopted is to fuse together a number of multi-focus images to create an image that is focused throughout. In this paper we propose a single algorithm that can perform both multi-focus and multi-exposure image fusion. This algorithm is a novel approach in which a set of unregistered multi-exposure/focus images is first registered before being fused. The registration of images is done by identifying matching key points in constituent images using the Scale-Invariant Feature Transform (SIFT). The RANdom SAmple Consensus (RANSAC) algorithm is used to identify inliers among the SIFT key points, removing outliers that can cause errors in the registration process. Finally, we use the Coherent Point Drift algorithm to register the images, preparing them to be fused in the subsequent fusion stage. For the fusion of images, a novel approach based on an improved version of a Wavelet-Based Contourlet Transform (WBCT) is used. The experimental results demonstrate that the proposed algorithm is capable of producing HDR or multi-focus images by registering and fusing a set of multi-exposure or multi-focus images taken in the presence of camera shake.
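    The RANSAC inlier-selection step described in this abstract can be sketched in isolation. The sketch below is an illustration only, not the paper's implementation: it assumes a pure translation between frames (the paper feeds the surviving inliers to a Coherent Point Drift registration, which handles richer deformations), and the helper name `ransac_translation` is invented here.

```python
import random
import numpy as np

def ransac_translation(src, dst, n_iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched key points with RANSAC.

    src, dst: (N, 2) arrays of matched coordinates (e.g. from SIFT matching).
    Returns (translation, inlier_mask). Illustrative sketch only: a real
    registration pipeline would fit a richer motion model than a translation.
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]                       # hypothesis from one match
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol                       # matches consistent with it
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # refine the estimate on the full inlier set
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

    Outlying matches (e.g. mismatched key points) disagree with the winning translation hypothesis and are excluded before the final estimate, which is the role RANSAC plays ahead of the registration stage.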

    Multi-scale pixel-based image fusion using multivariate empirical mode decomposition

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically significant performance differences.
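    The pixel-level fusion of aligned scales that MEMD enables can be illustrated with a generic rule. `fuse_scales` below is a hypothetical helper, not the paper's exact fusion rule: it applies a common choose-maximum-magnitude rule at the detail scales and averages the residues, assuming the decomposition (here left abstract) has already produced scale-aligned layers for every input image.

```python
import numpy as np

def fuse_scales(decomps):
    """Pixel-wise fusion of aligned multi-scale decompositions.

    decomps: list (one per input image) of lists of 2-D arrays, where
    decomps[k][s] is scale s of image k and the last entry is the residue.
    Illustrative rule only: detail scales keep the coefficient with the
    largest magnitude; the coarse residues are averaged.
    """
    n_scales = len(decomps[0])
    fused = []
    for s in range(n_scales - 1):
        stack = np.stack([d[s] for d in decomps])   # (n_images, H, W)
        pick = np.abs(stack).argmax(axis=0)         # winning image per pixel
        fused.append(np.take_along_axis(stack, pick[None], axis=0)[0])
    fused.append(np.mean([d[-1] for d in decomps], axis=0))
    return fused
```

    Because MEMD aligns common frequency scales across the input channels, a per-pixel comparison like the one above is meaningful at each scale index, which is exactly what mode mixing and misalignment prevent in univariate EMD.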

    Adaptive foveated single-pixel imaging with dynamic super-sampling

    As an alternative to conventional multi-pixel cameras, single-pixel cameras enable images to be recorded using a single detector that measures the correlations between the scene and a set of patterns. However, to fully sample a scene in this way requires at least the same number of correlation measurements as there are pixels in the reconstructed image. Therefore single-pixel imaging systems typically exhibit low frame-rates. To mitigate this, a range of compressive sensing techniques have been developed which rely on a priori knowledge of the scene to reconstruct images from an under-sampled set of measurements. In this work we take a different approach and adopt a strategy inspired by the foveated vision systems found in the animal kingdom, a framework that exploits the spatio-temporal redundancy present in many dynamic scenes. In our single-pixel imaging system a high-resolution foveal region follows motion within the scene, but unlike a simple zoom, every frame delivers new spatial information from across the entire field-of-view. Using this approach we demonstrate a four-fold reduction in the time taken to record the detail of rapidly evolving features, whilst simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This tiered super-sampling technique enables the reconstruction of video streams in which both the resolution and the effective exposure-time spatially vary and adapt dynamically in response to the evolution of the scene. The methods described here can complement existing compressive sensing approaches and may be applied to enhance a variety of computational imagers that rely on sequential correlation measurements. Comment: 13 pages, 5 figures
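    The fovea/periphery trade-off at the heart of this abstract can be made concrete with a toy resolution map. The function names, block sizes and circular fovea below are assumptions for illustration; the paper's adaptive pattern generation and motion tracking are considerably more involved, and the measurement count here is only a rough block-level estimate.

```python
import numpy as np

def foveated_block_sizes(shape, center, fovea_radius, fine=1, coarse=4):
    """Assign a super-pixel block size to each pixel: fine (1x1) inside a
    circular fovea around `center`, coarse elsewhere. Toy sketch only."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dist = np.hypot(ys - center[0], xs - center[1])
    return np.where(dist <= fovea_radius, fine, coarse)

def n_measurements(block_map, coarse=4):
    """Rough count of correlation measurements: one per fine pixel plus one
    per coarse super-pixel (each covering coarse**2 pixels), ignoring blocks
    cut by the fovea boundary."""
    fine_px = int((block_map == 1).sum())
    coarse_px = int((block_map != 1).sum())
    return fine_px + coarse_px // coarse**2
```

    On an 8x8 frame with a small fovea this toy estimate needs far fewer measurements than the 64 required for full sampling, which is the mechanism behind the reported reduction in acquisition time for rapidly evolving features.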

    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160