11 research outputs found

    Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized, respectively, by a single intrinsic mode function (IMF) containing multiple scales and by same-indexed IMFs from multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and their subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.
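    The abstract above describes decomposing the inputs into aligned scales and fusing them pixel by pixel at each scale. Below is a minimal sketch of that multi-scale, pixel-level fusion idea, using a simple difference-of-Gaussian-smoothings decomposition as a stand-in for the MEMD IMFs (an MEMD routine is not part of standard Python libraries); the scale count and the max-absolute-value fusion rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, n_scales=4):
    """Split an image into fine-to-coarse detail layers plus a coarse residual.

    Stand-in for the MEMD IMFs: each layer is the difference between
    successive Gaussian smoothings, i.e. detail at one spatial scale.
    """
    layers, current = [], img.astype(np.float64)
    for k in range(n_scales):
        smoothed = gaussian_filter(current, sigma=2.0 ** k)
        layers.append(current - smoothed)        # detail at scale k
        current = smoothed
    layers.append(current)                       # coarse residual
    return layers

def fuse_multiscale(images, n_scales=4):
    """Pixel-level fusion: at each scale keep, per pixel, the coefficient with
    the largest magnitude across the inputs; average the coarse residuals."""
    stacks = [decompose(im, n_scales) for im in images]
    fused = []
    for k in range(n_scales):
        coeffs = np.stack([s[k] for s in stacks])      # (n_images, H, W)
        winner = np.argmax(np.abs(coeffs), axis=0)     # sharpest input per pixel
        fused.append(np.take_along_axis(coeffs, winner[None], axis=0)[0])
    fused.append(np.mean([s[-1] for s in stacks], axis=0))
    return np.sum(fused, axis=0)

# Usage (grayscale float arrays of equal shape):
# all_in_focus = fuse_multiscale([img_near_focus, img_far_focus])
```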

    Structural similarity loss for learning to fuse multi-focus images

    Convolutional neural networks have recently been used for multi-focus image fusion. However, some existing methods resort to adding Gaussian blur to focused images to simulate defocus, thereby generating data (with ground truth) for supervised learning. Moreover, they classify pixels as ‘focused’ or ‘defocused’ and use the classification results to construct fusion weight maps, which then necessitates a series of post-processing steps. In this paper, we present an end-to-end learning approach for directly predicting the fully focused output image from multi-focus input image pairs. The suggested approach uses a CNN architecture trained to perform fusion without the need for ground-truth fused images. The CNN exploits image structural similarity (SSIM), a metric widely accepted for fused-image quality evaluation, to calculate the loss. In addition, the standard deviation of a local image window is used to automatically estimate the importance of the source images to the final fused image when designing the loss function. Our network can accept images of variable sizes and hence we are able to use real benchmark datasets, instead of simulated ones, to train the network. The model is a feed-forward, fully convolutional neural network that can process images of variable sizes at test time. Extensive evaluation on benchmark datasets shows that our method outperforms, or is comparable with, existing state-of-the-art techniques on both objective and subjective benchmarks.
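    A minimal numpy sketch of the kind of loss described above: the SSIM between the predicted fused image and each source, weighted per pixel by that source's local standard deviation so that in-focus regions carry more weight. The window size, the weight normalisation and the use of skimage's structural_similarity are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity

def local_std(img, win=7):
    """Per-pixel standard deviation over a win x win neighbourhood."""
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fusion_loss(fused, sources, win=7):
    """1 - (std-weighted SSIM): sources with more local structure count more,
    pushing the fused image towards the in-focus regions of each input."""
    weights = np.stack([local_std(s, win) for s in sources])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    ssim_maps = [
        structural_similarity(fused, s, data_range=1.0, full=True)[1]
        for s in sources
    ]
    weighted_ssim = sum(w * m for w, m in zip(weights, ssim_maps)).mean()
    return 1.0 - weighted_ssim

# Usage (images scaled to [0, 1], equal shape):
# loss = fusion_loss(network_output, [source_a, source_b])
```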

    Point-and-Shoot All-in-Focus Photo Synthesis from Smartphone Camera Pair

    All-in-Focus (AIF) photography is expected to be a commercial selling point for modern smartphones. Standard AIF synthesis requires manual, time-consuming operations such as focal stack compositing, which is unfriendly to ordinary users. To achieve point-and-shoot AIF photography with a smartphone, we expect an AIF photo to be generated from one shot of the scene, instead of from multiple photos captured by the same camera. Benefiting from the multi-camera module in modern smartphones, we introduce a new task of AIF synthesis from the main (wide) and ultra-wide cameras. The goal is to recover sharp details from defocused regions in the main-camera photo with the help of the ultra-wide-camera one. The camera setting poses new challenges such as parallax-induced occlusions and inconsistent color between cameras. To overcome these challenges, we introduce a predict-and-refine network to mitigate occlusions and propose dynamic frequency-domain alignment for color correction. To enable effective training and evaluation, we also build an AIF dataset with 2686 unique scenes. Each scene includes two photos captured by the main camera, one photo captured by the ultra-wide camera, and a synthesized AIF photo. Results show that our solution, termed EasyAIF, can produce high-quality AIF photos and outperforms strong baselines quantitatively and qualitatively. For the first time, we successfully demonstrate point-and-shoot AIF photo synthesis from main and ultra-wide cameras. Comment: Early Access by IEEE Transactions on Circuits and Systems for Video Technology 202
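    The abstract only names "dynamic frequency-domain alignment" for colour correction, so the sketch below is a generic illustration of the underlying idea rather than the paper's method: assuming the two views are already registered, the low-frequency spectrum of each ultra-wide channel (which carries colour and illumination) is replaced by the main camera's, while the ultra-wide high frequencies are kept. The cutoff radius and the hard low-pass mask are assumptions.

```python
import numpy as np

def align_colors_lowfreq(ultra_wide, main, cutoff=0.03):
    """Replace the low-frequency spectrum of each ultra-wide channel with the
    main camera's (colour/illumination), keeping ultra-wide high frequencies.

    Both inputs are float (H, W, C) arrays, already registered; cutoff is the
    low-pass radius in normalised frequency units.
    """
    h, w = main.shape[:2]
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = np.sqrt(fy ** 2 + fx ** 2) < cutoff            # low-frequency mask
    out = np.empty_like(ultra_wide, dtype=np.float64)
    for c in range(main.shape[2]):
        spec_uw = np.fft.fft2(ultra_wide[..., c])
        spec_main = np.fft.fft2(main[..., c])
        spec_uw[low] = spec_main[low]                     # borrow main-camera colours
        out[..., c] = np.real(np.fft.ifft2(spec_uw))
    return np.clip(out, 0.0, 1.0)

# Usage (registered images scaled to [0, 1]):
# corrected_uw = align_colors_lowfreq(uw_img, main_img)
```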

    Tools and Methods for the Registration and Fusion of Remotely Sensed Data

    Tools and methods for image registration were reviewed, along with methods used at NASA for the registration of remotely sensed data. Image fusion techniques were also reviewed, challenges in the registration of remotely sensed data were discussed, and examples of image registration and image fusion were given.

    Detail and contrast enhancement in images using dithering and fusion

    This thesis focuses on two applications of wavelet transforms for image enhancement: image fusion and image dithering. Firstly, to improve the quality of a fused image, a transform-domain image fusion technique is proposed as part of this research. The proposed fusion technique is also extended to reduce the temporal redundancy associated with the processing. Experimental results show better performance of the proposed methods over other methods, with improvements in image contrast, the amount of image detail captured and processing time compared to existing methods. Secondly, of the present image dithering methods, error-diffusion-based dithering is the most widely used and explored. Despite its great success, error diffusion has been lacking in image enhancement aspects because of the softening effects it causes. To compensate for these softening effects, wavelet-based dithering was introduced. Although wavelet-based dithering works well in removing the softening effects, it is based on the discrete wavelet transform and therefore suffers from poor directionality and a lack of shift invariance, the properties responsible for making the resultant images look sharp and crisp. Hence, a new method named complex wavelet-based dithering is introduced as part of this research to compensate for the softening effects. Images processed by the proposed method emphasise more detail and exhibit better contrast characteristics than those produced by existing methods.
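    For background on the error-diffusion baseline the thesis builds on (its complex wavelet-based dithering is not reproduced here), a minimal sketch of standard Floyd-Steinberg error diffusion to a 1-bit image:

```python
import numpy as np

def floyd_steinberg(img):
    """Standard Floyd-Steinberg error diffusion to a binary (0/1) image.

    Each pixel is thresholded and its quantization error is distributed to the
    not-yet-visited neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.
    """
    out = img.astype(np.float64).copy()      # expects values in [0, 1]
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# Usage: halftone = floyd_steinberg(gray_image_in_unit_range)
```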

    Information fusion in multi-channel optoelectronic surveillance systems (Комплексування інформації в багатоканальних оптико-електронних системах спостереження)

    Methods for improving the effectiveness of optoelectronic visual surveillance systems by combining information from several spectral channels are considered. New image processing methods and new methods for assessing the quality of fused images are proposed, and practical results of applying the developed methods are presented. Intended for researchers and engineers, and for students in the field of study 6.051004 «Оптотехніка» (Optical Engineering).