
    Novel Approaches for Regional Multifocus Image Fusion

    Image fusion combines information from multiple images into a single fused image. Although a large number of methods have been proposed, obtaining clearer, higher-quality results remains challenging. This chapter addresses the multi-focus image fusion problem of extending the depth of field by fusing several images of the same scene taken with different focus settings. Existing research on multi-focus image fusion tends to emphasize pixel-level fusion using transform-domain methods, while region-level fusion methods, especially those based on new coding techniques, are still limited. In this chapter, we provide an overview of regional multi-focus image fusion and adopt two different orthogonal matching pursuit (OMP)-based sparse representation methods for the task. Experimental results show that regional fusion using sparse representation achieves comparable or even better performance on multi-focus image fusion problems.
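    As a rough illustration of the sparse-representation side of this idea, the sketch below fuses two registered grayscale images patch by patch with scikit-learn's orthogonal_mp: each patch is coded over a fixed overcomplete DCT dictionary, and the code with the larger L1 activity wins. The dictionary construction, the non-overlapping 8x8 patches, and the max-L1 selection rule are illustrative assumptions, not the chapter's regional algorithm.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def overcomplete_dct(patch=8, atoms=11):
    """1-D overcomplete DCT basis; its Kronecker square is the 2-D patch dictionary."""
    d = np.cos(np.pi / atoms * np.outer(np.arange(patch), np.arange(atoms)))
    d[:, 1:] -= d[:, 1:].mean(axis=0)          # remove DC from non-constant atoms
    d /= np.linalg.norm(d, axis=0)
    return np.kron(d, d)                        # shape (patch*patch, atoms*atoms)

def fuse_patch(pa, pb, D, n_nonzero=8):
    """Sparse-code both vectorized patches with OMP; keep the code with larger L1 activity."""
    ma, mb = pa.mean(), pb.mean()
    ca = orthogonal_mp(D, pa - ma, n_nonzero_coefs=n_nonzero)
    cb = orthogonal_mp(D, pb - mb, n_nonzero_coefs=n_nonzero)
    if np.abs(ca).sum() >= np.abs(cb).sum():
        return D @ ca + ma
    return D @ cb + mb

def fuse(img_a, img_b, patch=8):
    """Fuse two registered grayscale multi-focus images block by block."""
    D = overcomplete_dct(patch)
    out = np.zeros(img_a.shape, dtype=float)
    for i in range(0, img_a.shape[0] - patch + 1, patch):
        for j in range(0, img_a.shape[1] - patch + 1, patch):
            pa = img_a[i:i+patch, j:j+patch].astype(float).ravel()
            pb = img_b[i:i+patch, j:j+patch].astype(float).ravel()
            out[i:i+patch, j:j+patch] = fuse_patch(pa, pb, D).reshape(patch, patch)
    return out
```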

    IMAGE FUSION FOR MULTIFOCUS IMAGES USING SPEEDUP ROBUST FEATURES

    Multi-focus image fusion has emerged as a major topic in image processing for generating an all-in-focus image with extended depth of field from multi-focus photographs. Image fusion is the process of combining relevant information from two or more images into a single image. The image registration technique is based on entropy theory. The Speeded-Up Robust Features (SURF) detector and the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor are used in the feature matching process, and an improved Random Sample Consensus (RANSAC) algorithm is adopted to reject incorrect matches. The registered images are fused using the stationary wavelet transform (SWT). The experimental results show that the proposed algorithm achieves better performance for unregistered multi-focus images and is especially robust to scale, rotation, and translation compared with traditional direct fusion methods.
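    A minimal sketch of this register-then-fuse pipeline with OpenCV and PyWavelets is given below. SURF requires the opencv-contrib non-free module; the brute-force Hamming matcher, the Haar ('db1') wavelet, the single decomposition level, and the max-absolute detail rule are assumptions made for illustration rather than the paper's exact settings.

```python
import cv2
import numpy as np
import pywt

def register(moving, fixed):
    """Align `moving` to `fixed` with SURF keypoints, BRISK descriptors, and RANSAC."""
    surf = cv2.xfeatures2d.SURF_create(400)      # needs opencv-contrib (non-free)
    brisk = cv2.BRISK_create()
    kp1 = surf.detect(moving, None)
    kp2 = surf.detect(fixed, None)
    kp1, d1 = brisk.compute(moving, kp1)
    kp2, d2 = brisk.compute(fixed, kp2)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects bad matches
    return cv2.warpPerspective(moving, H, fixed.shape[::-1])

def swt_fuse(a, b, wavelet='db1'):
    """Fuse two registered grayscale images (even dimensions required by swt2):
    average the approximation bands, take the max-absolute detail coefficients."""
    (aA, (aH, aV, aD)), = pywt.swt2(a.astype(float), wavelet, level=1)
    (bA, (bH, bV, bD)), = pywt.swt2(b.astype(float), wavelet, level=1)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = [((aA + bA) / 2, (pick(aH, bH), pick(aV, bV), pick(aD, bD)))]
    return pywt.iswt2(fused, wavelet)
```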

    Multi-focus Image Fusion with Sparse Feature Based Pulse Coupled Neural Network

    In order to better extract the focused regions and effectively improve the quality of the fused image, a novel multi-focus image fusion scheme with a sparse-feature-based pulse coupled neural network (PCNN) is proposed. The registered source images are decomposed into principal matrices and sparse matrices by robust principal component analysis (RPCA). The salient features of the sparse matrices construct the sparse feature space of the source images, and these sparse features are used to motivate the PCNN neurons. The focused regions of the source images are detected from the output of the PCNN and integrated to construct the final fused image. Experimental results show that the proposed scheme extracts the focused regions and improves fusion quality better than existing fusion methods in both the spatial and transform domains.
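    The sketch below illustrates the firing-map idea with a simplified PCNN: each source's focus stimulus drives the network, and the source whose neurons fire more often wins each pixel. The RPCA sparse component is replaced here by a crude high-frequency residual (image minus its Gaussian-blurred version), and all PCNN parameters are illustrative guesses rather than the values used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def pcnn_firing_map(stim, iters=60, beta=0.2, aL=1.0, aE=0.7, vL=1.0, vE=20.0):
    """Simplified PCNN: accumulate how often each neuron fires for a stimulus in [0, 1]."""
    w = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])  # linking kernel
    L = np.zeros_like(stim)
    E = np.ones_like(stim)        # dynamic threshold
    Y = np.zeros_like(stim)       # firing state
    fire_count = np.zeros_like(stim)
    for _ in range(iters):
        L = np.exp(-aL) * L + vL * convolve(Y, w, mode='nearest')
        U = stim * (1.0 + beta * L)            # internal activity (feeding = stimulus)
        Y = (U > E).astype(float)
        E = np.exp(-aE) * E + vE * Y           # fired neurons raise their threshold
        fire_count += Y
    return fire_count

def fuse(img_a, img_b):
    """Pick, per pixel, the source whose focus stimulus makes the PCNN fire more."""
    # Crude stand-in for the RPCA sparse component: high-frequency residual energy.
    feat = lambda im: np.abs(im.astype(float) - gaussian_filter(im.astype(float), 3))
    fa, fb = feat(img_a), feat(img_b)
    m = max(fa.max(), fb.max()) + 1e-9
    ta, tb = pcnn_firing_map(fa / m), pcnn_firing_map(fb / m)
    return np.where(ta >= tb, img_a, img_b)
```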

    Fast filtering image fusion

    © 2017 SPIE and IS&T. Image fusion aims at exploiting complementary information in multimodal images to create a single composite image with extended information content. An image fusion framework is proposed for different types of multimodal images with fast filtering in the spatial domain. First, the image gradient magnitude is used to detect contrast and image sharpness. Second, a fast morphological closing operation is performed on the gradient magnitude to bridge gaps and fill holes. Third, a weight map is obtained from the multimodal image gradient magnitudes and filtered by a fast structure-preserving filter. Finally, the fused image is composed using a weighted-sum rule. Experimental results on several groups of images show that the proposed fast fusion method performs better than state-of-the-art methods, running up to four times faster than the fastest baseline algorithm.
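    A hedged OpenCV sketch of these four steps for grayscale inputs follows. The Gaussian smoothing stands in for the paper's fast structure-preserving filter (a guided-type filter such as cv2.ximgproc.guidedFilter from opencv-contrib could be substituted), and the winner-take-all weight map and kernel sizes are assumptions for illustration.

```python
import cv2
import numpy as np

def fuse(images, close_ksize=5, blur_sigma=2.0):
    """Gradient-magnitude weights, morphologically closed, smoothed, then a weighted sum."""
    imgs = [im.astype(np.float32) for im in images]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (close_ksize, close_ksize))
    weights = []
    for im in imgs:
        gx = cv2.Sobel(im, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(im, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)                            # contrast / sharpness cue
        mag = cv2.morphologyEx(mag, cv2.MORPH_CLOSE, kernel)   # bridge gaps, fill holes
        weights.append(mag)
    w = np.stack(weights)
    w = (w == w.max(axis=0, keepdims=True)).astype(np.float32)  # winner-take-all weight map
    # Stand-in for the paper's fast structure-preserving filter: Gaussian smoothing.
    w = np.stack([cv2.GaussianBlur(wi, (0, 0), blur_sigma) for wi in w])
    w /= w.sum(axis=0, keepdims=True) + 1e-9
    fused = sum(wi * im for wi, im in zip(w, imgs))             # weighted-sum rule
    return np.clip(fused, 0, 255).astype(np.uint8)
```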

    A New Robust Multi focus image fusion Method

    In today's digital era, multi-focus image fusion is a critical problem in computational image processing and has emerged as a significant research subject in information fusion. The primary objective of multi-focus image fusion is to merge visual information from several images with different focus points into a single image without information loss. We present a robust image fusion method that combines two or more degraded input images into a single clear output image carrying the detailed information of the fused inputs. As is widely acknowledged, the activity level measurement and the fusion rule are two key components of image fusion. In most common fusion methods, such as wavelet-based ones, the activity level is computed in either the spatial domain or the transform domain: local filters extract high-frequency characteristics, and the resulting brightness information from the source images is compared under hand-crafted rules to produce a brightness/focus map. The focus map then provides integrated clarity information that is useful for a variety of multi-focus fusion problems, such as fusion across several modalities. Designing these two components well by hand is difficult, so in this paper we offer a strategy for achieving good fusion performance by learning them jointly. A convolutional neural network (CNN) is trained on both high-quality and blurred image patches to represent the mapping. The main advantage of this idea is that a single CNN model provides both the activity level measurement and the fusion rule, overcoming the limitations of previous fusion procedures. Multi-focus image fusion is demonstrated on microscopic images, medical imaging, computer visualization, and image information enhancement; it also benefits target detection and identification, where greater precision is necessary, as well as face recognition, a more compact workload, and enhanced system consistency.
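    A minimal PyTorch sketch of the patch-classification idea follows: a small CNN scores how likely a patch is in focus, and each block of the fused image is taken from the source with the higher score. The tiny network, the 32x32 patch size, and the block-wise compositing are illustrative assumptions, not the paper's architecture, and the training loop on clear versus artificially blurred patches is omitted.

```python
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Small CNN scoring how likely a 32x32 grayscale patch is in focus (logit output)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # > 0 means "clear", < 0 means "blurred"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def fuse(net, img_a, img_b, patch=32):
    """Patch-wise focus decision: take each block from the source the CNN scores sharper."""
    net.eval()
    a = torch.as_tensor(img_a, dtype=torch.float32) / 255.0
    b = torch.as_tensor(img_b, dtype=torch.float32) / 255.0
    out = a.clone()
    with torch.no_grad():
        for i in range(0, a.shape[0] - patch + 1, patch):
            for j in range(0, a.shape[1] - patch + 1, patch):
                pa = a[i:i+patch, j:j+patch][None, None]
                pb = b[i:i+patch, j:j+patch][None, None]
                if net(pb).item() > net(pa).item():
                    out[i:i+patch, j:j+patch] = b[i:i+patch, j:j+patch]
    return (out * 255).byte().numpy()
```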