    Spatial Stimuli Gradient Based Multifocus Image Fusion Using Multiple Sized Kernels

    Multi-focus image fusion extracts the focused areas from all the source images and combines them into a new image that contains all the focused objects. This paper proposes a spatial-domain fusion scheme for multi-focus images using multiple kernel sizes. First, the source images are pre-processed with a contrast-enhancement step; soft and hard decision maps are then generated by applying a sliding-window technique with multiple kernel sizes to the gradient images. The hard decision map selects accurate focus information from the source images, whereas the soft decision map selects the basic focus information and contains a minimum of falsely detected focused/unfocused regions. These decision maps are further processed to compute the final focus map. The gradient images are constructed with a state-of-the-art edge-detection technique, the spatial stimuli gradient sketch model, which computes the local stimuli from perceived brightness and hence enhances the essential structural and edge information. Detailed experimental results demonstrate that the proposed multi-focus image fusion algorithm outperforms other well-known state-of-the-art multi-focus fusion methods in terms of both subjective visual perception and objective quality evaluation metrics.
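    A minimal sketch of the multi-kernel decision-map idea, assuming two pre-registered grayscale sources; a plain Sobel gradient magnitude stands in for the paper's spatial stimuli gradient sketch model, and the file names, kernel sizes and voting rules are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
import cv2

def grad_mag(img):
    # Sobel gradient magnitude (stand-in for the spatial stimuli gradient sketch model)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def decision_maps(img_a, img_b, kernel_sizes=(3, 7, 11)):
    ga, gb = grad_mag(img_a), grad_mag(img_b)
    votes = np.zeros(img_a.shape, dtype=np.int32)
    for k in kernel_sizes:
        ea = cv2.boxFilter(ga, cv2.CV_32F, (k, k))  # local gradient energy in a k x k window
        eb = cv2.boxFilter(gb, cv2.CV_32F, (k, k))
        votes += (ea > eb).astype(np.int32)
    hard = votes == len(kernel_sizes)   # unanimous across all window sizes: confident focus
    soft = votes * 2 > len(kernel_sizes)  # simple majority: basic focus information
    return hard, soft

# Assumed input files; the sources must be pre-registered and the same size.
a = cv2.imread('focus_near.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread('focus_far.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
hard, soft = decision_maps(a, b)
fused = np.where(soft, a, b)  # take each pixel from the source judged sharper
```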

    A New Robust Multi focus image fusion Method

    In today's digital era, multi-focus image fusion is a critical problem in computational image processing, and it has emerged as a significant research subject in information fusion. Its primary objective is to merge graphical information from several images with different focus points into a single image with no information loss. We present a robust image fusion method that combines two or more degraded input images into a single clear output image carrying the detailed information of all the inputs; the in-focus content of each input image is combined to create the fused output. As is widely acknowledged, the two key components of image fusion are the activity-level measurement and the fusion rule. In most common fusion methods, such as wavelet-based approaches, the activity levels are computed in either the spatial domain or the transform domain: local filters extract high-frequency characteristics, and the brightness information computed from the different source images is compared under hand-crafted rules to produce brightness/focus maps. The resulting focus map provides integrated clarity information, which is useful for a variety of multi-focus fusion problems, such as fusion across multiple modalities. Designing these two components jointly is difficult, however, so this paper offers a strategy for achieving good fusion performance: a convolutional neural network (CNN), trained on both high-quality and blurred image patches, represents the mapping directly. The main advantage of this idea is that a single CNN model provides both the activity-level measurement and the fusion rule, overcoming the limitations of previous fusion procedures. Multi-focus image fusion is demonstrated on microscopy images, medical imaging and computer visualization tasks; it also improves image information content, which benefits applications requiring greater precision in target detection and identification and in face recognition, and it yields a more compact workload and enhanced system consistency.
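    A minimal PyTorch sketch of the patch-classifier idea, where a small CNN scores which of two co-located patches is sharper and the per-patch scores form the focus map; the architecture, patch size and names below are illustrative assumptions, not the paper's network, and the model is untrained:

```python
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Tiny illustrative CNN: scores which of two stacked patches is in focus."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: patch A, patch B
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # score > 0 means "patch A is sharper"

    def forward(self, pair):          # pair: (N, 2, 16, 16)
        return self.head(self.features(pair).flatten(1))

net = FocusNet()  # in practice, trained on sharp/blurred patch pairs
pair = torch.rand(1, 2, 16, 16)       # co-located patches from the two sources
score = net(pair)                     # positive -> take this patch from source A
```

    Sliding this scorer over the two registered sources yields a per-patch focus map, which can then be smoothed and used to select pixels from the sharper source.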

    Multi-focus Image Fusion with Sparse Feature Based Pulse Coupled Neural Network

    In order to better extract the focused regions and effectively improve the quality of the fused image, a novel multi-focus image fusion scheme based on a sparse-feature-driven pulse coupled neural network (PCNN) is proposed. The registered source images are decomposed into principal matrices and sparse matrices by robust principal component analysis (RPCA). The salient features of the sparse matrices construct the sparse feature space of the source images, and these sparse features are used to motivate the PCNN neurons. The focused regions of the source images are detected from the output of the PCNN and integrated to construct the final fused image. Experimental results show that the proposed scheme extracts the focused regions and improves fusion quality better than other existing fusion methods in both the spatial and transform domains.
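    A minimal NumPy sketch of a simplified PCNN firing-count computation of the kind described above, assuming a per-pixel stimulus S already derived from some saliency or sparse-feature magnitude in [0, 1] (the RPCA step is not reproduced here); the linking kernel and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(S, iters=30, beta=0.2, alpha=0.1, V=20.0):
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])            # linking weights to the 8 neighbours
    Y = np.zeros_like(S)                        # current firing state
    theta = np.ones_like(S)                     # dynamic threshold
    fires = np.zeros_like(S)
    for _ in range(iters):
        L = convolve(Y, W, mode='constant')     # linking input from neighbouring firings
        U = S * (1.0 + beta * L)                # internal activity
        Y = (U > theta).astype(S.dtype)         # a neuron fires when activity exceeds threshold
        theta = np.exp(-alpha) * theta + V * Y  # fired neurons raise their own threshold
        fires += Y
    return fires  # higher fire counts ~ more salient (better-focused) pixels

# Per-pixel selection between two sources from their stimulus maps S_a, S_b:
# mask = pcnn_fire_counts(S_a) > pcnn_fire_counts(S_b)
```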

    Real-time Video Fusion for Surveillance Applications

    Surveillance is becoming an important element of our daily lives, from urban environments (crime prevention, detection and resolution, vandalism prevention and traffic flow control) to more remote settings such as military applications, for instance identifying and locating enemy forces. As technology develops, surveillance is also becoming more sophisticated: systems are improving in quality, getting safer, costing less, scaling better and integrating more easily with other types of surveillance systems. One of the main types is video surveillance which, as the name states, consists of the constant capture of images in order to obtain a sequence of the events happening at a given location. One of its main disadvantages, however, is its dependency on the visibility conditions at that location. In the scope of this dissertation, a real-time system was developed that is capable of capturing images containing useful information even in low-visibility conditions such as nighttime, fog or smoke. For this purpose, image fusion was used, in this case between an image in the infrared spectrum and an image in the visible spectrum. Capturing a complementary image of the environment in the infrared spectrum provides extra information, notably about temperature. This extra information is then fused with the visible-spectrum image to generate a single image containing the information of both the visible and infrared images.
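    A minimal OpenCV sketch of such a real-time visible/infrared fusion loop, assuming the two streams are already spatially registered; the camera indices and the fixed blending weights are illustrative assumptions, and a simple weighted blend stands in for whatever fusion rule the dissertation actually uses:

```python
import cv2

vis_cap = cv2.VideoCapture(0)   # visible-spectrum camera (assumed device index)
ir_cap = cv2.VideoCapture(1)    # infrared camera (assumed device index)

while True:
    ok_v, vis = vis_cap.read()
    ok_i, ir = ir_cap.read()
    if not (ok_v and ok_i):
        break
    ir = cv2.resize(ir, (vis.shape[1], vis.shape[0]))  # match resolutions
    fused = cv2.addWeighted(vis, 0.6, ir, 0.4, 0)      # fixed-weight per-pixel blend
    cv2.imshow('fused', fused)
    if cv2.waitKey(1) & 0xFF == ord('q'):              # press q to stop
        break

vis_cap.release()
ir_cap.release()
cv2.destroyAllWindows()
```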

    A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering

    The multi-focus image fusion method is used in image processing to generate all-in-focus images with a large depth of field (DOF) from the original multi-focus images. Different approaches have been used in the spatial and transform domains to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods use the whole source images directly for dictionary learning; however, this incurs a high error rate and a high computation cost in the dictionary-learning process. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, which are classified into a few groups by local density peaks clustering. Next, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a single dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to carry out the sparse representation. After these three steps, the obtained sparse coefficients are fused following the max-L1-norm rule, and the fused coefficients are inverse-transformed to an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that the fused images of the proposed method are of higher quality than those of existing state-of-the-art methods.
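    A minimal NumPy/scikit-learn sketch of the final fusion step described above: sparse-code the same patch from each source over a shared dictionary, keep the code with the larger L1 norm, then reconstruct. Plain orthogonal matching pursuit stands in for SOMP, and the random dictionary and random patches stand in for the learned sub-dictionaries and real image data:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))   # dictionary: 8x8 patches as columns of 256 atoms
D /= np.linalg.norm(D, axis=0)       # OMP assumes unit-norm atoms

patch_a = rng.standard_normal(64)    # vectorised patch at one location in source A
patch_b = rng.standard_normal(64)    # the co-located patch in source B

code_a = orthogonal_mp(D, patch_a, n_nonzero_coefs=8)
code_b = orthogonal_mp(D, patch_b, n_nonzero_coefs=8)

# Max-L1 rule: the code with more sparse "energy" is assumed to be in focus.
fused_code = code_a if np.abs(code_a).sum() >= np.abs(code_b).sum() else code_b
fused_patch = D @ fused_code         # inverse transform back to pixel space
```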

    Metallographic Image Fusion

    Image processing plays an important role in the manufacturing, aerospace and biomedical fields. To classify a metallic sample, sharp edge structure and blur-free images are required. Instead of estimating the noise kernel, the blurred sections of the images can be removed by fusing multiple images. Several methods are used for image fusion, such as the averaging method, the maxima method and the wavelet transform; here, the discrete wavelet transform is used. Image fusion improves image quality and data content. In this paper, three images of a standard size of 640x480 pixels are fused together. The fusion improves quality so that the edge structure can be determined, and the classification is then performed from the edge structure according to ASTM E standards.
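    A minimal PyWavelets sketch of discrete-wavelet-transform fusion of the kind described above: average the approximation band, keep the per-pixel maximum-magnitude detail coefficients, then invert. The file names, wavelet choice and decomposition level are assumptions, and the two-image function extends to the paper's three images by fusing pairwise:

```python
import numpy as np
import pywt
import cv2

def dwt_fuse(img_a, img_b, wavelet='db2', level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # average the approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(x) >= np.abs(y), x, y)  # keep max-magnitude details
            for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Assumed input files; sources must be registered and the same size.
a = cv2.imread('sample_1.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread('sample_2.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
fused = np.clip(dwt_fuse(a, b), 0, 255).astype(np.uint8)
```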

    Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically significant performance differences.
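    A minimal NumPy sketch of the pixel-level fusion rule once MEMD has produced scale-aligned IMFs for each input image; the IMF stacks below are synthetic stand-ins (computing MEMD itself is outside the scope of this sketch), and the local-energy selection rule and window size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_aligned_imfs(imfs_a, imfs_b, win=7):
    """imfs_a, imfs_b: (n_imfs, H, W) stacks whose scales match per index."""
    fused = np.empty_like(imfs_a)
    for k in range(imfs_a.shape[0]):
        ea = uniform_filter(imfs_a[k] ** 2, size=win)  # local energy, channel A
        eb = uniform_filter(imfs_b[k] ** 2, size=win)  # local energy, channel B
        fused[k] = np.where(ea >= eb, imfs_a[k], imfs_b[k])
    return fused.sum(axis=0)                            # recombine all scales

rng = np.random.default_rng(1)
imfs_a = rng.standard_normal((4, 64, 64))  # placeholder for MEMD output of image A
imfs_b = rng.standard_normal((4, 64, 64))  # placeholder for MEMD output of image B
fused_img = fuse_aligned_imfs(imfs_a, imfs_b)
```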

    Fast filtering image fusion

    © 2017 SPIE and IS&T. Image fusion aims at exploiting complementary information in multimodal images to create a single composite image with extended information content. An image fusion framework is proposed for different types of multimodal images based on fast filtering in the spatial domain. First, the image gradient magnitude is used to detect contrast and image sharpness. Second, a fast morphological closing operation is performed on the gradient magnitude to bridge gaps and fill holes. Third, a weight map is obtained from the multimodal image gradient magnitudes and filtered by a fast structure-preserving filter. Finally, the fused image is composed using a weighted-sum rule. Experimental results on several groups of images show that the proposed fast fusion method performs better than state-of-the-art methods, running up to four times faster than the fastest baseline algorithm.
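    A minimal OpenCV sketch of the four-step pipeline described above, with a bilateral filter standing in for the paper's fast structure-preserving filter; the file names, kernel size and filter parameters are illustrative assumptions:

```python
import numpy as np
import cv2

def fast_filter_fuse(img_a, img_b):
    def weight(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        g = cv2.magnitude(gx, gy)                    # 1) gradient magnitude
        k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        g = cv2.morphologyEx(g, cv2.MORPH_CLOSE, k)  # 2) bridge gaps, fill holes
        return cv2.bilateralFilter(g, 9, 75, 75)     # 3) smooth the weight map

    wa, wb = weight(img_a), weight(img_b)
    wa = wa / (wa + wb + 1e-6)                       # normalise the weights
    return wa * img_a + (1.0 - wa) * img_b          # 4) weighted-sum fusion

# Assumed registered multimodal input pair.
a = cv2.imread('modality_1.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread('modality_2.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
fused = np.clip(fast_filter_fuse(a, b), 0, 255).astype(np.uint8)
```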