1,034 research outputs found

    Improving fusion of surveillance images in sensor networks using independent component analysis


    Signal processing algorithms for enhanced image fusion performance and assessment

The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method that performs well for image sets heavily corrupted by noise. Unlike current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed using Chebyshev polynomials (CP) as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous image fusion efforts, notably on heavily corrupted images.

    The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating the high-frequency information of the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of the proposed approach are very encouraging as far as joint fusion and denoising are concerned.

    Another focus of this dissertation is a novel metric for image fusion evaluation based on texture. The conservation of background textural detail is important in many fusion applications, as it helps define image depth and structure, which may prove crucial in surveillance and remote sensing applications. The work evaluates the performance of image fusion algorithms by their ability to retain textural detail through the fusion process. This is done by using the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image texture measure, which then replaces the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
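    As a rough illustration of the GLCM-based texture measure the abstract describes, the sketch below computes a second-order contrast score with scikit-image. The quantisation level, offsets, and the choice of the 'contrast' property are illustrative assumptions, not the dissertation's exact formulation.

```python
# Minimal sketch of a GLCM-based texture measure of the kind described
# above. The 'contrast' property and the two offsets are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_measure(img_u8, levels=32):
    """Second-order (GLCM) texture score for an 8-bit grayscale image."""
    # Quantise to fewer gray levels so the co-occurrence matrix stays small.
    q = (img_u8.astype(np.uint16) * levels // 256).astype(np.uint8)
    # Normalised co-occurrence counts for one-pixel offsets at 0 and 90 deg.
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Average the contrast feature over the two directions.
    return float(graycoprops(glcm, 'contrast').mean())
```

    Comparing such a score between the fused output and each input would be one way to gauge how much texture survives fusion, in the spirit of the metric described above.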

    A Hybrid Chebyshev-ICA Image Fusion Method Based on Regional Saliency

An image fusion method that performs robustly for image sets heavily corrupted by noise is presented in this paper. The approach combines the advantages of two state-of-the-art fusion techniques, namely Independent Component Analysis (ICA) and Chebyshev Polynomial Analysis (CPA) fusion. Fusion using ICA performs well in transferring the salient features of the input images into the composite output, but its performance deteriorates severely under mild to moderate noise conditions. CPA fusion is robust under severe noise conditions, but eliminates the high-frequency information of the images involved. We propose to use ICA fusion within high-activity image areas, identified by edges and strongly textured surfaces, and CPA fusion in low-activity areas, identified by uniform background regions and weak texture. A binary image map is used to select the appropriate method; it is constructed by a standard edge detector followed by morphological operators. The results of the proposed approach are very encouraging as far as joint fusion and denoising are concerned. The work presented may prove beneficial for future image fusion tasks in real-world applications such as surveillance, where noise is heavily present.
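    The selection step lends itself to a short sketch. Assuming the ICA-fused and CPA-fused images have already been computed, the code below builds a binary activity map and composes the hybrid result; the Canny detector and the 5x5 structuring element stand in for the paper's unspecified "standard edge detector followed by morphological operators".

```python
# Sketch of the region-selection step: a binary activity map chooses
# ICA-fused pixels in high-activity areas and CPA-fused pixels elsewhere.
import numpy as np
from skimage.feature import canny
from scipy.ndimage import binary_dilation, binary_closing

def hybrid_select(ica_fused, cpa_fused, reference, sigma=2.0):
    edges = canny(reference, sigma=sigma)           # high-activity seeds
    mask = binary_dilation(edges, np.ones((5, 5)))  # grow edge regions
    mask = binary_closing(mask, np.ones((5, 5)))    # fill small gaps
    return np.where(mask, ica_fused, cpa_fused)     # per-pixel selection
```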

    Region-based multimodal image fusion using ICA bases


    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

This thesis is motivated by the potential and promise of image fusion technologies in multi-sensor imaging systems and applications. With a specific focus on pixel-level image fusion, the stage that follows image registration, we developed a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present several image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a high-resolution, large fused image. This research includes developing a segment-based step built on a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. The image fusion algorithms are implemented on top of the graphical user interface we developed. Multi-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use image morphing to generate fused image sequences that simulate the results of image fusion. In deploying our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: while their high computational cost and complex processing steps provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time feedback, high flexibility, and low computational ability.
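    The mutual-information measure mentioned above has a compact histogram-based estimate. The sketch below is a minimal version in NumPy; the 64-bin setting is an illustrative choice, not the thesis's exact configuration.

```python
# Minimal mutual-information estimate between a fused image and one of
# its sources, of the kind used above for quantitative evaluation.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of b
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    A common fusion score sums this quantity over all inputs, e.g. MI(fused, src1) + MI(fused, src2), so that higher values indicate more source information retained.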

    Extracting individual contributions from their mixture: a blind source separation approach, with examples from space and laboratory plasmas

Multipoint or multichannel observations in plasmas can frequently be modelled as an instantaneous mixture of contributions (waves, emissions, ...) of different origins. Recovering the individual sources from their mixture then becomes one of the key objectives. However, unless the underlying mixing processes are well known, these situations lead to heavily underdetermined problems. Blind source separation aims at disentangling such mixtures with the least possible prior information on the sources and their mixing processes. Several powerful approaches have recently been developed, which can often provide new or deeper insight into the underlying physics. This tutorial paper briefly discusses some possible applications of blind source separation to the field of plasma physics, in which this concept is still barely known. Two examples are given. The first one shows how concurrent processes in the dynamical response of the electron temperature in a tokamak can be separated. The second example deals with solar spectral imaging in the Extreme UV and shows how empirical temperature maps can be built. Comment: expanded version of an article to appear in Contributions to Plasma Physics (2010).
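    The instantaneous-mixture model the abstract describes can be demonstrated end to end on synthetic data. The toy below uses scikit-learn's FastICA; the signals and mixing matrix are invented purely for illustration and have no connection to the paper's plasma data.

```python
# Toy blind source separation on a synthetic instantaneous mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)                  # stand-in for a coherent wave
s2 = np.sign(np.sin(3 * np.pi * t))         # stand-in for a second emission
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.4, 1.0]])      # unknown mixing matrix
X = S @ A.T                                 # observed multichannel data

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                # recovered sources, up to
                                            # permutation and scaling
```

    The permutation and scaling ambiguity noted in the comment is inherent to blind separation: only the source waveforms, not their order or amplitude, can be recovered without further priors.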

    Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis

The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), which is a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where there is a lack of annotated training data. Our framework has three contributions: (i) We extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner. (ii) We initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features. (iii) We adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework in medical image retrieval and classification on three public datasets. Our results show that our CSKN had better accuracy when compared to other conventional unsupervised methods and comparable accuracy to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories. Comment: Accepted by Medical Image Analysis (with a new title 'Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis'). The manuscript is available from the following link: https://doi.org/10.1016/j.media.2019.06.005.
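    Contribution (iii), multi-scale spatial pyramid pooling, has a simple core mechanic that the sketch below illustrates in NumPy: max-pool a feature map over grids of increasing resolution and concatenate, producing a fixed-length descriptor regardless of input size. The 1/2/4 grid levels are the usual SPP choice, assumed here; this is not the paper's implementation.

```python
# Sketch of multi-scale spatial pyramid pooling over an (H, W, C) map.
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    h, w, c = feature_map.shape
    pooled = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)  # row bin boundaries
        xs = np.linspace(0, w, n + 1, dtype=int)  # column bin boundaries
        for i in range(n):
            for j in range(n):
                cell = feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(0, 1)))  # per-channel max
    return np.concatenate(pooled)  # length = c * (1 + 4 + 16)
```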

Novel Approach for Diagnosis of Brain Diseases by Using Mixed Scheme on MRI and CT Images

Nowadays, multimodal medical imaging has attracted growing interest in the analysis and diagnosis of brain diseases. Multimodal image fusion has become widely popular as a way to obtain complementary information from multimodal input images. Fusion of the input multimodal images is done either by spatial-domain or by transform-domain methods. The limitations of spatial-domain methods lead us to use transform-domain fusion. The discrete wavelet transform is one of the popular real-valued transform-domain fusion methods, but it suffers from shift sensitivity and a lack of phase information. These disadvantages motivate the use of a complex wavelet transform. In the present work we adopt the Daubechies complex wavelet transform (DCxWT) for multimodal image fusion: the shift invariance and available phase information of the DCxWT produce a fused output image of higher quality. In this work we apply two separate image fusion rules for the approximation and detail coefficients.
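    The dual-rule idea, one rule for approximation coefficients and another for detail coefficients, can be sketched with PyWavelets. The paper's DCxWT is not available in standard libraries, so the sketch below substitutes a real-valued 'db2' DWT and commonly used rules (mean for approximations, maximum absolute value for details); it illustrates the structure of the scheme, not the paper's exact method.

```python
# Single-level DWT fusion with separate rules for approximation and
# detail coefficients. 'db2' and both rules are illustrative stand-ins.
import numpy as np
import pywt

def dwt_fuse(mri, ct, wavelet='db2'):
    cA1, details1 = pywt.dwt2(mri, wavelet)
    cA2, details2 = pywt.dwt2(ct, wavelet)
    cA = (cA1 + cA2) / 2                       # rule 1: mean of approximations
    fused_details = tuple(
        np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # rule 2: max-abs details
        for d1, d2 in zip(details1, details2))
    return pywt.idwt2((cA, fused_details), wavelet)
```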
