
    Extracting individual contributions from their mixture: a blind source separation approach, with examples from space and laboratory plasmas

    Multipoint or multichannel observations in plasmas can frequently be modelled as an instantaneous mixture of contributions (waves, emissions, ...) of different origins. Recovering the individual sources from their mixture then becomes one of the key objectives. However, unless the underlying mixing processes are well known, these situations lead to heavily underdetermined problems. Blind source separation aims at disentangling such mixtures with the least possible prior information on the sources and their mixing processes. Several powerful approaches have recently been developed, which can often provide new or deeper insight into the underlying physics. This tutorial paper briefly discusses some possible applications of blind source separation to the field of plasma physics, in which this concept is still barely known. Two examples are given. The first shows how concurrent processes in the dynamical response of the electron temperature in a tokamak can be separated. The second deals with solar spectral imaging in the extreme UV and shows how empirical temperature maps can be built. Comment: expanded version of an article to appear in Contributions to Plasma Physics (2010).
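    As a generic illustration of the instantaneous-mixture setting described above (not the tutorial's specific algorithms), the sketch below recovers two synthetic sources from three mixed channels with FastICA from scikit-learn; the sources, mixing matrix and library choice are assumptions for demonstration only.

```python
# Illustrative blind source separation on an instantaneous linear mixture
# x(t) = A s(t): only the mixed channels X are "observed"; the sources S and
# the mixing matrix A are treated as unknown and recovered by FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two hypothetical sources: a slow oscillation and a sawtooth-like drift.
s1 = np.sin(2 * np.pi * t)
s2 = 2 * (t % 1.0) - 1.0
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))  # weak noise

# Unknown instantaneous mixing into three observation channels.
A = np.array([[1.0, 0.6],
              [0.4, 1.0],
              [0.8, 0.3]])
X = S @ A.T

# Blind separation: estimate the sources up to scaling and permutation.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # estimated source signals
A_est = ica.mixing_            # estimated mixing matrix (3 x 2)
print("estimated mixing matrix:\n", A_est)
```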

    Signal processing algorithms for enhanced image fusion performance and assessment

    This dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method that performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The images are decomposed using Chebyshev polynomials (CP) as basis functions to perform fusion at feature level. The properties of CP, namely fast convergence and smooth approximation, render them ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably in heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation based on texture. The conservation of background textural detail is considered important in many fusion applications, as it helps define image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
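    A minimal sketch of the texture-measure idea, assuming scikit-image's GLCM utilities (graycomatrix/graycoprops) and toy images; it is not the dissertation's exact metric, only an illustration of extracting second-order GLCM statistics that could replace edge-based terms in a fusion-quality score.

```python
# Second-order GLCM texture statistics of 8-bit grayscale images
# (illustrative only; the actual fusion metric in the dissertation may differ).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

def glcm_texture_features(img_u8, distances=(1,), angles=(0, np.pi / 2)):
    """Return a few second-order texture statistics of an 8-bit grayscale image."""
    glcm = graycomatrix(img_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {
        "contrast":    graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "energy":      graycoprops(glcm, "energy").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
    }

# Toy usage: compare textural content of a source image and a "fused" image.
rng = np.random.default_rng(1)
source = rng.integers(0, 256, (64, 64)).astype(np.uint8)
fused = (source * 0.8 + 25).astype(np.uint8)   # hypothetical fused output
print(glcm_texture_features(source))
print(glcm_texture_features(fused))
```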

    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
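    A minimal sketch of the two basic models reviewed in the paper, assuming the TensorLy library: a Canonical Polyadic decomposition (CPD) and a Tucker decomposition of a random third-order tensor.

```python
# CPD and Tucker decompositions of a third-order tensor with TensorLy.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

rng = np.random.default_rng(0)
X = tl.tensor(rng.standard_normal((10, 12, 8)))

# CPD: X is approximated as a sum of R rank-1 terms, a_r outer b_r outer c_r.
cp_weights, cp_factors = parafac(X, rank=3)
print([f.shape for f in cp_factors])            # [(10, 3), (12, 3), (8, 3)]

# Tucker: X is approximated as a small core tensor multiplied by a factor
# matrix along each mode.
core, tucker_factors = tucker(X, rank=[3, 3, 2])
print(core.shape, [f.shape for f in tucker_factors])
```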

    Adaptive Chebyshev fusion of vegetation imagery based on SVM classifier

    A novel adaptive image fusion method using Chebyshev polynomial analysis (CPA), for applications in vegetation satellite imagery, is introduced in this paper. Fusion is a technique that merges images from two satellite sensors, panchromatic and multi-spectral, to produce higher-quality satellite images for agricultural and vegetation issues such as soiling, floods and crop harvesting. Recent studies show Chebyshev polynomials to be effective in image fusion, mainly in the medium to high noise conditions typical of real-life satellite imaging. However, their application has been limited to heuristics. In this research, we propose a way to adaptively select the optimal CPA parameters according to user specifications. A support vector machine (SVM) is used as a classifier to estimate the noise parameters, from which the appropriate CPA degree is taken from a look-up table to perform image fusion. Performance evaluation confirms the approach’s ability to reduce the computational complexity of fusion. Overall, adaptive CPA fusion is able to optimize an image fusion system’s resources and processing time, and may therefore be suitable for incorporation onto real hardware for use on vegetation satellite imagery.
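    A hedged sketch of the adaptive-selection idea only: an SVM classifies the estimated noise level of an input image, and a look-up table maps that class to a Chebyshev-polynomial degree. The noise feature, class boundaries and degree values below are illustrative assumptions, not the paper's actual table.

```python
# SVM-based selection of a Chebyshev polynomial analysis (CPA) degree.
# Training images, noise classes and the degree look-up table are hypothetical.
import numpy as np
from sklearn.svm import SVC

def noise_feature(img):
    """Crude noise estimate: median absolute deviation of horizontal differences."""
    d = np.diff(img.astype(float), axis=1)
    return np.median(np.abs(d - np.median(d)))

rng = np.random.default_rng(0)
sigmas = {0: 2.0, 1: 10.0, 2: 25.0}   # hypothetical classes: low / medium / high noise
X_train, y_train = [], []
for label, sigma in sigmas.items():
    for _ in range(30):
        img = 128.0 + sigma * rng.standard_normal((32, 32))
        X_train.append([noise_feature(img)])
        y_train.append(label)

clf = SVC(kernel="rbf").fit(X_train, y_train)

# Hypothetical look-up table: noise class -> CPA degree used for fusion.
cpa_degree_lut = {0: 8, 1: 5, 2: 3}

test_img = 128.0 + 12.0 * rng.standard_normal((32, 32))
noise_class = int(clf.predict([[noise_feature(test_img)]])[0])
print("noise class:", noise_class, "-> CPA degree:", cpa_degree_lut[noise_class])
```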

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis is motivated by the potential and promise of image fusion technologies in multi-sensor imaging systems and applications. With specific focus on pixel-level image fusion, the stage that follows image registration, we develop a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a larger, higher-resolution fused image. This research includes developing a segment-based step built upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. Implementation of these image fusion algorithms is completed within the graphical user interface we developed. Multi-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use image morphing to generate fused image sequences that simulate the results of image fusion. While developing our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make these methods hard to deploy in systems and applications that require real-time feedback, high flexibility and low computational cost.
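    Under simplifying assumptions, the sketch below illustrates a plain pixel-level weighted-average fusion of two registered images and a histogram-based mutual-information score of the kind mentioned above for quantitative evaluation; the images and weights are synthetic placeholders, not the thesis's segment-based algorithm.

```python
# Pixel-level weighted-average fusion plus a mutual-information quality measure.
import numpy as np

def weighted_average_fusion(img_a, img_b, w=0.5):
    """Pixel-level fusion of two co-registered images by weighted averaging."""
    return w * img_a.astype(float) + (1.0 - w) * img_b.astype(float)

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in nats) between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.uniform(0, 255, (64, 64))                          # e.g. one sensor's image
b = 255.0 - a + 15.0 * rng.standard_normal((64, 64))       # e.g. a second modality

fused = weighted_average_fusion(a, b, w=0.6)
print("MI(a, fused) =", mutual_information(a, fused))
print("MI(b, fused) =", mutual_information(b, fused))
```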

    Remote sensing image fusion using ICA and optimized wavelet transform
