
    Nonlinear blind mixture identification using local source sparsity and functional data clustering

    In this paper, we propose several methods, sharing the same structure but using different criteria, for estimating the nonlinearities in nonlinear source separation. In particular, and contrary to state-of-the-art methods, our approach relies on a weak joint-sparsity assumption on the sources: we look for tiny temporal zones where only one source is active. This makes the method well suited to non-stationary signals such as speech. We extend our previous work to a more general class of nonlinear mixtures, proposing several nonlinear single-source confidence measures and several functional clustering techniques. Such approaches may be seen as extensions of linear instantaneous sparse component analysis to nonlinear mixtures. Experiments demonstrate the effectiveness and relevance of this approach.
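A toy sketch of the single-source zone idea for a linear two-channel mixture (the mixing matrix, window length, and confidence score below are illustrative choices, not the paper's actual measures): in a window where only one source is active, the ratio of the two observations is a constant mixing coefficient, so its variance flags single-source zones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-stationary sources whose activity overlaps only in the middle
# of the timeline (a toy stand-in for speech-like intermittency).
n = 400
s1 = np.zeros(n); s1[:250] = rng.standard_normal(250)   # active early
s2 = np.zeros(n); s2[150:] = rng.standard_normal(250)   # active late

A = np.array([[1.0, 0.7],
              [0.5, 1.0]])          # hypothetical linear mixing matrix
x1, x2 = A @ np.vstack([s1, s2])    # two observed mixtures

def single_source_confidence(x1, x2, win=50):
    """Score each window by how constant the ratio x2/x1 is: in a zone
    where only one source is active, the ratio equals a fixed mixing
    coefficient, so its variance is near zero."""
    scores = []
    for t in range(0, len(x1) - win + 1, win):
        r = x2[t:t + win] / (x1[t:t + win] + 1e-12)
        scores.append(1.0 / (1.0 + np.var(r)))
    return np.array(scores)

conf = single_source_confidence(x1, x2)
# High scores mark windows where only one source is active;
# windows in the overlap region score lower.
```

Windows fully inside either single-activity segment score near 1, while windows straddling the overlap score lower, which is the information the paper's clustering stage consumes.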

    Post-nonlinear speech mixture identification using single-source temporal zones & curve clustering

    In this paper, we propose a method for estimating the nonlinearities involved in post-nonlinear source separation. In particular, and contrary to state-of-the-art methods, our approach relies on a weak joint-sparsity assumption on the sources: we look for tiny temporal zones where only one source is active. This makes the method well suited to non-stationary signals such as speech. The main novelty of our work consists in using nonlinear single-source confidence measures and curve clustering. Such an approach may be seen as an extension of linear instantaneous sparse component analysis to post-nonlinear mixtures. The performance of the approach is illustrated with tests showing that the nonlinear functions are estimated accurately, with mean square errors around 4e-5 when the sources are "strongly" mixed.
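In a post-nonlinear model, the observations inside a single-source zone trace a smooth curve when one mixture is plotted against another; a minimal sketch of that geometry (the nonlinearities, gains, and the plain polynomial fit below are hypothetical stand-ins for the paper's curve clustering):

```python
import numpy as np

rng = np.random.default_rng(1)

# Post-nonlinear toy: inside a single-source zone, each observation is
# x_i = f_i(a_i * s) for the lone active source s. The nonlinearities
# and gains are illustrative choices.
s = rng.uniform(-1.0, 1.0, 300)
x1 = np.tanh(1.0 * s)                 # f_1 applied to its linear mixture
x2 = 0.6 * s + 0.3 * (0.6 * s) ** 3   # f_2 applied to its linear mixture

# The points (x1, x2) lie on a smooth curve x2 = g(x1); fitting that
# curve recovers the relation between the nonlinearly distorted channels.
coeffs = np.polyfit(x1, x2, deg=5)
mse = np.mean((np.polyval(coeffs, x1) - x2) ** 2)
# mse is tiny precisely because the zone really is single-source.
```

A mixed zone would scatter the points off any single curve, which is why single-source confidence measures and curve fitting complement each other.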

    Novel fast random search clustering approach for mixing matrix identification in MIMO linear blind inverse problems with sparse inputs

    In this paper, we propose a novel fast random search clustering (RSC) algorithm for mixing matrix identification in multiple input multiple output (MIMO) linear blind inverse problems with sparse inputs. The proposed approach is based on the clustering of the observations around the directions given by the columns of the mixing matrix, which occurs typically for sparse inputs. Exploiting this fact, the RSC algorithm proceeds by parameterizing the mixing matrix using hyperspherical coordinates, randomly selecting candidate basis vectors (i.e. clustering directions) from the observations, and accepting or rejecting them according to a binary hypothesis test based on the Neyman–Pearson criterion. The RSC algorithm is not tailored to any specific distribution for the sources, can deal with an arbitrary number of inputs and outputs (thus solving the difficult under-determined problem), and is applicable to both instantaneous and convolutive mixtures. Extensive simulations on synthetic and real data with different numbers of inputs and outputs, data sizes, sparsity factors of the inputs, and signal-to-noise ratios confirm the good performance of the proposed approach under moderate-to-high signal-to-noise ratios. (Spanish summary: a blind source separation method for sparse signals, based on identifying the mixing matrix by random clustering techniques.)
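The random-search step can be sketched in a toy under-determined setting. Everything below is a simplified illustration: exactly one source is active per sample, and a fixed support threshold stands in for the Neyman–Pearson test; the matrix and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Under-determined toy problem: 2 sensors, 3 sparse sources, and exactly
# one active source per sample, so every observation points along one
# column of the mixing matrix A.
A = np.array([[1.0, 0.1, 0.7],
              [0.2, 1.0, 0.7]])
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
n_samples = 600
S = np.zeros((3, n_samples))
active = rng.integers(0, 3, n_samples)
S[active, np.arange(n_samples)] = rng.standard_normal(n_samples)
X = A @ S

# Observations as directions on the half hypersphere (hyperspherical
# parameterization reduces to sign-folded unit vectors in 2-D).
D = X / np.linalg.norm(X, axis=0)
D = D * np.sign(D[0])

def random_search_clustering(D, n_trials=200, cos_thr=0.999, min_frac=0.1):
    """Randomly pick observed directions as candidate mixing-matrix
    columns; accept a candidate when enough observations cluster around
    it (a crude stand-in for the Neyman-Pearson acceptance test)."""
    found = []
    for idx in rng.integers(0, D.shape[1], n_trials):
        c = D[:, idx]
        support = np.mean(np.abs(c @ D) > cos_thr)   # cluster size
        is_new = all(abs(c @ f) < cos_thr for f in found)
        if support > min_frac and is_new:
            found.append(c)
    return np.column_stack(found)

A_hat = random_search_clustering(D)
# A_hat recovers the 3 column directions up to sign and ordering.
```

With noisy data or more than one active source per sample, the acceptance test has to trade false alarms against misses, which is exactly what the Neyman–Pearson formulation in the paper provides.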

    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Spectral pixels are often a mixture of the pure spectra of the materials, called endmembers, due to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials in the scenes. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind) linear unmixing. Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of advanced and conventional unmixing approaches. Additionally, we draw a critical comparison between advanced and conventional techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets. The experimental results reveal the advantages of different unmixing categories for different unmixing scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
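The supervised case (endmembers known, abundances unknown) reduces to constrained least squares per pixel. A minimal numpy sketch, not the HySUPP API: random spectra stand in for endmembers, and the sum-to-one constraint is enforced softly by a heavily weighted augmentation row (an FCLS-style trick, here without the non-negativity step).

```python
import numpy as np

rng = np.random.default_rng(3)

# Supervised linear unmixing toy: pixels are convex combinations of
# known endmember spectra (random stand-ins for real spectra).
bands, n_end, n_pix = 50, 3, 100
E = rng.uniform(0, 1, (bands, n_end))             # endmember matrix
A_true = rng.dirichlet(np.ones(n_end), n_pix).T   # abundances: >= 0, sum to 1
X = E @ A_true                                    # noise-free pixels

def unmix_sum_to_one(E, X, delta=100.0):
    """Least-squares abundance estimation with the sum-to-one constraint
    enforced softly by appending a heavily weighted row of ones to the
    endmember matrix and a matching row to the data."""
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])
    X_aug = np.vstack([X, delta * np.ones((1, X.shape[1]))])
    A_hat, *_ = np.linalg.lstsq(E_aug, X_aug, rcond=None)
    return A_hat

A_hat = unmix_sum_to_one(E, X)
# In this noise-free toy the estimated abundances match A_true.
```

Real pixels need the non-negativity constraint and noise handling, which is where the conventional and learning-based methods surveyed in the paper differ.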

    Group-structured and independent subspace based dictionary learning

    Thanks to its many successful applications, sparse signal representation has become one of the most actively studied research areas in mathematics. In the traditional sparse coding problem, however, the dictionary used for the representation is assumed to be known. Despite the popularity of sparsity and its recently emerged structured-sparse extension, interestingly, very few works have focused on learning the dictionaries underlying these codes. In the first part of the paper, we develop a dictionary learning method that is (i) online, (ii) enables overlapping group structures, (iii) applies non-convex sparsity-inducing regularization, and (iv) handles the partially observable case. To the best of our knowledge, current methods exhibit at most two of these four desirable properties. We also investigate several interesting special cases of our framework and demonstrate its applicability to inpainting of natural signals, structured sparse non-negative matrix factorization of faces, and collaborative filtering. Complementing the sparse direction, we formulate a novel component-wise acting, epsilon-sparse coding scheme in reproducing kernel Hilbert spaces and show its equivalence to a generalized class of support vector machines. Moreover, we embed support vector machines into multilayer perceptrons and show that, for this novel kernel-based approximation approach, the backpropagation procedure of multilayer perceptrons can be generalized.

    In the second part of the paper, we focus on dictionary learning using an independent subspace assumption instead of structured sparsity. The corresponding problem is called independent subspace analysis (ISA), or independent component analysis (ICA) if all the hidden, independent sources are one-dimensional. One of the most fundamental results in this field is the ISA separation principle, which states that the ISA problem can be solved by traditional ICA up to a permutation. This principle (i) forms the basis of the state-of-the-art ISA solvers and (ii) enables one to estimate the unknown number and dimensions of the sources efficiently. We (i) extend the ISA problem in several new directions, including the controlled, partially observed, complex-valued, and nonparametric cases, and (ii) derive separation-principle-based solution techniques for these generalizations. This approach (i) makes it possible to apply state-of-the-art algorithms to the resulting subproblems (in the ISA example, ICA and clustering) and (ii) handles sources of unknown dimension. Extensive numerical experiments demonstrate the robustness and efficiency of our approach.
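The clustering stage of the separation principle can be sketched in isolation. Below, the "ICA output" is simulated directly (true one-dimensional components up to permutation, with one 2-D subspace whose coordinates share a random amplitude); the energy-correlation measure and threshold are illustrative choices, not a specific ISA solver.

```python
import numpy as np

rng = np.random.default_rng(4)

# ISA separation principle, second stage: suppose ICA has already been run
# and returned one-dimensional components equal to the hidden sources up
# to permutation. Components from the same multidimensional source stay
# dependent, which shows up as correlated energies (squares).
n = 20000
amp = rng.choice([0.2, 2.0], n)                  # shared random amplitude
sub = amp * rng.standard_normal((2, n))          # one 2-D source
one_d = rng.standard_normal(n)                   # an independent 1-D source
components = np.vstack([sub[0], one_d, sub[1]])  # "ICA output", permuted

C = np.abs(np.corrcoef(components ** 2))         # energy correlations

# Greedy grouping: put components whose energy correlation exceeds a
# (hypothetical) threshold into the same subspace.
thr = 0.1
groups, unassigned = [], set(range(components.shape[0]))
while unassigned:
    i = unassigned.pop()
    grp = {i} | {j for j in unassigned if C[i, j] > thr}
    unassigned -= grp
    groups.append(sorted(grp))
# groups collects components 0 and 2 into one subspace, 1 into its own.
```

The two subspace coordinates are linearly uncorrelated (so ICA cannot split them further), yet their squares are correlated, which is exactly the residual dependence the clustering step exploits.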

    Postnonlinear overcomplete blind source separation using sparse sources

    We present an approach for blindly decomposing an observed random vector x = f(As), where f is a diagonal function, i.e. f = f_1 × ... × f_m with one-dimensional functions f_i, and A is an (m × n) matrix. This post-nonlinear model is allowed to be overcomplete, meaning that fewer observations than sources (m < n) are given. In contrast to independent component analysis (ICA), we do not assume the sources s to be independent, but rather sparse in the sense that at each time instant they have at most (m-1) non-zero components (sparse component analysis, SCA). Identifiability of the model is shown, and an algorithm for model and source recovery is proposed: it first detects the post-nonlinearities in each component and then identifies the now-linearized model using previous results.
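The linearization step can be illustrated with Gaussianization, one classical way to undo an unknown monotone component-wise nonlinearity up to an affine map. This is a stand-in for the paper's identifiability-based detection, not its algorithm: the toy uses dense Gaussian sources so the linear mixture is exactly Gaussian, unlike the sparse setting above.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)

# Post-nonlinear observation: one channel is x = f(y), with y a linear
# mixture and f an unknown monotone function (tanh here, as a toy choice).
n = 5000
S = rng.standard_normal((4, n))
a = np.array([0.5, 0.4, 0.6, 0.3])                # hypothetical mixing row
y = a @ S                                         # hidden linear mixture
x = np.tanh(y)                                    # unknown monotone f

# Gaussianization: map the empirical ranks of x to standard-normal
# quantiles. Since f is monotone, x and y share the same ranks, so this
# recovers y up to an affine transform, i.e. it linearizes the channel.
ranks = np.argsort(np.argsort(x))
u = (ranks + 0.5) / n                             # uniform scores in (0, 1)
y_hat = np.array([NormalDist().inv_cdf(p) for p in u])

corr = np.corrcoef(y_hat, y)[0, 1]
# corr is close to 1: the channel is linear again, ready for linear SCA.
```

Once every channel is linearized this way (or by the paper's own detection step), the remaining problem is exactly the overcomplete linear SCA that the earlier results solve.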