
    Orthogonal Extended Infomax Algorithm

    The extended infomax algorithm for independent component analysis (ICA) can separate sub- and super-Gaussian signals but converges slowly because it relies on stochastic gradient optimization. In this paper, an improved extended infomax algorithm is presented that converges much faster. The accelerated convergence is achieved by replacing the natural-gradient learning rule of extended infomax with a fully multiplicative, orthogonal-group-based update scheme for the unmixing matrix, leading to an orthogonal extended infomax algorithm (OgExtInf). The computational performance of OgExtInf is compared with two fast ICA algorithms: the popular FastICA and Picard, an L-BFGS algorithm belonging to the family of quasi-Newton methods. Our results demonstrate superior performance of the proposed method on small EEG data sets such as those used in online EEG processing systems, e.g. brain-computer interfaces or clinical systems for spike and seizure detection. Comment: 17 pages, 6 figures
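    As a rough illustration of the idea, the sketch below runs an extended-infomax-style learning rule on whitened data and projects the unmixing matrix back onto the orthogonal group after each step via symmetric decorrelation. This is a simplified stand-in, not the paper's exact multiplicative update; the function name and parameters are hypothetical.

```python
import numpy as np

def ext_infomax_orth(X, n_iter=200, lr=0.1, seed=0):
    """Extended-infomax-style ICA on data X (channels x samples).
    The kurtosis sign of each estimated component switches between the
    sub- and super-Gaussian nonlinearity; after each gradient step the
    unmixing matrix is projected back onto the orthogonal group by
    symmetric decorrelation (a simplified stand-in for the paper's
    multiplicative orthogonal-group update)."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    # Center and whiten the data
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    X = (E / np.sqrt(d)).T @ X
    # Random orthogonal initialization
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]
    for _ in range(n_iter):
        Y = W @ X
        # Sign of the excess kurtosis: +1 super-Gaussian, -1 sub-Gaussian
        k = np.sign(np.mean(Y**4, axis=1) - 3 * np.mean(Y**2, axis=1)**2)
        # Extended-infomax natural-gradient direction
        G = np.eye(n) - (k[:, None] * np.tanh(Y)) @ Y.T / T - Y @ Y.T / T
        W = W + lr * G @ W
        # Symmetric decorrelation: nearest orthogonal matrix via SVD
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return W @ X, W
```

    The orthogonality constraint is what keeps the estimated components decorrelated at every iteration, which is the property the paper exploits for faster convergence.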

    How to Apply ICA on Actual Data? Example of Mars Hyperspectral Image Analysis

    As with any estimation method, the results provided by ICA depend on a model — usually a linear mixture and separation model — and on a criterion — usually independence. In many real problems, the model is only a coarse approximation of the system physics and independence may be only partially satisfied, so the results are not reliable. Moreover, for much real data there is a lack of reliable knowledge about the sources to be extracted, and the interpretation of the independent components (ICs) must be done very carefully, using partial prior information and interactive discussions with experts. In this talk, we explain how such a scientific method can proceed, using the analysis of Mars hyperspectral images as an example.

    Sparse Linear Identifiable Multivariate Modeling

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully Bayesian hierarchy for sparse models using slab and spike priors (two-component delta-function and continuous mixtures), non-Gaussian latent factors and a stochastic search over the ordering of the variables. The framework, which we call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable computational complexity. We attribute this mainly to the stochastic search strategy used, and to parsimony (sparsity and identifiability), which is an explicit part of the model. We propose two extensions to the basic i.i.d. linear framework: non-linear dependence on observed variables, called SNIM (Sparse Non-linear Identifiable Multivariate modeling), and allowing for correlations between latent variables, called CSLIM (Correlated SLIM), for temporal and/or spatial data. The source code and scripts are available from http://cogsys.imm.dtu.dk/slim/. Comment: 45 pages, 17 figures
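    To make the prior concrete, the sketch below draws coefficients from a two-component spike-and-slab mixture of the kind the abstract mentions: with some probability a continuous "slab" draw, otherwise an exact zero from the delta-function "spike". This is only an illustration of the prior itself, with hypothetical names and parameters; SLIM embeds it in a full Bayesian hierarchy.

```python
import numpy as np

def spike_slab_sample(p, pi=0.3, tau=1.0, seed=0):
    """Draw p coefficients from a two-component spike-and-slab prior:
    with probability pi a 'slab' draw from N(0, tau^2), otherwise an
    exact zero (the delta-function spike). The spike induces sparsity;
    the slab models the nonzero effects."""
    rng = np.random.default_rng(seed)
    active = rng.random(p) < pi            # which coefficients are "on"
    beta = np.where(active, rng.normal(0.0, tau, p), 0.0)
    return beta, active
```

    In expectation a fraction pi of the coefficients is nonzero, which is how this family of priors encodes parsimony directly in the model.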

    Exploratory source separation in biomedical systems

    Contemporary science produces vast amounts of data. The analysis of this data plays a central role in all empirical sciences, as well as in humanities and arts using quantitative methods. One task of an information scientist is to provide this research with sophisticated, computationally tractable data analysis tools. When the information scientist confronts a new target field of research producing data for her to analyse, she has two options: she may make some specific hypotheses, or guesses, about the contents of the data and test these using statistical analysis, or she may use general-purpose statistical models to get a better insight into the data before making detailed hypotheses. Latent variable models are one such class of general models. In particular, we discuss latent variable models in which the measured data are generated by some hidden sources through some mapping. The task of source separation is to recover the sources. Additionally, one may be interested in the details of the generation process itself. We argue that when little is known of the target field, independent component analysis (ICA) serves as a valuable tool to solve a problem called blind source separation (BSS). BSS means solving a source separation problem with no, or at least very little, prior information. When more is known of the target field, it is natural to incorporate that knowledge into the separation process, and we also introduce methods for this incorporation. Finally, we suggest a general framework of denoising source separation (DSS) that can serve as a basis for algorithms ranging from an almost blind approach to highly specialised and problem-tuned source separation algorithms. We show that certain ICA methods can be constructed in the DSS framework. This leads to new, more robust algorithms. It is natural to use the accumulated knowledge from applying BSS in a target field to devise more detailed source separation algorithms. 
    We call this process exploratory source separation (ESS). We show that DSS serves as a practical and flexible framework to perform ESS, too. Biomedical systems, the nervous system, heart, etc., constitute arguably the most complex systems that human beings have ever studied. Furthermore, contemporary physics and technology have made it possible to study these systems while they operate in near-natural conditions. The usage of these sophisticated instruments has resulted in a massive explosion of available data. In this thesis, we apply the developed source separation algorithms in the analysis of the human brain, using mainly magnetoencephalograms (MEG). The methods are directly usable for electroencephalograms (EEG) and, with small adjustments, for other imaging modalities, such as (functional) magnetic resonance imaging (fMRI), too.
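    A minimal sketch of the DSS idea, under the usual formulation: sphere the data, then alternate between extracting a source estimate with the current filter and re-estimating the filter from a denoised version of that estimate. The function and parameter names here are hypothetical; the choice of denoiser is where prior knowledge about the sources enters.

```python
import numpy as np

def dss_one_unit(X, denoise, n_iter=100, seed=0):
    """One-unit denoising source separation (DSS) on X (channels x samples):
    sphere the data, then iterate
        s  = w @ Z          (extract current source estimate)
        s+ = denoise(s)     (apply the denoising function)
        w  ∝ Z @ s+         (re-estimate the filter, renormalize).
    Illustrative sketch only; the denoiser encodes prior knowledge."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X              # sphered data
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = w @ Z
        w = Z @ denoise(s) / Z.shape[1]     # re-estimate from denoised source
        w /= np.linalg.norm(w)
    return w @ Z

# A cubic denoiser makes the iteration behave like kurtosis-based ICA,
# illustrating the abstract's point that certain ICA methods can be
# constructed within the DSS framework:
# s_est = dss_one_unit(X, lambda s: s**3)
```

    Swapping in a problem-specific denoiser (e.g. band-pass filtering for rhythmic MEG sources) is what moves an algorithm along the spectrum from almost blind to highly problem-tuned.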

    Applications of Blind Source Separation to the Magnetoencephalogram Background Activity in Alzheimer’s Disease

    In this doctoral thesis, magnetoencephalogram (MEG) background activity from 36 patients with Alzheimer's Disease (AD) and 26 elderly control subjects was analysed with blind source separation (BSS) techniques. The goal was to apply BSS methods to aid in the analysis and interpretation of this type of brain activity, with particular attention to AD. The term BSS denotes a set of techniques useful for decomposing multichannel recordings into the components that gave rise to them. Four different applications have been developed. The results of this doctoral thesis suggest the usefulness of BSS for aiding in the processing of MEG background activity and for identifying and characterizing AD. Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática