
    Multichannel dynamic modeling of non-Gaussian mixtures

    [EN] This paper presents a novel method that combines coupled hidden Markov models (HMM) and non-Gaussian mixture models based on independent component analyzer mixture models (ICAMM). The proposed method models the joint behavior of a number of synchronized sequential independent component analyzer mixture models (SICAMM); thus, we have named it generalized SICAMM (G-SICAMM). The generalization allows for flexible estimation of complex data densities, subspace classification, blind source separation, and accurate modeling of both local and global dynamic interactions. In this work, the structured result obtained by G-SICAMM was used in two ways: classification and interpretation. Classification performance was tested on an extensive number of simulations and a set of real electroencephalograms (EEG) from epileptic patients performing neuropsychological tests. G-SICAMM outperformed the following competitive methods: Gaussian mixture models, HMM, coupled HMM, ICAMM, SICAMM, and a long short-term memory (LSTM) recurrent neural network. As for interpretation, the structured result returned by G-SICAMM on EEGs was mapped back onto the scalp, providing a set of brain activations. These activations were consistent with the physiological areas activated during the tests, thus proving the ability of the method to deal with different kinds of data densities and with changing non-stationary and non-linear brain dynamics. (C) 2019 Elsevier Ltd. All rights reserved.
    This work was supported by the Spanish Administration (Ministerio de Economía y Competitividad) and the European Union (FEDER) under grants TEC2014-58438-R and TEC2017-84743-P.
    Safont Armero, G.; Salazar Afanador, A.; Vergara Domínguez, L.; Gomez, E.; Villanueva, V. (2019). Multichannel dynamic modeling of non-Gaussian mixtures. Pattern Recognition, 93:312-323. https://doi.org/10.1016/j.patcog.2019.04.022
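
    As a rough illustration of the emission model that ICAMM and its sequential variants share, the Python sketch below evaluates the class-conditional log-likelihood of one observation under an ICA mixture model. The demixing matrices W, the bias vectors b, and the unit-Laplacian source density are illustrative assumptions only; the paper estimates the source densities non-parametrically, which is not reproduced here.

    import numpy as np

    def icamm_class_loglik(x, W, b):
        # Log p(x | class k) for every class k of an ICA mixture model.
        # x: (D,) observation; W: (K, D, D) demixing matrices; b: (K, D) class biases.
        K = W.shape[0]
        loglik = np.empty(K)
        for k in range(K):
            s = W[k] @ (x - b[k])                        # recovered sources under class k
            log_det = np.linalg.slogdet(W[k])[1]         # change-of-variables term
            log_src = np.sum(-np.abs(s) - np.log(2.0))   # unit-Laplacian sources (assumption)
            loglik[k] = log_det + log_src
        return loglik

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        K, D = 3, 4
        W = rng.standard_normal((K, D, D))
        b = rng.standard_normal((K, D))
        print(icamm_class_loglik(rng.standard_normal(D), W, b))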

    Extensions of Independent Component Analysis Mixture Models for classification and prediction of EEG signals

    [EN] This paper presents two applications of Independent Component Analysis Mixture Modeling (ICAMM) for the classification and prediction of data. The first of these extensions is Sequential ICAMM (SICAMM), an ICAMM structure that takes into account the sequential dependence in the feature record. This algorithm can be used to classify input observations into a given set of mutually exclusive classes. The performance of SICAMM is tested with simulations and compared against that of the base ICAMM algorithm and of a Dynamic Bayesian Network (DBN). All three methods are also used to classify real electroencephalographic (EEG) signals to compute hypnograms, a clinical tool used to help in the diagnosis of sleep disorders. The second extension of ICAMM is PREDICAMM, an estimation algorithm that makes use of the ICAMM parameters in order to reconstruct missing samples from a set of data. This predictor is used to reconstruct real EEG data from a working memory experiment, and its performance is compared to that of a classical predictor for EEG signals: spherical splines. Prediction performance is measured with four error indicators: signal-to-interference ratio, Kullback-Leibler divergence, correlation, and mean structural similarity index. Both extensions of the base ICAMM algorithm achieved higher performance than the other methods.
    This work has been supported by Universitat Politècnica de Valencia under grant 20130072, Generalitat Valenciana under grants PROMETEO/2010/040 and ISIC/2012/006, and the Spanish Administration and European Union FEDER Programme under grant TEC2011-23403 01/01/2012. The PSG signals and annotated hypnograms were provided by the Electroencephalography Department of Hospital Universitario La Fe, Valencia, Spain.
    Safont Armero, G.; Salazar Afanador, A.; Rodriguez Martinez, A.; Vergara Domínguez, L. (2013). Extensions of Independent Component Analysis Mixture Models for classification and prediction of EEG signals. WAVES, 5:59-68. http://hdl.handle.net/10251/52797
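
    A minimal sketch of the sequential-dependence idea behind SICAMM follows, assuming that per-frame class log-likelihoods are already available (for instance from the ICAMM emission sketch above) and that the temporal dependence is a first-order class transition matrix; the actual SICAMM algorithm and its parameter estimation are not reproduced here.

    import numpy as np

    def sequential_classify(loglik, trans, prior):
        # Forward recursion over filtered class posteriors.
        # loglik: (T, K) per-frame log p(x_t | c_t = k)
        # trans:  (K, K) with trans[i, j] = p(c_t = j | c_{t-1} = i)
        # prior:  (K,) initial class probabilities
        T, K = loglik.shape
        post = np.zeros((T, K))
        alpha = prior * np.exp(loglik[0] - loglik[0].max())
        post[0] = alpha / alpha.sum()
        for t in range(1, T):
            pred = post[t - 1] @ trans                       # one-step class prediction
            alpha = pred * np.exp(loglik[t] - loglik[t].max())
            post[t] = alpha / alpha.sum()
        return post.argmax(axis=1), post

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        loglik = rng.standard_normal((6, 2))
        trans = np.array([[0.9, 0.1], [0.2, 0.8]])
        labels, post = sequential_classify(loglik, trans, np.array([0.5, 0.5]))
        print(labels)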

    Single-channel source separation using non-negative matrix factorization


    Extraction and denoising of fetal ECG signals (Extraction et débruitage de signaux ECG du fœtus)

    Congenital heart defects are the leading cause of birth defect-related deaths. The fetal electrocardiogram (fECG), which is believed to contain much more information than conventional sonographic methods, can be measured by placing electrodes on the mother's abdomen. However, it has very low power and is mixed with several sources of noise and interference, including the strong maternal ECG (mECG). In previous studies, several methods have been proposed for the extraction of fECG signals recorded from the maternal body surface. However, these methods require a large number of sensors, and are ineffective with only one or two sensors. In this study, three approaches based on algebraic, statistical, and state-space modeling are proposed for capturing weak traces of fetal cardiac signals. These three methods implement different models of the quasi-periodicity of the cardiac signal. In the first approach, the heart rate and its variability are modeled by a Kalman filter. In the second approach, the signal is divided into windows according to the beats; stacking the windows constructs a tensor that is then decomposed. In the third approach, the signal is not modeled directly, but is considered as a Gaussian process characterized by its second-order statistics. In all of the proposed methods, unlike previous studies, the mECG and the fECG(s) are explicitly modeled. The performances of the proposed methods, which use a minimal number of electrodes, are assessed on synthetic data and actual recordings, including twin fetal cardiac signals.
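
    The beat-stacking step of the second approach can be sketched as follows. The example assumes the maternal R-peaks have already been detected, cuts a fixed-length window around each one, and uses a rank-1 SVD of the stacked beats as the maternal template to subtract; the thesis uses a full tensor decomposition, which is not reproduced here.

    import numpy as np

    def subtract_maternal_template(signal, r_peaks, half_win):
        # Stack beats around maternal R-peaks, estimate a rank-1 template,
        # and subtract it so that weaker (fetal) activity remains in the residual.
        beats = np.stack([signal[p - half_win:p + half_win] for p in r_peaks])
        u, s, vt = np.linalg.svd(beats, full_matrices=False)
        template = np.outer(u[:, 0] * s[0], vt[0])            # rank-1 maternal beat model
        residual = signal.copy()
        for i, p in enumerate(r_peaks):
            residual[p - half_win:p + half_win] -= template[i]
        return residual

    if __name__ == "__main__":
        fs = 250
        n = 10 * fs
        r_peaks = np.arange(125, n - 125, 200)                # ~75 bpm maternal rhythm
        signal = 0.05 * np.random.default_rng(2).standard_normal(n)
        for p in r_peaks:                                      # crude synthetic maternal QRS bumps
            signal[p - 5:p + 5] += np.hanning(10)
        print(subtract_maternal_template(signal, r_peaks, 50).std())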

    Independent component analysis of magnetoencephalographic signals


    Probabilistic models for structured sparsity


    Dimensionality reduction and unsupervised learning techniques applied to clinical psychiatric and neuroimaging phenotypes

    Unsupervised learning and other multivariate analysis techniques are increasingly recognized in neuropsychiatric research. Here, finite mixture models and random forests were applied to clinical observations of patients with major depression to detect and validate treatment-response subgroups. Further, independent component analysis and agglomerative hierarchical clustering were combined to build a brain parcellation based solely on the structural covariance information of magnetic resonance brain images.
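
    A schematic sketch of the second pipeline is given below, assuming a subjects-by-voxel matrix of gray-matter values as a stand-in for the structural covariance information and using scikit-learn's FastICA and Ward agglomerative clustering; the thesis's actual preprocessing, component selection, and validation are not reproduced here.

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(3)
    gm = rng.standard_normal((40, 500))             # placeholder: 40 subjects x 500 voxels

    # 1) ICA across subjects: every voxel gets a loading profile on the components
    ica = FastICA(n_components=5, random_state=0)
    ica.fit(gm)
    voxel_loadings = ica.components_.T              # (n_voxels, n_components)

    # 2) Ward clustering of voxels by loading profile -> parcels
    parcels = AgglomerativeClustering(n_clusters=8, linkage="ward").fit_predict(voxel_loadings)
    print(np.bincount(parcels))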