189 research outputs found

    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness, and the large availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the study analysis and results. In order to perform an optimized classification and report a proper description of the results, a comprehensive critical overview of the applications of SVM is necessary. The aim of this paper is to review the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning the reviewed papers are listed in tables, and statistics on the use of SVM in the literature are presented. The suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
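
    As a hedged illustration of the kind of SVM pipeline such studies use, the sketch below classifies pre-computed EEG/EMG feature vectors with a Gaussian-kernel SVM and makes the kernel and parameter choices explicit, which is the reproducibility point the review raises. The feature matrix, labels, and parameter grid are hypothetical placeholders, not taken from any reviewed study.

```python
# Minimal sketch of an SVM classifier for EEG/EMG feature vectors (hypothetical data).
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))   # 200 trials x 16 features (e.g., band powers) -- placeholder
y = rng.integers(0, 2, size=200)     # binary labels (e.g., two mental tasks) -- placeholder

# Reporting kernel type, C, and gamma explicitly makes the analysis reproducible.
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]}
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(pipeline, param_grid, cv=StratifiedKFold(n_splits=5))
search.fit(X, y)
print("Selected parameters:", search.best_params_)
print("Cross-validated accuracy: %.3f" % search.best_score_)
```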

    Narrow Window Feature Extraction for EEG-Motor Imagery Classification using k-NN and Voting Scheme

    Achieving consistent accuracy is still a big challenge in EEG-based motor imagery classification, since the nature of the EEG signal is non-stationary and both intra-subject and inter-subject dependent. To address these problems, we propose a feature extraction scheme that employs statistical measurements in narrow windows with a channel instantiation approach. In this study, k-Nearest Neighbor is used as the classifier, with a voting scheme as the final decision, where the class with the most detections wins. In this channel instantiation scheme, where each EEG channel becomes an instance or record, seventeen EEG channels with motor-related activity are used, reduced from the original 118 channels. We investigate five narrow-window combinations in the proposed method, i.e., one, two, three, four, and five windows. BCI Competition III Dataset IVa is used to evaluate the proposed method. Experimental results show that one window with all channels and a combination of five windows with reduced channels outperform all prior research, with the highest accuracy and lowest standard deviation. These results indicate that our proposed method achieves consistent accuracy and is promising for reliable BCI systems.
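
    A minimal sketch of the described scheme, assuming each trial is split into a few narrow windows, every channel/window instance is summarized by simple statistics and classified with k-NN, and the trial label is taken by majority vote. Array shapes, the statistics used, the window count, and k are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of narrow-window statistical features + k-NN + majority voting (illustrative parameters).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(trial, n_windows=5):
    """Split one trial (channels x samples) into narrow windows and return
    per-channel statistical features, one row per (channel, window) instance."""
    windows = np.array_split(trial, n_windows, axis=1)
    feats = []
    for w in windows:
        # simple statistical measurements per channel: mean, std, min, max
        feats.append(np.column_stack([w.mean(axis=1), w.std(axis=1),
                                      w.min(axis=1), w.max(axis=1)]))
    return np.vstack(feats)  # (channels * n_windows) x 4

def predict_trial(knn, trial, n_windows=5):
    """Classify every channel/window instance and return the majority-vote label."""
    votes = knn.predict(window_features(trial, n_windows))
    return np.bincount(votes.astype(int)).argmax()

# Hypothetical data: 40 training trials, 17 motor-related channels, 300 samples each.
rng = np.random.default_rng(1)
train_trials = rng.standard_normal((40, 17, 300))
train_labels = rng.integers(0, 2, size=40)

# Channel instantiation: every channel/window instance of a trial becomes a
# training record carrying that trial's label.
X = np.vstack([window_features(t) for t in train_trials])
y = np.repeat(train_labels, X.shape[0] // len(train_labels))
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

test_trial = rng.standard_normal((17, 300))
print("Predicted class:", predict_trial(knn, test_trial))
```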

    Brain electrical activity discriminant analysis using Reproducing Kernel Hilbert spaces

    A deep and adequate understanding of human brain function has been an objective of interdisciplinary teams of scientists. Different acquisition technologies allow capturing particular data related to brain activity. The most commonly used strategies rely on brain electrical activity, where neuronal interactions are reflected on the scalp and recorded via electrode arrays as time series. Processing this type of brain electrical activity (BEA) data poses challenges that should be addressed carefully due to its intrinsic properties: BEA is known to have nonstationary behavior and a high degree of variability depending on the stimuli or responses being addressed.
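
    To make the RKHS ingredient of the title concrete, the sketch below maps hypothetical BEA feature vectors into a reproducing kernel Hilbert space via a Gaussian kernel and runs a linear discriminant in that space. This is only a generic stand-in for kernel discriminant analysis under assumed data and parameters, not the authors' specific method.

```python
# Sketch: RKHS embedding of BEA features + linear discriminant (hypothetical data/parameters).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 32))   # 120 epochs x 32 features -- placeholder
y = rng.integers(0, 2, size=120)     # condition labels -- placeholder

model = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf", gamma=0.05),  # Gaussian-kernel (RKHS) embedding
    LinearDiscriminantAnalysis(),                          # discriminant in the embedded space
)
model.fit(X, y)
print("Training accuracy:", model.score(X, y))
```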

    Study of Adaptation Methods Towards Advanced Brain-computer Interfaces

    Ph.D. thesis (Doctor of Philosophy)

    Advanced Signal Processing Solutions for Brain-Computer Interfaces: From Theory to Practice

    As the field of Brain-Computer Interfaces (BCI) is rapidly evolving within both academia and industry, improving the signal processing module of such systems becomes of significant practical and theoretical importance. Additionally, the use of an Electroencephalography (EEG) headset, considered the best non-invasive modality for collecting brain signals, offers a relatively user-friendly experience, affordability, and design flexibility to the developers of a BCI system. Motivated by these facts, the thesis investigates several avenues through which an EEG-based BCI can more accurately interpret the user's intention. The first part of the thesis is devoted to the development of theoretical approaches by which the dimensionality of the collected EEG signals can be reduced with minimum information loss. In this part, two novel frameworks are proposed based on graph signal processing theory, referred to as GD-BCI and GDR-BCI, where the geometrical structure of the EEG electrodes is employed to define and configure the underlying graphs. The second part of the thesis is devoted to seeking practical, yet easy-to-implement, solutions for improving the classification accuracy of BCI systems. Finally, in the last part of the thesis, inspired by the recent surge of interest in hybrid BCIs, a novel framework is proposed for cuff-less blood pressure estimation to be coupled with an EEG-based BCI. Referred to as WAKE-BPAT, the proposed framework simultaneously processes Electrocardiography (ECG) and Photoplethysmogram (PPG) signals via an adaptive Kalman filtering approach.
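
    The graph-signal-processing idea in the abstract can be sketched generically: build a graph from electrode coordinates, take the eigenvectors of its Laplacian as a graph Fourier basis, and keep only the low-frequency coefficients of each EEG sample as a reduced representation. GD-BCI and GDR-BCI are the thesis's own frameworks; the code below is only an illustration of this generic dimensionality-reduction step, with assumed coordinates, similarity function, and component count.

```python
# Generic sketch of graph-based EEG dimensionality reduction (assumed geometry and parameters).
import numpy as np

rng = np.random.default_rng(3)
coords = rng.random((16, 3))             # placeholder 3-D electrode positions
eeg = rng.standard_normal((16, 1000))    # 16 channels x 1000 time samples -- placeholder

# Gaussian-distance adjacency derived from electrode geometry
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0.0)

# Combinatorial graph Laplacian and its eigenvectors (graph Fourier basis)
L = np.diag(W.sum(axis=1)) - W
_, eigvecs = np.linalg.eigh(L)

# Keep the k lowest graph frequencies: smooth spatial modes over the scalp graph
k = 6
coeffs = eigvecs[:, :k].T @ eeg          # k x 1000 reduced representation
print("Reduced from", eeg.shape[0], "channels to", coeffs.shape[0], "graph-frequency components")
```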

    Mental-State Estimation, 1987

    Reports on the measurement and evaluation of the physiological and mental state of operators are presented.

    An information theoretic learning framework based on Renyi’s α entropy for brain effective connectivity estimation

    The interactions among neural populations distributed across different brain regions are at the core of cognitive and perceptual processing. Therefore, the ability to study the flow of information within networks of connected neural assemblies is of fundamental importance for understanding such processes. In that regard, brain connectivity measures constitute a valuable tool in neuroscience. They allow assessing functional interactions among brain regions through directed or non-directed statistical dependencies estimated from neural time series. Transfer entropy (TE) is one such measure. It is an effective connectivity estimation approach based on information theory concepts and statistical causality premises. It has gained increasing attention in the literature because it can capture purely nonlinear directed interactions and is model free; that is to say, it does not require an initial hypothesis about the interactions present in the data. These properties make it an especially convenient tool in exploratory analyses. However, like any information-theoretic quantity, TE is defined in terms of probability distributions that in practice need to be estimated from data, a challenging task whose outcome can significantly affect the TE results. Also, TE lacks a standard spectral representation, so it cannot reveal the local frequency-band characteristics of the interactions it detects.
    Doctoral thesis (Doctor of Engineering).
    Contents: 1 Preliminaries (motivation; problem statement: probability distribution estimation as an intermediate step in TE computation, the lack of a spectral representation for TE; theoretical background: transfer entropy, Granger causality, information-theoretic learning from kernel matrices; literature review on transfer entropy estimation, including transfer entropy in the frequency domain; general and specific aims; outline and contributions; EEG databases: motor imagery, working memory; thesis structure). 2 Kernel-based Renyi’s transfer entropy (experiments with a VAR model, a modified linear Kus model, and EEG data; parameter selection; results, discussion, and limitations). 3 Kernel-based Renyi’s phase transfer entropy (phase-based effective connectivity estimation approaches; experiments with neural mass models and EEG data; parameter selection; results, discussion, and limitations). 4 Kernel-based Renyi’s phase transfer entropy for the estimation of directed phase-amplitude interactions (cross-frequency directionality; experiments with simulated phase-amplitude interactions and EEG data; parameter selection; results, discussion, and limitations). 5 Final remarks (conclusions; future work; academic products: journal papers, conference papers, conference presentations). Appendices: A Kernel methods and Renyi’s entropy estimation (reproducing kernel Hilbert spaces, kernel-based learning, kernel-based estimation of Renyi’s entropy); B Surface Laplacian; C Permutation testing; D Kernel-based relevance analysis; E Cao’s criterion; F Neural mass model equations. References.
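
    For reference, the quantity the abstract discusses is, in its standard Shannon formulation (Schreiber's transfer entropy from a source X to a target Y, with history lengths l and k), the expression below; the thesis develops kernel-based Renyi estimators of this kind of measure, so this textbook definition is shown only to fix ideas.

```latex
\mathrm{TE}_{X \to Y} \;=\; \sum p\!\left(y_{t+1}, \mathbf{y}_t^{(k)}, \mathbf{x}_t^{(l)}\right)
\log \frac{p\!\left(y_{t+1} \mid \mathbf{y}_t^{(k)}, \mathbf{x}_t^{(l)}\right)}
          {p\!\left(y_{t+1} \mid \mathbf{y}_t^{(k)}\right)}
```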