178 research outputs found

    Brain-Switches for Asynchronous Brain-Computer Interfaces: A Systematic Review

    A brain-computer interface (BCI) has been extensively studied as a novel communication system that lets disabled people interact using their brain activity. An asynchronous BCI system is more realistic and practical than a synchronous one because BCI commands can be generated whenever the user wants. However, the relatively low performance of asynchronous BCI systems is problematic, since redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations, a two-step approach has been proposed: a brain-switch first determines whether the user intends to use the asynchronous BCI system before that system is operated. This study presents a systematic review of state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
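
    To make the two-step idea concrete, the sketch below (plain Python/NumPy, with a hypothetical band-power threshold and a placeholder classify_command callable, none of which come from the reviewed articles) shows how a brain-switch can gate an asynchronous BCI so that commands are only decoded once the intention to use the system has been detected.

    # Minimal sketch of the two-step brain-switch idea, assuming a simple
    # band-power threshold on a single EEG channel; all names and thresholds
    # are hypothetical, not taken from any specific reviewed study.
    import numpy as np

    def band_power(epoch: np.ndarray, fs: float, lo: float, hi: float) -> float:
        """Mean power of a single-channel EEG epoch in the [lo, hi] Hz band."""
        freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
        mask = (freqs >= lo) & (freqs <= hi)
        return float(psd[mask].mean())

    def brain_switch_on(epoch: np.ndarray, fs: float, threshold: float) -> bool:
        """Step 1: decide whether the user intends to use the BCI at all.
        Here, intention is flagged when mu-band (8-12 Hz) power drops below a
        calibrated threshold (event-related desynchronisation)."""
        return band_power(epoch, fs, 8.0, 12.0) < threshold

    def run_asynchronous_bci(epoch, fs, threshold, classify_command):
        """Step 2: only if the switch fires, decode the actual command.
        This gating is what suppresses false-positive commands during rest."""
        if not brain_switch_on(epoch, fs, threshold):
            return None  # idle state: no command is issued
        return classify_command(epoch)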

    Multimodal fuzzy fusion for enhancing the motor-imagery-based brain computer interface

    Brain-computer interface technologies, such as steady-state visually evoked potentials, P300, and motor imagery, are methods of communication between the human brain and external devices. Motor imagery-based brain-computer interfaces are popular because they avoid unnecessary external stimuli. Although feature extraction methods have been demonstrated in several machine intelligence systems in motor imagery-based brain-computer interface studies, performance remains unsatisfactory. There is increasing interest in fuzzy integrals, such as the Choquet and Sugeno integrals, which are appropriate for applications in which data fusion must account for possible interactions among the data. To enhance the classification accuracy of brain-computer interfaces, we adopted fuzzy integrals, after applying the classification methods of traditional brain-computer interfaces, to account for possible links between the data. We then proposed a novel classification framework called the multimodal fuzzy fusion-based brain-computer interface system. Ten volunteers performed a motor imagery-based brain-computer interface experiment while we simultaneously acquired electroencephalography signals. The multimodal fuzzy fusion-based system improved performance compared with traditional brain-computer interface systems. Furthermore, when using the motor imagery-relevant alpha and beta electroencephalography frequency bands as input features, the system achieved its highest accuracies, up to 78.81% and 78.45% with the Choquet and Sugeno integrals, respectively. We thus present a novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially for fusing the classification of brain-computer interface commands.
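
    As a rough illustration of the fusion step described above, the following sketch computes the discrete Choquet and Sugeno integrals of several classifier confidence scores with respect to a toy fuzzy measure; the classifier names and measure values are invented for the example and do not reproduce the paper's measure identification procedure.

    # Discrete Choquet and Sugeno integrals over classifier confidences.
    def choquet(scores: dict, measure: dict) -> float:
        """scores: {source_name: confidence}; measure: {frozenset of source
        names: weight}, with measure[frozenset()] == 0 and the full set == 1."""
        names = sorted(scores, key=scores.get)            # ascending by score
        total, prev = 0.0, 0.0
        for i, name in enumerate(names):
            coalition = frozenset(names[i:])              # this source and all larger ones
            total += (scores[name] - prev) * measure[coalition]
            prev = scores[name]
        return total

    def sugeno(scores: dict, measure: dict) -> float:
        names = sorted(scores, key=scores.get)
        return max(min(scores[n], measure[frozenset(names[i:])])
                   for i, n in enumerate(names))

    # Toy fusion of three hypothetical motor-imagery classifiers:
    scores = {"csp_lda": 0.7, "bandpower_svm": 0.4, "riemann": 0.9}
    measure = {
        frozenset(): 0.0,
        frozenset({"csp_lda"}): 0.4,
        frozenset({"bandpower_svm"}): 0.3,
        frozenset({"riemann"}): 0.5,
        frozenset({"csp_lda", "bandpower_svm"}): 0.6,
        frozenset({"csp_lda", "riemann"}): 0.8,
        frozenset({"bandpower_svm", "riemann"}): 0.7,
        frozenset({"csp_lda", "bandpower_svm", "riemann"}): 1.0,
    }
    print(choquet(scores, measure), sugeno(scores, measure))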

    Thought-controlled games with brain-computer interfaces

    Nowadays, EEG-based BCI systems are starting to gain ground in games-for-health research. Their reduced cost and the promise of an innovative and exciting interaction paradigm have attracted developers and researchers to use them in video games for serious applications. However, with researchers focusing mostly on the signal-processing side, the interaction aspect of BCIs has been neglected. This research disparity has created a gap between classification performance and online control quality in BCI-based systems, resulting in suboptimal interactions that lead to user fatigue and loss of motivation over time. Motor-imagery (MI)-based BCI interaction paradigms can provide an alternative way to overcome motor-related disabilities and are being deployed in health settings to promote functional and structural plasticity of the brain. To be advantageous, a BCI system in a neurorehabilitation environment should not only achieve high classification performance but also provoke a high level of engagement and sense of control in the user. It should also maximize the level of control over the user's actions without requiring long training periods on each specific BCI system. This thesis has two main contributions: the Adaptive Performance Engine, a system we developed that can provide up to a 20% improvement in user-specific performance, and NeuRow, an immersive virtual reality environment for motor neurorehabilitation that consists of a closed neurofeedback interaction loop based on MI and multimodal feedback while using a state-of-the-art head-mounted display.

    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain-computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity owing to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, together with its multichannel recording nature, require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A further long-term goal is to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components that are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial patterns (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source within a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasure or displeasure evoked by a food for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded in a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
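
    For readers unfamiliar with CSP, the minimal real-valued sketch below (NumPy/SciPy) illustrates the standard two-class CSP computation that the thesis' complex-domain and augmented-statistics extensions build on; the array shapes and the number of filter pairs are assumptions for the example.

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a: np.ndarray, trials_b: np.ndarray, n_pairs: int = 3):
        """Trials have shape (n_trials, n_channels, n_samples). Returns
        2*n_pairs spatial filters maximising variance for one class while
        minimising it for the other."""
        def mean_cov(trials):
            covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalised covariances
            return np.mean(covs, axis=0)
        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        # Generalised eigenvalue problem: ca w = lambda (ca + cb) w
        vals, vecs = eigh(ca, ca + cb)
        order = np.argsort(vals)                                  # ascending eigenvalues
        pick = np.r_[order[:n_pairs], order[-n_pairs:]]           # extremes of the spectrum
        return vecs[:, pick].T                                    # (2*n_pairs, n_channels)

    def csp_features(trial: np.ndarray, filters: np.ndarray) -> np.ndarray:
        """Log-variance of the spatially filtered trial: the usual CSP feature."""
        z = filters @ trial
        var = z.var(axis=1)
        return np.log(var / var.sum())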

    Signal Processing Combined with Machine Learning for Biomedical Applications

    This Master's thesis comprises four projects in the realm of machine learning and signal processing. The abstract is divided into four parts, presented as follows.
    Abstract 1: A Kullback-Leibler Divergence-Based Predictor for Inter-Subject Associative BCI. Inherent inter-subject variability in sensorimotor brain dynamics hinders the transferability of brain-computer interface (BCI) model parameters across subjects, so an individual training session is essential for effective BCI control to compensate for this variability. We report a Kullback-Leibler divergence (KLD)-based predictor for inter-subject associative BCI. An online dataset comprising left/right hand, both-feet, and tongue motor imagery tasks was used to show the correlation between the proposed inter-subject predictor and BCI performance. Linear regression between the KLD predictor and BCI performance showed a strong inverse correlation (r = -0.62). The KLD predictor can act as an indicator for generalized inter-subject associative BCI designs.
    Abstract 2: Multiclass Sensorimotor BCI Based on Simultaneous EEG and fNIRS. A hybrid BCI (hBCI) utilizes multiple data modalities to acquire brain signals during motor execution (ME) tasks. Studies have shown significant enhancements in the classification of binary-class ME-hBCIs; however, four-class ME-hBCI classification has yet to be performed with multiclass algorithms. We present a quad-class classification of ME-hBCI tasks from simultaneous EEG-fNIRS recordings. Appropriate features were extracted from the EEG-fNIRS signals, combined into hybrid features, and classified with a support vector machine. Results showed a significant increase in hybrid accuracy over the single modalities, demonstrating the hybrid method's capability to enhance performance.
    Abstract 3: Deep Learning for Improved Inter-Subject EEG-fNIRS Hybrid BCI Performance. Multimodality-based hybrid BCI has become popular for improving performance; however, the inherent inter-subject and inter-session variation in participants' brain dynamics poses obstacles to achieving high performance. This work presents an inter-subject hBCI to classify right/left-hand MI tasks from simultaneous EEG-fNIRS recordings of 29 healthy subjects. State-of-the-art features were extracted from the EEG-fNIRS signals, combined into hybrid features, and finally classified using a deep long short-term memory (LSTM) classifier. Results showed an increase in inter-subject performance for the hybrid system while making it more robust to changes in brain dynamics, and hint at the feasibility of EEG-fNIRS-based inter-subject hBCI.
    Abstract 4: Microwave-Based Glucose Concentration Classification by Machine Learning. Non-invasive blood sugar measurement has attracted increasing attention in recent years, given the rise in diabetes-related complications and the inconvenience of traditional blood-based methods. This work utilized machine learning (ML) algorithms to classify glucose concentration (GC) from measured broadband microwave scattering signals (S11). An N-type microwave adapter pair was used to measure the swept-frequency scattering parameters (S-parameters) of glucose solutions with GC varying from 50-10,000 dg/dL. Dielectric parameters were retrieved from the measured wideband complex S-parameters based on the modified Debye dielectric dispersion model. Results indicate that the best algorithm achieves perfect classification accuracy and suggest an alternative way to develop a GC detection method using ML algorithms.
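
    As a hedged sketch of the divergence computation behind Abstract 1, the snippet below models each subject's feature distribution as a multivariate Gaussian and uses the KLD from a reference subject as a candidate transferability predictor; the Gaussian assumption and the generic feature matrices are illustrative choices, not the thesis' exact procedure.

    import numpy as np

    def gaussian_kld(mu0, cov0, mu1, cov1) -> float:
        """KL( N(mu0, cov0) || N(mu1, cov1) ) for k-dimensional Gaussians."""
        k = mu0.shape[0]
        cov1_inv = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(cov1_inv @ cov0)
                      + diff @ cov1_inv @ diff
                      - k
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    def subject_divergence(features_a: np.ndarray, features_b: np.ndarray) -> float:
        """features_*: (n_trials, n_features) arrays, e.g. band-power features.
        A larger divergence would be expected to predict poorer cross-subject
        transfer, consistent with the inverse correlation reported above."""
        mu_a, mu_b = features_a.mean(axis=0), features_b.mean(axis=0)
        cov_a = np.cov(features_a, rowvar=False)
        cov_b = np.cov(features_b, rowvar=False)
        return gaussian_kld(mu_a, cov_a, mu_b, cov_b)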

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data are generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection, and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
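
    As a minimal, self-contained example of the kind of pipeline this review surveys, the sketch below chains band-pass filtering (feature extraction), log band-power features, and linear discriminant analysis (classification); practical MI decoders usually add spatial filtering (e.g. CSP) and feature selection, which are omitted here for brevity.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def bandpass(trials: np.ndarray, fs: float, lo: float = 8.0, hi: float = 30.0) -> np.ndarray:
        """Filter (n_trials, n_channels, n_samples) trials to the mu/beta band."""
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, trials, axis=-1)

    def log_bandpower(trials: np.ndarray) -> np.ndarray:
        """One log-variance feature per channel and trial."""
        return np.log(trials.var(axis=-1))

    def fit_mi_decoder(trials: np.ndarray, labels: np.ndarray, fs: float):
        """Fit a simple MI decoder; returns a trained LDA classifier."""
        features = log_bandpower(bandpass(trials, fs))
        return LinearDiscriminantAnalysis().fit(features, labels)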

    Network-based brain computer interfaces: principles and applications

    Brain-computer interfaces (BCIs) make it possible to interact with the external environment by decoding the mental intention of individuals. BCIs can therefore be used to address basic neuroscience questions but also to unlock a variety of applications, from exoskeleton control to neurofeedback (NFB) rehabilitation. In general, BCI usability critically depends on the ability to comprehensively characterize brain functioning and correctly identify the user's mental state. To this end, much of the effort has focused on improving classification algorithms that take localized brain activities as input features. Despite considerable improvement, BCI performance is still unstable and, as a matter of fact, current features represent oversimplified descriptors of brain functioning. In the last decade, growing evidence has shown that the brain works as a networked system composed of multiple specialized and spatially distributed areas that dynamically integrate information. While more complex, looking at how remote brain regions functionally interact represents a grounded alternative for better describing brain functioning. Thanks to recent advances in network science, a modern field that draws on graph theory, statistical mechanics, data mining, and inferential modelling, scientists now have powerful means to characterize complex brain networks derived from neuroimaging data. Notably, summary features can be extracted from these networks to quantitatively measure specific organizational properties across a variety of topological scales. In this topical review, we aim to provide the state of the art supporting the development of a network-theoretic approach as a promising tool for understanding BCIs and improving usability.
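
    The sketch below illustrates the network-theoretic feature idea in its simplest form: threshold a functional connectivity matrix into a graph and extract a few summary metrics with networkx. The connectivity estimator, the fixed edge density, and the chosen metrics are assumptions for the example, not the review's prescribed pipeline.

    import numpy as np
    import networkx as nx

    def network_features(connectivity: np.ndarray, density: float = 0.2) -> dict:
        """connectivity: symmetric (n_channels, n_channels) matrix, e.g. alpha-band
        spectral coherence. Keep the strongest edges at the given density, then
        summarise the resulting graph with a few topological features."""
        n = connectivity.shape[0]
        c = connectivity.copy()
        np.fill_diagonal(c, 0.0)
        upper = c[np.triu_indices(n, k=1)]
        threshold = np.quantile(upper, 1.0 - density)   # keep the top `density` fraction of edges
        adjacency = (c >= threshold).astype(float)
        g = nx.from_numpy_array(adjacency)
        return {
            "mean_degree": float(np.mean([d for _, d in g.degree()])),
            "clustering": nx.average_clustering(g),
            "global_efficiency": nx.global_efficiency(g),
        }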

    Motor priming in virtual reality can augment motor-imagery training efficacy in restorative brain-computer interaction: a within-subject analysis

    The use of brain-computer interface (BCI) technology in neurorehabilitation provides new strategies to overcome stroke-related motor limitations. Recent studies have demonstrated the brain's capacity for functional and structural plasticity through BCI. However, it is not fully clear how we can take full advantage of the neurobiological mechanisms underlying recovery and how to maximize restoration through BCI. In this study, we investigate the role of multimodal virtual reality (VR) simulations and motor priming (MP) in an upper-limb motor-imagery BCI task in order to maximize the engagement of sensory-motor networks in a broad range of patients who can benefit from virtual rehabilitation training.

    Brain Music: a generative system for creating symbolic music from affective neural responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer among subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses variability among and within subjects through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in the application of artificial intelligence to music generation from brain signals. They offer new perspectives and tools for musical creation and research in emotional neuroscience. To conduct our experiments, we used public databases such as GigaScience, Affective Music Listening, and the Deap Dataset.
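
    For reference, the linear form of Centered Kernel Alignment, the similarity measure that the "Kernel Matching CKA" and Deep Centered Kernel Alignment steps build on, can be computed as in the plain NumPy sketch below; this is not the deep, learned variant used in the thesis.

    import numpy as np

    def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
        """x: (n_samples, d1), y: (n_samples, d2) feature matrices from two
        representations (e.g. EEG features and questionnaire-derived features).
        Returns a similarity in [0, 1]; 1 means identical up to rotation and scale."""
        x = x - x.mean(axis=0, keepdims=True)   # centre both feature sets
        y = y - y.mean(axis=0, keepdims=True)
        cross = np.linalg.norm(y.T @ x, "fro") ** 2
        norm_x = np.linalg.norm(x.T @ x, "fro")
        norm_y = np.linalg.norm(y.T @ y, "fro")
        return float(cross / (norm_x * norm_y))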

    Review of the BCI competition IV
