22 research outputs found

    Development and evaluation of a smartphone-based electroencephalography (EEG) system

    The aim of the study was to design, develop and evaluate a general-purpose EEG platform that integrates with a smartphone. The target specification was a system with 19 EEG channels, with data transferred to the smartphone over a Wi-Fi connection and stored there. The hardware was built around three ADS1299 integrated circuits, and the smartphone app was developed with the Unity game engine. The system was evaluated using recordings of alpha waves during periods of eye closure in participants, with a Bland-Altman statistical comparison against a clinical-grade EEG system. The smartphone was also used to deliver time-locked auditory stimuli in an oddball paradigm to evaluate the ability of the developed system to acquire event-related potentials (ERPs) during sitting and walking. No significant differences were found in alpha wave peak amplitude, frequency or area under the curve for either the intra-system comparison (two consecutive periods of alpha waves) or the inter-system comparison (the developed smartphone-based EEG system versus an FDA-approved system). The ERP results showed that the peak amplitude of the auditory P300 component to deviant tones was significantly higher than to standard tones during both sitting and walking. It is envisaged that our general-purpose EEG system will encourage other researchers to design and build their own specific versions rather than being limited by the fixed features of commercial products.
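
    The Bland-Altman comparison mentioned above reduces to a bias and 95% limits of agreement between the two systems. The sketch below is a minimal, hypothetical illustration of that computation in Python; the amplitude values are made up for the example and are not the study's data.

```python
# Hypothetical Bland-Altman sketch: compare per-participant alpha-peak amplitudes
# from a smartphone-based system against a reference clinical system.
# The numbers below are illustrative only.
import numpy as np

smartphone = np.array([8.2, 7.9, 9.1, 6.5, 7.4])   # alpha peak amplitude (uV), test system
clinical   = np.array([8.0, 8.1, 9.3, 6.3, 7.6])   # same participants, reference system

diff = smartphone - clinical
bias = diff.mean()                 # mean difference between systems
half = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement

print(f"bias = {bias:.2f} uV, limits of agreement = [{bias - half:.2f}, {bias + half:.2f}] uV")
```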

    Combining brain-computer interfaces with deep reinforcement learning for robot training: a feasibility study in a simulation environment

    Deep reinforcement learning (RL) is used as a strategy to teach robot agents how to autonomously learn complex tasks. While sparsity is a natural way to define a reward in realistic robot scenarios, it provides poor learning signals for the agent, making the design of good reward functions challenging. To overcome this challenge, learning from human feedback through an implicit brain-computer interface (BCI) is used. We combined a BCI with deep RL for robot training in a physically realistic 3-D simulation environment. In a first study, we compared the feasibility of different electroencephalography (EEG) systems (wet versus dry electrodes) and their application to the automatic classification of perceived errors during a robot task with different machine learning models. In a second study, we compared the performance of BCI-based deep RL training to feedback given explicitly by participants. Our findings from the first study indicate that a high-quality dry-electrode EEG system can provide a robust and fast method for automatically assessing robot behavior using a sophisticated convolutional neural network model. The results of our second study show that the implicit BCI-based deep RL approach, combined with the dry EEG system, can significantly accelerate the learning process in a realistic 3-D robot simulation environment. The performance of the BCI-trained deep RL model was even comparable to that achieved with explicit human feedback. Our findings support BCI-based deep RL methods as a valid alternative in human-robot applications where cognitively demanding explicit human feedback is not available.
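
    One way to read the implicit-feedback idea is that the output of the EEG error classifier is folded into the agent's reward. The snippet below is only a schematic sketch of such a reward-shaping step; the mapping, the weighting and the function name are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of implicit BCI reward shaping: an EEG error classifier's output
# is mapped to a scalar reward that augments the sparse task reward.
# The mapping and weight below are illustrative assumptions.
def bci_reward(error_probability: float, sparse_task_reward: float,
               weight: float = 1.0) -> float:
    """Combine the sparse environment reward with an implicit BCI signal.

    error_probability : P(perceived error) from an EEG classifier, in [0, 1].
    sparse_task_reward: reward emitted by the simulation (often 0).
    """
    implicit_reward = 1.0 - 2.0 * error_probability   # +1 if no error, -1 if error
    return sparse_task_reward + weight * implicit_reward

# Example: the robot acted correctly (low error probability) but the task reward is 0.
print(bci_reward(error_probability=0.1, sparse_task_reward=0.0))  # -> 0.8
```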

    Advancing Brain-Computer Interface System Performance in Hand Trajectory Estimation with NeuroKinect

    Brain-computer interface (BCI) technology enables direct communication between the brain and external devices, allowing individuals to control their environment using brain signals. However, existing BCI approaches face three critical challenges that hinder their practicality and effectiveness: a) time-consuming preprocessing algorithms, b) inappropriate loss function utilization, and c) unintuitive hyperparameter settings. To address these limitations, we present NeuroKinect, an innovative deep-learning model for accurate reconstruction of hand kinematics from electroencephalography (EEG) signals. The NeuroKinect model is trained on the Grasp and Lift (GAL) task data with a minimal preprocessing pipeline, improving computational efficiency. A notable improvement introduced by NeuroKinect is a novel loss function, denoted L_Stat, which addresses the discrepancy between correlation and mean squared error in hand kinematics prediction. Furthermore, our study emphasizes the scientific intuition behind parameter selection to enhance accuracy. We analyze the spatial and temporal dynamics of the motor movement task using event-related potential and brain source localization (BSL) results. This approach provides valuable insights into optimal parameter selection, improving the overall performance and accuracy of the NeuroKinect model. Our model demonstrates strong correlations between predicted and actual hand movements, with mean Pearson correlation coefficients of 0.92 (±0.015), 0.93 (±0.019), and 0.83 (±0.018) for the X, Y, and Z dimensions. The precision of NeuroKinect is evidenced by low mean squared errors (MSE) of 0.016 (±0.001), 0.015 (±0.002), and 0.017 (±0.005) for the X, Y, and Z dimensions, respectively.
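
    The abstract does not give the exact form of L_Stat, only that it balances correlation against mean squared error. The sketch below is therefore a hypothetical loss of that kind, blending MSE with a (1 - Pearson r) penalty, written in NumPy purely to illustrate the idea; it is not the published definition.

```python
# Illustrative correlation-aware loss: MSE plus a decorrelation penalty.
# The blend weight alpha and the exact form are assumptions, not L_Stat itself.
import numpy as np

def stat_style_loss(predicted: np.ndarray, actual: np.ndarray, alpha: float = 0.5) -> float:
    """Blend mean squared error with (1 - Pearson correlation); alpha weights the terms."""
    mse = np.mean((predicted - actual) ** 2)
    r = np.corrcoef(predicted, actual)[0, 1]   # Pearson correlation coefficient
    return alpha * mse + (1.0 - alpha) * (1.0 - r)

# Example with a synthetic 1-D hand-trajectory dimension.
t = np.linspace(0, 1, 100)
actual = np.sin(2 * np.pi * t)
predicted = actual + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(round(stat_style_loss(predicted, actual), 4))
```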

    Review of Wireless Brain-Computer Interface Systems


    Review on solving the forward problem in EEG source analysis

    Background. The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. Methods. While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers to this research field. Results. It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in realistically shaped head models obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods, anisotropically conducting compartments can conveniently be introduced. The focus then turns to the use of reciprocity in EEG source localization, which is introduced to speed up the forward calculations by performing them for each electrode position rather than for each dipole position. Solving Poisson's equation with FEM or FDM corresponds to solving a large sparse linear system, and iterative methods are required for such systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. Conclusion. Solving the forward problem has been well documented over the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model and the heterogeneity of the tissue types, and to realistically determine the conductivities. However, the determination and validation of in vivo conductivity values is still an important topic in this field. In addition, more studies are needed on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem.
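
    As a concrete illustration of the "large sparse linear system" step mentioned above, the sketch below solves a toy 1-D finite-difference Laplacian with SciPy's conjugate-gradient solver. The matrix is only a stand-in for the far larger system a realistic FDM head model would produce; the size and source placement are arbitrary assumptions.

```python
# Toy stand-in for the sparse systems arising from FDM/FEM discretisation of
# Poisson's equation: solve A x = b with the conjugate gradient method.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 50
# 1-D finite-difference Laplacian with Dirichlet boundaries (symmetric positive definite).
A = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.zeros(n)
b[n // 2] = 1.0            # unit "source" in the middle of the domain

x, info = cg(A, b)         # info == 0 means the iteration converged
print("cg info:", info, "| max potential:", round(float(x.max()), 4))
```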

    EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications.

    Brain-computer interfaces (BCIs) enhance the ability of human brain activity to interact with the environment. Recent advancements in technology and machine learning algorithms have increased interest in electroencephalographic (EEG)-based BCI applications. EEG-based intelligent BCI systems can facilitate continuous monitoring of fluctuations in human cognitive states under monotonous tasks, which is beneficial both for people in need of healthcare support and for researchers across domains. In this review, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, filling a gap in systematic summaries of the past five years. Specifically, we first review the current status of BCIs and of signal sensing technologies for collecting reliable EEG signals. Then, we describe state-of-the-art computational intelligence techniques, including fuzzy models and transfer learning in machine learning and deep learning algorithms, used to detect, monitor, and maintain human cognitive states and task performance in prevalent applications. Finally, we present a couple of innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCI research.

    Estimation of missing air pollutant data using a spatiotemporal convolutional autoencoder

    A key challenge in building machine learning models for time series prediction is the incompleteness of the datasets. Missing data can arise for a variety of reasons, including sensor failure and network outages, resulting in datasets that can be missing significant periods of measurements. Models built using these datasets can therefore be biased. Although various methods have been proposed to handle missing data in many application areas, the imputation of missing air quality data requires further investigation. This study proposes an autoencoder model with spatiotemporal considerations to estimate missing values in air quality data. The model consists of one-dimensional convolution layers, making it flexible enough to cover the spatial and temporal behaviour of air contaminants. The model exploits data from nearby stations to enhance predictions at a target station with missing data, and it does not require additional external features such as weather and climate data. The results show that the proposed method effectively imputes missing data for discontinuous and long-interval interrupted datasets. Our model achieves an RMSE improvement of up to 65% over univariate imputation techniques (most frequent, median and mean imputation) and of 20–40% over multivariate imputation techniques (decision tree, extra-trees, k-nearest neighbours and Bayesian ridge regressors). Imputation performance degrades when neighbouring stations are negatively or only weakly correlated.
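
    The following sketch shows what a small spatiotemporal 1-D convolutional autoencoder of the kind described above might look like in Keras. It is not the authors' architecture; the layer sizes, window length, station count and simulated gap are all illustrative assumptions.

```python
# Sketch of a 1-D convolutional autoencoder for gap imputation: channels are
# monitoring stations, convolutions run over the time axis, and the network is
# trained to reconstruct fully observed windows from windows with gaps.
import numpy as np
from tensorflow.keras import layers, models

n_stations, window = 8, 168          # e.g. 8 stations, one week of hourly readings

autoencoder = models.Sequential([
    layers.Input(shape=(window, n_stations)),
    layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    layers.Conv1D(16, kernel_size=5, padding="same", activation="relu"),
    layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    layers.Conv1D(n_stations, kernel_size=5, padding="same"),  # reconstruct all stations
])
autoencoder.compile(optimizer="adam", loss="mse")

# Toy training pair: inputs with a simulated gap (zeros), targets fully observed.
full = np.random.rand(64, window, n_stations).astype("float32")
gappy = full.copy()
gappy[:, 40:60, 2] = 0.0             # pretend station 2 lost 20 hours of data
autoencoder.fit(gappy, full, epochs=1, batch_size=16, verbose=0)
```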

    Scan Once, Analyse Many: Using large open-access neuroimaging datasets to understand the brain

    We are now in a time of readily available brain imaging data. Not only are researchers sharing data more than ever before, but large-scale data collection initiatives are also underway with the vision that many future researchers will use the data for secondary analyses. Here I provide an overview of available datasets and some example use cases. Example use cases include examining individual differences, obtaining more robust findings, supporting reproducibility (both through publicly available input data and through use as a replication sample), and methods development. I further discuss a variety of considerations associated with using existing data and the opportunities associated with large datasets. Suggestions for further reading on general neuroimaging and topic-specific discussions are also provided.