
    A Python-based Brain-Computer Interface Package for Neural Data Analysis

    Anowar, Md Hasan, A Python-based Brain-Computer Interface Package for Neural Data Analysis. Master of Science (MS), December, 2020, 70 pp., 4 tables, 23 figures, 74 references. Although a growing amount of research has been dedicated to neural engineering, only a handful of software packages are available for brain signal processing. Popular brain-computer interface packages depend on commercial products such as MATLAB. Moreover, almost every brain-computer interface package is designed for a specific neuro-biological signal; no single Python-based package supports motor imagery, sleep, and stimulated brain signal analysis. The need for a brain-computer interface package that can serve as a free alternative to commercial software motivated me to develop a toolbox on the Python platform. In this thesis, the structure of MEDUSA, a brain-computer interface toolbox, is presented, and its features are demonstrated on publicly available data sources. The MEDUSA toolbox provides a valuable tool for biomedical engineers and computational neuroscience researchers.
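    As a rough illustration of the kind of preprocessing such a toolbox performs (this is a minimal sketch with SciPy, not the MEDUSA API; the sampling rate, epoch length, and synthetic signal are assumptions), consider band-pass filtering a motor-imagery EEG epoch into the mu band (8-12 Hz):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250.0                       # sampling rate in Hz (assumed)
    t = np.arange(0, 4.0, 1.0 / fs)  # one 4-second epoch
    # Synthetic single-channel EEG: a 10 Hz mu rhythm buried in broadband noise.
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

    def bandpass(x, lo, hi, fs, order=4):
        """Zero-phase Butterworth band-pass filter."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    mu_band = bandpass(eeg, 8.0, 12.0, fs)
    print(mu_band.shape)  # (1000,)
    ```

    Zero-phase filtering (filtfilt) is a common choice here because it avoids introducing phase distortion into the oscillatory features that motor-imagery classifiers rely on.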

    A systematic review on artifact removal and classification techniques for enhanced MEG-based BCI systems

    Patients with neurological diseases may be completely paralyzed and unable to move, yet still able to think; their brain activity is then the only means by which they can interact with their environment. Brain-Computer Interface (BCI) research attempts to create tools that support subjects with such disabilities, and it has expanded rapidly over the past few decades owing to interest in creating a new kind of human-to-machine communication. Because magnetoencephalography (MEG) has better spatial and temporal resolution than other approaches, it is used to measure brain activity non-invasively. The recorded signal includes brain activity as well as noise and artifacts from numerous sources, and MEG can have a low signal-to-noise ratio because the magnetic fields generated by cortical activity are small compared to the artifacts and noise; applying the right techniques for noise and artifact detection and removal increases the signal-to-noise ratio. This article analyses various methods for removing artifacts as well as classification strategies, and offers a study of the influence of deep learning models on the BCI system. Furthermore, the various challenges in collecting and analyzing MEG signals, as well as possible research directions in MEG-based BCI, are examined.
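    One artifact-removal strategy commonly covered in such reviews is independent component analysis: decompose the multichannel recording into components, drop the ones that capture artifacts, and reconstruct the signal. The sketch below is illustrative only; scikit-learn's FastICA stands in for MEG-specific tooling, and the synthetic "sensors" and the rule for choosing the rejected component are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    fs, n_samples = 200, 2000
    t = np.arange(n_samples) / fs
    brain = np.sin(2 * np.pi * 11 * t)                    # neural-like rhythm
    blink = (np.abs(t % 1.0 - 0.5) < 0.05).astype(float)  # ocular-like artifact
    sources = np.c_[brain, blink]                         # (n_samples, 2)
    mixing = rng.normal(size=(2, 5))                      # project onto 5 "sensors"
    x = sources @ mixing + 0.05 * rng.normal(size=(n_samples, 5))

    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(x)                     # estimated sources

    # Reject the component most correlated with the blink template, then
    # reconstruct sensor-space data without it.
    artifact_idx = np.argmax([abs(np.corrcoef(components[:, i], blink)[0, 1])
                              for i in range(2)])
    components[:, artifact_idx] = 0.0
    cleaned = ica.inverse_transform(components)
    print(cleaned.shape)  # (2000, 5)
    ```

    In practice the artifact component is identified from a reference channel (EOG/ECG) or by visual inspection rather than from a known template as assumed here.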

    A statistical approach to the inverse problem in magnetoencephalography

    Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed: there are infinitely many mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. Here, we reformulate the MEG inverse problem by allowing the sources' locations and electrical moments to vary with time, and we model their time evolution using a state space model. Based on this predictive model, we investigate the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time, rather than fitting fixed source parameters. The new model is more realistic than common models and allows us to estimate the variation of each source's strength, orientation, and position. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, the new methods work in this setting without the need to tune a high-dimensional transition kernel, which would be very costly. The dimensionality of the unknown parameters is extremely large, and the size of the data is even larger. We use Parallel Virtual Machine (PVM) to speed up the computation.
    Comment: Published at http://dx.doi.org/10.1214/14-AOAS716 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
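    To make the sequential importance sampling idea concrete, here is a minimal sketch of a generic particle filter for a state-space model, not the authors' two proposed samplers: a scalar hidden state (e.g., a dipole strength) follows a random walk and is observed with Gaussian noise. All model parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 100, 500            # time steps, particles
    q, r = 0.1, 0.5            # process / observation noise std (assumed)

    # Simulate a hidden random-walk state and its noisy observations.
    truth = np.cumsum(q * rng.normal(size=T))
    obs = truth + r * rng.normal(size=T)

    particles = rng.normal(size=N)
    estimates = np.empty(T)
    for k in range(T):
        particles += q * rng.normal(size=N)                 # propagate via prior
        w = np.exp(-0.5 * ((obs[k] - particles) / r) ** 2)  # likelihood weights
        w /= w.sum()
        estimates[k] = w @ particles                        # posterior mean
        idx = rng.choice(N, size=N, p=w)                    # resample
        particles = particles[idx]

    print(float(np.mean((estimates - truth) ** 2)))  # small tracking error
    ```

    The appeal of this family of methods, as the abstract notes, is that propagating particles through the state equation sidesteps the need to design and tune a high-dimensional MCMC transition kernel.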