13 research outputs found

    NeuroPrime: a Pythonic framework for the priming of brain states in self-regulation protocols

    Due to the recent pandemic and a general boom in technology, we face growing threats of isolation, depression, fear, and information overload, among others. In turn, these affect our Self, psychologically and physically. New tools are therefore required to assist in regulating this unregulated Self toward a more personalized, optimal, and healthy Self. To this end, we developed a Pythonic, open-source human-computer framework for the assisted priming of subjects to “optimally” self-regulate their neurofeedback (NF) with external stimulation, such as guided mindfulness. We conducted a three-part study in which we: 1) defined the foundations of the framework and its design for priming subjects to self-regulate their NF, 2) developed an open-source Python implementation of the framework, NeuroPrime, built for utility, expandability, and reusability, and 3) tested the framework in neurofeedback priming versus no-priming conditions. NeuroPrime is a research toolbox developed for the simple and fast integration of advanced online closed-loop applications. More specifically, it was validated and tuned for research on priming brain states in an EEG neurofeedback setup. In this paper, we explain the key aspects of the priming framework, the NeuroPrime software, and the design decisions, and we demonstrate and validate the toolbox by presenting use cases of priming brain states during a neurofeedback setup. MIT - Massachusetts Institute of Technology (PD/BD/114033/2015)
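    To make the closed-loop idea concrete, here is a minimal sketch of the kind of session such a framework orchestrates: an optional priming phase (e.g., guided mindfulness) followed by blocks in which a feedback value is computed from incoming EEG. The class and function names are illustrative placeholders, not the actual NeuroPrime API, and the signal source is simulated.

```python
# Illustrative sketch only: names are hypothetical, not the NeuroPrime API.
import time
import numpy as np

class MockEEG:
    """Stand-in signal source yielding one second of simulated 8-channel EEG."""
    def __init__(self, fs=250, n_channels=8):
        self.fs, self.n_channels = fs, n_channels

    def read(self):
        return np.random.randn(self.n_channels, self.fs)

def alpha_power(block, fs):
    """Mean 8-12 Hz power across channels from a simple FFT estimate."""
    freqs = np.fft.rfftfreq(block.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(block, axis=1)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return float(spectrum[:, band].mean())

def run_session(source, prime=True, n_blocks=3):
    if prime:
        print("Priming phase: present external stimulation (e.g., guided mindfulness)")
        time.sleep(0.1)  # placeholder for the priming stimulus duration
    for i in range(n_blocks):
        value = alpha_power(source.read(), source.fs)
        print(f"NF block {i}: feedback value {value:.2f}")  # drives the feedback display

run_session(MockEEG(), prime=True)
```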

    Tools for Brain-Computer Interaction: A General Concept for a Hybrid BCI

    The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. The BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use it at all; the BCI remains active in the background. The hBCI might, on the one hand, decide which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or, on the other hand, fuse various input channels. One major goal is therefore to bring BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide such varied functionality, collaboration between institutes is necessary. To allow for such a collaboration, a new concept and common software framework is introduced. It consists of four interfaces connecting the classical BCI modules: signal acquisition, preprocessing, feature extraction, classification, and the application. It also provides the concepts of fusion and shared control. In a proof of concept, the functionality of the proposed system was demonstrated.
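    As a rough illustration of the modular chain described here (signal acquisition, preprocessing, feature extraction, classification, application) together with a fusion step over several input channels, a sketch follows. The interface and function names are assumptions made for illustration and do not reproduce the published framework.

```python
# Sketch of a modular BCI pipeline with naive fusion; names are illustrative only.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Pipeline:
    acquire: Callable[[], np.ndarray]            # signal acquisition
    preprocess: Callable[[np.ndarray], np.ndarray]
    extract: Callable[[np.ndarray], np.ndarray]  # feature extraction
    classify: Callable[[np.ndarray], float]      # confidence in [0, 1]

def fuse(confidences: List[float]) -> float:
    """Naive fusion: average the per-channel confidences."""
    return float(np.mean(confidences))

def step(pipelines: List[Pipeline]) -> float:
    """One closed-loop iteration across all input channels (BCI, joystick, ...)."""
    scores = [p.classify(p.extract(p.preprocess(p.acquire()))) for p in pipelines]
    return fuse(scores)  # the application then acts on the fused decision

bci = Pipeline(lambda: np.random.randn(8, 250),
               lambda x: x - x.mean(axis=1, keepdims=True),
               lambda x: x.var(axis=1),
               lambda f: float(1 / (1 + np.exp(-f.mean()))))
print("fused control value:", step([bci]))
```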

    Brain-computer interfacing using modulations of alpha activity induced by covert shifts of attention

    Background: Visual brain-computer interfaces (BCIs) often yield high performance only when targets are fixated with the eyes. Furthermore, many paradigms use intense visual stimulation, which can be irritating, especially in long BCI sessions. However, BCIs can tap more directly into the neural processes underlying visual attention. Covert shifts of visual attention induce changes in oscillatory alpha activity in posterior cortex, even in the absence of visual stimulation. The aim was to investigate whether different pairs of directions of attention shifts can be reliably differentiated based on the electroencephalogram. To this end, healthy participants (N = 8) had to strictly fixate a central dot and covertly shift visual attention to one out of six cued directions. Results: Covert attention shifts induced a prolonged alpha synchronization over posterior electrode sites (PO and O electrodes). Spectral changes had specific topographies, so that different pairs of directions could be differentiated. There was substantial variation across participants with respect to the direction pairs that could be reliably classified. Mean accuracy for the best-classifiable pair amounted to 74.6%. Furthermore, an alpha power index obtained during a relaxation measurement proved predictive of peak BCI performance (r = .66). Conclusions: The results confirm posterior alpha power modulations as a viable input modality for gaze-independent EEG-based BCIs. The pair of directions yielding optimal performance varies across participants. Consequently, participants with low control for standard directions such as left-right might resort to other pairs of directions, including top and bottom. Additionally, a simple alpha index was shown to predict prospective BCI performance.
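    A compact sketch of the kind of analysis this implies: per-trial alpha-band (8-12 Hz) power over posterior channels, followed by pairwise classification of two attention directions. The data below are synthetic, and the channel count, trial numbers, and classifier choice are assumptions rather than the study's exact pipeline.

```python
# Alpha band-power features + pairwise classification on synthetic trials.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250
rng = np.random.default_rng(0)
trials = rng.standard_normal((120, 6, 2 * fs))   # 120 trials, 6 posterior channels, 2 s each
labels = np.repeat([0, 1], 60)                   # two covert-attention directions

def alpha_power(trial):
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[:, band].mean(axis=1)             # one alpha value per channel

features = np.array([alpha_power(t) for t in trials])
acc = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean()
print(f"cross-validated accuracy for this direction pair (chance ~0.5 on random data): {acc:.2f}")
```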

    Online detection of error-related potentials boosts the performance of mental typewriters

    Background: Increasing the communication speed of brain-computer interfaces (BCIs) is a major aim of current BCI research. The idea of automatically detecting error-related potentials (ErrPs) in order to veto erroneous BCI decisions has existed for more than a decade, but this approach has so far received little investigation in online mode. Methods: In our study with eleven participants, an ErrP detection mechanism was implemented in an electroencephalography (EEG) based gaze-independent visual speller. Results: Single-trial ErrPs were detected with a mean accuracy of 89.1% (AUC 0.90). The spelling speed was increased on average by 49.0% using ErrP detection. The improvement in spelling speed due to error detection was largest for participants with low spelling accuracy. Conclusion: The performance of BCIs can be increased by using an automatic error detection mechanism. The benefit for patients with motor disorders is potentially high, since they often have rather low spelling accuracies compared to healthy people.
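    A back-of-the-envelope simulation of why vetoing detected errors can raise spelling speed: without error detection, a wrong symbol costs an extra backspace selection, while a vetoed error is simply discarded. The selection accuracy and ErrP sensitivity used here are illustrative assumptions, not the study's figures.

```python
# Toy model only; probabilities are assumed values, not results from the paper.
import random

def selections_needed(n_symbols, p_correct, errp_sensitivity=None, runs=2000):
    """Average number of selections required to spell n_symbols symbols."""
    total = 0
    for _ in range(runs):
        typed, count = 0, 0
        while typed < n_symbols:
            count += 1
            if random.random() < p_correct:
                typed += 1                      # correct symbol accepted
            elif errp_sensitivity and random.random() < errp_sensitivity:
                continue                        # error detected and vetoed, nothing typed
            else:
                count += 1                      # wrong symbol typed, backspace needed
        total += count
    return total / runs

print("without ErrP veto:", selections_needed(10, p_correct=0.8))
print("with ErrP veto   :", selections_needed(10, p_correct=0.8, errp_sensitivity=0.9))
```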

    Preparation and execution of voluntary action both contribute to awareness of intention

    How and when motor intentions form has long been controversial. In particular, the extent to which motor preparation and action-related processes produce a conscious experience of intention remains unknown. Here, we used a brain–computer interface (BCI) while participants performed a self-paced movement task to trigger cues upon the detection of a readiness potential (a well-characterized brain signal that precedes movement) or in its absence. The BCI-triggered cues instructed participants either to move or not to move. Following this instruction, participants reported whether they felt they were about to move at the time the cue was presented. Participants were more likely to report an intention (i) when the cue was triggered by the presence of a readiness potential than when the same cue was triggered by its absence, and (ii) when they had just made an action than when they had not. We further describe a time-dependent integration of these two factors: the probability of reporting an intention was maximal when cues were triggered in the presence of a readiness potential, and when participants also executed an action shortly afterwards. Our results provide a first systematic investigation of how prospective and retrospective components are integrated in forming a conscious intention to move
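    A hedged sketch of the triggering idea: monitor a slow negative deflection over motor cortex and fire the cue when a recent window crosses a threshold. The channel, window length, and threshold below are placeholders chosen for illustration; a real implementation would typically rely on a trained classifier rather than a fixed threshold.

```python
# Illustrative threshold trigger on simulated data; not the study's detector.
import numpy as np

FS = 250                                   # sampling rate in Hz (assumed)
WINDOW = FS // 2                           # 500 ms analysis window

def readiness_detected(window_uv, threshold_uv=-5.0):
    """True if the window's mean amplitude is more negative than the threshold."""
    return float(np.mean(window_uv)) < threshold_uv

rng = np.random.default_rng(0)
signal = rng.standard_normal(4 * FS) - np.linspace(0, 10, 4 * FS)  # toy drift toward negativity

for start in range(0, len(signal) - WINDOW, WINDOW):
    if readiness_detected(signal[start:start + WINDOW]):
        print(f"cue triggered at t = {(start + WINDOW) / FS:.1f} s")
        break
```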

    Error Potential detection during driving operations of a powered wheelchair

    Modern technologies allow people with motor impairments (e.g., paraplegia and tetraplegia) to improve their general well-being. Among these technologies, one of particular interest is the brain-computer interface (BCI), which directly connects the brain with a machine for the execution of a specific task. This technology can be used to promote the implementation of a brain-controlled wheelchair, which would significantly increase the independence of people suffering from severe motor disabilities. The literature on the use of BCIs for assistance reveals a lack of studies on algorithms that can adjust the wheelchair's trajectory when a problem emerges. To address this, we analysed whether a specific brain response generated when an unexpected situation occurs, the error-related potential (ErrP), is reliably detectable; the final goal of this research was to create an offline algorithm for classifying these brain responses, in order to set a baseline for the implementation of an online trajectory-correction algorithm. The first chapter presents an overview of BCIs, the error-related potential, and possible applications of this technology. The following chapters highlight the motivation and main focus of our research, along with the experimental protocol, the results obtained, and a discussion of those results.
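    An illustrative sketch of the offline classification step this describes: epoch the EEG around event markers and train a classifier to separate error from non-error responses. The synthetic data, windowed-mean features, and classifier below are assumptions meant only to show the shape of such a pipeline.

```python
# Offline error / no-error classification on synthetic epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 256
rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 8, fs))        # 200 one-second epochs, 8 channels
labels = rng.integers(0, 2, size=200)             # 1 = unexpected wheelchair behaviour

def features(epoch):
    # mean amplitude in consecutive 100 ms windows, a common ErrP feature choice
    windows = np.array_split(epoch, 10, axis=1)
    return np.concatenate([w.mean(axis=1) for w in windows])

X = np.array([features(e) for e in epochs])
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
print(f"offline error/no-error accuracy (chance ~0.5 on random data): {acc:.2f}")
```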

    A Python-based Brain-Computer Interface Package for Neural Data Analysis

    Anowar, Md Hasan, A Python-based Brain-Computer Interface Package for Neural Data Analysis. Master of Science (MS), December 2020, 70 pp., 4 tables, 23 figures, 74 references. Although a growing amount of research has been dedicated to neural engineering, only a handful of software packages are available for brain signal processing. Popular brain-computer interface packages depend on commercial software products such as MATLAB. Moreover, almost every brain-computer interface software package is designed for a specific neurobiological signal; there is no single Python-based package that supports motor imagery, sleep, and stimulated brain signal analysis. The need for a brain-computer interface package that can serve as a free alternative to commercial software motivated me to develop a toolbox on the Python platform. In this thesis, the structure of MEDUSA, a brain-computer interface toolbox, is presented. The features of the toolbox are demonstrated with publicly available data sources. The MEDUSA toolbox provides a valuable tool for biomedical engineers and computational neuroscience researchers.

    Pyff – A Pythonic Framework for Feedback Applications and Stimulus Presentation in Neuroscience

    This paper introduces Pyff, the Pythonic feedback framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e., feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli as well as useful libraries for external hardware such as eye trackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain–computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable for use with any system that can be adapted to send its data in the specified format. Having such a general, open-source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need for reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.
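    To give a feel for the programming model described (subclassing a feedback base class and reacting to control signals arriving over the communication protocol), here is a self-contained sketch. The base class is a stand-in defined locally, and the class and method names are assumptions for illustration rather than the verified Pyff API.

```python
# Illustrative feedback class; not the verified Pyff base-class interface.
class FeedbackBase:                      # local stand-in for the framework's base class
    def on_init(self): ...
    def on_play(self): ...
    def on_control_event(self, data): ...

class BarFeedback(FeedbackBase):
    """Shows a classifier output as a bar that grows with the control signal."""
    def on_init(self):
        self.level = 0.0                 # current bar height in [0, 1]

    def on_play(self):
        print("feedback started")

    def on_control_event(self, data):
        # the BCI system sends classifier output via the communication protocol
        self.level = max(0.0, min(1.0, float(data.get("cl_output", 0.0))))
        print("bar level:", self.level)

fb = BarFeedback()
fb.on_init(); fb.on_play(); fb.on_control_event({"cl_output": 0.7})
```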

    A brain-machine interface for assistive robotic control

    Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS) – profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can afford LIS patients increased autonomy. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have the combined advances in hardware, software, and methodology provided a setting in which to translate this research from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), offer the only feasible solution for practical use at the moment, but suffer from limited communication rates and susceptibility to environmental noise. Maximizing the efficacy of each decoded intention is therefore critical. This thesis addresses the challenge of implementing a BMI intended for practical use, with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies upon code-modulated visual evoked potentials (c-VEPs) to infer user intent. As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach to assess ongoing online BMI performance is introduced. Next, an adaptive neural network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot builds an internal model for motor planning and coordination based on real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.
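    A minimal sketch of the template-matching idea behind c-VEP decoding: every target flickers with a circular shift of one pseudorandom code, and the decoder selects the shift whose template best correlates with the recorded response. The code length, shift values, and synthetic "EEG" below are illustrative assumptions; real decoders typically correlate spatially filtered responses averaged over stimulation cycles.

```python
# Template matching over circular shifts of a pseudorandom code (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
code = rng.integers(0, 2, size=63).astype(float)        # pseudorandom base code
shifts = [0, 8, 16, 24]                                  # one shift per target
templates = {s: np.roll(code, s) for s in shifts}

true_target = 16
response = templates[true_target] + 0.8 * rng.standard_normal(63)  # noisy simulated response

def decode(signal, templates):
    scores = {s: np.corrcoef(signal, t)[0, 1] for s, t in templates.items()}
    return max(scores, key=scores.get)                   # best-correlating shift wins

print("decoded target shift:", decode(response, templates))
```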