    Planar Control of a Quadcopter Using a Zero-Training Brain Machine Interface Platform

    Brain-machine interfaces (BMIs) enable promising applications in neuroprosthetics and neurorehabilitation by controlling robotic devices based on the subject's intentions. In contrast to earlier techniques based on sensorimotor rhythms, the method here directly extracts information about imagined body kinematics and can therefore significantly reduce training time. We developed a zero-training BMI platform that controls a quadcopter using noninvasively acquired brain signals. Scalp electroencephalography (EEG) signals reflecting a user's imagined movements are collected in real time and translated by a computer to steer a quadcopter along a designated path in two-dimensional space. The results show promising performance for the zero-training paradigm, which could be used by patients with BCI illiteracy to control neuroprosthetic limbs and neurorehabilitation devices.
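    As a rough illustration of this style of decoding, the sketch below implements continuous kinematics decoding as commonly done for imagined body kinematics: multiple linear regression from time-lagged, low-pass-filtered EEG to two-dimensional velocity. The sampling rate, channel count, lag depth, and synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of an imagined-body-kinematics decoder: ridge regression
# from time-lagged, low-pass-filtered EEG to 2-D velocity.
# All shapes/hyperparameters below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
fs = 128                                  # assumed sampling rate (Hz)
n_ch, n_samp, n_lags = 14, 5000, 10

eeg = rng.standard_normal((n_ch, n_samp))  # stand-in for scalp EEG
vel = rng.standard_normal((n_samp, 2))     # target 2-D velocity

# Low-pass filter (<1 Hz) to isolate the slow potentials that carry
# kinematic information in this paradigm.
b, a = butter(4, 1.0 / (fs / 2), btype="low")
eeg_lp = filtfilt(b, a, eeg, axis=1)

# Lagged feature matrix: each row stacks the last n_lags samples
# from every channel.
X = np.stack([eeg_lp[:, t - n_lags:t].ravel()
              for t in range(n_lags, n_samp)])
y = vel[n_lags:]

model = Ridge(alpha=1.0).fit(X, y)        # decoder weights
v_hat = model.predict(X[:1])              # predicted (vx, vy)
print("decoded velocity:", v_hat)
```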

    A Brain-Machine Interface for a Sequence Movement Control of a Robotic Arm

    Brain-machine interfaces (BMIs) have attracted growing interest in recent years. Brain activity is recorded by invasive or noninvasive approaches and translated into command signals to control external prosthetic devices such as computer cursors, wheelchairs, and robotic arms. Although many studies have confirmed the capability of BMI systems to control multi-degree-of-freedom (DOF) prosthetic devices using invasive approaches, research with noninvasive approaches is still at an early stage. In this work, a new BMI robotic platform was developed using noninvasive electroencephalography (EEG). EEG signals were acquired from 14 channels through the BCI2000 software (high-pass filtered at 0.1 Hz and low-pass filtered at 30 Hz). A low-cost 6-DOF robotic arm was used for real-time object manipulation between two fixed points in the workspace. A successful manipulation task consisted of six sequential movements of the robotic arm to pick up an object on the left side of the workspace and drop it at a fixed point on the right side. The programmed movements were controlled through a custom 4-target cursor-control task: the subject was instructed to use imagined body kinematics to control the cursor, and hence the robotic arm, by hitting the targets. After initial calibration and testing, the subject attempted to control the robotic arm; the most efficient target sequence consisted of hitting six targets in order. A run was counted as successful when the object was placed at the final point, regardless of intermediate mistakes. Across 10 manipulation runs, a 70% success rate was achieved for the overall object manipulation task (Table 1). These experiments serve as a proof of concept for the feasibility of a noninvasive BMI robotic platform for manipulation tasks with minimal training. In future work, we will test the platform on a larger subject population with more complicated tasks, such as manipulating an object between two arbitrary locations and in three-dimensional space.
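    The sequence logic described above can be pictured as a small state machine that advances one programmed arm movement each time the decoded cursor hits the expected target. The sketch below is a hypothetical rendering of that logic; the target names and movement labels are invented for illustration.

```python
# Hypothetical 4-target-to-6-movement mapping: a state machine that
# advances a programmed pick-and-place sequence on each expected hit.
SEQUENCE = [
    ("left",  "move_to_object"),
    ("down",  "lower_gripper"),
    ("up",    "grasp_and_lift"),
    ("right", "move_to_dropoff"),
    ("down",  "lower_object"),
    ("up",    "release_and_retract"),
]

def run_sequence(decoded_targets):
    """Step through the arm sequence; stray target hits do not abort
    the run (the task counts success regardless of mistakes)."""
    step = 0
    for hit in decoded_targets:
        expected, movement = SEQUENCE[step]
        if hit == expected:
            print(f"step {step + 1}: {movement}")
            step += 1
            if step == len(SEQUENCE):
                return True        # object placed at the final point
    return False

# Example run with one stray hit that is simply ignored.
hits = ["left", "down", "left", "up", "right", "down", "up"]
print("success:", run_sequence(hits))
```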

    Evaluating the Feasibility of Visual Imagery for an EEG-Based Brain–Computer Interface

    Visual imagery, the mental simulation of visual information from memory, could serve as an effective control paradigm for a brain-computer interface (BCI) because it directly conveys the user's intention through many natural ways of envisioning an intended action. However, initial investigations into visual imagery as a BCI control strategy have been unable to fully evaluate true spontaneous visual mental imagery. One major limitation of these prior works is that the target image is typically displayed immediately before the imagery period. This paradigm does not capture spontaneous mental imagery, as would be necessary in an actual BCI application, but something more akin to short-term retention in visual working memory. Results from the present study show that short-term visual imagery following the presentation of a specific target image provides a stronger, more easily classifiable neural signature in EEG than spontaneous visual imagery from long-term memory following an auditory cue for the image. We also show that short-term visual imagery and visual perception share commonalities in their most predictive electrodes and spectral features. However, visual imagery received greater influence from frontal electrodes, whereas perception was mostly confined to occipital electrodes. This suggests that visual perception is primarily driven by sensory information, whereas visual imagery has greater contributions from areas associated with memory and attention. This work provides the first direct comparison of short-term and long-term visual imagery tasks and offers greater insight into the feasibility of visual imagery as a BCI control strategy.
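    A minimal sketch of the kind of analysis such comparisons rest on: classifying imagery conditions from per-electrode spectral (band-power) features with a cross-validated linear classifier. The band edges, channel count, and synthetic epochs are assumptions, not the study's actual data or pipeline.

```python
# Band-power features per electrode, fed to a cross-validated linear
# classifier. All data and parameters are illustrative placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_trials, n_ch, n_samp = 256, 80, 32, 512
trials = rng.standard_normal((n_trials, n_ch, n_samp))  # stand-in epochs
labels = rng.integers(0, 2, n_trials)                   # condition labels

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpower_features(epoch):
    f, psd = welch(epoch, fs=fs, nperseg=256, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (f >= lo) & (f < hi)
        feats.append(psd[:, mask].mean(axis=-1))        # one value per channel
    return np.concatenate(feats)

X = np.array([bandpower_features(ep) for ep in trials])
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```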

    Clash of Minds: A BCI Car Racing Game in Simulated Virtual Reality Environment

    Introduction: In this work, we designed and developed a BCI car racing game in a three-dimensional (3D) virtual environment using the Unity software. The environment consists of two racing cars, tracks, and surrounding terrain that includes trees, grass, buildings, mountains, and the sky. Three cameras show a driver's view, a bird's-eye view, and a following camera's view. The cars' kinetic parameters are chosen to simulate the physical movements of a car. The two racing cars are separately controlled by two individual drivers' brainwaves. Each driver's EEG is monitored in real time using a 32-channel g.tec Nautilus system through Matlab and Simulink. The collected signals are analyzed online in Matlab using pre-trained machine learning algorithms to decode the intended kinematics. The algorithms are trained to classify the driver's instantaneous intention into five categories: moving forward, moving backward, turning left, turning right, and maintaining rest. A control signal is then calculated from the decoded kinematics and sent over TCP/IP to Unity to steer the car.

    Materials and Methods: We implement a hybrid decoding algorithm that combines the steady-state visual evoked potential (SSVEP) and imagined-body-kinematics (IBK) paradigms. SSVEP provides a relatively high signal-to-noise ratio (SNR) and information transfer rate (ITR) [1], while IBK provides natural imagined body movement [3]. A two-phase training protocol was designed to teach a subject to use the BCI. Signals collected during Phase 1 are used to train the SSVEP paradigm: canonical correlation analysis [2] computes, in real time, the canonical correlation between the projected EEG and the target frequencies (7.5, 10, 12, and 15 Hz). In Phase 2, we train a cross-validated multiple linear regression model to decode EEG during the IBK paradigm, conditioned on two classes, resting and pushing. The overall decoder combines the two paradigms, with IBK acting as a gating function: if the online model detects the "pushing" state, the direction indicated by the SSVEP paradigm is translated into virtual car movement; if the IBK model detects the "resting" state, the car remains stationary.

    Results: The platform, including the GUI, the Unity 3D environment, the training protocol, data acquisition, communication protocols, and the online and offline decoding algorithms, has been fully implemented and thoroughly tested. Current work focuses on recruiting subjects to improve the accuracy and robustness of the decoding algorithms.

    Conclusions: Conventional BCI-based biofeedback systems often fail to maintain a user's engagement and motivation, which makes it difficult to attain a satisfactory level of control. The proposed platform is designed to improve the motor imagery experience by giving visual feedback of the user's intention through a virtual car. It serves as a pilot for online BCI-based gaming and as an educational tool to promote public interest in BCI. The setup also has potential as a research tool for investigators in developmental psychology and behavioral science, and it opens the opportunity to evaluate users' immersion in virtual reality (VR) in a future study. The platform may thus provide an effective form of visual feedback that can lay a foundation for BCI applications.

    Acknowledgement: This work was supported in part by NEURONET and a SARIF grant from the University of Tennessee.

    References: [1] Bakardjian, H., Tanaka, T., and Cichocki, A. "Optimization of SSVEP brain responses with application to eight-command brain-computer interface." Neuroscience Letters 469.1 (2010): 34-38. [2] Zhang, Y., et al. "Frequency recognition in SSVEP-based BCI using multiset canonical correlation analysis." International Journal of Neural Systems 24.04 (2014): 1450013. [3] Bradberry, T. J., Gentili, R. J., and Contreras-Vidal, J. L. "Fast attainment of computer cursor control with noninvasively acquired brain signals." Journal of Neural Engineering 8.3 (2011): 036010.
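    A condensed sketch of the hybrid decoding loop described above: standard CCA-based SSVEP frequency recognition over the four target frequencies, gated by a binary IBK "pushing/resting" decision. The sampling rate, channel count, and reference-signal construction are assumptions, and the IBK decision is passed in as a placeholder flag rather than decoded.

```python
# CCA-based SSVEP frequency recognition gated by an IBK pushing/resting
# state, per the description above. Parameters are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                                  # assumed sampling rate (Hz)
TARGETS = [7.5, 10.0, 12.0, 15.0]         # stimulus frequencies from the text
COMMANDS = ["forward", "backward", "left", "right"]

def reference(freq, n_samp, harmonics=2):
    """Sine/cosine reference set for one target frequency."""
    t = np.arange(n_samp) / FS
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, harmonics + 1) for f in (np.sin, np.cos)])

def ssvep_command(eeg_window):
    """Pick the command whose reference maximizes canonical correlation."""
    n_samp = eeg_window.shape[1]
    scores = []
    for freq in TARGETS:
        u, v = CCA(n_components=1).fit_transform(
            eeg_window.T, reference(freq, n_samp))
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return COMMANDS[int(np.argmax(scores))]

def control_signal(eeg_window, ibk_is_pushing):
    # IBK acts as a gate: only translate SSVEP direction while "pushing".
    return ssvep_command(eeg_window) if ibk_is_pushing else "rest"

eeg = np.random.default_rng(2).standard_normal((32, FS))  # 1 s, 32 channels
print(control_signal(eeg, ibk_is_pushing=True))
```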

    Convolutional Neural Networks for a Cursor Control Brain Computer Interface

    A brain-computer interface (BCI) platform can be used by a patient to control an external device without making any overt movements. This can benefit a variety of patients who suffer from paralysis, limb loss, or neurodegenerative diseases. We decode EEG signals recorded during imagined body kinematics to control an on-screen cursor. Convolutional neural networks (CNNs) are already a popular choice for image-based learning problems and are useful in EEG applications. Their major advantage is that they generate features from the signal automatically and do not require as much domain-driven feature engineering as traditional machine learning approaches. We implement a CNN that performs multivariate regression over the EEG signal to predict intended cursor velocity.
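    A minimal sketch of such a network, assuming a PyTorch implementation: a small 1-D CNN that regresses two-dimensional cursor velocity from a window of multichannel EEG. The architecture sizes and window length are illustrative, not the authors' actual network.

```python
# Small 1-D CNN regressing (vx, vy) from a multichannel EEG window.
# Architecture and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class VelocityCNN(nn.Module):
    def __init__(self, n_ch=14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_ch, 32, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 2)       # outputs (vx, vy)

    def forward(self, x):                  # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

model = VelocityCNN()
eeg_window = torch.randn(8, 14, 128)       # batch of assumed 1-s windows
vel = model(eeg_window)                    # predicted velocities, shape (8, 2)
loss = nn.MSELoss()(vel, torch.zeros(8, 2))  # multivariate regression loss
print(vel.shape, float(loss))
```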

    Decoding Visual Attentional State using EEG-based BCI

    Visual attention facilitates the processing of visual input by rapidly focusing on perceptually salient information while ruling out irrelevant information. We developed a brain-computer interface (BCI) platform to decode brainwave patterns during sustained attention, collecting whole-head scalp electroencephalography (EEG) signals in real time with a wireless headset, together with concurrent behavioral data. Our experimental materials consisted of a series of composite images, each made by combining a scene image (indoor vs. outdoor) with a face image (female vs. male). Luminance values of all images were first equated in mean and standard deviation. There were four blocks, each comprising 50 composite images; two blocks began with priming faces and the other two with priming scenes. During the experiment, each participant pressed keyboard buttons to distinguish indoor from outdoor scenes in scene-primed blocks and male from female faces in face-primed blocks. We developed an individualized machine-learning model to decode visual attention from the EEG signals; the model detects instantaneous visual attention toward the face and scene categories. So far, six adult participants have taken part in the study. After extracting EEG spectral and temporal features, we selected the most significant features using an iterative stepwise feature-reduction algorithm. The results show that the average decoding accuracy of our model is highly correlated with the behavioral data: the average behavioral response was about 85%, and the average categorization between scene and face sets was 77%. Furthermore, the EEG decoding accuracy is comparable to previous findings using functional magnetic resonance imaging (fMRI) [1]. Findings of the present study may have clinical implications for diagnosing attention deficits in early stages of dementia or mild cognitive impairment (MCI) in elderly people [2], as well as attention deficit hyperactivity disorder (ADHD). The platform may also have future applications in assessing visual attention and closed-loop brainwave regulation.

    References: [1] Cohen, J. D., et al. "Closed-loop training of attention with real-time brain imaging." Nature Neuroscience 18.3 (2015): 470. [2] Jiang, Y., Abiri, R., and Zhao, X. "Tuning up the old brain with new tricks: attention training via neurofeedback." Frontiers in Aging Neuroscience 9 (2017): 52.

    Figure 1: Mean decoding accuracy and behavioral response for all subjects.
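    As an illustration of iterative feature reduction, the sketch below uses cross-validated recursive feature elimination (RFECV) with a linear classifier. RFECV is a common stand-in, not necessarily the authors' exact stepwise algorithm, and the feature matrix and labels are synthetic placeholders.

```python
# Recursive feature elimination over spectral/temporal EEG features with
# a linear classifier. Data and parameters are illustrative placeholders.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_features = 200, 120            # e.g. channels x bands x windows
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)           # face vs. scene attention label

selector = RFECV(
    estimator=LogisticRegression(max_iter=1000),
    step=5,                                # drop 5 weakest features per pass
    cv=5,
)
selector.fit(X, y)
print("features kept:", selector.n_features_)
print("indices of retained features:", np.flatnonzero(selector.support_)[:10])
```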
