292 research outputs found

    Efficient human-machine control with asymmetric marginal reliability input devices

    Input devices such as motor-imagery brain-computer interfaces (BCIs) are often unreliable. In theory, channel coding can be used in the human-machine loop to robustly encapsulate intention through noisy input devices, but standard feedforward error-correction codes cannot be practically applied. We present a practical and general probabilistic user interface for binary input devices with very high noise levels. Our approach allows any level of robustness to be achieved, regardless of noise level, wherever reliable feedback such as a visual display is available. In particular, we show efficient zooming interfaces based on feedback channel codes for two-class binary problems with noise levels characteristic of modalities such as motor-imagery-based BCI, with accuracy below 75%. We outline general principles based on separating channel, line and source coding in human-machine loop design. We develop a novel selection mechanism that can achieve arbitrarily reliable selection with a noisy two-state button. We show automatic online adaptation to changing channel statistics, and operation without precise calibration of error rates. A range of visualisations is used to construct user interfaces that implicitly code for these channels in a way that is transparent to users. We validate our approach with a set of Monte Carlo simulations and empirical results from a human-in-the-loop experiment, showing that the approach operates effectively at 50-70% of the theoretical optimum across a range of channel conditions.
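The feedback-coding idea can be illustrated with a minimal sketch (not the authors' actual interface): model the noisy input as a binary symmetric channel and accumulate a Bayesian posterior over the two options until a confidence threshold is reached. This yields arbitrarily reliable selection at any flip probability below 50%; noise only increases the number of steps needed.

```python
import numpy as np

def select(target, p_flip=0.3, threshold=0.999, max_steps=500, rng=None):
    """Arbitrarily reliable binary selection through a noisy channel.

    Each step the user signals `target` through a binary symmetric
    channel that flips the bit with probability p_flip; the interface
    updates a posterior over the two options (the visual feedback tells
    the user what to signal next) and stops once one option is
    confident enough.
    """
    rng = rng or np.random.default_rng(0)
    posterior = np.array([0.5, 0.5])
    for _ in range(max_steps):
        # Noisy observation of the user's intended bit
        bit = target if rng.random() > p_flip else 1 - target
        # Likelihood of this observation under each hypothesis
        like = np.where(np.arange(2) == bit, 1 - p_flip, p_flip)
        posterior = posterior * like
        posterior /= posterior.sum()
        if posterior.max() >= threshold:
            break
    return int(posterior.argmax())
```

With p_flip = 0.3 (a 70% accurate input, in the abstract's range) selections are almost always correct, at the cost of more channel uses per selection.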

    Analysis of sensorimotor rhythms based on lower-limbs motor imagery for brain-computer interface

    Over recent years, significant advancements in the field of assistive technologies have been observed. One of the paramount needs driving development in the field, other than congenital or diagnosed chronic disorders, is the rising number of people affected worldwide by accidents, natural calamity (due to climate change), or warfare, resulting in spinal cord injuries (SCI), neural disorders, or amputation (interception) of limbs, which impede a person from living a normal life. In addition, more than ten million people in the world live with some form of handicap due to central nervous system (CNS) disorders. Biomedical devices for rehabilitation have been a centre of research focus for many years. For people with lost motor control or amputation, but unscathed sensory control, instigation of control signals from the source, i.e. electrophysiological signals, is vital for seamless control of assistive biomedical devices. Control signals, i.e. motion intentions, arise in the sensorimotor cortex of the brain and can be detected using invasive or non-invasive modalities. With a non-invasive modality, electroencephalography (EEG) is used to record these motion intentions encoded in the electrical activity of the cortex, which is deciphered to recognize the user's intent for locomotion. The intent is then transferred to the actuator, or end effector, of the assistive device for control purposes. This can be executed via brain-computer interface (BCI) technology. BCI is an emerging research field that establishes a real-time bidirectional connection between the human brain and a computer/output device. Amongst its diverse applications, neurorehabilitation to deliver sensory feedback and brain-controlled biomedical devices for rehabilitation are most popular.
While substantial literature exists on the control of upper-limb assistive technologies via BCI, less is known about lower-limb (LL) control of biomedical devices for navigation or gait assistance via BCI. The types of EEG signals compatible with an independent BCI are the oscillatory/sensorimotor rhythms (SMR) and event-related potentials (ERP). These signals have been used successfully in BCIs for navigation control of assistive devices. However, the ERP paradigm requires a voluminous setup for stimulus presentation to the user during operation of a BCI assistive device. In contrast, SMR does not require a large setup for activation of cortical activity; it instead depends on motor imagery (MI) produced synchronously or asynchronously by the user. MI is a covert cognitive process, also termed kinaesthetic motor imagery (KMI), and elicits clearly after rigorous training trials, in the form of event-related desynchronization (ERD) or synchronization (ERS), depending on the imagery activity or resting period. It usually comprises limb-movement tasks, but is not limited to them in a BCI paradigm. In order to produce detectable features that correlate to the user's intent, selection of the cognitive task is an important aspect of improving the performance of a BCI. MI used in BCI predominantly remains associated with the upper limbs, particularly the hands, due to the somatotopic organization of the motor cortex. The hand representation area is substantially large, in contrast to the anatomical location of the LL representation areas in the human sensorimotor cortex. The LL area is located within the interhemispheric fissure, i.e. between the mesial walls of both hemispheres of the cortex. This makes it arduous to detect EEG features prompted upon imagination of the LL.
Detailed investigation of the ERD/ERS in the mu and beta oscillatory rhythms during left and right LL KMI tasks is required, as the user's intent to walk is of paramount importance in everyday activity. This is an important area of research, alongside improvement of the existing rehabilitation systems that serve LL affectees. Though challenging, solving these issues is also imperative for the development of robust controllers that follow asynchronous BCI paradigms to operate LL assistive devices seamlessly. This thesis focuses on the investigation of cortical lateralization of ERD/ERS in the SMR, based on foot dorsiflexion KMI and knee extension KMI separately. This research infers the possibility of deploying these features in real-time BCI by finding the maximum possible classification accuracy from machine learning (ML) models. The EEG signal is non-stationary, characterized by individual-to-individual and trial-to-trial variability and a low signal-to-noise ratio (SNR), which is challenging. EEG data are high-dimensional, with a relatively low number of samples available for fitting ML models. These factors have made ML methods the tool of choice for analysing single-trial EEG data. Hence, the selection of an appropriate ML model for true detection of the class label without the tradeoff of overfitting is crucial. The feature extraction part of the thesis consisted of testing the band-power (BP) and the common spatial pattern (CSP) methods individually. The study focused on the synchronous BCI paradigm. This was to ensure the exhibition of SMR for the possibility of a practically viable control system in a BCI. For the left vs. right foot KMI, the objective was to distinguish the bilateral tasks, in order to use them as unilateral commands in a 2-class BCI for controlling/navigating a robotic/prosthetic LL for rehabilitation. The approach for left vs. right knee KMI was similar.
The research was based on four main experimental studies. In addition to these, the research includes a comparison of intra-cognitive tasks within the same limb, i.e. left foot vs. left knee and right foot vs. right knee tasks, respectively (Chapter 4). This added another novel contribution: findings based on the comparison of different tasks within the same LL. It provides a basis for increasing the dimensionality of control signals within one BCI paradigm, such as a BCI-controlled LL assistive device with multiple degrees of freedom (DOF) for restoration of locomotion function. This study was based on analysis of the statistically significant mu ERD feature using the BP feature extraction method. The first stage of this research comprised the left vs. right foot KMI tasks, wherein the ERD/ERS elicited in the mu-beta rhythms were analysed using the BP feature extraction method (Chapter 5). Three individual features, i.e. mu ERD, beta ERD, and beta ERS, were investigated on EEG topography and time-frequency (TF) maps, and the average time course of power percentage, using the common average reference and bipolar reference methods. A comparative study was drawn for both references to infer the optimal method. This was followed by ML, i.e. classification of the three feature vectors (mu ERD, beta ERD, and beta ERS), using linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbour (KNN) algorithms, separately. Finally, multiple-comparison statistical tests were done in order to predict the maximum possible classification accuracy amongst all paradigms for the most significant feature. All classifier models were supported with the statistical techniques of k-fold cross-validation and evaluation of the area under the receiver operating characteristic curve (AUC-ROC) for prediction of the true class label. The highest classification accuracy of 83.4% ± 6.72 was obtained with the KNN model for the beta ERS feature.
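The band-power ERD/ERS quantification described above is conventionally computed as the percentage power change relative to a baseline window. A minimal numpy sketch (the sampling rate, windows, and bands below are illustrative, not those of the thesis):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean FFT power of signal x within [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def erd_percent(trial, fs, band, baseline_s, task_s):
    """ERD/ERS as percentage band-power change from a baseline window:
    100 * (A - R) / R, where A is the task-window power and R the
    reference power. Negative values indicate desynchronization (ERD),
    positive values synchronization (ERS)."""
    b0, b1 = (int(s * fs) for s in baseline_s)
    t0, t1 = (int(s * fs) for s in task_s)
    ref = band_power(trial[b0:b1], fs, *band)
    act = band_power(trial[t0:t1], fs, *band)
    return 100.0 * (act - ref) / ref
```

For example, a mu-band (8-12 Hz) amplitude drop during an imagery window relative to rest yields a negative percentage, i.e. ERD.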
The next study aimed at enhancing the classification accuracy obtained from the previous one. It used the same cognitive tasks as the study in Chapter 5, but deployed a different methodology for feature extraction and classification. In this second study, ERD/ERS from the mu and beta rhythms were extracted using the CSP and filter bank common spatial pattern (FBCSP) algorithms, to optimize the individual spatial patterns (Chapter 6). This was followed by the ML process, for which supervised logistic regression (Logreg) and LDA were deployed separately. The maximum classification accuracy was 77.5% ± 4.23 with the FBCSP feature vector and the LDA model, with a maximum kappa coefficient of 0.55, which is in the moderate range of agreement between the two classes. The left vs. right foot discrimination results were nearly the same; however, the BP feature vector performed better than CSP. The third stage was based on the deployment of the novel cognitive task of left vs. right knee extension KMI. Analysis of the ERD/ERS in the mu-beta rhythms was done to verify cortical lateralization via the BP feature vector (Chapter 7). As in Chapter 5, the analysis of ERD/ERS features was done on the EEG topography and TF maps, followed by determination of the average time course and peak latency of feature occurrence. However, for this study, only the mu ERD and beta ERS features were considered, and the EEG recording method comprised only the common average reference. This was due to the established results from the earlier foot study in Chapter 5, where beta ERD features showed a lower average amplitude. The LDA and KNN classification algorithms were employed. Unexpectedly, the left vs. right knee KMI yielded the highest accuracy of 81.04% ± 7.5 and an AUC-ROC of 0.84, strong enough to be used in a real-time BCI as two independent control features. This was achieved with the KNN model for the beta ERS feature.
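A common way to implement the CSP spatial-pattern optimization mentioned above (a sketch, not the exact pipeline of the thesis) is to solve a generalized eigenvalue problem between the two class-average covariance matrices via whitening, keep the filters at both ends of the eigenvalue spectrum, and use log-variance of the filtered signals as classifier features:

```python
import numpy as np

def csp(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns for two-class EEG.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs spatial filters (rows) that maximize variance for
    one class while minimizing it for the other.
    """
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space
    evals, evecs = np.linalg.eigh(ca + cb)
    whit = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, u = np.linalg.eigh(whit @ ca @ whit.T)   # eigenvalues ascending in [0, 1]
    w = u.T @ whit                              # full filter matrix
    idx = np.r_[np.arange(n_pairs), np.arange(len(d) - n_pairs, len(d))]
    return w[idx]

def log_var_features(trials, w):
    """Normalized log-variance of CSP-filtered trials, the standard CSP feature."""
    z = np.einsum('fc,ncs->nfs', w, trials)
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

FBCSP extends this by running the same procedure per frequency band of a filter bank and selecting the most discriminative features across bands.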
The final study of this research followed the same paradigm as Chapter 6, but for the left vs. right knee KMI cognitive task (Chapter 8). This study primarily aimed at enhancing the accuracy resulting from Chapter 7, using the CSP and FBCSP methods with the Logreg and LDA models respectively. Results were in accordance with those of the already established foot KMI study, i.e. the BP feature vector performed better than CSP. The highest classification accuracy of 70.00% ± 2.85, with a kappa score of 0.40, was obtained with Logreg using the FBCSP feature vector. The results support the utilization of ERD/ERS in the mu and beta bands as independent control features for discrimination of the bilateral foot or the novel bilateral knee KMI tasks. The resulting classification accuracies indicate that any 2-class BCI employing unilateral foot or knee KMI is suitable for real-time implementation. In conclusion, this thesis demonstrates EEG pre-processing, feature extraction and classification methods capable of instigating a real-time BCI from the conducted studies. Beyond this, the critical aspects of latency in information transfer rate, SNR, and the tradeoff between dimensionality and overfitting need to be addressed during the design of a real-time BCI controller. The thesis also highlights the need for consensus on standardized cognitive tasks for MI-based BCI. Finally, the application of wireless EEG for portable assistance is essential, as it will help lay the foundations for the development of independent asynchronous BCI based on SMR.
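As a sanity check on the figures above: for a balanced two-class problem with symmetric errors, Cohen's kappa reduces to 2 x accuracy - 1, so 70% accuracy with kappa 0.40 is internally consistent. A small self-contained implementation of kappa, which corrects raw agreement for the agreement expected by chance:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the label marginals."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed agreement
    pe = sum((sum(t == c for t in y_true) / n) *           # chance agreement
             (sum(p == c for p in y_pred) / n) for c in labels)
    return (po - pe) / (1 - pe)
```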

    EEG-Based BCI Control Schemes for Lower-Limb Assistive-Robots

    Over recent years, the brain-computer interface (BCI) has emerged as an alternative communication system between the human brain and an output device. Deciphered intents, detected as electrical signals from the human scalp, are translated into control commands used to operate external devices, computer displays and virtual objects in real time. BCI provides augmentative communication by creating a muscle-free channel between the brain and output devices, primarily for subjects with neuromotor disorders or trauma to the nervous system, notably spinal cord injuries (SCI), and subjects with unaffected sensorimotor functions but disarticulated or amputated residual limbs. This review identifies the potential of electroencephalography (EEG) based BCI applications for locomotion and mobility rehabilitation. Patients could benefit from advancements such as wearable lower-limb (LL) exoskeletons, orthoses, prostheses, wheelchairs, and assistive-robot devices. The EEG communication signals employed by the aforementioned applications, which also offer feasibility for future development in the field, are sensorimotor rhythms (SMR), event-related potentials (ERP) and visual evoked potentials (VEP). The review is an effort to progress the development of the user's mental tasks related to the LL for BCI reliability and confidence measures. As a novel contribution, the reviewed BCI control paradigms for wearable LL and assistive robots are presented within a general control framework of hierarchical layers. It reflects the informatic interactions between the user, the BCI operator, the shared controller, the robotic device and the environment. Each sublayer of the BCI operator is discussed in detail, highlighting the feature extraction, classification and execution methods employed by the various systems. All applications' key features and their interactions with the environment are reviewed for EEG-based activity-mode recognition and presented in the form of a table.

    A brain-computer interface integrated with virtual reality and robotic exoskeletons for enhanced visual and kinaesthetic stimuli

    Brain-computer interfaces (BCI) allow the direct control of robotic devices for neurorehabilitation by measuring brain activity patterns that follow the user’s intent. In the past two decades, the use of non-invasive techniques such as electroencephalography and motor imagery in BCI has gained traction. However, many of the mechanisms that drive human proficiency in eliciting discernible signals for BCI remain unestablished. The main objective of this thesis is to explore and assess what improvements can be made to an integrated BCI-robotic system for hand rehabilitation. Chapter 2 presents a systematic review of BCI-hand robot systems developed from 2010 to late 2019 in terms of their technical and clinical reports. Around 30 studies were identified as eligible for review and, among these, 19 were still in their prototype or pre-clinical stages of development. A degree of inferiority was observed in these systems in providing the necessary visual and kinaesthetic stimuli during motor imagery BCI training. Chapter 3 discusses the theoretical background leading to the hypothesis that an enhanced visual and kinaesthetic stimulus, through a virtual reality (VR) game environment and a robotic hand exoskeleton, will improve motor imagery BCI performance in terms of online classification accuracy, class prediction probabilities, and electroencephalography signals. Chapters 4 and 5 focus on designing, developing, integrating, and testing a BCI-VR-robot prototype to address the research aims. Chapter 6 tests the hypothesis by performing a motor imagery BCI paradigm self-experiment with an enhanced visual and kinaesthetic stimulus against a control. A significant increase (p = 0.0422) in classification accuracy is reported for groups with enhanced visual stimulus through VR versus those without. Six out of eight sessions among the VR groups had a median class probability exceeding a pre-set threshold of 0.6.
Finally, the thesis concludes in Chapter 7 with a general discussion on how these findings suggest a role for new and emerging technologies such as VR and robotics in advancing BCI-robotic systems, and how the contributions of this work may help improve the usability and accessibility of such systems, not only in rehabilitation but also in skills learning and education.

    Error-related potentials for adaptive decoding and volitional control

    Locked-in syndrome (LIS) is a condition characterized by total or near-total paralysis with preserved cognitive and somatosensory function. For the locked-in, brain-machine interfaces (BMI) provide a level of restored communication and interaction with the world, though this technology has not reached its fullest potential. Several streams of research explore improving BMI performance but very little attention has been given to the paradigms implemented and the resulting constraints imposed on the users. Learning new mental tasks, constant use of external stimuli, and high attentional and cognitive processing loads are common demands imposed by BMI. These paradigm constraints negatively affect BMI performance by locked-in patients. In an effort to develop simpler and more reliable BMI for those suffering from LIS, this dissertation explores using error-related potentials, the neural correlates of error awareness, as an access pathway for adaptive decoding and direct volitional control. In the first part of this thesis we characterize error-related local field potentials (eLFP) and implement a real-time decoder error detection (DED) system using eLFP while non-human primates controlled a saccade BMI. Our results show specific traits in the eLFP that bridge current knowledge of non-BMI evoked error-related potentials with error-potentials evoked during BMI control. Moreover, we successfully perform real-time DED via, to our knowledge, the first real-time LFP-based DED system integrated into an invasive BMI, demonstrating that error-based adaptive decoding can become a standard feature in BMI design. In the second part of this thesis, we focus on employing electroencephalography error-related potentials (ErrP) for direct volitional control. These signals were employed as an indicator of the user’s intentions under a closed-loop binary-choice robot reaching task. 
Although this approach is technically challenging, our results demonstrate that ErrP can be used for direct control via binary selection and that, given the appropriate levels of task engagement and agency, single-trial closed-loop ErrP decoding is possible. Taken together, this work contributes to a deeper understanding of error-related potentials evoked during BMI control and opens new avenues of research for employing ErrP as a direct control signal for BMI. For the locked-in community, these advancements could foster the development of real-time, intuitive brain-machine control.
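The benefit of ErrP as a corrective signal in a binary-choice task can be sketched with a simple simulation (the detector rates below are illustrative, not results from this dissertation): whenever an imperfect ErrP detector fires, the binary choice is flipped, which corrects true errors at the price of occasionally undoing correct actions.

```python
import random

def effective_accuracy(p_correct, errp_tpr, errp_fpr, n=100_000, seed=7):
    """Closed-loop binary selection with an ErrP 'veto'.

    p_correct : accuracy of the primary intention decoder
    errp_tpr  : P(detector fires | the action was an error)
    errp_fpr  : P(detector fires | the action was correct)
    A fired detection flips the binary choice.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        correct = rng.random() < p_correct                    # primary decode
        fired = rng.random() < (errp_fpr if correct else errp_tpr)
        if fired:
            correct = not correct                             # flip the choice
        hits += correct
    return hits / n
```

Analytically, the effective accuracy is p_correct * (1 - fpr) + (1 - p_correct) * tpr; e.g. a 70% decoder with an 80%-sensitive, 10%-false-alarm ErrP detector reaches about 87%.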

    Near-Infrared Spectroscopy for Brain Computer Interfacing

    A brain-computer interface (BCI) gives those suffering from neuromuscular impairments a means to interact and communicate with their surrounding environment. A BCI translates physiological signals, typically electrical, detected from the brain to control an output device. A significant problem with current BCIs is the lengthy training period required for proficient usage, which can often lead to frustration and anxiety on the part of the user and may even lead to abandonment of the device. A more suitable and usable interface is needed to measure cognitive function more directly. In order to do this, new measurement modalities, signal acquisition and processing, and translation algorithms need to be addressed. This work implements a novel approach to BCI design, using noninvasive near-infrared spectroscopic (NIRS) techniques to develop a user-friendly optical BCI. NIRS is a practical non-invasive optical technique that can detect characteristic haemodynamic responses relating to neural activity. This thesis describes the use of NIRS to develop an accessible BCI system requiring very little user training. In harnessing the optical signal for BCI control, an assessment of NIRS signal characteristics is carried out and detectable physiological effects are identified for BCI development. The investigations into various mental tasks for controlling the BCI show that motor imagery functions can be detected using NIRS. The optical BCI (OBCI) system operates in real time, characterising the occurrence of motor imagery functions and allowing users to control a switch - a “Mindswitch”. This work demonstrates the great potential of optical imaging methods for BCI development and brings to light an innovative approach to this field of research.
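NIRS systems typically recover the haemodynamic response via the modified Beer-Lambert law, converting optical-density changes at two wavelengths into oxy- and deoxyhaemoglobin concentration changes. A sketch under illustrative assumptions (the extinction coefficients, wavelengths, and differential pathlength factor below are placeholders, not values from this thesis):

```python
import numpy as np

# Illustrative extinction coefficients (rows: ~760 nm, ~850 nm;
# columns: [HbO, HbR]). Real analyses use published tabulated values.
E = np.array([[1.4866, 3.8437],
              [2.5264, 1.7986]])

def mbll(dod_760, dod_850, distance_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: solve a 2x2 linear system to convert
    optical-density changes at two wavelengths into [dHbO, dHbR]."""
    path = distance_cm * dpf               # effective optical path length
    dod = np.array([dod_760, dod_850])
    return np.linalg.solve(E * path, dod)  # concentration changes
```

At ~760 nm deoxyhaemoglobin absorbs more strongly and at ~850 nm oxyhaemoglobin dominates, which is why two wavelengths straddling the isosbestic point suffice to separate the two species.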

    Detecting Command-Driven Brain Activity in Patients with Disorders of Consciousness Using TR-fNIRS

    Vegetative state (VS) is a disorder of consciousness often referred to as “wakefulness without awareness”. Patients in this condition experience normal sleep-wake cycles, but lack all awareness of themselves and their surroundings. Clinically, assessing consciousness relies on behavioural tests to determine a patient’s ability to follow commands. This subjective approach often leads to a high rate of misdiagnosis (~40%), where patients who retain residual awareness are misdiagnosed as being in a VS. Recently, functional neuroimaging techniques such as functional magnetic resonance imaging (fMRI) have allowed researchers to use command-driven brain activity to infer consciousness. Although promising, the cost and accessibility of fMRI hinder its use for frequent examinations. Functional near-infrared spectroscopy (fNIRS) is an emerging optical technology that is a promising alternative to fMRI. The technology is safe, portable and inexpensive, allowing for true bedside assessment of brain function. This thesis focuses on using time-resolved (TR) fNIRS, a variant of fNIRS with enhanced sensitivity to the brain, to detect brain function in healthy controls and patients with disorders of consciousness (DOC). Motor imagery (MI) was used to assess command-driven brain activity since this task has been extensively validated with fMRI. The feasibility of TR-fNIRS in detecting MI activity was first assessed on healthy controls, with fMRI used for validation. The results revealed excellent agreement between the two techniques, with an overall sensitivity of 93% in comparison to fMRI. Following these promising results, TR-fNIRS was used for rudimentary mental communication, with MI serving as an affirmative response to questions. Testing this approach on healthy controls revealed an overall accuracy of 76%. More interestingly, the same approach was used to communicate with a locked-in patient under intensive care.
The patient had residual eye movement, which provided a unique opportunity to confirm the fNIRS results. The TR-fNIRS results were in full agreement with the eye responses, demonstrating for the first time the ability of fNIRS to enable communication with a patient without prior training. Finally, this approach was used to assess awareness in DOC patients, revealing residual brain function in two patients who had also previously shown significant MI activity with fMRI.
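A decision rule of the kind used for MI-based yes/no communication can be sketched as follows (the windows, threshold, and test statistic are hypothetical illustrations, not the thesis's actual analysis): answer "yes" when the task-window HbO increase is reliably above the rest window across repeated trials of a question.

```python
import numpy as np

def answer_from_mi(hbo_trials, task_window, rest_window, z_thresh=1.64):
    """Hypothetical yes/no decoding rule.

    hbo_trials : (n_trials, n_samples) HbO time courses for one question
    Answer 'yes' if mean HbO during the motor-imagery window is
    significantly above the rest window (one-sided z-like statistic
    over the per-trial differences).
    """
    task = hbo_trials[:, task_window].mean(axis=1)
    rest = hbo_trials[:, rest_window].mean(axis=1)
    diff = task - rest
    z = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
    return ("yes" if z > z_thresh else "no"), z
```

The same rule applied to a question period with no MI-related activation (or a decrease) returns "no", which is how MI can serve as a binary answer channel.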

    ON THE INTERPLAY BETWEEN BRAIN-COMPUTER INTERFACES AND MACHINE LEARNING ALGORITHMS: A SYSTEMS PERSPECTIVE

    Today, computer algorithms use traditional human-computer interfaces (e.g., keyboard, mouse, gestures) to interact with and extend human capabilities across all knowledge domains, allowing them to make complex decisions underpinned by massive datasets and machine learning. Machine learning has seen remarkable success in the past decade in obtaining deep insights and recognizing unknown patterns in complex data sets, in part by emulating how the brain performs certain computations. As we increase our understanding of the human brain, brain-computer interfaces can benefit from the power of machine learning, both as an underlying model of how the brain performs computations and as a tool for processing high-dimensional brain recordings. The technology (machine learning) has come full circle and is being applied back to understanding the brain and the electrical residues of brain activity over the scalp (EEG). Similarly, domains such as natural language processing, machine translation, and scene understanding remain beyond the scope of purely automated machine learning algorithms and require human participation to be solved. In this work, we investigate the interplay between brain-computer interfaces and machine learning through the lens of end-user usability. Specifically, we propose systems and algorithms that enable synergistic and user-friendly integration between computers (machine learning) and the human brain (brain-computer interfaces). In this context, we provide our research contributions in two interrelated aspects: (i) applying machine learning to solve challenges with EEG-based BCIs, and (ii) enabling human-assisted machine learning with EEG-based human input and implicit feedback.