
    A Brain-Computer Interface Augmented Reality Framework with Auto-Adaptive SSVEP Recognition

    Brain-Computer Interface (BCI) initially gained attention for developing applications that aid physically impaired individuals. Recently, the idea of integrating BCI with Augmented Reality (AR) has emerged, using BCI not only to enhance the quality of life for individuals with disabilities but also to develop mainstream applications for healthy users. One commonly used BCI signal pattern is the Steady-State Visually Evoked Potential (SSVEP), which captures the brain's response to flickering visual stimuli. SSVEP-based BCI-AR applications enable users to express their needs and wants by simply looking at corresponding command options. However, brain signals differ across individuals, necessitating per-subject SSVEP recognition. Moreover, muscle movements and eye blinks interfere with brain signals, so subjects are typically required to remain still during BCI experiments, which limits AR engagement. In this paper, we (1) propose a simple adaptive ensemble classification system that handles inter-subject variability, (2) present a simple BCI-AR framework that supports the development of a wide range of SSVEP-based BCI-AR applications, and (3) evaluate the performance of our ensemble algorithm in an SSVEP-based BCI-AR application with head rotations, demonstrating robustness to movement interference. Our testing on multiple subjects achieved a mean accuracy of 80% on a PC and 77% using the HoloLens AR headset, both of which surpass previous studies that incorporate individual classifiers and head movements. In addition, our visual stimulation time is 5 seconds, which is relatively short. The statistically significant results show that our ensemble classification approach outperforms individual classifiers in SSVEP-based BCIs.
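
    For concreteness, the sketch below shows one way such an adaptive ensemble might be assembled: several simple classifiers are trained on one subject's SSVEP features and their predictions are combined by majority vote. The classifier pool and the train_ensemble/predict_command helpers are illustrative assumptions, not the paper's implementation:

        # Minimal sketch (assumed design, not the paper's code): a majority-vote
        # ensemble of simple classifiers trained on one subject's SSVEP features,
        # e.g., spectral power at each stimulus flicker frequency.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def train_ensemble(X_train, y_train):
            """Fit a small pool of heterogeneous classifiers on per-subject data."""
            models = [
                LogisticRegression(max_iter=1000),
                SVC(kernel="rbf"),
                KNeighborsClassifier(n_neighbors=5),
            ]
            for model in models:
                model.fit(X_train, y_train)
            return models

        def predict_command(models, x):
            """Majority vote across the ensemble for one feature vector."""
            votes = [model.predict(x.reshape(1, -1))[0] for model in models]
            labels, counts = np.unique(votes, return_counts=True)
            return labels[np.argmax(counts)]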

    Millisecond-Timescale Local Network Coding in the Rat Primary Somatosensory Cortex

    Correlation among neocortical neurons is thought to play an indispensable role in mediating sensory processing of external stimuli. Temporal precision in this correlation has been hypothesized to enhance information flow along sensory pathways. Its role in mediating the integration of information at the output of these pathways, however, remains poorly understood. Here, we examined spike timing correlation between simultaneously recorded layer V neurons within and across columns of the primary somatosensory cortex of anesthetized rats during unilateral whisker stimulation. We used Bayesian statistics and information theory to quantify the causal influence between the recorded cells with millisecond precision. For each stimulated whisker, we inferred stable, whisker-specific dynamic Bayesian networks over many repeated trials, with a network similarity of 83.3±6% within whisker, compared to only 50.3±18% across whiskers. These networks provided approximately 6 times more information about whisker identity than the latency to first spike and 13 times more than the spike count of individual neurons examined separately. Furthermore, prediction of individual neurons' precise firing conditioned on knowledge of putative pre-synaptic cell firing was 3 times more accurate than prediction conditioned on stimulus onset alone. Taken together, these results suggest a temporally precise network coding mechanism that integrates information about vibrissa position and whisking kinetics across neighboring columns within layer V to mediate whisker movement by the motor areas that layer V innervates.
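
    As a rough illustration of the within- versus across-whisker comparison, the sketch below scores the similarity of two inferred networks as the fraction of directed edges on which their binary adjacency matrices agree; this edge-overlap metric is an assumption, not necessarily the paper's exact definition:

        # Minimal sketch (assumed metric): compare two inferred networks given as
        # binary adjacency matrices, where entry [i, j] = 1 means neuron i
        # putatively drives neuron j.
        import numpy as np

        def network_similarity(A, B):
            """Fraction of directed edges (self-loops excluded) on which the
            two inferred networks agree."""
            n = A.shape[0]
            off_diagonal = ~np.eye(n, dtype=bool)
            return np.mean(A[off_diagonal] == B[off_diagonal])

        # Example: networks inferred from two repeated trials of the same whisker.
        rng = np.random.default_rng(0)
        A = rng.integers(0, 2, size=(10, 10))
        B = A.copy()
        B[2, 5] ^= 1  # one edge differs between the two trials
        print(f"similarity: {network_similarity(A, B):.3f}")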

    A deep convolutional visual encoding model of neuronal responses in the LGN

    The Lateral Geniculate Nucleus (LGN) is one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as a target for recently developed visual prostheses, it is much less studied than the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus in addition to LGN neuronal firing history to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN of 12 anesthetized rats, for a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometrical-shape stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing rate windows. Overall mean correlation coefficients between the actual and predicted firing rates of 0.57 and 0.7 were achieved for the 10 ms and 50 ms firing rate windows, respectively. The results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models including the state-of-the-art Generalized Linear Model (GLM). These results indicate the potential of deep convolutional neural networks as viable models of LGN firing.
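
    The sketch below illustrates the general shape of such an encoder: a small convolutional network over the spatiotemporal stimulus whose features are concatenated with the neuron's firing history. The layer sizes and the LGNEncoder class are assumptions for illustration, not the paper's exact architecture:

        # Minimal sketch (assumed architecture, not the paper's exact model): a
        # small CNN that combines a spatiotemporal stimulus clip with the
        # neuron's recent firing history to predict the next firing-rate bin.
        import torch
        import torch.nn as nn

        class LGNEncoder(nn.Module):
            def __init__(self, history_len=10):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d((1, 4, 4)),  # collapse time, pool space
                    nn.Flatten(),                     # 8 * 1 * 4 * 4 = 128 features
                )
                self.head = nn.Sequential(
                    nn.Linear(128 + history_len, 64),
                    nn.ReLU(),
                    nn.Linear(64, 1),
                    nn.Softplus(),                    # firing rates are non-negative
                )

            def forward(self, stimulus, history):
                # stimulus: (batch, 1, frames, H, W); history: (batch, history_len)
                features = self.conv(stimulus)
                return self.head(torch.cat([features, history], dim=1))

        model = LGNEncoder()
        rate = model(torch.randn(2, 1, 10, 32, 32), torch.randn(2, 10))  # shape (2, 1)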

    Object recognition and localization enhancement in visual prostheses: a real-time mixed reality simulation

    Blindness severely affects a person's daily activities. Visual prostheses have been introduced to provide artificial vision to the blind, with the aim of helping them regain confidence and independence. In this article, we propose an approach that combines four image enhancement techniques to facilitate object recognition and localization for visual prosthesis users: clip art representation of objects, edge sharpening, corner enhancement, and electrode dropout handling. The proposed techniques are tested in a real-time mixed reality simulation environment that mimics the vision perceived by visual prosthesis users. Twelve experiments were conducted to measure the participants' performance in object recognition and localization, involving single objects, multiple objects, and navigation. To evaluate object recognition, we measured recognition time, recognition accuracy, and confidence level. For object localization, performance was measured with two metrics: grasping attempt time and grasping accuracy. The results demonstrate that applying all enhancement techniques simultaneously yields higher accuracy, higher confidence, and shorter recognition and grasping times than applying no enhancement or only pair-wise combinations of the techniques. Visual prostheses could benefit from the proposed approach to provide users with an enhanced perception.
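
    For illustration, the sketch below applies three of these enhancement steps to a grayscale frame using standard OpenCV operations; the kernel sizes, thresholds, and the dropout-handling strategy are assumptions, not the article's exact pipeline (clip art substitution is omitted because it additionally requires an object-recognition model):

        # Minimal sketch (assumed parameters and dropout strategy): three of the
        # enhancement steps applied to an 8-bit grayscale frame before phosphene
        # rendering.
        import cv2
        import numpy as np

        def enhance(frame, dropout_mask=None):
            # 1. Edge sharpening via unsharp masking.
            blurred = cv2.GaussianBlur(frame, (5, 5), 0)
            sharp = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

            # 2. Corner enhancement: brighten pixels near Harris corners.
            corners = cv2.cornerHarris(np.float32(sharp), 2, 3, 0.04)
            sharp[corners > 0.01 * corners.max()] = 255

            # 3. Electrode dropout handling: dilate surviving pixels into the
            #    regions covered by dead electrodes so information is not lost.
            if dropout_mask is not None:
                dilated = cv2.dilate(sharp, np.ones((3, 3), np.uint8))
                sharp = np.where(dropout_mask, dilated, sharp)
            return sharp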

    In-Silico Development and Assessment of a Kalman Filter Motor Decoder for Prosthetic Hand Control

    Up to 50% of amputees abandon their prostheses, partly due to rapid degradation of the control systems, which require frequent recalibration. The goal of this study was to develop a Kalman filter-based approach to decoding motoneuron activity into movement kinematics, thereby providing stable, long-term, accurate, real-time decoding. The Kalman filter-based decoder was examined via biologically varied datasets generated from a high-fidelity computational model of the spinal motoneuron pool. The estimated movement kinematics controlled a simulated MuJoCo prosthetic hand. This clear-box approach successfully estimated hand movements under eight varied physiological conditions with no retraining. A mean correlation coefficient of 0.98 and a mean normalized root mean square error of 0.06 over these eight datasets provide proof of concept that this decoder would maintain long-term integrity of performance while performing new, untrained movements. Additionally, the decoder operated in real time (~0.3 ms). Further results include robust performance of the Kalman filter when retrained under more severe post-amputation limitations in the type and number of remaining motoneurons. An additional analysis shows that the decoder achieves better accuracy when using the firing of individual motoneurons as input rather than aggregate pool firing. Moreover, the decoder was robust to noise affecting both the trained decoder parameters and the decoded motoneuron activity. These results demonstrate the utility of a proof-of-concept Kalman filter decoder that can help prosthetic control systems maintain accurate and stable real-time movement performance.
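
    The decode step itself is the textbook Kalman filter recursion; the sketch below shows it for neural decoding, with the state holding hand kinematics and the observation holding motoneuron firing. The matrix shapes and fitting procedure are generic assumptions rather than the study's specifics:

        # Minimal sketch of the standard Kalman filter decode loop: the state x
        # holds hand kinematics, and the observation z is the motoneuron firing
        # vector. A (state transition), H (observation), W and Q (noise
        # covariances) are fit from training data, e.g., by least squares.
        import numpy as np

        def kalman_decode(z_seq, A, H, W, Q, x0, P0):
            """Run the predict/update recursion over a sequence of observations."""
            x, P = x0, P0
            estimates = []
            for z in z_seq:
                # Predict: propagate kinematics through the state model.
                x = A @ x
                P = A @ P @ A.T + W
                # Update: correct the prediction with observed firing rates.
                S = H @ P @ H.T + Q
                K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
                x = x + K @ (z - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
                estimates.append(x.copy())
            return np.array(estimates)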

    Inferring neuronal functional connectivity using dynamic Bayesian networks
