
    AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video

    Developing useful interfaces between brains and machines is a grand challenge of neuroengineering. An effective interface must not only interpret neural signals but also predict the human's intention to perform an action in the near future; prediction is even more challenging outside well-controlled laboratory experiments. This paper describes our approach to detecting and predicting natural human arm movements, a key challenge in brain-computer interfacing that has never before been attempted. We introduce the novel Annotated Joints in Long-term ECoG (AJILE) dataset; AJILE includes automatically annotated poses of 7 upper-body joints for four human subjects over 670 total hours (more than 72 million frames), along with the corresponding simultaneously acquired intracranial neural recordings. The size and scope of AJILE greatly exceed all previous datasets combining movements and electrocorticography (ECoG), making it possible to take a deep learning approach to movement prediction. We propose a multimodal model that combines deep convolutional neural networks (CNNs) with long short-term memory (LSTM) blocks, leveraging both the ECoG and video modalities. We demonstrate that our models can detect movements and predict future movements up to 800 ms before movement initiation. Further, our multimodal movement prediction models exhibit resilience to simulated ablation of input neural signals. We believe a multimodal approach to natural neural decoding that takes context into account is critical to advancing bioelectronic technologies and human neuroscience.
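    The abstract's models are deep CNN+LSTM networks; as a much simpler illustration of how movement events can be labeled from pose trajectories like AJILE's, the sketch below flags movement initiation when a joint's frame-to-frame speed crosses a threshold. The function name, the (x, y) pose format, and the threshold are hypothetical illustrations, not the paper's method.

    ```python
    import math

    def detect_onset(positions, dt, speed_thresh):
        """Return the index of the first frame whose joint speed exceeds
        speed_thresh (units/s), or None if the joint never moves that fast.
        positions: list of (x, y) pose coordinates, one per video frame;
        dt: seconds per frame."""
        for i in range(1, len(positions)):
            dx = positions[i][0] - positions[i - 1][0]
            dy = positions[i][1] - positions[i - 1][1]
            speed = math.hypot(dx, dy) / dt
            if speed > speed_thresh:
                return i
        return None
    ```

    A labeler like this could mark "movement" windows in long recordings, against which a predictive model is then trained to fire up to 800 ms earlier.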

    BRAIN ACTIVITIES FOR MOTOR MOVEMENT

    A Brain-Computer Interface (BCI) is a hardware and software system that allows interaction between the human brain and the surrounding environment without depending on muscles or peripheral nerves. The main objectives of this project are to design a BCI algorithm that takes electroencephalography (EEG) signals as its input and translates them into commands for movement control, and to test the performance of the designed algorithm on human subjects. The research covers the procedure of designing the BCI algorithm, which consists of three stages: recording EEG brain signals, pre-processing the EEG signals, and classifying the EEG signals. Classification is divided into two parts: feature extraction and feature classification. The multivariate adaptive autoregressive (MVAAR) method is used for feature extraction because it is suitable for motor imagery. Feature vectors are used to differentiate the brain activity signals associated with the user's attention, and Linear Discriminant Analysis (LDA) is used in the feature classification step. The MVAAR method could not extract discriminative features for the four movements, so the movements could not be classified.
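    The LDA step mentioned above can be sketched in closed form for two classes and two-dimensional feature vectors: project onto w = Sw⁻¹(m_A − m_B), where Sw is the pooled within-class scatter. This is a minimal sketch, not the project's code; `lda_train`/`lda_predict` are hypothetical names, and thresholding at the midpoint between class means assumes equal priors.

    ```python
    def _mean(vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def lda_train(class_a, class_b):
        """Two-class, 2-D Fisher LDA: w = Sw^-1 (m_a - m_b)."""
        ma, mb = _mean(class_a), _mean(class_b)
        s = [[0.0, 0.0], [0.0, 0.0]]          # pooled within-class scatter
        for vectors, m in ((class_a, ma), (class_b, mb)):
            for v in vectors:
                d = [v[0] - m[0], v[1] - m[1]]
                s[0][0] += d[0] * d[0]; s[0][1] += d[0] * d[1]
                s[1][0] += d[1] * d[0]; s[1][1] += d[1] * d[1]
        det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
        inv = [[ s[1][1] / det, -s[0][1] / det],
               [-s[1][0] / det,  s[0][0] / det]]
        dm = [ma[0] - mb[0], ma[1] - mb[1]]
        w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
             inv[1][0] * dm[0] + inv[1][1] * dm[1]]
        # decision threshold at the projected midpoint of the class means
        c = 0.5 * (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1]))
        return w, c

    def lda_predict(w, c, v):
        return 'A' if w[0] * v[0] + w[1] * v[1] > c else 'B'
    ```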

    Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices

    The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI) through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. The challenge remains how to appropriately combine this information in real time for a neural prosthetic device, i.e., fusing predictions from several single-modality decoders to produce a more accurate device state estimate. We examine two algorithms for continuous-variable decision fusion: the Kalman filter and artificial neural networks (ANNs). Using simulated cortical neural spike signals, we implemented several successful individual neural decoding algorithms, and tested the capabilities of each fusion method in the context of decoding 2-dimensional endpoint trajectories of a neural prosthetic arm. Extensively testing these methods on random trajectories, we find that on average both the Kalman filter and ANNs successfully fuse the individual decoder estimates to produce more accurate predictions. Our results reveal that a fusion-based approach has the potential to improve prediction accuracy over individual decoders of varying quality, and we hope that this work will encourage multimodal neural prosthetics experiments in the future.
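    The simplest instance of the decision-fusion idea — a sketch, not necessarily the paper's exact formulation — is the static special case of the Kalman measurement update: independent decoder estimates are combined by inverse-variance weighting, so the fused estimate is at least as certain as the best individual decoder.

    ```python
    def fuse(estimates):
        """Fuse independent decoder estimates, each given as
        (value, variance), by inverse-variance weighting.
        Returns (fused_value, fused_variance)."""
        num = sum(x / var for x, var in estimates)
        den = sum(1.0 / var for _, var in estimates)
        return num / den, 1.0 / den
    ```

    For example, fusing two equally uncertain decoders reporting 1.0 and 3.0 yields 2.0 with half the variance of either input; a per-axis filter of this kind could run at each timestep of a 2-D trajectory decode.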

    Real-Time Decoding of Brain Responses to Visuospatial Attention Using 7T fMRI

    Brain-computer interface (BCI) technologies aim to create new communication channels between our mind and our environment, independent of the motor system, by detecting and classifying self-regulation of local brain activity. BCIs can provide patients with severe paralysis a means to communicate and to live more independent lives. There has been growing interest in using invasive recordings for BCI to improve signal quality, which also potentially gives access to control strategies previously inaccessible to non-invasive methods. However, before surgery, the best implantation site needs to be determined. The blood-oxygen-level-dependent signal changes measured with fMRI have been shown to agree well spatially with those found with invasive electrodes, and are the best option for pre-surgical localization. We show, using real-time fMRI at 7 T, that eye-movement-independent visuospatial attention can be used as a reliable control strategy for BCIs. At this field strength, even subtle signal changes can be detected in single trials thanks to the high contrast-to-noise ratio. A group of healthy subjects was instructed to move their attention between three spatial target regions (two peripheral and one central) while keeping their gaze fixated at the center. The activated regions were first located, and the subjects were then given real-time feedback based on the activity in these regions. All subjects managed to regulate local brain areas without training, which suggests that visuospatial attention is a promising new target for intracranial BCI. ECoG data recorded from one epilepsy patient showed that local changes in gamma power can be used to separate the three classes.
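    The gamma-power feature mentioned for the ECoG patient can be illustrated with a naive DFT-based band-power estimate. This is a sketch, not the study's pipeline: the window length, sampling rate, and band edges below are assumptions, and a real system would use an FFT with windowing.

    ```python
    import cmath
    import math

    def band_power(signal, fs, f_lo, f_hi):
        """Sum of DFT power in the [f_lo, f_hi] Hz band (naive O(n^2) DFT;
        fine for short analysis windows). fs is the sampling rate in Hz."""
        n = len(signal)
        power = 0.0
        for k in range(1, n // 2):
            f = k * fs / n
            if f_lo <= f <= f_hi:
                s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                        for i, x in enumerate(signal))
                power += abs(s) ** 2 / n
        return power
    ```

    A three-class separator could then, per trial, compare gamma-band (e.g. 30-100 Hz) power across the electrodes covering the three attention-related regions and pick the most active one.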

    Are Brain-Computer Interfaces Feasible with Integrated Photonic Chips?

    The present paper examines the viability of a radically novel idea for brain-computer interfaces (BCIs), which could lead to novel technological, experimental and clinical applications. BCIs are computer-based systems that enable either one-way or two-way communication between a living brain and an external machine. BCIs read out brain signals and transduce them into task commands, which are performed by a machine. In closed loop, the machine can stimulate the brain with appropriate signals. In recent years, it has been shown that there is some ultraweak light emission from neurons within or close to the visible and near-infrared parts of the optical spectrum. Such ultraweak photon emission (UPE) reflects the cellular (and body) oxidative status, and compelling pieces of evidence are beginning to emerge that UPE may well play an informational role in neuronal functions. In fact, several experiments point to a direct correlation between UPE intensity and neural activity, oxidative reactions, EEG activity, cerebral blood flow, cerebral energy metabolism, and release of glutamate. Therefore, we propose a novel skull-implant BCI that uses UPE. We suggest that a photonic integrated chip installed on the interior surface of the skull may enable a new form of extraction of the relevant features from the UPE signals. In the current technology landscape, photonic technologies are advancing rapidly and are poised to overtake many electrical technologies, due to their unique advantages, such as miniaturization, high speed, low thermal effects, and large integration capacity that allow for high yield, volume manufacturing, and lower cost. For our proposed BCI, we are making some very major conjectures, which need to be experimentally verified, and therefore we discuss the controversial parts, the feasibility and limitations of the technology, and the potential impact of this envisaged technology if successfully implemented in the future.
    BERC.2018-2021 Severo Ochoa.SEV-2017-071

    MS

    Brain-machine interfaces allow one to use one's thoughts to control the actions of a machine. To accomplish this, the interface must record brain signals and send them to a signal processor to be decoded. The recording of brain signals such as the firing of individual nerves and electroencephalograms has been used for some time to control machines. Recently it has been discovered that electrocorticograms are also a viable brain signal for controlling machines. This thesis covers the design and testing of the amplifiers and digital control for an integrated circuit that can record electrocorticograms and broadcast such data from multiple electrodes. The chip is powered wirelessly by an inductive link operating at 2.765 MHz. This same link uses amplitude modulation to send commands to the chip. Data can be collected from 100 electrodes. Each electrode is capacitively coupled to an amplifier. Each amplifier's output can be multiplexed to an ADC, digitized, and then broadcast off chip via an RF transmitter. The chip was fabricated in a commercially available 0.6 μm, 2-poly, 3-metal BiCMOS process. The chip was tested, and all functions of the chip performed within their respective design tolerances. Specifically, the amplifier's bandwidth ranges from 0.05 Hz to a programmable high cut-off frequency of 79 Hz to 240 Hz. The amplifier has an electrode-referred noise of 3.5 μV and requires 4.5 μW of power. The transmission of data from multiple electrodes was also tested, and it was found that individual electrode data could be reconstructed.
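    If the multiplexed samples are broadcast as one interleaved stream (channel 0, channel 1, ..., channel 99, then channel 0 again), per-electrode data can be reconstructed on the receiver side by de-interleaving. The round-robin channel ordering here is an assumption, offered only as a sketch of the kind of reconstruction the thesis reports.

    ```python
    def demux(stream, n_channels):
        """De-interleave a time-division-multiplexed ADC sample stream
        into one list of samples per electrode channel, assuming a fixed
        round-robin scan order."""
        channels = [[] for _ in range(n_channels)]
        for i, sample in enumerate(stream):
            channels[i % n_channels].append(sample)
        return channels
    ```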

    Rehabilitation of gait after stroke: a review towards a top-down approach

    This document provides a review of the techniques and therapies used in gait rehabilitation after stroke. It also examines the possible benefits of including assistive robotic devices and brain-computer interfaces in this field, according to a top-down approach in which rehabilitation is driven by neural plasticity.