38 research outputs found

    AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video

    Developing useful interfaces between brains and machines is a grand challenge of neuroengineering. An effective interface must not only interpret neural signals but also predict the human's intention to perform an action in the near future; prediction becomes even more challenging outside well-controlled laboratory experiments. This paper describes our approach to detecting natural human arm movements and predicting future movements, a key challenge in brain-computer interfacing that has never before been attempted. We introduce the novel Annotated Joints in Long-term ECoG (AJILE) dataset; AJILE includes automatically annotated poses of 7 upper-body joints for four human subjects over 670 total hours (more than 72 million frames), along with the corresponding simultaneously acquired intracranial neural recordings. The size and scope of AJILE greatly exceed all previous datasets pairing movement with electrocorticography (ECoG), making it possible to take a deep learning approach to movement prediction. We propose a multimodal model that combines deep convolutional neural networks (CNNs) with long short-term memory (LSTM) blocks, leveraging both the ECoG and video modalities. We demonstrate that our models are able to detect movements and predict future movements up to 800 msec before movement initiation. Further, our multimodal movement prediction models exhibit resilience to simulated ablation of input neural signals. We believe a multimodal approach to natural neural decoding that takes context into account is critical to advancing bioelectronic technologies and human neuroscience.
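    As a rough illustration of the kind of architecture this abstract describes, here is a minimal PyTorch sketch of a two-stream CNN + LSTM model over ECoG and pose inputs. All layer sizes, channel counts, the 14 pose features (2 coordinates for each of 7 joints), and the late-fusion design are illustrative assumptions, not the authors' published model.

    ```python
    # Minimal sketch of a multimodal CNN + LSTM movement predictor
    # (illustrative assumptions throughout; not the paper's architecture).
    import torch
    import torch.nn as nn

    class MultimodalMovementPredictor(nn.Module):
        def __init__(self, n_ecog_channels=64, n_pose_features=14, hidden=128):
            super().__init__()
            # CNN encoder over the (channels x time) ECoG window
            self.ecog_cnn = nn.Sequential(
                nn.Conv1d(n_ecog_channels, 64, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=7, padding=3),
                nn.ReLU(),
            )
            # One LSTM per modality integrates features over time
            self.ecog_lstm = nn.LSTM(64, hidden, batch_first=True)
            self.pose_lstm = nn.LSTM(n_pose_features, hidden, batch_first=True)
            # Fused final states -> binary move / no-move prediction
            self.head = nn.Linear(2 * hidden, 2)

        def forward(self, ecog, pose):
            # ecog: (batch, channels, time); pose: (batch, time, features)
            z = self.ecog_cnn(ecog).transpose(1, 2)     # (batch, time, 64)
            _, (h_ecog, _) = self.ecog_lstm(z)
            _, (h_pose, _) = self.pose_lstm(pose)
            fused = torch.cat([h_ecog[-1], h_pose[-1]], dim=-1)
            return self.head(fused)

    model = MultimodalMovementPredictor()
    logits = model(torch.randn(8, 64, 500), torch.randn(8, 50, 14))  # dummy batch
    ```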

    Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations

    Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Most ongoing efforts have focused on training decoders on specific, stereotyped tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in natural settings requires adaptive strategies and scalable algorithms that need minimal supervision. Here we propose an unsupervised approach to decoding neural states from human brain recordings acquired in a naturalistic context. We demonstrate our approach on continuous long-term electrocorticographic (ECoG) data recorded over many days from the brain surface of subjects in a hospital room, with simultaneous audio and video recordings. We first discovered clusters in the high-dimensional ECoG recordings and then annotated coherent clusters using speech and movement labels extracted automatically from the audio and video recordings. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Our results show that our unsupervised approach can discover distinct behaviors from ECoG data, including moving, speaking and resting. We verify the accuracy of our approach by comparing it to manual annotations. By projecting the discovered cluster centers back onto the brain, this technique opens the door to automated functional brain mapping in natural settings.
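    The cluster-then-annotate idea can be sketched in a few lines: discover clusters in ECoG feature vectors without any labels, then name each cluster by the automatically extracted behavior label that dominates it. The feature matrix, the label source, and the choice of k-means below are assumptions for illustration, not the paper's actual pipeline.

    ```python
    # Sketch of unsupervised decode-then-annotate: cluster ECoG features,
    # then label each cluster by its majority audio/video annotation.
    # Stand-in random data; real features and labels are assumed, not given.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    ecog_features = rng.standard_normal((5000, 100))   # (time windows, features)
    video_audio_labels = rng.choice(["move", "speak", "rest"], size=5000)

    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ecog_features)

    # Annotate each discovered cluster with its majority automatic label
    for k in range(3):
        labels, counts = np.unique(video_audio_labels[clusters == k], return_counts=True)
        print(f"cluster {k}: {labels[np.argmax(counts)]} ({counts.max()} of {counts.sum()} windows)")
    ```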

    A Novel Neural Network Classifier for Brain Computer Interface

    Brain-computer interfaces (BCIs) provide a non-muscular channel for controlling a device through electroencephalographic signals. The BCI system records the electroencephalogram (EEG) and detects specific patterns that initiate control commands for the device. The efficiency of a BCI depends on the methods used to process the brain signals and to classify the various signal patterns accurately. Because artifacts are present in the raw EEG signal, the signals must be preprocessed for efficient feature extraction. This paper proposes a BCI system that extracts EEG features using the discrete cosine transform (DCT). Two stages of filtering are used, a Butterworth filter followed by a 15-point Spencer moving-average filter, to remove random noise while maintaining a sharp step response. The signals are classified using the proposed semi-partial recurrent neural network, which achieves very good classification accuracy compared to conventional neural network classifiers. Keywords: Brain-Computer Interface (BCI), Electroencephalography (EEG), Discrete Cosine Transform (DCT), Butterworth filter, Spencer filter, Semi-Partial Recurrent Neural Network, Laguerre polynomial
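    A minimal sketch of the described preprocessing chain follows. The Spencer 15-point weights are the standard published ones, but the sampling rate, band edges, filter order, and feature count are illustrative assumptions the abstract does not specify.

    ```python
    # Sketch of the preprocessing chain: Butterworth band-pass,
    # 15-point Spencer moving-average smoothing, then DCT features.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.fft import dct

    fs = 256.0                                    # assumed EEG sampling rate (Hz)
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)

    # Spencer's 15-point weights (sum to 320, so the filter preserves the mean)
    spencer = np.array([-3, -6, -5, 3, 21, 46, 67, 74,
                        67, 46, 21, 3, -5, -6, -3]) / 320.0

    def eeg_features(raw, n_coeffs=32):
        """Band-pass, smooth with the Spencer filter, keep low-order DCT coefficients."""
        x = filtfilt(b, a, raw)                   # zero-phase Butterworth band-pass
        x = np.convolve(x, spencer, mode="same")  # Spencer 15-point smoothing
        return dct(x, norm="ortho")[:n_coeffs]    # compact DCT feature vector

    features = eeg_features(np.random.randn(1024))
    ```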

    Tongue Drive System: A Wireless Assistive Technology

    The "tongue drive system” is a tongue-operated assistive technology advanced for people with severe disability to manipulate their surroundings. The tongue is considered an amazing appendage in severely disabled people for working an assistive device. Tongue force consists of an array of hall-impact magnetic sensors to measure the magnetic discipline generated by way of a small permanent magnet secured at the tongue. [1] The sensor indicators are transmitted throughout a wi-fi link and processed to control the moves of a cursor on a computer display screen or to perform a powered wheelchair, a telephone, or different equipments. The foremost benefit of this technology is the possibility of capturing a big kind of tongue moves through processing a combination of sensor outputs. this would offer the person with a easy proportional manage rather than a switch primarily based on/off manipulate this is the premise of maximum existing technology. [2

    Brain-Computer Interfaces and Human-Computer Interaction


    Decoding spoken words using local field potentials recorded from the cortical surface

    Pathological conditions such as amyotrophic lateral sclerosis or damage to the brainstem can leave patients severely paralyzed but fully aware, in a condition known as 'locked-in syndrome'. Communication in this state is often reduced to selecting individual letters or words by arduous residual movements. More intuitive and rapid communication may be restored by directly interfacing with language areas of the cerebral cortex. We used a grid of closely spaced, nonpenetrating micro-electrodes to record local field potentials (LFPs) from the surface of face motor cortex and Wernicke's area. From these LFPs we successfully classified a small set of words on a trial-by-trial basis at levels well above chance. We found that the pattern of electrodes with the highest accuracy changed for each word, which supports the idea that closely spaced micro-electrodes can capture neural signals from independent neural processing assemblies. These results further support using cortical surface potentials (electrocorticography) in brain–computer interfaces, and show that LFPs recorded from the cortical surface (micro-electrocorticography) of language areas can be used to classify speech-related cortical rhythms and potentially restore communication to locked-in patients.
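    The trial-by-trial classification setup can be sketched as follows: one feature vector per trial (for example, per-electrode band power), a linear classifier, and cross-validated accuracy compared against chance. The data shapes, the 4-word vocabulary, and the use of logistic regression below are assumptions; the paper's actual features and classifier are not reproduced here.

    ```python
    # Sketch of trial-by-trial word classification from LFP features,
    # with stand-in random data in place of real recordings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_electrodes = 120, 16
    lfp_features = rng.standard_normal((n_trials, n_electrodes))  # e.g. band powers
    words = rng.integers(0, 4, size=n_trials)                     # 4-word vocabulary

    acc = cross_val_score(LogisticRegression(max_iter=1000), lfp_features, words, cv=5)
    print(f"mean accuracy {acc.mean():.2f} vs chance {1/4:.2f}")
    ```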
