4 research outputs found

    Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks

    Despite the well-recognized role of the posterior parietal cortex (PPC) in processing sensory information to guide action, little is known about how this dynamic processing is differentially encoded across PPC areas. Within the monkey PPC, the superior parietal lobule hosts areas V6A, PEc, and PE, which belong to the dorso-medial visual stream specialized in planning and guiding reaching movements. Here, a Convolutional Neural Network (CNN) approach is used to investigate how information is processed in these areas. We trained two macaque monkeys to perform a delayed reaching task towards 9 positions (distributed over 3 depth and 3 direction levels) in 3D peripersonal space. Single-cell activity was recorded from V6A, PEc, and PE and fed to convolutional neural networks designed and trained to exploit the temporal structure of neuronal activation patterns and decode the target positions reached by the monkey. Bayesian Optimization was used to define the main CNN hyper-parameters. In addition to discrete positions in space, the same network architecture was used to decode plausible reaching trajectories. We found that data from the more caudal areas V6A and PEc outperformed data from area PE in spatial position decoding. In all areas, decoding accuracy began to increase at the time the target to reach was instructed to the monkey, and reached a plateau at movement onset. The results support a dynamic encoding of the different phases and properties of the reaching movement, differentially distributed over a network of interconnected areas. This study highlights the usefulness of decoding neuronal firing rates via CNNs to improve our understanding of how sensorimotor information is encoded in the PPC to perform reaching movements.
The results may have implications for novel neuroprosthetic devices that decode these rich signals to faithfully carry out patients' intentions.
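The decoding pipeline this abstract describes can be sketched as a toy forward pass: a temporal convolution over a neurons × time-bins matrix of firing rates, pooled over time and read out into one of the 9 target classes. All dimensions and the random, untrained parameters below are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 50 recorded neurons,
# 40 time bins of firing rates per trial, 9 reach targets.
n_neurons, n_bins, n_targets = 50, 40, 9

def conv1d_temporal(x, kernels):
    """Valid 1-D convolution along the time axis.
    x: (n_neurons, n_bins); kernels: (n_filters, n_neurons, k)."""
    n_filters, _, k = kernels.shape
    out = np.empty((n_filters, x.shape[1] - k + 1))
    for f in range(n_filters):
        for t in range(out.shape[1]):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + k])
    return out

def decode_target(rates, kernels, w_out):
    """Toy forward pass: temporal convolution -> ReLU ->
    global average pooling over time -> linear readout -> target class."""
    h = np.maximum(conv1d_temporal(rates, kernels), 0.0)  # ReLU
    pooled = h.mean(axis=1)                               # pool over time
    logits = w_out @ pooled
    return int(np.argmax(logits))

# Randomly initialised parameters and a Poisson "trial", for shape
# illustration only -- a real decoder would be trained on recorded data.
kernels = rng.standard_normal((8, n_neurons, 5)) * 0.1
w_out = rng.standard_normal((n_targets, 8)) * 0.1
trial = rng.poisson(5.0, size=(n_neurons, n_bins)).astype(float)
print(decode_target(trial, kernels, w_out))  # a class index in 0..8
```

The key design point the abstract emphasizes is that the convolution runs along the time axis, so the network can exploit the temporal structure of the activation patterns rather than treating bins independently.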

    Stabilization of Brain-Machine Interface Systems via Alignment to Baseline

    Research in brain-machine interfaces has the potential to transform the lives of individuals with limited motor capabilities by allowing for greater independence. By directly accessing signals in the brain, it is possible to train a decoder to identify intended motion, allowing the user to control a prosthetic limb or computer cursor simply by thinking about the motion. However, neural data recorded from implanted electrodes are highly unstable over time and across sessions, leading to a severe drop in decoding performance as the test data become more distant from the data on which the decoder was trained. Here, we investigate a method to stabilize neural spike data from human trials of a center-out cursor control task before they are passed to a linear decoder, using factor analysis and Procrustes alignment. We find that for highly variable human neural data from experiment dates that are far apart, the method does not help the decoder better predict cursor kinematics. However, when factor analysis weights are averaged over multiple baseline days, the performance of the decoder increases significantly with Procrustes alignment, offering a promising way to limit recalibration and retraining of neural decoders by prolonging their high-accuracy performance over time.
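The alignment step this abstract relies on, mapping a later session's latent factors back onto a baseline latent space, can be illustrated with the standard orthogonal Procrustes solution (SVD of the cross-covariance). The synthetic data below (a randomly rotated copy of a baseline factor matrix) is an assumption for demonstration, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

def procrustes_align(day_k, baseline):
    """Orthogonal Procrustes: find the rotation R minimising
    ||day_k @ R - baseline||_F, via SVD of the cross-covariance."""
    u, _, vt = np.linalg.svd(day_k.T @ baseline)
    return u @ vt

# Hypothetical latent factors (trials x factors): a baseline session,
# and a later session whose latent space has rotated plus small noise.
baseline = rng.standard_normal((200, 10))
true_rot, _ = np.linalg.qr(rng.standard_normal((10, 10)))
later = baseline @ true_rot.T + 0.01 * rng.standard_normal((200, 10))

R = procrustes_align(later, baseline)
aligned = later @ R
err_before = np.linalg.norm(later - baseline)
err_after = np.linalg.norm(aligned - baseline)
print(err_after < err_before)  # alignment recovers the baseline space
```

In the study's setting, `baseline` would be factor-analysis latents from the day the decoder was trained (or an average over several baseline days), so the fixed linear decoder keeps seeing inputs in the space it was trained on.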

    Decoding Kinematics from Human Parietal Cortex using Neural Networks

    Brain-machine interfaces (BMIs) have shown promising results in providing control over assistive devices for paralyzed patients. In this work we describe a BMI system using electrodes implanted in the parietal lobe of a tetraplegic subject. Neural data used for decoding were recorded in five 3-minute blocks during the same session. Within each block, the subject used motor imagery to control a cursor in a 2D center-out task. We compare performance for four different algorithms: a Kalman filter, a two-layer Deep Neural Network (DNN), a Recurrent Neural Network with a SimpleRNN unit cell (SimpleRNN), and an RNN with a Long Short-Term Memory (LSTM) unit cell. The decoders achieved Pearson correlation coefficients (ρ) of 0.48, 0.39, 0.77, and 0.75, respectively, in the Y-coordinate, and 0.24, 0.20, 0.46, and 0.47, respectively, in the X-coordinate.
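As a rough sketch of the Kalman-filter baseline against which the networks are compared, the standard predict/update recursion and the Pearson-ρ evaluation can be written as follows. The simulated "neurons", noise levels, and random-walk kinematics are toy assumptions, not the study's data or tuned model:

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_decode(Y, A, C, W, Q):
    """Classic Kalman-filter decoder: latent kinematic state x_t evolves
    as x_t = A x_{t-1} + w, neural observation y_t = C x_t + q."""
    d = A.shape[0]
    x, P = np.zeros(d), np.eye(d)
    X_hat = np.empty((Y.shape[0], d))
    for t in range(Y.shape[0]):
        x = A @ x                      # predict state
        P = A @ P @ A.T + W            # predict covariance
        S = C @ P @ C.T + Q            # innovation covariance
        K = P @ C.T @ np.linalg.inv(S) # Kalman gain
        x = x + K @ (Y[t] - C @ x)     # update with observation
        P = (np.eye(d) - K @ C) @ P
        X_hat[t] = x
    return X_hat

def pearson(a, b):
    """Pearson correlation coefficient between two 1-D series."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy problem: 2-D cursor position observed through 20 simulated
# "neurons" (a random linear mapping plus Gaussian noise).
A = np.eye(2)                          # random-walk kinematics
C = rng.standard_normal((20, 2))
W, Q = 0.01 * np.eye(2), 0.1 * np.eye(20)
X_true = np.cumsum(0.1 * rng.standard_normal((300, 2)), axis=0)
Y = X_true @ C.T + 0.3 * rng.standard_normal((300, 20))
X_hat = kalman_decode(Y, A, C, W, Q)
print(pearson(X_hat[:, 0], X_true[:, 0]))
```

The per-coordinate ρ values reported in the abstract are this same metric, computed separately for the X and Y cursor coordinates.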

    Interpretable Convolutional Neural Networks for Decoding and Analyzing Neural Time Series Data

    Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) have outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep, heavy structures that risk overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mostly remain unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting in a data-driven way the neural signatures underlying the different decoded brain or behavioral states. Lastly, the SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train, and less interpretable than CNNs, and the use of CNNs with these signals had not been investigated. This PhD research addresses these limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. The CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses.
    Remarkably, these analyses could in perspective be used to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
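One simple, data-driven way to probe which frequency-domain features a decoder relies on, in the spirit of the frequency analyses mentioned above, is band-wise perturbation: remove one canonical EEG band at a time via FFT masking and measure the drop in the decoder's score. The toy 10 Hz signal, sampling rate, and variance-based "decoder" below are illustrative assumptions, not the thesis's method:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 128  # hypothetical EEG sampling rate (Hz)

def band_zero(x, lo, hi):
    """Zero the lo-hi Hz band of a 1-D signal via real-FFT masking."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    X[(freqs >= lo) & (freqs < hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

def band_importance(decoder, x, bands):
    """Frequency-perturbation analysis: the importance of each band is
    the drop in the decoder's score when that band is removed."""
    base = decoder(x)
    return [base - decoder(band_zero(x, lo, hi)) for lo, hi in bands]

# Toy signal: a 10 Hz (alpha-band) sinusoid buried in white noise,
# scored by a trivial "decoder" that just measures signal variance.
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
decoder = lambda sig: float(np.var(sig))
bands = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta, theta, alpha, beta
imps = band_importance(decoder, x, bands)
print(bands[int(np.argmax(imps))])  # the band carrying the sinusoid
```

The same perturb-and-rescore idea extends to spatial features (zeroing channels) and temporal features (zeroing time windows), which is one way a trained decoder can double as an analysis tool.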