
    Finding Kinematics-Driven Latent Neural States From Neuronal Population Activity for Motor Decoding

    While intracortical brain-machine interfaces (BMIs) have demonstrated the feasibility of restoring mobility to people with paralysis, maintaining high-performance decoding in clinical BMIs remains challenging. One of the main obstacles is the noise-prone nature of traditional decoding methods, which connect neural responses explicitly with physical quantities such as velocity. In contrast, recently developed latent neural state models enable a robust readout of the contents of large-scale neuronal population activity. However, these latent neural states do not necessarily contain kinematic information useful for decoding. This study therefore proposes a new approach to finding kinematics-dependent latent factors by extracting the factors' kinematics-dependent components using linear regression. We estimated these components from the population activity through a nonlinear mapping. The proposed kinematics-dependent latent factors generate neural trajectories that discriminate latent neural states before and after motion onset. We compared the decoding performance of the proposed analysis model with that of other popular models: factor analysis (FA), Gaussian process factor analysis (GPFA), latent factor analysis via dynamical systems (LFADS), preferential subspace identification (PSID), and neuronal population firing rates. The proposed model yields higher decoding accuracy than the others (>17% improvement on average). Our approach may pave a new way to extract latent neural states specific to kinematic information from motor cortices, potentially improving decoding performance for online intracortical BMIs.

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons—neuronal assemblies—is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
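    The Kuramoto model that inspires this network couples phase oscillators through the sines of their phase differences; strong coupling pulls the phases together, which can be measured with the standard order parameter. A minimal sketch with assumed parameters (N, K, frequency spread) and simple Euler integration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Kuramoto network of coupled phase oscillators (assumed parameters).
N, K, dt, steps = 50, 2.0, 0.01, 2000
omega = rng.normal(0.0, 0.5, N)         # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)    # initial phases

def order_parameter(phases):
    """Magnitude of the mean phase vector: 1 = full synchrony."""
    return abs(np.exp(1j * phases).mean())

r0 = order_parameter(theta)
for _ in range(steps):
    # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)

r1 = order_parameter(theta)
print(round(r1, 2))  # coupling well above the critical value drives synchrony
```

Tuning K relative to the spread of the natural frequencies is what "fine-tuning the network synchronization dynamics" amounts to in this simplified picture.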

    A Sliced Inverse Regression (SIR) Decoding the Forelimb Movement from Neuronal Spikes in the Rat Motor Cortex

    Several neural decoding algorithms have successfully converted brain signals into commands to control a computer cursor and prosthetic devices. A majority of decoding methods, such as population vector algorithms (PVA), optimal linear estimators (OLE), and neural networks (NN), are effective in predicting movement kinematics, including movement direction, speed, and trajectory, but usually require a large number of neurons to achieve desirable performance. This study proposed a novel decoding algorithm that performs well even with signals obtained from a smaller number of neurons. We adopted sliced inverse regression (SIR) to predict forelimb movement from single-unit activities recorded in the rat primary motor (M1) cortex in a water-reward lever-pressing task. SIR performed weighted principal component analysis (PCA) to achieve effective dimension reduction for nonlinear regression. To demonstrate the decoding performance, SIR was compared to PVA, OLE, and NN. Furthermore, PCA and sequential feature selection (SFS), two popular feature selection techniques, were implemented for comparison of feature selection effectiveness. Among the SIR, PVA, OLE, PCA, SFS, and NN decoding methods, the trajectories predicted by SIR (with a root mean square error, RMSE, of 8.47 ± 1.32 mm) were closer to the actual trajectories than those predicted by PVA (30.41 ± 11.73 mm), OLE (20.17 ± 6.43 mm), PCA (19.13 ± 0.75 mm), SFS (22.75 ± 2.01 mm), and NN (16.75 ± 2.02 mm). The superiority of SIR was most obvious when the sample size of neurons was small. We concluded that SIR sorted the input data to obtain effective transform matrices for movement prediction, making it a robust decoding method for conditions with sparse neuronal information.
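    The core of SIR is exactly the "weighted PCA" the abstract mentions: standardize the predictors, slice the response, and eigendecompose the between-slice covariance of the slice means. A minimal sketch on toy data (the data-generating model and slice count are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: "firing rates" X and a 1-D movement variable y that
# depends nonlinearly on a single direction beta of X.
n, p = 600, 10
beta = np.zeros(p); beta[0] = 1.0
X = rng.standard_normal((n, p))
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(n)

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression: eigenvectors of the between-slice
    covariance of whitened X (a weighted PCA of slice means)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whiten X with the inverse square root of its covariance.
    cov = Xc.T @ Xc / n
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ inv_sqrt
    # Slice on sorted y and average the whitened data within each slice.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0))
            for s in slices)
    w, V = np.linalg.eigh(M)
    # Map the leading directions back to the original coordinates.
    dirs = inv_sqrt @ V[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

d = sir_directions(X, y)
print(abs(d[:, 0] @ beta))  # near 1: SIR recovers the informative direction
```

Because only slice means are estimated, the method stays stable with few predictors, which is consistent with the paper's observation that SIR degrades gracefully when few neurons are available.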

    Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements

    An extensive literature has described approaches for decoding upper limb kinematics or muscle activity from multichannel cortical spike recordings for brain-machine interface (BMI) applications. However, similar work on the lower limb remains relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity, characterized by electromyography (EMG) signals, during monkey stand/squat movements can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted in eight right leg/thigh muscles. With ample data collected from neurons over a large brain area, we performed a spike-triggered average (SpTA) analysis and obtained a series of density contours revealing the spatial distributions of the muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and actual data were achieved for both EMG signals and kinematics in both monkeys. Higher decoding accuracy and a faster convergence rate were achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed algorithms. Our findings provide new insights for extending current BMI design concepts and techniques from upper limbs to lower limb settings. Brain-controlled exoskeletons, prostheses, or neuromuscular electrical stimulators for lower limbs are expected to be developed, enabling subjects to manipulate complex biomechatronic devices with the mind in a more harmonized manner. View the article as published at http://journal.frontiersin.org/article/10.3389/fnins.2017.00044/ful
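    The standard Kalman filter half of the framework above is compact enough to sketch: treat kinematics/EMG as a linear-Gaussian state, spike counts as linear observations, fit both maps by least squares, then alternate predict and correct steps. All data and dimensions below are toy assumptions, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in data: 2-D kinematic/EMG state s_t and 30-channel "spike
# count" observations z_t from a linear-Gaussian model.
T, dim_s, dim_z = 400, 2, 30
A_true = np.array([[0.99, 0.05], [-0.05, 0.99]])
C_true = rng.standard_normal((dim_z, dim_s))
s = np.zeros((T, dim_s)); s[0] = [1.0, 0.0]
for t in range(1, T):
    s[t] = A_true @ s[t - 1] + 0.05 * rng.standard_normal(dim_s)
z = s @ C_true.T + 0.5 * rng.standard_normal((T, dim_z))

# Fit state transition A and observation map C by least squares.
A, *_ = np.linalg.lstsq(s[:-1], s[1:], rcond=None); A = A.T
C, *_ = np.linalg.lstsq(s, z, rcond=None); C = C.T
W = np.cov((s[1:] - s[:-1] @ A.T).T)   # state noise covariance
Q = np.cov((z - s @ C.T).T)            # observation noise covariance

# Standard Kalman filter: predict with A, correct with spikes via C.
x = s[0].copy(); P = np.eye(dim_s)
est = [x.copy()]
for t in range(1, T):
    x = A @ x; P = A @ P @ A.T + W                          # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)            # Kalman gain
    x = x + K @ (z[t] - C @ x)                              # update
    P = (np.eye(dim_s) - K @ C) @ P
    est.append(x.copy())

est = np.array(est)
corr = np.corrcoef(est[:, 0], s[:, 0])[0, 1]
print(round(corr, 2))  # high: the filter tracks the state
```

The unscented variant replaces the linear observation map with a nonlinear tuning function (in the paper, a neural network) and propagates sigma points through it instead of multiplying by C.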

    Dynamic models of brain imaging data and their Bayesian inversion

    This work is about understanding the dynamics of neuronal systems, in particular with respect to brain connectivity. It addresses complex neuronal systems by looking at neuronal interactions and their causal relations. These systems are characterized using a generic approach to the dynamical systems analysis of brain signals: dynamic causal modelling (DCM). DCM is a technique for inferring directed connectivity among brain regions, which distinguishes between a neuronal and an observation level. DCM is a natural extension of the convolution models used in the standard analysis of neuroimaging data. This thesis develops biologically constrained and plausible models, informed by anatomical and physiological principles. Within this framework, it uses mathematical formalisms of neural-mass, mean-field, and ensemble dynamic causal models as generative models for observed neuronal activity. These models allow for the evaluation of intrinsic neuronal connections and high-order statistics of neuronal states, using Bayesian estimation and inference. Critically, it employs Bayesian model selection (BMS) to discover the best among several equally plausible models. In the first part of this thesis, a two-state DCM for functional magnetic resonance imaging (fMRI) is described, where each region can model selective changes in both extrinsic and intrinsic connectivity. The second part is concerned with how the sigmoid activation function of neural-mass models (NMM) can be understood in terms of the variance or dispersion of neuronal states. The third part presents a mean-field model (MFM) for neuronal dynamics as observed with magneto- and electroencephalographic data (M/EEG). In the final part, the MFM is used as a generative model in a DCM for M/EEG and compared to the NMM using Bayesian model selection.
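    The Bayesian model selection step ranks candidate models by their (approximate) log evidence. As a toy illustration only, using ordinary least squares with a BIC approximation rather than DCM's variational free energy, comparing two models of the same data looks like:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data generated by a model with a real effect of x on y.
n = 200
x = rng.standard_normal(n)
y = 1.5 * x + 0.5 * rng.standard_normal(n)

def bic_log_evidence(y, X):
    """-BIC/2: a crude log-evidence approximation for a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n_obs, k = X.shape
    sigma2 = resid @ resid / n_obs
    loglik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * k * np.log(n_obs)   # complexity penalty

# Model 1: y depends on x; model 2: intercept only.
F1 = bic_log_evidence(y, np.column_stack([np.ones(n), x]))
F2 = bic_log_evidence(y, np.ones((n, 1)))
print(F1 > F2)  # evidence favors the model that generated the data
```

The same accuracy-minus-complexity logic carries over to DCM, where the free-energy bound on the log evidence plays the role of the BIC score here.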

    Learning biological neuronal networks with artificial neural networks: neural oscillations

    First-principles-based models have been extremely successful in providing crucial insights and predictions for complex biological functions and phenomena. However, they can be hard to build and expensive to simulate for complex living systems. On the other hand, modern data-driven methods thrive at modeling many types of high-dimensional and noisy data. Still, the training and interpretation of these data-driven models remain challenging. Here, we combine the two types of methods to model stochastic neuronal network oscillations. Specifically, we develop a class of first-principles-based artificial neural networks to provide faithful surrogates of the high-dimensional, nonlinear oscillatory dynamics produced by neural circuits in the brain. Furthermore, when the training data set is enlarged within a range of parameter choices, the artificial neural networks become generalizable to these parameters, covering cases in distinctly different dynamical regimes. In all, our work opens a new avenue for modeling complex neuronal network dynamics with artificial neural networks.
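    The surrogate idea, training a network to emulate a mechanistic model so it can replace expensive simulation, can be shown in miniature. Everything below is a deliberately simplified assumption (a one-dimensional oscillator map and a tiny hand-rolled MLP), far from the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(5)

# "Mechanistic" one-step map of a toy nonlinear oscillator.
def f(x):
    return x + 0.1 * np.sin(2 * np.pi * x)

X = rng.uniform(-1, 1, (512, 1))
Y = f(X)

# Tiny one-hidden-layer MLP trained by full-batch gradient descent on MSE.
h, lr = 32, 0.05
W1 = rng.standard_normal((1, h)) * 0.5; b1 = np.zeros(h)
W2 = rng.standard_normal((h, 1)) * 0.5; b2 = np.zeros(1)
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                   # hidden activations
    P = H @ W2 + b2                            # surrogate prediction
    G = 2 * (P - Y) / len(X)                   # dLoss/dP
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)             # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def surrogate(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

err = np.abs(surrogate(X) - Y).mean()
print(err)  # small: the network emulates the mechanistic map
```

Once trained, the surrogate can be iterated in place of the mechanistic map; the paper's contribution is making such surrogates faithful for high-dimensional, stochastic oscillatory dynamics and generalizable across parameter regimes.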