Nat Neurosci
It remains an open question how neural responses in motor cortex relate to movement. We explored the hypothesis that motor cortex reflects dynamics appropriate for generating temporally patterned outgoing commands. To formalize this hypothesis, we trained recurrent neural networks to reproduce the muscle activity of reaching monkeys. Models had to infer dynamics that could transform simple inputs into temporally and spatially complex patterns of muscle activity. Analysis of trained models revealed that the natural dynamical solution was a low-dimensional oscillator that generated the necessary multiphasic commands. This solution closely resembled, at both the single-neuron and population levels, what was observed in neural recordings from the same monkeys. Notably, data and simulations agreed only when models were optimized to find simple solutions. An appealing interpretation is that the empirically observed dynamics of motor cortex may reflect a simple solution to the problem of generating temporally patterned descending commands.
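The "low-dimensional oscillator" solution described above can be illustrated with a minimal sketch. All specifics here (the oscillation frequency, readout weights, and initial condition) are assumptions for illustration, not the trained models from the paper:

```python
import numpy as np

# A 2-D rotational dynamical system: dx/dt = A x with skew-symmetric A.
dt = 0.01                       # integration step (s); assumed
omega = 2 * np.pi * 1.5         # assumed oscillation frequency (rad/s)
A = np.array([[0.0, -omega],
              [omega, 0.0]])

x = np.array([1.0, 0.0])        # a simple initial condition ("simple input")
states = np.empty((200, 2))
for t in range(200):
    states[t] = x
    x = x + dt * A @ x          # Euler integration

# A fixed linear readout mixes the two oscillator phases into temporally
# complex, multiphasic output standing in for muscle commands.
W = np.array([[0.7, 0.3],
              [-0.2, 0.9]])     # assumed readout weights
muscles = states @ W.T
```

The point of the sketch is qualitative: a simple rotation, read out through fixed weights, already yields multiphasic temporal patterns, which is the character of the solution the trained networks converged on.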
Tensor Analysis and the Dynamics of Motor Cortex
Neural data often span multiple indices, such as neuron, experimental condition, trial, and time, resulting in a tensor or multidimensional array. Standard approaches to neural data analysis often rely on matrix factorization techniques, such as principal component analysis or nonnegative matrix factorization; any inherent tensor structure in the data is lost when it is flattened into a matrix. Here, we analyze datasets from primary motor cortex from the perspective of tensor analysis and develop a theory for how tensor structure relates to certain computational properties of the underlying system. Applying this theory to the motor cortex datasets, we find that neural activity is best described by condition-independent dynamics rather than by condition-dependent relations to external movement variables. Motivated by this result, we pursue one further tensor-related analysis and two further dynamical-systems-related analyses. First, we show how tensor decompositions can be used to denoise neural signals. Second, we apply system identification to the cortex-to-muscle transformation to reveal the intermediate spinal dynamics. Third, we fit recurrent neural networks to muscle activations and show that the geometric properties observed in motor cortex are naturally recapitulated in the network model. Taken together, these results emphasize (on the data analysis side) the role of tensor structure in data and (on the theoretical side) the role of motor cortex as a dynamical system.
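The denoising idea (the first of the follow-up analyses above) can be sketched in a toy setting. The rank-1 ground truth, noise level, and single-component truncation are all assumptions chosen to make the effect visible, not the thesis's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tensor-structured neural data: a rank-1 neuron x condition x time
# tensor plus additive noise.
n_neurons, n_conds, n_time = 20, 8, 50
u = rng.standard_normal(n_neurons)
v = rng.standard_normal(n_conds)
w = np.sin(np.linspace(0, 2 * np.pi, n_time))
clean = np.einsum('i,j,k->ijk', u, v, w)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# Unfold along the neuron mode and keep only the top singular component;
# because the clean tensor is low-rank, this discards mostly noise.
unfolded = noisy.reshape(n_neurons, -1)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
denoised = (s[0] * np.outer(U[:, 0], Vt[0])).reshape(noisy.shape)

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The reconstruction error of the truncated version is lower than that of the raw noisy tensor, which is the essence of low-rank denoising; richer tensor decompositions generalize this idea beyond a single mode.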
Neural Dynamics and the Geometry of Population Activity
A growing body of research indicates that much of the brain’s computation is not visible in the activity of individual neurons but is instead instantiated via population-level dynamics. According to this ‘dynamical systems hypothesis’, population-level neural activity evolves according to underlying dynamics that are shaped by network connectivity. While these dynamics are not directly observable in empirical data, they can be inferred by studying the structure of population trajectories. Quantification of this structure, the ‘trajectory geometry’, can then guide thinking about the underlying computation. Alternatively, modeling neural populations as dynamical systems can predict trajectory geometries appropriate for particular tasks. This approach of characterizing and interpreting trajectory geometry is providing new insights in many cortical areas, including regions involved in motor control and areas that mediate cognitive processes such as decision-making. In this thesis, I advance the characterization of population structure by introducing hypothesis-guided metrics for the quantification of trajectory geometry. These metrics, trajectory tangling in primary motor cortex and trajectory divergence in the Supplementary Motor Area, abstract away from task-specific solutions and toward underlying computations and network constraints that drive trajectory geometry.
Primate motor cortex (M1) projects to spinal interneurons and motoneurons, suggesting that motor cortex activity may be dominated by muscle-like commands. Observations during reaching lend support to this view, but evidence remains ambiguous and much debated. To provide a different perspective, we employed a novel behavioral paradigm that facilitates comparison between time-evolving neural and muscle activity. We found that single motor cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ‘trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low trajectory tangling confers noise robustness. We were able to predict motor cortex activity from muscle activity by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low trajectory tangling.
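Trajectory tangling as described above is commonly quantified by comparing, for every pair of times, how different the state derivatives are relative to how different the states are. The sketch below follows that definition; the constant `eps` (to avoid division by zero) and the toy trajectories are assumptions:

```python
import numpy as np

def trajectory_tangling(X, dt=1.0, eps=1e-6):
    """Q(t) = max over t' of ||dx/dt(t) - dx/dt(t')||^2
                            / (||x(t) - x(t')||^2 + eps).

    X is a (time, dims) array of population states."""
    dX = np.gradient(X, dt, axis=0)
    state_d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    deriv_d2 = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (deriv_d2 / (state_d2 + eps)).max(axis=1)

# A smooth circle never revisits a state with a different derivative
# (low tangling); a figure-eight does so at its crossing (high tangling).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
figure8 = np.stack([np.sin(t), np.sin(2 * t)], axis=1)
q_circle = trajectory_tangling(circle, dt=t[1] - t[0])
q_fig8 = trajectory_tangling(figure8, dt=t[1] - t[0])
```

The crossing point of the figure-eight is exactly a moment where similar activity patterns lead to dissimilar future patterns, so its tangling spikes there while the circle's stays near one.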
The Supplementary Motor Area (SMA) has been implicated in many higher-order aspects of motor control. Previous studies have demonstrated that SMA might track motor context. We propose that this computation necessitates that neural activity avoids ‘trajectory divergence’: moments where two similar neural states become dissimilar in the future. Indeed, we found that population activity in SMA, but not in M1, reliably avoided trajectory divergence, resulting in fundamentally different geometries: cyclical in M1 and helix-like in SMA. Analogous structure emerged in artificial networks trained without versus with context-related inputs. These findings reveal that the geometries of population activity in SMA and M1 are fundamentally different, with direct implications regarding what computations can be performed by each area.
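Trajectory divergence can be formalized analogously: instead of comparing derivatives, compare where trajectories end up some time later, given similar current states. This is a simplified sketch (the horizon handling, normalization, and toy trajectories are assumptions, not the exact metric from the thesis):

```python
import numpy as np

def trajectory_divergence(X, horizon, eps=1e-6):
    """D(t) = max over t' of ||x(t + h) - x(t' + h)||^2
                            / (||x(t) - x(t')||^2 + eps),
    i.e. how dissimilar the futures of currently similar states become."""
    T = X.shape[0] - horizon
    now_d2 = ((X[:T, None, :] - X[None, :T, :]) ** 2).sum(-1)
    fut = X[horizon:horizon + T]
    fut_d2 = ((fut[:, None, :] - fut[None, :, :]) ** 2).sum(-1)
    return (fut_d2 / (now_d2 + eps)).max(axis=1)

# A flat self-intersecting loop shares states across passes whose futures
# differ (high divergence); lifting it along a third, context-like axis
# into a helix-like curve separates those states (low divergence).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
flat = np.stack([np.sin(t), np.sin(2 * t), np.zeros_like(t)], axis=1)
helix = np.stack([np.sin(t), np.sin(2 * t), 0.3 * t], axis=1)
d_flat = trajectory_divergence(flat, horizon=25)
d_helix = trajectory_divergence(helix, horizon=25)
```

This mirrors the cyclical-versus-helix contrast reported for M1 and SMA: the extra, slowly varying dimension keeps context-distinct states apart, so similar states never lead to dissimilar futures.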
The characterization and statistical analysis of trajectory geometry promise to advance our understanding of neural network function by providing interpretable, cohesive explanations for observed population structure. Commonalities between individuals and networks can be uncovered, and more generic, task-invariant, fundamental aspects of neural responses can be explored.
Stochastic modeling and control of neural and small length scale dynamical systems
Recent advancements in experimental and computational techniques have created tremendous opportunities to study fundamental questions of science and engineering through stochastic modeling and control of dynamical systems. Examples include, but are not limited to, neural coding and the emergence of behaviors in biological networks. Integrating optimal control strategies with stochastic dynamical models has ignited the development of new technologies in many emerging applications; particular examples are brain-machine interfaces (BMIs) and systems that manipulate submicroscopic objects. The focus of this dissertation is to advance these technologies by developing optimal control strategies under various feedback scenarios and system uncertainties. BMIs establish direct communication between living brain tissue and external devices such as an artificial arm. By sensing and interpreting neuronal activity to actuate an external device, BMI-based neuroprostheses hold great promise for rehabilitating motor-disabled subjects such as amputees. However, the lack of sensory feedback, such as proprioception and tactile information, from the artificial arm back to the brain has greatly limited the widespread clinical deployment of these neuroprosthetic systems. In the first part of the dissertation, we develop a systematic control-theoretic approach for a rigorous system-level analysis of BMIs under various feedback scenarios. The approach ranges from quantitative and qualitative analysis of single-neuron and network models to the design of missing sensory feedback pathways in BMIs using optimal feedback control theory. As part of our results, we show that the natural performance of motor tasks in BMIs can be recovered by designing artificial sensory feedback in the proposed optimal control framework.
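The optimal-feedback-control ingredient can be sketched with a standard discrete-time LQR on a toy plant. The 1-D point-mass "arm", the cost weights, and the horizon are assumptions for illustration, not the dissertation's BMI model:

```python
import numpy as np

# Point-mass plant: state = [position, velocity], control = force.
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
Q = np.diag([1.0, 0.1])   # assumed state costs
R = np.array([[0.01]])    # assumed control cost

# Backward Riccati recursion to obtain the optimal feedback gain K.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop simulation: feedback drives the state to the target (origin).
x = np.array([[1.0], [0.0]])  # start 1 unit from the target, at rest
for _ in range(1000):
    u = -K @ x
    x = A @ x + B @ u
final_error = abs(x[0, 0])
```

In the BMI setting the analogous loop is closed through designed artificial sensory pathways rather than a direct state readout, but the structure of the analysis (plant, cost, optimal feedback law) is the same.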
The second part of the dissertation develops stochastic optimal control strategies that use limited feedback information, with applications to neural and small length scale dynamical systems. The stochastic nature of these systems, coupled with the limited feedback information, has greatly restricted the direct applicability of existing control strategies. Moreover, it has recently been recognized that advanced control algorithms are essential to enable applications in these systems. We propose a novel broadcast stochastic optimal control strategy in a receding horizon framework to overcome the limitations of traditional control designs, and we apply this strategy to stabilize multi-agent systems and Brownian ensembles. As part of our results, we show optimal trapping of an ensemble of particles driven by Brownian motion within a minimum trapping region using the proposed framework.
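A crude illustration of broadcast control in a receding-horizon spirit: every particle in a Brownian ensemble receives the same input, replanned at each step from aggregate feedback. The dynamics, the one-step horizon, and the absence of a control penalty are assumptions, far simpler than the dissertation's framework:

```python
import numpy as np

rng = np.random.default_rng(1)

n, dt, sigma = 100, 0.01, 0.5           # ensemble size, step, noise scale
x = rng.uniform(-5.0, 5.0, size=n)      # initial particle positions

for _ in range(500):
    # One-step "receding horizon": with no control penalty, the input that
    # minimizes the expected squared ensemble mean one step ahead simply
    # cancels the current mean. The SAME u is broadcast to every particle.
    u = -x.mean() / dt
    x = x + dt * u + sigma * np.sqrt(dt) * rng.standard_normal(n)

mean_error = abs(x.mean())
```

Because the input is shared, only the ensemble mean is controllable in this linear toy; individual particles continue to diffuse. That limitation is precisely why richer broadcast strategies, such as those developed in the dissertation, are needed to trap an ensemble within a minimum region.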