Tensor Analysis Reveals Distinct Population Structure that Parallels the Different Computational Roles of Areas M1 and V1
Cortical firing rates frequently display elaborate and heterogeneous temporal structure. One often wishes to compute quantitative summaries of such structure (a basic example is the frequency spectrum) and compare them with model-based predictions. The advent of large-scale population recordings affords the opportunity to do so in new ways, with the hope of distinguishing between potential explanations for why responses vary with time. We introduce a method that assesses a basic but previously unexplored form of population-level structure: when data contain responses across multiple neurons, conditions, and times, they are naturally expressed as a third-order tensor. We examined tensor structure for multiple datasets from primary visual cortex (V1) and primary motor cortex (M1). All V1 datasets were 'simplest' (there were relatively few degrees of freedom) along the neuron mode, while all M1 datasets were simplest along the condition mode. These differences could not be inferred from surface-level response features. Formal considerations suggest why tensor structure might differ across modes. For idealized linear models, structure is simplest across the neuron mode when responses reflect external variables, and simplest across the condition mode when responses reflect population dynamics. This same pattern was present for existing models that seek to explain motor cortex responses. Critically, only dynamical models displayed tensor structure that agreed with the empirical M1 data. These results illustrate that tensor structure is a basic feature of the data. For M1, the tensor structure was compatible with only a subset of existing models.
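The mode-wise notion of 'simplicity' can be illustrated with a toy computation: unfold a neuron × condition × time tensor along each mode and count the degrees of freedom (significant singular values) of each unfolding. This is a minimal sketch on synthetic data, not the paper's actual analysis; the tensor shapes, the rank construction, and the tolerance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data tensor: neurons x conditions x times.
# Built to be low-rank along the neuron mode: every neuron's response
# is a mixture of a few shared condition-time patterns.
N, C, T = 40, 20, 50
k = 3
patterns = rng.standard_normal((k, C, T))          # shared condition-time patterns
weights = rng.standard_normal((N, k))              # per-neuron mixing weights
X = np.tensordot(weights, patterns, axes=(1, 0))   # shape (N, C, T)

def mode_rank(X, mode, tol=1e-8):
    """Rank of the mode-n unfolding: degrees of freedom along that mode."""
    Xn = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    s = np.linalg.svd(Xn, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

ranks = [mode_rank(X, m) for m in range(3)]  # neuron, condition, time modes
print(ranks)  # → [3, 20, 50]: simplest along the neuron mode
```

By construction this tensor resembles the V1 case described above: a handful of degrees of freedom suffice along the neuron mode, while the condition and time unfoldings are full rank.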
Data from: Tensor analysis reveals distinct population structure that parallels the different computational roles of areas M1 and V1
Dimensionality reduction beyond neural subspaces with slice tensor component analysis
Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct ‘covariability classes’ that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
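A single slice component pairs a loading vector along one mode with a matrix 'slice' over the other two, so one such component is exactly a rank-1 factorization of the corresponding unfolding. The following is a minimal sketch on a synthetic tensor; the shapes and the SVD-based recovery are illustrative assumptions, not the published fitting procedure, which demixes sums of components across covariability classes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, T = 30, 10, 40   # hypothetical neurons x conditions x times

# One neuron-slicing component: each neuron scales a shared
# condition-by-time pattern (the "slice").
v = rng.standard_normal(N)        # neuron loading vector
S = rng.standard_normal((C, T))   # shared condition-time slice
X = np.einsum('n,ct->nct', v, S)  # the component as a 3rd-order tensor

# Such a component is rank-1 in the mode-1 unfolding, so a truncated
# SVD of that unfolding recovers it exactly.
X1 = X.reshape(N, C * T)
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
X_hat = (s[0] * np.outer(U[:, 0], Vt[0])).reshape(N, C, T)

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(err)  # essentially zero
```

Trial- and time-slicing components have the same form with the roles of the modes permuted, which is what lets sliceTCA represent several covariability classes within one dataset.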
Neural Dynamics and the Geometry of Population Activity
A growing body of research indicates that much of the brain’s computation is invisible from the activity of individual neurons, but instead instantiated via population-level dynamics. According to this ‘dynamical systems hypothesis’, population-level neural activity evolves according to underlying dynamics that are shaped by network connectivity. While these dynamics are not directly observable in empirical data, they can be inferred by studying the structure of population trajectories. Quantification of this structure, the ‘trajectory geometry’, can then guide thinking on the underlying computation. Alternatively, modeling neural populations as dynamical systems can predict trajectory geometries appropriate for particular tasks. This approach of characterizing and interpreting trajectory geometry is providing new insights in many cortical areas, including regions involved in motor control and areas that mediate cognitive processes such as decision-making. In this thesis, I advance the characterization of population structure by introducing hypothesis-guided metrics for the quantification of trajectory geometry. These metrics, trajectory tangling in primary motor cortex and trajectory divergence in the Supplementary Motor Area, abstract away from task-specific solutions and toward underlying computations and network constraints that drive trajectory geometry.
Primate motor cortex (M1) projects to spinal interneurons and motoneurons, suggesting that motor cortex activity may be dominated by muscle-like commands. Observations during reaching lend support to this view, but evidence remains ambiguous and much debated. To provide a different perspective, we employed a novel behavioral paradigm that facilitates comparison between time-evolving neural and muscle activity. We found that single motor cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ‘trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low trajectory tangling confers noise robustness. We were able to predict motor cortex activity from muscle activity by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low trajectory tangling.
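Trajectory tangling asks whether similar states ever lead toward dissimilar future states. A minimal sketch of one such metric, Q(t) = max over t' of ||dx/dt(t) - dx/dt(t')||^2 / (||x(t) - x(t')||^2 + eps), on toy trajectories; the eps scaling and the example curves are assumptions, not the published analysis.

```python
import numpy as np

def tangling(X, dt=1.0, eps=None):
    """Trajectory tangling Q(t): for each time t, the worst-case ratio of
    velocity dissimilarity to state dissimilarity over all other times t'.
    X has shape (time, dim); the eps scaling below is one common
    heuristic (an assumption)."""
    dX = np.gradient(X, dt, axis=0)
    if eps is None:
        eps = 0.1 * np.mean(np.var(X, axis=0))
    num = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    den = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) + eps
    return (num / den).max(axis=1)

# A circle never revisits a state with a different velocity, so its
# tangling stays low; a figure eight crosses itself with opposing
# velocities, so tangling spikes at the crossing.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
dt = t[1] - t[0]
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight = np.stack([np.sin(t), np.sin(2 * t)], axis=1)
print(tangling(circle, dt).max(), tangling(eight, dt).max())
```

The figure eight's tangling is orders of magnitude higher than the circle's, matching the intuition that self-intersecting flow fields are fragile: a small amount of noise at the crossing sends the trajectory down the wrong branch.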
The Supplementary Motor Area (SMA) has been implicated in many higher-order aspects of motor control. Previous studies have demonstrated that SMA might track motor context. We propose that this computation necessitates that neural activity avoids ‘trajectory divergence’: moments where two similar neural states become dissimilar in the future. Indeed, we found that population activity in SMA, but not in M1, reliably avoided trajectory divergence, resulting in fundamentally different geometries: cyclical in M1 and helix-like in SMA. Analogous structure emerged in artificial networks trained without versus with context-related inputs. These findings reveal that the geometries of population activity in SMA and M1 are fundamentally different, with direct implications regarding what computations can be performed by each area.
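Trajectory divergence can be operationalized in a similar worst-case-ratio style: how far apart two states are some horizon into the future, relative to how close they are now. The form below is an assumption for illustration, not necessarily the thesis's exact metric, and the toy trajectories (a cyclical path versus a helix-like path with a context dimension) are likewise illustrative.

```python
import numpy as np

def divergence(X, horizon=10, eps=1e-3):
    """Assumed form of trajectory divergence: for each time t, the
    worst-case ratio of future state dissimilarity (horizon steps ahead)
    to present state dissimilarity.  X has shape (time, dim)."""
    now, later = X[:-horizon], X[horizon:]
    num = ((later[:, None, :] - later[None, :, :]) ** 2).sum(-1)
    den = ((now[:, None, :] - now[None, :, :]) ** 2).sum(-1) + eps
    return (num / den).max(axis=1)

# Two loops followed by an exit: in 2D the loops coincide, so states
# that are identical now lead to different futures (high divergence).
# Adding a slowly increasing "context" dimension separates the loops
# into a helix-like path, keeping divergence low.
t = np.linspace(0, 4 * np.pi, 400, endpoint=False)
loops = np.stack([np.cos(t), np.sin(t)], axis=1)
tail = np.stack([1 + np.linspace(0.05, 2, 100), np.zeros(100)], axis=1)
path2d = np.vstack([loops, tail])                     # cyclical, context lost
z = np.concatenate([t / (4 * np.pi), np.ones(100)])   # context dimension
path3d = np.column_stack([path2d, z])                 # helix-like
print(divergence(path2d).max(), divergence(path3d).max())
```

The extra dimension plays the role of the context signal described above: it is what lets a helix-like geometry track where the system is in a sequence while a purely cyclical geometry cannot.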
The characterization and statistical analysis of trajectory geometry promise to advance our understanding of neural network function by providing interpretable, cohesive explanations for observed population structure. Commonalities between individuals and networks can be uncovered, and more generic, task-invariant, fundamental aspects of the neural response can be explored.
Linear Dynamics of Evidence Integration in Contextual Decision Making
Individual neurons in prefrontal cortex (PFC) exhibit vast complexity in their responses. A central goal in neuroscience is to understand how their collective activity underlies the powerful computations responsible for higher-order cognitive processes. In a recent study (Mante et al., 2013), two monkeys were trained to perform a contextual decision-making task, which required them to selectively integrate the relevant evidence (either the color or the motion coherence of a random-dots stimulus) and disregard the irrelevant one. A non-linear RNN trained to solve the same task found a solution that accounted for the selective-integration computation, which could be understood by linearizing the dynamics of the network in each context. In this study, we took a different approach by explicitly fitting a Linear Dynamical System (LDS) model to the data from each context. We also fitted a novel jointly-factored linear model (JF), equivalent to the LDS but with no dynamical constraints and able to capture arbitrary patterns in time. Both models performed analogously, indicating that PFC data display systematic dynamics consistent with the LDS prior. Motion and color input signals were inferred and spanned independent subspaces. The input subspaces largely overlapped across contexts along dimensions that captured coherence- and coherence-magnitude-related variance. The dynamics changed in each context so that relevant stimuli were strongly amplified. In one of the monkeys, however, the integrated color signal emerged via direct input modulation. The integration took place within subspaces spanned by multiple slow modes. These strongly overlapped along a single dimension across contexts, consistent with a globally identified decision axis. Interestingly, irrelevant inputs were not dynamically discarded, but were also integrated, although to a much lesser extent.
Finally, the model reproduced the main dynamical features of the population trajectories and accurately captured individual PSTHs. Our study suggests that a whole space of sensory-related input signals invariantly modulates PFC responses, and that decision signals emerge as the inputs are shaped by changing circuit dynamics. Our findings imply a novel mechanism by which sensory-related information is selected and integrated for contextual computations.
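The core object here, a linear dynamical system driven by sensory inputs, can be sketched in a few lines: x_{t+1} = A x_t + B u_t, with A recovered by regressing each state onto the previous state and input. This is a minimal least-squares illustration on noiseless synthetic data; the dimensions, the two named input channels, and the regression-based fit are assumptions, not the study's actual fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical LDS: a latent state integrates two input channels
# (standing in for "motion" and "color"): x_{t+1} = A x_t + B u_t.
d, k, T = 4, 2, 2000
A = 0.95 * np.linalg.qr(rng.standard_normal((d, d)))[0]  # stable dynamics
B = rng.standard_normal((d, k))                          # input mapping
U = rng.standard_normal((T, k))                          # input time series
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A @ X[t] + B @ U[t]

# Least-squares recovery of [A | B] from one trajectory:
# X_{t+1} = [A B] @ [x_t; u_t], solved column-wise by lstsq.
Z = np.hstack([X[:-1], U])                   # (T, d + k) regressors
W, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = W[:d].T, W[d:].T

print(np.allclose(A_hat, A), np.allclose(B_hat, B))  # → True True
```

With the fitted A in hand, its slow modes (eigenvalues near 1) are the directions along which the system integrates its inputs, which is the quantity the study examines per context.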