Dimensionality reduction beyond neural subspaces with slice tensor component analysis
Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct 'covariability classes' that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
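To make the notion of 'covariability classes' concrete, the following is a minimal NumPy sketch, not the authors' implementation: all shapes and variable names are illustrative. It builds a synthetic trials x neurons x time tensor as a mixture of the three slice types that sliceTCA is designed to demix, each being one loading vector along a single mode times an unconstrained matrix slice over the other two modes.

    import numpy as np

    # Illustrative tensor sizes: trials (K), neurons (N), time bins (T).
    K, N, T = 30, 50, 100
    rng = np.random.default_rng(0)

    # Neuron-slicing component: a loading vector over neurons multiplies a
    # shared (trial x time) slice -- the classic "fixed neural subspace" picture.
    u_neuron = rng.normal(size=N)
    slice_trial_time = rng.normal(size=(K, T))
    comp_neuron = np.einsum('n,kt->knt', u_neuron, slice_trial_time)

    # Trial-slicing component: a loading over trials multiplies a
    # (neuron x time) slice, e.g. a stereotyped sequence whose gain
    # co-fluctuates across trials.
    u_trial = rng.normal(size=K)
    slice_neuron_time = rng.normal(size=(N, T))
    comp_trial = np.einsum('k,nt->knt', u_trial, slice_neuron_time)

    # Time-slicing component: a loading over time multiplies a
    # (trial x neuron) slice, capturing slowly evolving trial-by-neuron
    # structure.
    u_time = rng.normal(size=T)
    slice_trial_neuron = rng.normal(size=(K, N))
    comp_time = np.einsum('t,kn->knt', u_time, slice_trial_neuron)

    # The observed data tensor mixes all three covariability classes.
    X = comp_neuron + comp_trial + comp_time
    assert X.shape == (K, N, T)

Because each slice is an unconstrained matrix, a single trial-slicing component can already represent a stereotyped sequence across neurons and time, structure that a fixed-neural-subspace model would need many components to approximate; this is consistent with the abstract's claim of capturing more task-relevant structure with fewer components.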
Low Tensor Rank Learning of Neural Dynamics
Learning relies on coordinated synaptic changes in recurrently connected populations of neurons. Therefore, understanding the collective evolution of synaptic connectivity over learning is a key challenge in neuroscience and machine learning. In particular, recent work has shown that the weight matrices of task-trained RNNs are typically low rank, but how this low-rank structure unfolds over learning is unknown. To address this, we investigate the rank of the 3-tensor formed by the weight matrices throughout learning. By fitting RNNs of varying rank to large-scale neural recordings during a motor learning task, we find that the inferred weights are low-tensor-rank and therefore evolve over a fixed low-dimensional subspace throughout the entire course of learning. We next validate the observation of low-tensor-rank learning on an RNN trained to solve the same task by performing a low-tensor-rank decomposition directly on the ground-truth weights, and by showing that the method we applied to the data faithfully recovers this low-rank structure. Finally, we present a set of mathematical results bounding the matrix and tensor ranks of gradient-descent learning dynamics, which show that low-tensor-rank weights emerge naturally in RNNs trained to solve low-dimensional tasks. Taken together, our findings provide novel constraints on the evolution of population connectivity over learning in both biological and artificial neural networks, and enable reverse engineering of learning-induced changes in recurrent network dynamics from large-scale neural recordings.
Comment: The last two authors contributed equally.
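As a rough illustration of the kind of analysis described above, here is a self-contained NumPy sketch that stacks per-epoch recurrent weight matrices into a 3-tensor and fits CP (canonical polyadic) decompositions of increasing rank with plain alternating least squares. This is a toy under stated assumptions, not the authors' fitting procedure, and all sizes are made up.

    import numpy as np

    def khatri_rao(A, B):
        # Column-wise Khatri-Rao product: kron of matching columns of A and B.
        I, R = A.shape
        J, _ = B.shape
        return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

    def cp_als(X, rank, n_iter=200, seed=0):
        # Plain alternating-least-squares CP decomposition of a 3-way tensor:
        # X[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r].
        I, J, K = X.shape
        rng = np.random.default_rng(seed)
        A = rng.normal(size=(I, rank))
        B = rng.normal(size=(J, rank))
        C = rng.normal(size=(K, rank))
        X1 = X.transpose(0, 2, 1).reshape(I, -1)  # mode-1 unfolding
        X2 = X.transpose(1, 2, 0).reshape(J, -1)  # mode-2 unfolding
        X3 = X.transpose(2, 1, 0).reshape(K, -1)  # mode-3 unfolding
        for _ in range(n_iter):
            A = np.linalg.lstsq(khatri_rao(C, B), X1.T, rcond=None)[0].T
            B = np.linalg.lstsq(khatri_rao(C, A), X2.T, rcond=None)[0].T
            C = np.linalg.lstsq(khatri_rao(B, A), X3.T, rcond=None)[0].T
        return A, B, C

    # Synthetic stand-in for recurrent weights saved at E training epochs:
    # an (E x N x N) tensor built to have true CP rank 3.
    E, N, true_rank = 40, 20, 3
    rng = np.random.default_rng(1)
    Ws = sum(np.einsum('e,i,j->eij',
                       rng.normal(size=E), rng.normal(size=N),
                       rng.normal(size=N))
             for _ in range(true_rank))

    for r in (1, 2, 3, 4):
        A, B, C = cp_als(Ws, r)
        Ws_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
        err = np.linalg.norm(Ws - Ws_hat) / np.linalg.norm(Ws)
        print(f"CP rank {r}: relative reconstruction error {err:.3f}")

If the relative error collapses at a small rank, the trajectory is consistent with low-tensor-rank learning: each epoch's weight matrix is then a combination of a few fixed rank-one matrices b_r c_r^T, with the epoch-mode factors A giving its coordinates in that fixed low-dimensional subspace.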