Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes
Humans and animals have a rich and flexible understanding of the physical
world, which enables them to infer the underlying dynamical trajectories of
objects and events, envision plausible future states, and use these inferences to plan and
anticipate the consequences of actions. However, the neural mechanisms
underlying these computations are unclear. We combine a goal-driven modeling
approach with dense neurophysiological data and high-throughput human
behavioral readouts to directly address this question. Specifically, we
construct and evaluate several classes of sensory-cognitive networks to predict
the future state of rich, ethologically-relevant environments, ranging from
self-supervised end-to-end models with pixel-wise or object-centric objectives,
to models that future predict in the latent space of purely static image-based
or dynamic video-based pretrained foundation models. We find strong
differentiation across these model classes in their ability to predict neural
and behavioral data both within and across diverse environments. In particular,
we find that neural responses are currently best predicted by models trained to
predict the future state of their environment in the latent space of pretrained
foundation models optimized for dynamic scenes in a self-supervised manner.
Notably, models that future predict in the latent space of video foundation
models that are optimized to support a diverse range of sensorimotor tasks,
reasonably match both human behavioral error patterns and neural dynamics
across all environmental scenarios that we were able to test. Overall, these
findings suggest that the neural mechanisms and behaviors of primate mental
simulation are thus far most consistent with being optimized to future predict
on dynamic, reusable visual representations that are useful for embodied AI
more generally.
Comment: 17 pages, 6 figures
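The "future prediction in latent space" objective can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in, not the paper's model: a fixed random projection `W_enc` plays the role of the frozen pretrained video encoder, the "scene" is a low-dimensional rotating state rendered into pixels, and a linear least-squares fit plays the role of the learned forward dynamics model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: W_enc acts as a frozen pretrained encoder, and the
# "dynamic scene" is a low-dimensional rotating state rendered into pixels.
D_pix, D_lat, D_state, T = 64, 16, 8, 400
W_enc = rng.normal(size=(D_lat, D_pix)) / np.sqrt(D_pix)

theta = 0.1  # rotation per time step
R = np.kron(np.eye(D_state // 2),
            [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]])
W_dec = rng.normal(size=(D_pix, D_state))

state = rng.normal(size=D_state)
frames = np.empty((T, D_pix))
for t in range(T):
    frames[t] = W_dec @ state   # render the latent scene state into "pixels"
    state = R @ state

# Future prediction in latent space: encode frames with the frozen encoder,
# then fit a one-step forward model z_{t+1} ~= P z_t by least squares.
Z = frames @ W_enc.T
P, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

rel_err = np.mean((Z[:-1] @ P - Z[1:]) ** 2) / np.mean(Z[1:] ** 2)
print(f"relative one-step latent prediction error: {rel_err:.2e}")
```

Because the toy scene is exactly linear in a low-dimensional state, the fitted predictor recovers the dynamics almost perfectly; the point is only to show the training signal living in the encoder's latent space rather than in pixels.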
Meta-learning Hebbian plasticity for continual familiarity detection
Memories are stored and recalled throughout the lifetime of an animal, but many models of memory, including previous models of familiarity detection, do not operate in a continuous manner. We consider a family of models that recognize previously experienced stimuli and, importantly, operate and learn continuously. Specifically, we investigate a learning paradigm in which stimuli are presented in a streaming fashion with repetitions at various intervals, and the subject/model must report whether the current stimulus has previously appeared in the stream. We propose a feedforward network architecture with ongoing plasticity in the synaptic weight matrix. Parameters governing plasticity and static network parameters are meta-learned using gradient descent to optimize the continual familiarity detection process. This architecture, unlike recurrent networks without ongoing plasticity, generalizes easily over a range of repeat intervals even if trained with a single interval. We show that an anti-Hebbian plasticity rule (co-activated neurons cause synaptic depression) enables repeat detection over much longer intervals than a Hebbian one, and this is the solution most readily found by meta-learning. This rule leads to experimentally observed features such as repeat suppression in the hidden layer neurons. In contrast to previous theoretical work, the capacity of these networks remains constant across their lifetimes, meaning that pairs of stimuli with a given temporal separation are stored and recognized as familiar independent of the network's input history. We also consider learning rules that use an external gating circuit to control plasticity. Collectively, these models demonstrate a range of different psychometric curves that we compare to human performance.
Keywords: learning, memory, recognition, familiarity, novelty detection, meta-learning, Hebbian, synaptic plasticity
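The anti-Hebbian mechanism described in the abstract can be sketched in a few lines. This is a toy stand-in, not the meta-learned architecture: the network sizes and learning rate are arbitrary, and the familiarity readout is simply the hidden-layer response energy. Co-activation depresses the weights, so a repeated stimulus evokes a weaker response.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, eta = 100, 50, 0.5   # sizes and learning rate are arbitrary
W = rng.normal(size=(n_hid, n_in)) / np.sqrt(n_in)

def step(W, x):
    """Present one stimulus: read out response energy, then update weights."""
    h = W @ x
    energy = np.mean(h ** 2)        # familiarity readout: response magnitude
    W = W - eta * np.outer(h, x)    # anti-Hebbian: co-activation depresses
    return W, energy

# Streaming presentation: 20 novel stimuli, then stimulus 0 repeats.
stimuli = rng.normal(size=(20, n_in)) / np.sqrt(n_in)
seq = list(range(20)) + [0]
responses = []
for idx in seq:
    W, e = step(W, stimuli[idx])
    responses.append(e)

first, repeat = responses[0], responses[-1]
print(f"novel response {first:.4f} vs repeat response {repeat:.4f}")
# Repeat suppression: the repeated stimulus evokes a weaker response, which
# a threshold on the energy can read out as "familiar".
```

Because the depression acts along the stimulus direction itself while interference from intervening stimuli is spread across many random directions, the suppression survives the intervening presentations, which is the intuition behind the longer repeat intervals the abstract reports for anti-Hebbian rules.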
Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior
The human ability to adaptively implement a wide variety of tasks is thought to emerge from the dynamic transformation of cognitive information. We hypothesized that these transformations are implemented via conjunctive activations in “conjunction hubs”—brain regions that selectively integrate sensory, cognitive, and motor activations. We used recent advances in functional connectivity methods to map the flow of activity between brain regions, constructing a task-performing neural network model from fMRI data during a cognitive control task. We verified the importance of conjunction hubs in cognitive computations by simulating neural activity flow over this empirically-estimated functional connectivity model. These empirically-specified simulations produced above-chance task performance (motor responses) by integrating sensory and task rule activations in conjunction hubs. These findings reveal the role of conjunction hubs in supporting flexible cognitive computations, while demonstrating the feasibility of using empirically-estimated neural network models to gain insight into cognitive computations in the human brain.
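The activity-flow simulation idea can be illustrated on synthetic data. This is a hypothetical toy example, not real fMRI: the generative model, connectivity matrix, and dimensions are all assumptions. Each held-out region's task activation is predicted as the connectivity-weighted sum of all other regions' activations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy data: task activations propagate through a random
# "functional connectivity" matrix. Not real fMRI.
n_regions, n_tasks = 30, 50
FC = rng.normal(size=(n_regions, n_regions)) / np.sqrt(n_regions)
np.fill_diagonal(FC, 0.0)
source = rng.normal(size=(n_tasks, n_regions))
activations = source + source @ FC   # each region mixes in its neighbours

def activity_flow(act, fc, j):
    """Predict region j's activation as the FC-weighted sum of the others."""
    others = np.delete(np.arange(act.shape[1]), j)
    return act[:, others] @ fc[others, j]

# Hold out each region in turn and correlate prediction with observation.
rs = [np.corrcoef(activity_flow(activations, FC, j), activations[:, j])[0, 1]
      for j in range(n_regions)]
print(f"mean held-out prediction accuracy r = {np.mean(rs):.2f}")
```

Prediction accuracy is well above chance but imperfect, since each region also carries activity not inherited from its neighbours; in the empirical setting, that gap is exactly what distinguishes locally generated from flow-inherited task information.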
Biological learning in key-value memory networks
In neuroscience, classical Hopfield networks are the standard biologically plausible model of long-term memory, relying on Hebbian plasticity for storage and attractor dynamics for recall. In contrast, memory-augmented neural networks in machine learning commonly use a key-value mechanism to store and read out memories in a single step. Such augmented networks achieve impressive feats of memory compared to traditional variants, yet it remains unclear whether they can be implemented by biological systems. In our work, we bridge this gap by proposing a set of biologically plausible three-factor plasticity rules for a basic feedforward key-value memory network. Keys are stored in the input-to-hidden synaptic weights by a "non-Hebbian" rule, controlled only by pre-synaptic activity, and modulated by local third factors which represent dendritic spikes. Values are stored in the hidden-to-output weights by a Hebbian rule, with the pre-synaptic neuron selected through softmax attention which represents recurrent inhibition. The same rules are recovered when network parameters are meta-learned. Our network performs on par with classical Hopfield networks on autoassociative memory tasks and can be naturally extended to correlated inputs, continual recall, heteroassociative memory, and sequence learning. Importantly, since memories are stored in slots indexed by hidden layer neurons, unlike the fully distributed representation in the classical Hopfield network, they can be individually selected for extended storage or rapid decay. Finally, our memory network can easily be incorporated into a larger neural system, either as a memory bank for an external controller, or as a fast learning system used in conjunction with a slow one. Overall, our results suggest a compelling alternative to the classical Hopfield network as a model of biological long-term memory.
Keywords: learning, memory, synaptic plasticity, Hebbian, key-value memory, neural network, three-factor plasticity
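The storage and recall rules described in the abstract can be sketched as follows. This is a simplified stand-in: slot selection is sequential here rather than gated by dendritic third factors, and the sizes and inverse temperature are arbitrary. Keys are written into input-to-hidden weights by a rule that depends only on presynaptic activity; values are written into hidden-to-output weights Hebbianly, with a softmax over hidden units playing the role of recurrent inhibition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sizes, inverse temperature, and sequential slot selection are all
# simplifying assumptions for illustration.
d_key, d_val, n_slots, beta = 32, 16, 10, 20.0
W_k = np.zeros((n_slots, d_key))   # input -> hidden weights ("keys")
W_v = np.zeros((d_val, n_slots))   # hidden -> output weights ("values")

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def write(slot, key, value):
    """Store one (key, value) pair with local plasticity rules."""
    global W_k, W_v
    W_k[slot] = key                   # "non-Hebbian": presynaptic term only
    a = softmax(beta * (W_k @ key))   # attention ~ recurrent inhibition
    W_v += np.outer(value, a)         # Hebbian: post (value) x pre (a)

def read(query):
    """One-step recall: attend over stored keys, read out the value."""
    return W_v @ softmax(beta * (W_k @ query))

keys = rng.normal(size=(n_slots, d_key))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
vals = rng.normal(size=(n_slots, d_val))
for i in range(n_slots):
    write(i, keys[i], vals[i])

recalled = read(keys[0])
rel_err = np.linalg.norm(recalled - vals[0]) / np.linalg.norm(vals[0])
print(f"relative recall error: {rel_err:.3f}")
```

Because each memory occupies its own hidden-layer slot, an individual row of `W_k` (and column of `W_v`) could be protected or decayed independently, which is the slot-wise control the abstract contrasts with the distributed Hopfield representation.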
To reverse engineer an entire nervous system
There are many theories of how behavior may be controlled by neurons. Testing
and refining these theories would be greatly facilitated if we could correctly
simulate an entire nervous system so we could replicate the brain dynamics in
response to any stimuli or contexts. Moreover, simulating a nervous system is in
itself one of the big dreams in systems neuroscience. However, doing so
requires us to identify how each neuron's output depends on its inputs, a
process we call reverse engineering. Current efforts at this focus on the
mammalian nervous system, but these brains are mind-bogglingly complex,
allowing only recordings of tiny subsystems. Here we argue that the time is
ripe for systems neuroscience to embark on a concerted effort to reverse
engineer a smaller system and that Caenorhabditis elegans is the ideal
candidate system as the established optophysiology techniques can capture and
control each neuron's activity and scale to hundreds of thousands of
experiments. Data across populations and behaviors can be combined because
across individuals the nervous system is largely conserved in form and
function. Modern machine-learning-based modeling should then enable a
simulation of C. elegans' impressive breadth of brain states and behaviors. The
ability to reverse engineer an entire nervous system will benefit the design of
artificial intelligence systems and all of systems neuroscience, enabling
fundamental insights as well as new approaches for investigations of
progressively larger nervous systems.
Comment: 23 pages, 2 figures, opinion paper
Finishing the euchromatic sequence of the human genome
The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.
26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017
This work was produced as part of the activities of FAPESP Research,
Dissemination and Innovation Center for Neuromathematics (grant
2013/07699-0, S. Paulo Research Foundation). NLK is supported by a
FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially
supported by a CNPq fellowship (grant 306251/2014-0).