
    The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation

    The hippocampal-entorhinal system is important for spatial and relational memory tasks. We formally link these domains, provide a mechanistic understanding of the hippocampal role in generalization, and offer unifying principles underlying many entorhinal and hippocampal cell types. We propose that medial entorhinal cells form a basis describing structural knowledge, and that hippocampal cells link this basis with sensory representations. Adopting these principles, we introduce the Tolman-Eichenbaum machine (TEM). After learning, TEM entorhinal cells display diverse properties resembling apparently bespoke spatial responses, such as grid, band, border, and object-vector cells. TEM hippocampal cells include place and landmark cells that remap between environments. Crucially, TEM also aligns with empirically recorded representations in complex non-spatial tasks. TEM further predicts that hippocampal remapping is not random, as previously believed; rather, structural knowledge is preserved across environments. We confirm this structural transfer over remapping in simultaneously recorded place and grid cells.
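
    A minimal sketch of the factorization the abstract describes: a structural (entorhinal) code updated by actions, bound to sensory observations in a conjunctive (hippocampal) code stored in Hebbian fast weights. All sizes, names, and dynamics below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_g, n_x = 16, 10                      # structural and sensory code sizes

# One structural transition matrix per action (e.g. N/E/S/W on a grid)
W_a = {a: rng.normal(scale=0.3, size=(n_g, n_g)) for a in range(4)}

# Hebbian fast weights standing in for hippocampal memory
M = np.zeros((n_g * n_x, n_g * n_x))

def conjunct(g, x):
    """Conjunctive (hippocampal) code: outer product of structure and sense."""
    return np.outer(g, x).ravel()

g = rng.normal(size=n_g)
for step in range(100):
    a = int(rng.integers(4))
    g = np.tanh(W_a[a] @ g)            # path integration in structural space
    x = np.eye(n_x)[rng.integers(n_x)] # one-hot sensory observation
    p = conjunct(g, x)
    M += np.outer(p, p)                # Hebbian storage of the conjunction

# Retrieval: probe the memory with the structural code and a flat sensory
# prior, then let simple attractor dynamics sharpen the recalled pattern.
probe = conjunct(g, np.full(n_x, 1.0 / n_x))
for _ in range(5):
    probe = np.tanh(M @ probe / (n_g * n_x))
```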

    Spike-Based Reinforcement Learning in Continuous State and Action Space: When Policy Gradient Methods Fail

    Changes of synaptic connections between neurons are thought to be the physiological basis of learning. These changes can be gated by neuromodulators that encode the presence of reward. We study a family of reward-modulated synaptic learning rules for spiking neurons on a learning task in continuous space inspired by the Morris water maze. The synaptic update rule modifies the release probability of synaptic transmission and depends on the timing of presynaptic spike arrival and postsynaptic action potentials, as well as on the membrane potential of the postsynaptic neuron. The family of learning rules includes an optimal rule derived from policy gradient methods as well as reward-modulated Hebbian learning. The synaptic update rule is implemented in a population of spiking neurons using a network architecture that combines feedforward input with lateral connections. Actions are represented by a population of hypothetical action cells with strong Mexican-hat connectivity and are read out at theta frequency. We show that in this architecture, a standard policy gradient rule fails to solve the Morris water maze task, whereas a variant with a Hebbian bias can learn the task within 20 trials, consistent with experiments. This result does not depend on implementation details such as the size of the neuronal populations. Our theoretical approach shows how learning new behaviors can be linked to reward-modulated plasticity at the level of single synapses and makes predictions about the voltage and spike-timing dependence of synaptic plasticity and the influence of neuromodulators such as dopamine. It is an important step towards connecting formal theories of reinforcement learning with neuronal and synaptic properties.
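
    The following is a rate-based stand-in for the family of three-factor rules the abstract describes: an eligibility trace formed from a pre/post coincidence term, gated by reward relative to a baseline. The network architecture, spiking model, and all parameters here are simplified assumptions for illustration, not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 50, 20
w = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weights
elig = np.zeros_like(w)                        # eligibility trace per synapse
tau_e, lr = 20.0, 0.01                         # trace time constant, learning rate
baseline = 0.0                                 # running estimate of mean reward

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def update(x, reward, dt=1.0):
    """One step of a three-factor, reward-modulated update."""
    global elig, baseline, w
    u = w @ x                                  # membrane potential proxy
    rate = sigmoid(u)
    spikes = (rng.random(n_out) < rate).astype(float)
    # Policy-gradient-style coincidence term: spike minus expected rate.
    # Replacing (spikes - rate) with spikes alone introduces the Hebbian
    # bias that the paper finds necessary to solve the water maze task.
    elig += dt * (-elig / tau_e) + np.outer(spikes - rate, x)
    baseline += 0.01 * (reward - baseline)     # subtract a reward baseline
    w += lr * (reward - baseline) * elig       # neuromodulated weight change
    return spikes

spikes = update(rng.random(n_in), reward=1.0)  # example call
```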

    Interacting Turing-Hopf Instabilities Drive Symmetry-Breaking Transitions in a Mean-Field Model of the Cortex: A Mechanism for the Slow Oscillation

    Electrical recordings of brain activity during the transition from wake to anesthetic coma show temporal and spectral alterations that are correlated with gross changes in the underlying brain state. Entry into anesthetic unconsciousness is signposted by the emergence of large, slow oscillations of electrical activity (≲1 Hz) similar to the slow waves observed in natural sleep. Here we present a two-dimensional mean-field model of the cortex in which slow spatiotemporal oscillations arise spontaneously through a Turing (spatial) symmetry-breaking bifurcation that is modulated by a Hopf (temporal) instability. In our model, populations of neurons are densely interlinked by chemical synapses, and by interneuronal gap junctions represented as an inhibitory diffusive coupling. To demonstrate cortical behavior over a wide range of distinct brain states, we explore model dynamics in the vicinity of a general-anesthetic-induced transition from “wake” to “coma.” In this region, the system is poised at a codimension-2 point where competing Turing and Hopf instabilities coexist. We model anesthesia as a moderate reduction in inhibitory diffusion, paired with an increase in inhibitory postsynaptic response, producing a coma state that is characterized by emergent low-frequency oscillations whose dynamics is chaotic in time and space. The effect of long-range axonal white-matter connectivity is probed with the inclusion of a single idealized point-to-point connection. We find that the additional excitation from the long-range connection can provoke seizurelike bursts of cortical activity when inhibitory diffusion is weak, but has little impact on an active cortex. Our proposed dynamic mechanism for the origin of anesthetic slow waves complements—and contrasts with—conventional explanations that require cyclic modulation of ion-channel conductances. We postulate that a similar bifurcation mechanism might underpin the slow waves of natural sleep and comment on the possible consequences of chaotic dynamics for memory processing and learning.
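
    A toy 1-D excitatory/inhibitory mean-field with diffusive inhibitory coupling, to illustrate the ingredients the abstract names (chemical synapses plus gap-junction diffusion on the inhibitory population). The equations and all parameters are invented for this sketch and are far simpler than the paper's 2-D cortical model.

```python
import numpy as np

n, dx, dt = 200, 0.5, 0.01
he = np.zeros(n)                       # mean excitatory soma potential
hi = np.zeros(n)                       # mean inhibitory soma potential
D = 0.8                                # inhibitory gap-junction diffusion

def f(h):
    """Sigmoidal potential-to-rate transfer function."""
    return 1.0 / (1.0 + np.exp(-(h - 1.0)))

rng = np.random.default_rng(2)
for t in range(20000):
    # Discrete Laplacian on a ring: diffusive coupling among inhibitory cells
    lap = (np.roll(hi, 1) - 2 * hi + np.roll(hi, -1)) / dx**2
    he += dt * (-he + 2.0 * f(he) - 2.5 * f(hi)) + 0.01 * rng.normal(size=n)
    hi += dt * (-hi + 2.0 * f(he) - 1.0 * f(hi) + D * lap)

# Reducing D (the stand-in for anesthesia weakening inhibitory diffusion)
# pushes such systems toward the Turing regime, where slow spatial
# patterning of activity can emerge.
```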

    Functional relevance of homeostatic intrinsic plasticity in neurons and networks

    Maintaining the intrinsic excitability of neurons is crucial for stable brain activity. This can be achieved by the homeostatic regulation of membrane ion channel conductances, although it is not well understood how these processes influence broader aspects of neuron and network function. One of the many mechanisms which contribute towards this task is the modulation of potassium channel conductances by activity-dependent nitric oxide signalling. Here, we first investigate this mechanism in a conductance-based neuron model. By fitting the model to experimental data we find that nitric oxide signalling improves synaptic transmission fidelity at high firing rates, but that this improvement comes with an increased metabolic cost of action potentials. Although the improvement in function had been observed previously in experiments, the metabolic constraint was unknown. This additional constraint provides a plausible explanation for the selective activation of nitric oxide signalling only at high firing rates. In addition to mediating homeostatic control of intrinsic excitability, nitric oxide can diffuse freely across cell membranes, providing a unique mechanism for neurons to communicate within a network, independent of synaptic connectivity. We next conduct a theoretical investigation of the distinguishing roles of diffusive homeostasis mediated by nitric oxide in comparison with canonical non-diffusive homeostasis in cortical networks. We find that both forms of homeostasis robustly maintain stable activity. However, the resulting networks differ: diffusive homeostasis maintains substantial heterogeneity in the activity levels of individual neurons, a feature disrupted in networks with non-diffusive homeostasis. This results in networks capable of representing input heterogeneity and of responding linearly over a broader range of inputs than those undergoing non-diffusive homeostasis. We further show that diffusive homeostasis interferes less than non-diffusive homeostasis with the synaptic weight dynamics of networks undergoing Hebbian plasticity. Overall, these results suggest a novel homeostatic mechanism for maintaining stable network activity while simultaneously minimising metabolic cost and conserving network functionality.
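
    A compact sketch of the diffusive-versus-non-diffusive contrast described above. Diffusion is idealized here as each neuron sensing the population-mean rate rather than its own; the rate model, thresholds, and parameters are assumptions for illustration, not the thesis's conductance-based model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
drive = rng.normal(1.0, 0.5, size=n)       # heterogeneous external drive
target, eta = 0.5, 0.01                    # homeostatic set point, rate

def rates(threshold):
    """Rectified-linear firing rates given per-neuron thresholds."""
    return np.maximum(drive - threshold, 0.0)

# Non-diffusive homeostasis: each neuron senses only its own rate,
# so every rate is driven to the target and heterogeneity is erased.
thr_local = np.zeros(n)
for _ in range(5000):
    thr_local += eta * (rates(thr_local) - target)

# Diffusive (nitric-oxide-like) homeostasis: each neuron senses a
# spatially averaged signal, idealized here as the population mean,
# so the mean rate is regulated while individual differences persist.
thr_diff = np.zeros(n)
for _ in range(5000):
    thr_diff += eta * (rates(thr_diff).mean() - target)

print("rate spread, non-diffusive:", rates(thr_local).std())  # near zero
print("rate spread, diffusive:    ", rates(thr_diff).std())   # preserved
```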

    Models of spatial representation in the medial entorhinal cortex

    High-level cognitive abilities such as memory, navigation, and decision making rely on the communication between the hippocampal formation and the neocortex. At the interface between these two brain regions is the entorhinal cortex, a multimodal association area where neurons with remarkable representations of self-location have been discovered: the grid cells. Grid cells are neurons that fire according to the position of an animal in its environment and whose firing fields form a periodic triangular pattern. Grid cells are thought to support an animal's navigation and spatial memory, but the cellular mechanisms that generate their tuning are still unknown. In this thesis, I study computational models of neural circuits to explain the emergence, inheritance, and amplification of grid-cell activity. In the first part of the thesis, I focus on the initial formation of grid-cell tuning. I embrace the idea that periodic representations of space could emerge via a competition between persistently-active spatial inputs and the reluctance of a neuron to fire for long stretches of time. Building upon previous theoretical work, I propose a single-cell model that generates grid-like activity solely from spatially-irregular inputs, spike-rate adaptation, and Hebbian synaptic plasticity. In the second part of the thesis, I study the inheritance and amplification of grid-cell activity. Motivated by the architecture of entorhinal microcircuits, I investigate how feed-forward and recurrent connections affect grid-cell tuning. I show that grids can be inherited across neuronal populations, and that both feed-forward and recurrent connections can improve the regularity of spatial firing. Finally, I show that a connectivity supporting these functions could self-organize in an unsupervised manner. Altogether, this thesis contributes to a better understanding of the principles governing the neuronal representation of space in the medial entorhinal cortex.
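
    A 1-D reduction of the single-cell mechanism named in the abstract: irregular spatial inputs, spike-rate adaptation, and Hebbian plasticity with weight normalization. Track geometry, input tuning, and all constants are illustrative assumptions, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, L = 200, 100.0                   # input cells, 1-D track length
centers = rng.uniform(0, L, n_in)      # irregular input field centers
sigma = 5.0                            # input field width

def inputs(pos):
    """Spatially irregular input tuning curves on a circular track."""
    d = np.abs(pos - centers)
    d = np.minimum(d, L - d)           # periodic boundary
    return np.exp(-d**2 / (2 * sigma**2))

w = rng.random(n_in)
w /= np.linalg.norm(w)
adapt, tau_a, lr = 0.0, 20.0, 0.005    # adaptation state and constants

pos = 0.0
for t in range(50000):
    pos = (pos + 0.37) % L             # steady exploration of the track
    x = inputs(pos)
    r = max(w @ x - adapt, 0.0)        # output rate, suppressed by adaptation
    adapt += (r - adapt) / tau_a       # spike-rate adaptation
    w += lr * r * x                    # Hebbian potentiation
    w /= np.linalg.norm(w)             # normalization bounds the weights

# Because adaptation penalizes sustained firing, the learned weights come
# to favor inputs whose fields tile the track periodically, yielding
# periodic (grid-like) tuning in this 1-D reduction.
```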

    Memory capacity in the hippocampus

    Neural assemblies in the hippocampus encode positions. During rest, the hippocampus replays sequences of neural activity seen during awake behavior. This replay is linked to memory consolidation and mental exploration of the environment. Recurrent networks can be used to model the replay of sequential activity, with multiple sequences stored in the synaptic connections. To achieve a high memory capacity, recurrent networks require a pattern separation mechanism. One such mechanism is global remapping, observed in place cell populations. A place cell fires at a particular position of an environment and is silent elsewhere; multiple place cells usually cover an environment with their firing fields. Small changes in the environment or the context of a behavioral task can cause global remapping, i.e. profound changes in place cell firing fields: some cells cease firing, formerly silent cells gain a place field, and other place cells move their firing field and change their peak firing rate. The effect is strong enough to make global remapping a viable pattern separation mechanism.
    We model two mechanisms that improve the memory capacity of recurrent networks. First, the effect of inhibition on replay in a recurrent network is modeled using binary neurons and binary synapses. A mean-field approximation is used to determine the optimal parameters for the inhibitory neuron population, and numerical simulations of the full model verify the predictions of the mean-field model. A second model analyzes a hypothesized global remapping mechanism in which grid cell firing provides feed-forward input to place cells. Grid cells have multiple firing fields in the same environment, arranged in a hexagonal grid, and can serve in a model as feed-forward inputs to place cells to produce place fields. In these grid-to-place cell models, shifts in the grid cell firing patterns cause remapping in the place cell population. We analyze the capacity of such a system to create sets of separated patterns, i.e. how many different spatial codes can be generated. The limiting factor is the set of synapses connecting grid cells to place cells. To assess their capacity, we produce different place codes in place and grid cell populations by shuffling place field positions and shifting the grid fields of grid cells. We then use Hebbian learning to increase the synaptic weights between grid and place cells for each pair of grid and place codes. The capacity limit is reached when synaptic interference makes it impossible to produce a place code with sufficient spatial acuity from grid cell firing. It is also desirable to keep the place fields compact, i.e. sparse from a coding standpoint; as more environments are stored, this sparseness is lost. Interestingly, place cells lose the sparseness of their firing fields much earlier than their spatial acuity.
    For the sequence replay model, we are able to increase capacity in a simulated recurrent network by including an inhibitory population. We show that even in this more complicated case, capacity is improved. We observe oscillations in the average activity of both excitatory and inhibitory neuron populations, which grow stronger at the capacity limit. In addition, at the capacity limit, rather than a sudden failure of replay, we find that sequences are replayed transiently for a couple of time steps before failing.
    Analyzing the remapping model, we find that, as we store more spatial codes in the synapses, the sparseness of place fields is lost first; only later do we observe a decay in the spatial acuity of the code. We found two ways to maintain sparse place fields while achieving a high capacity: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of it in each environment. We present scaling predictions suggesting that hundreds of thousands of spatial codes can be produced by this pattern separation mechanism.
    The effect of inhibition on the replay model is two-fold: capacity is increased, and the graceful transition from full replay to failure allows for higher capacities when using short sequences. Additional mechanisms not explored in this model could be at work to concatenate these short sequences, or could perform more complex operations on them. The interplay of excitatory and inhibitory populations gives rise to oscillations, which are strongest at the capacity limit, illustrating how a memory mechanism can give rise to hippocampal oscillations like those observed in experiments. In the remapping model, we showed that the sparseness of place cell firing constrains the capacity of this pattern separation mechanism. Grid codes outperform place codes in spatial acuity, as shown in Mathis et al. (2012). Our model shows that the grid-to-place transformation does not harness the full spatial information of the grid code, in order to maintain sparse place fields. This suggests that the two codes are independent, and that communication between the areas might be mostly for synchronization. High spatial acuity seems to be a specialization of the grid code, while the place code is more suitable for memory tasks.
    In summary, hippocampal replay of neuronal activity is linked to memory consolidation and mental exploration, and is a potential neural correlate of episodic memory. The memory capacity of the recurrent networks used to model sequence replay bears on their biological feasibility, and any mechanism that improves capacity has explanatory power. We investigate two such mechanisms. The first is global, unspecific feedback inhibition in the recurrent network: in a detailed model of hippocampal replay we show, using a simplified mean-field model verified by numerical simulations of the full network, that feedback inhibition can increase the number of sequences that can be replayed. Transient replay is found at the capacity limit, accompanied by oscillations that resemble sharp wave ripples in the hippocampus. The second mechanism is pattern separation. In the spatial context of hippocampal place cell firing, global remapping is one way to achieve pattern separation: changes in the environment or the context of a task cause place cells to shift their fields or cease firing entirely, while formerly silent cells acquire place fields. Global remapping can be triggered by subtle changes in the grid cells that provide feed-forward input to hippocampal place cells. We investigate the capacity of the underlying synaptic connections, defined as the number of different environments that can be represented at a given spatial acuity. We find two conditions essential for achieving a high capacity with sparse place fields: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of it in each environment. We also find that the sparsity of place fields, rather than spatial acuity, is the constraining factor of the model. Since the hippocampal place code is sparse, we conclude that the hippocampus does not fully harness the spatial information available in the grid code; the two codes of space might thus serve different purposes.
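
    A toy probe of the grid-to-place capacity idea discussed above: Hebbian association of shifted grid codes with shuffled one-hot place codes across many "environments", followed by a winner-take-all recall check. The code format, shift-based remapping, and all sizes are assumptions for illustration, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_place, n_pos = 120, 200, 50

def grid_code(shifts):
    """Toy binary grid code: thresholded cosines with per-cell shifts."""
    periods = 5 + (np.arange(n_grid) % 4) * 3
    pos = np.arange(n_pos)
    phase = 2 * np.pi * (pos[None, :] - shifts[:, None]) / periods[:, None]
    return (np.cos(phase) > 0.7).astype(float).T   # shape (n_pos, n_grid)

W = np.zeros((n_place, n_grid))                    # grid-to-place synapses
stored = []
for env in range(30):                              # store 30 environments
    G = grid_code(rng.uniform(0, 20, n_grid))      # shifted grids = remapping
    P = np.zeros((n_pos, n_place))                 # one place cell per position
    P[np.arange(n_pos), rng.integers(n_place, size=n_pos)] = 1.0
    W += P.T @ G                                   # Hebbian association
    stored.append((G, P))

# Recall in the first environment: winner-take-all over place cells.
G, P = stored[0]
recalled = (W @ G.T).argmax(axis=0)                # best place cell per position
correct = (recalled == P.argmax(axis=1)).mean()
print(f"positions recovered: {correct:.2f}")       # degrades as more are stored
```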