
    How connectivity rules and synaptic properties shape the efficacy of pattern separation in the entorhinal cortex–dentate gyrus–CA3 network

    Get PDF
    Pattern separation is a fundamental brain computation that converts small differences in input patterns into large differences in output patterns. Several synaptic mechanisms of pattern separation have been proposed, including code expansion, inhibition and plasticity; however, which of these mechanisms play a role in the entorhinal cortex (EC)–dentate gyrus (DG)–CA3 circuit, a classical pattern separation circuit, remains unclear. Here we show that a biologically realistic, full-scale EC–DG–CA3 circuit model, including granule cells (GCs) and parvalbumin-positive inhibitory interneurons (PV+-INs) in the DG, is an efficient pattern separator. Both external gamma-modulated inhibition and internal lateral inhibition mediated by PV+-INs substantially contributed to pattern separation. Both local connectivity and fast signaling at GC–PV+-IN synapses were important for maximum effectiveness. Similarly, mossy fiber synapses with conditional detonator properties contributed to pattern separation. By contrast, perforant path synapses with Hebbian synaptic plasticity and direct EC–CA3 connections shifted the network towards pattern completion. Our results demonstrate that the specific properties of cells and synapses optimize higher-order computations in biological networks and might be useful for improving the deep learning capabilities of technical networks.
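For intuition, the following is a minimal sketch of the pattern separation idea (not the paper's model or metric; all sizes, names and thresholds are illustrative): expanding correlated binary input patterns into a larger population and keeping only the most strongly driven units (a crude winner-take-all stand-in for inhibition) reduces pairwise output similarity below input similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_similarity(patterns):
    """Mean Pearson correlation over all pattern pairs (rows = patterns)."""
    c = np.corrcoef(patterns)
    iu = np.triu_indices_from(c, k=1)
    return c[iu].mean()

# Toy "EC inputs": correlated binary activity vectors (5% of bits flipped).
base = rng.random(1000) < 0.1
inputs = np.array([np.where(rng.random(1000) < 0.05, ~base, base)
                   for _ in range(20)], dtype=float)

# Toy "DG outputs": expanded, sparsified recoding via a random projection
# plus a top-1% activation threshold (an inhibition proxy, not the real circuit).
proj = rng.standard_normal((1000, 5000))
drive = inputs @ proj
outputs = (drive > np.quantile(drive, 0.99, axis=1, keepdims=True)).astype(float)

print("input similarity:  %.3f" % pairwise_similarity(inputs))
print("output similarity: %.3f" % pairwise_similarity(outputs))  # lower => separated
```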

    IST Austria Thesis

    Get PDF
    Brain function is mediated by complex dynamical interactions between excitatory and inhibitory cell types. Cholecystokinin-expressing inhibitory cells (CCK-interneurons) are among the least studied types, despite being suspected to play important roles in cognitive processes. We studied the network effects of optogenetic silencing of CCK-interneurons in the hippocampal CA1 area during exploration and sleep states. The cell firing pattern in response to light pulses allowed us to classify the recorded neurons into five classes, including disinhibited and non-responsive pyramidal cells and interneurons, and the inhibited interneurons corresponding to the CCK group. The light application, which inhibited the activity of CCK-interneurons, triggered wider changes in the firing dynamics of cells. We observed rate changes (i.e. remapping) of pyramidal cells during the exploration session in which the light was applied, relative to the previous control session; this remapping was restricted neither in time nor in space to the light delivery. Also, the disinhibited pyramidal cells showed a greater increase in bursting than in single-spike firing rate as a result of CCK silencing. In addition, the firing activity patterns during exploratory periods were more weakly reactivated in sleep for those periods in which CCK-interneurons were silenced than for unaffected periods. Furthermore, light pulses during sleep disrupted the reactivation of recent waking patterns. Hence, silencing CCK neurons during exploration suppressed the reactivation of waking firing patterns in sleep, and CCK-interneuron activity was also required during sleep for the normal reactivation of waking patterns. These findings demonstrate the involvement of CCK cells in reactivation-related memory consolidation. An important part of our analysis was to test the relationship of the identified CCK-interneurons to brain oscillations. Our findings showed that these cells exhibited different oscillatory behaviour under anaesthesia and under natural waking and sleep conditions. We showed that: 1) contrary to past studies performed under anaesthesia, the identified CCK-interneurons fired on the descending portion of the theta cycle during waking exploration; 2) CCK-interneurons preferred phases around the trough of gamma oscillations; 3) contrary to anaesthesia conditions, the average firing rate of the CCK-interneurons increased around the peak activity of sharp-wave ripple (SWR) events in natural sleep, which is congruent with new reports about their functional connectivity. We also found that light-driven CCK-interneuron silencing altered the dynamics of CA1 network oscillatory activity: 1) pyramidal cells negatively shifted their preferred theta phases when the light was applied, while interneuron responses were less consistent; 2) as a population, pyramidal cells negatively shifted their preferred activity during gamma oscillations, although we did not find gamma modulation differences related to the light application when pyramidal cells were subdivided into the disinhibited and unaffected groups; 3) during the peak of SWR events, all but the CCK-interneurons had a reduction in their relative firing rate change during the light application as compared to the change observed at SWR initiation. Finally, regarding the place field activity of the recorded pyramidal neurons, we showed that the disinhibited pyramidal cells had reduced place field similarity, coherence and spatial information, but only during the light application.
The mechanisms behind such observed behaviours might involve eCB signalling and plastic changes at CCK-interneuron synapses. In conclusion, the observed changes related to the light-mediated silencing of CCK-interneurons have revealed characteristics of this interneuron subpopulation that might change our understanding not only of their particular network interactions, but also of current theories about the emergence of certain cognitive processes, such as the place coding needed for navigation or hippocampus-dependent memory consolidation.
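The phase-preference results above rest on standard circular statistics. As a minimal sketch (not the thesis' actual analysis pipeline; all names and parameters are illustrative), a cell's preferred theta or gamma phase can be estimated as the angle of the mean resultant vector of its spike phases, with the vector's length measuring modulation depth:

```python
import numpy as np

def preferred_phase(spike_phases_rad):
    """Return (preferred phase in radians, mean resultant length in [0, 1])."""
    z = np.exp(1j * np.asarray(spike_phases_rad)).mean()
    return np.angle(z), np.abs(z)

rng = np.random.default_rng(1)
# Toy spikes concentrated around one theta phase (von Mises distributed).
phases = rng.vonmises(mu=np.pi / 2, kappa=2.0, size=500)
mu, r = preferred_phase(phases)
print(f"preferred phase: {mu:+.2f} rad, resultant length: {r:.2f}")
# A phase shift under CCK-interneuron silencing would appear as a change in mu.
```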

    Infomorphic networks: Locally learning neural networks derived from partial information decomposition

    Full text link
    Understanding the intricate cooperation among individual neurons in performing complex tasks remains a challenge to date. In this paper, we propose a novel type of model neuron that emulates the functional characteristics of biological neurons by optimizing an abstract local information processing goal. We have previously formulated such a goal function based on principles from partial information decomposition (PID). Here, we present a corresponding parametric local learning rule which serves as the foundation of "infomorphic networks" as a novel concrete model of neural networks. We demonstrate the versatility of these networks by applying them to supervised, unsupervised and memory learning tasks. By leveraging the explanatory power and interpretable nature of the PID framework, these infomorphic networks represent a valuable tool to advance our understanding of cortical function.
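A coarse sketch of the PID bookkeeping behind such a goal function is given below, assuming binary receptive input R, contextual input C and output Y, and using the simple minimum-mutual-information redundancy as a stand-in for the paper's actual PID measure; the goal weights and all names are illustrative, not the published learning rule:

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys)); px = Counter(xs); py = Counter(ys)
    return sum(c / n * np.log2(c / n / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def pid_atoms(r, c, y):
    i_r, i_c = mutual_info(r, y), mutual_info(c, y)
    i_rc = mutual_info(list(zip(r, c)), y)
    red = min(i_r, i_c)                      # MMI redundancy (an assumption)
    return {"unq_r": i_r - red, "unq_c": i_c - red,
            "red": red, "syn": i_rc - i_r - i_c + red}

rng = np.random.default_rng(2)
r = rng.integers(0, 2, 5000); c = rng.integers(0, 2, 5000)
y = r ^ c                                    # XOR output: purely synergistic
atoms = pid_atoms(r.tolist(), c.tolist(), y.tolist())
goal = 1.0 * atoms["syn"] - 0.5 * atoms["red"]   # illustrative goal weights
print(atoms, "goal =", round(goal, 3))
```

An infomorphic neuron would adjust its weights to increase such a weighted combination of atoms; the XOR example shows why the decomposition is informative, since both single-input mutual informations vanish while the joint term does not.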

    Computing with Synchrony

    Get PDF

    Memory capacity in the hippocampus

    Get PDF
    Neural assemblies in the hippocampus encode positions. During rest, the hippocampus replays sequences of neural activity seen during awake behavior. This replay is linked to memory consolidation and mental exploration of the environment. Recurrent networks can be used to model the replay of sequential activity. Multiple sequences can be stored in the synaptic connections. To achieve a high memory capacity, recurrent networks require a pattern separation mechanism. Such a mechanism is global remapping, observed in place cell populations. A place cell fires at a particular position of an environment and is silent elsewhere. Multiple place cells usually cover an environment with their firing fields. Small changes in the environment or context of a behavioral task can cause global remapping, i.e. profound changes in place cell firing fields. Global remapping causes some cells to cease firing, other silent cells to gain a place field, and other place cells to move their firing field and change their peak firing rate. The effect is strong enough to make global remapping a viable pattern separation mechanism. We model two mechanisms that improve the memory capacity of recurrent networks. The effect of inhibition on replay in a recurrent network is modeled using binary neurons and binary synapses. A mean field approximation is used to determine the optimal parameters for the inhibitory neuron population. Numerical simulations of the full model were carried out to verify the predictions of the mean field model. A second model analyzes a hypothesized global remapping mechanism, in which grid cell firing is used as feed-forward input to place cells. Grid cells have multiple firing fields in the same environment, arranged in a hexagonal grid. Grid cells can be used in a model as feed-forward inputs to place cells to produce place fields. In these grid-to-place cell models, shifts in the grid cell firing patterns cause remapping in the place cell population. We analyze the capacity of such a system to create sets of separated patterns, i.e. how many different spatial codes can be generated. The limiting factor is the set of synapses connecting grid cells to place cells. To assess their capacity, we produce different place codes in place and grid cell populations, by shuffling place field positions and shifting the grid fields of grid cells. Then we use Hebbian learning to increase the synaptic weights between grid and place cells for each pair of grid and place codes. The capacity limit is reached when synaptic interference makes it impossible to produce a place code with sufficient spatial acuity from grid cell firing. Additionally, it is desirable to keep the place fields compact, or sparse from a coding standpoint. Of course, as more environments are stored, this sparseness is lost. Interestingly, place cells lose the sparseness of their firing fields much earlier than their spatial acuity. For the sequence replay model we are able to increase capacity in a simulated recurrent network by including an inhibitory population. We show that even in this more complicated case, capacity is improved. We observe oscillations in the average activity of both excitatory and inhibitory neuron populations. The oscillations get stronger at the capacity limit. In addition, at the capacity limit, rather than observing a sudden failure of replay, we find sequences are replayed transiently for a couple of time steps before failing.
Analyzing the remapping model, we find that, as we store more spatial codes in the synapses, first the sparseness of place fields is lost. Only later do we observe a decay in the spatial acuity of the code. We found two ways to maintain sparse place fields while achieving a high capacity: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of it in each environment. We present scaling predictions that suggest that hundreds of thousands of spatial codes can be produced by this pattern separation mechanism. The effect inhibition has on the replay model is two-fold. Capacity is increased, and the graceful transition from full replay to failure allows for higher capacities when using short sequences. Additional mechanisms not explored in this model could be at work to concatenate these short sequences, or could perform more complex operations on them. The interplay of excitatory and inhibitory populations gives rise to oscillations, which are strongest at the capacity limit. These oscillations suggest how a memory mechanism could give rise to hippocampal oscillations like those observed in experiments. In the remapping model we showed that the sparseness of place cell firing constrains the capacity of this pattern separation mechanism. Grid codes outperform place codes regarding spatial acuity, as shown in Mathis et al. (2012). Our model shows that the grid-to-place transformation does not harness the full spatial information of the grid code, in order to maintain sparse place fields. This suggests that the two codes are independent, and communication between the areas might be mostly for synchronization. High spatial acuity seems to be a specialization of the grid code, while the place code is more suitable for memory tasks. In a detailed model of hippocampal replay we show that feedback inhibition can increase the number of sequences that can be replayed. The effect of inhibition on capacity is determined using a mean field model, and the results are verified with numerical simulations of the full network. Transient replay is found at the capacity limit, accompanied by oscillations that resemble sharp wave ripples in the hippocampus.
Hippocampal replay of neuronal activity is linked to memory consolidation and mental exploration. Furthermore, replay is a potential neural correlate of episodic memory. To model hippocampal sequence replay, recurrent neural networks are used. The memory capacity of such networks is of great interest for determining their biological feasibility, and any mechanism that improves capacity has explanatory power. We investigate two such mechanisms. The first mechanism to improve capacity is global, unspecific feedback inhibition in the recurrent network. In a simplified mean field model we show that capacity is indeed improved. The second mechanism that increases memory capacity is pattern separation. In the spatial context of hippocampal place cell firing, global remapping is one way to achieve pattern separation. Changes in the environment or context of a task cause global remapping. During global remapping, place cell firing changes in unpredictable ways: cells shift their place fields, or fully cease firing, and formerly silent cells acquire place fields. Global remapping can be triggered by subtle changes in grid cells that give feed-forward inputs to hippocampal place cells.
We investigate the capacity of the underlying synaptic connections, defined as the number of different environments that can be represented at a given spatial acuity. We find two essential conditions for achieving a high capacity and sparse place fields: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of it in each environment. We also find that the sparsity of place fields, rather than spatial acuity, is the constraining factor of the model. Since the hippocampal place code is sparse, we conclude that the hippocampus does not fully harness the spatial information available in the grid code. The two codes of space might thus serve different purposes.
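To make the grid-to-place storage scheme concrete, here is a minimal sketch under simplifying assumptions (1D space, cosine grid tuning, one place field per cell, a top-5% activation rule as a crude inhibition proxy); all sizes and names are illustrative, not the thesis' actual model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pos, n_grid, n_place, n_env = 100, 200, 100, 5
x = np.arange(n_pos)

# Grid cells: periodic tuning curves; a new environment = random phase shifts.
periods = rng.choice([20, 28, 39], size=n_grid)
phases0 = rng.uniform(0, 2 * np.pi, n_grid)

def grid_code(env_shift):
    return 0.5 + 0.5 * np.cos(2 * np.pi * x[:, None] / periods + phases0 + env_shift)

W = np.zeros((n_place, n_grid))
stored = []
for env in range(n_env):
    G = grid_code(rng.uniform(0, 2 * np.pi, n_grid))   # (n_pos, n_grid)
    P = np.eye(n_pos)[rng.permutation(n_pos)]          # shuffled place fields
    W += P.T @ G                                       # Hebbian association
    stored.append((G, P))

# Recall in environment 0: drive place cells from grid input, keep the top 5%.
G, P = stored[0]
drive = G @ W.T
recalled = drive > np.quantile(drive, 0.95, axis=1, keepdims=True)
print("stored fields recovered:", (recalled & (P > 0)).sum(), "of", n_pos)
```

Increasing `n_env` in this sketch shows the interference effect described above: recall degrades as more environments are superimposed on the same grid-to-place weights.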

    A unified approach to linking experimental, statistical and computational analysis of spike train data

    Get PDF
    A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach – linking statistical, computational, and experimental neuroscience – provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
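As a minimal sketch of the general idea (a bootstrap particle filter tracking a latent slow current that modulates Poisson spiking; this is not the paper's biophysical model, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_particles, bin_s = 300, 1000, 0.05          # 50 ms bins

# Simulated "data": a slow latent current s_t (AR(1)) modulates a Poisson rate.
s_true = np.zeros(T)
for t in range(1, T):
    s_true[t] = 0.99 * s_true[t - 1] + 0.05 * rng.standard_normal()
spikes = rng.poisson(20.0 * np.exp(s_true) * bin_s)

# Bootstrap filter: propagate particles, weight by likelihood, resample.
particles = np.zeros(n_particles)
est = np.zeros(T)
for t in range(T):
    particles = 0.99 * particles + 0.05 * rng.standard_normal(n_particles)
    lam = 20.0 * np.exp(particles) * bin_s
    w = np.exp(spikes[t] * np.log(lam) - lam)    # unnormalized Poisson likelihood
    w /= w.sum()
    est[t] = w @ particles                       # posterior-mean estimate
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print("corr(latent, estimate): %.2f" % np.corrcoef(s_true, est)[0, 1])
```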

    Drift and stabilization of cortical response selectivity

    Get PDF
    Synaptic turnover and long-term functional stability are two seemingly contradictory features of neuronal networks, which show varying expression across different brain regions. Recent studies have shown how both are strongly expressed in the hippocampus, raising the question of how this can be reconciled within a biological network. In this work, I use a data set of neuronal activity recorded over up to several months from mice behaving within a virtual environment to extend and develop methods showing how the activity of hundreds of neurons per individual animal can be reliably tracked and characterized. I employ these methods to analyze network and individual-neuron behavior during the initial formation of a place map from the activity of individual place cells while the animal learns to navigate in a new environment, as well as in the condition of a constant environment over several weeks. In a published study included in this work, we find that map formation is driven by selective stabilization of place cells coding for salient regions, with distinct characteristics for neurons coding for landmark, reward, or other locations. Strikingly, we find that in mice lacking Shank2, an autism spectrum disorder (ASD)-linked gene encoding an excitatory postsynaptic scaffold protein, the characteristic overrepresentation of visual landmarks is missing while the overrepresentation of the reward location remains intact, suggesting different underlying stabilization mechanisms. In the condition of a constant environment, I find that turnover dynamics largely decouple from the location of a place field and are governed by a strong decorrelation of population activity on short time scales (hours to days), followed by long-lasting correlations (days to months) above chance level. In agreement with earlier studies, I find a slow, constant drift in the population of active neurons, while – contrary to earlier results – place fields within the active population are assigned approximately at random. Place field movement across days is governed by periods of stability around an anchor position, interrupted by random, long-range relocation. The data do not suggest the existence of populations of neurons with distinct stability properties, but rather show a continuous range from highly unstable to very stable functional and non-functional activity. Average timescales of reliable contributions to the neural code are on the order of a few days, in agreement with earlier reported timescales of synaptic turnover in the hippocampus.
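Place field quality measures like the spatial information mentioned above are standard in this literature. A minimal sketch of the Skaggs-style bits-per-spike measure (with toy occupancy and rate maps, not the thesis' data pipeline) is:

```python
import numpy as np

def spatial_information(occupancy, rate_map):
    """I = sum_i p_i * (lam_i / lam_bar) * log2(lam_i / lam_bar), bits/spike."""
    p = occupancy / occupancy.sum()
    mean_rate = (p * rate_map).sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * (rate_map / mean_rate) * np.log2(rate_map / mean_rate)
    return np.nansum(terms)          # zero-rate bins contribute nothing

occupancy = np.ones(50)                                    # uniform, 50 bins
field = np.exp(-0.5 * ((np.arange(50) - 25) / 3.0) ** 2)   # a tight place field
print("tight field:", round(spatial_information(occupancy, 10 * field), 2))
print("flat firing:", round(spatial_information(occupancy, np.full(50, 2.0)), 2))
```

A spatially uninformative (flat) rate map yields 0 bits/spike, while a compact field yields a high value, which is what makes the measure useful for tracking stabilization and drift.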

    Geometry and Topology in Memory and Navigation

    Get PDF
    Okinawa Institute of Science and Technology Graduate University, Doctor of Philosophy thesis.
    Geometry and topology offer rich mathematical worlds and perspectives with which to study and improve our understanding of cognitive function. Here I present the following examples: (1) a functional role for inhibitory diversity in associative memories with graphical relationships; (2) improved memory capacity in an associative memory model with setwise connectivity, with implications for glial and dendritic function; (3) safe and efficient group navigation among conspecifics using purely local geometric information; and (4) enhanced geometric and topological methods to probe the relations between neural activity and behaviour. In each work, tools and insights from geometry and topology are used in essential ways to gain improved insights or performance. This thesis contributes to our knowledge of the potential computational affordances of biological mechanisms (such as inhibition and setwise connectivity), while also demonstrating new geometric and topological methods and perspectives with which to deepen our understanding of cognitive tasks and their neural representations.
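One plausible reading of "setwise connectivity" in point (2) is higher-order couplings in a Hopfield-style associative memory. The sketch below augments pairwise couplings with triplet terms purely for illustration; the thesis' actual model may differ, and all names and sizes here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_patterns = 40, 10
xi = rng.choice([-1, 1], size=(n_patterns, n))   # stored patterns

# Hebbian couplings: pairwise (J) and setwise/triplet (T) outer products.
J = np.einsum("pi,pj->ij", xi, xi) / n
T = np.einsum("pi,pj,pk->ijk", xi, xi, xi) / n**2
np.fill_diagonal(J, 0)

def recall(x, steps=20):
    for _ in range(steps):
        h = J @ x + np.einsum("ijk,j,k->i", T, x, x)  # pairwise + setwise drive
        x = np.sign(h + (h == 0))                     # break ties toward +1
    return x

# Corrupt a stored pattern, then let the dynamics clean it up.
probe = xi[0].astype(float)
probe[rng.choice(n, 8, replace=False)] *= -1
print("overlap after recall:", recall(probe) @ xi[0] / n)
```

The point of the illustration is that triplet couplings raise storage capacity well above the pairwise Hopfield limit, which is one way setwise connectivity (as might be implemented by glia or dendrites) could improve associative memory.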