
    Structural network heterogeneities and network dynamics: a possible dynamical mechanism for hippocampal memory reactivation

    The hippocampus has the capacity for reactivating recently acquired memories [1-3], and it is hypothesized that one function of sleep reactivation is to facilitate the consolidation of novel memory traces [4-11]. The dynamic and network processes underlying such reactivation remain, however, unknown. We show that such reactivation, characterized by local, self-sustained activity of a network region, may be an inherent property of a recurrent excitatory-inhibitory network with a heterogeneous structure. Entry into the reactivation phase is mediated through a physiologically feasible regulation of global excitability and external input sources, while the reactivated component of the network is formed through network heterogeneities induced during learning. We show that the structural changes needed for robust reactivation of a given network region are well within known physiological parameters [12,13]. Comment: 16 pages, 5 figures
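    The abstract's central claim, that a locally strengthened region of a recurrent network can sustain its own activity after input is withdrawn, can be illustrated with a minimal single-population rate model. This is a hedged sketch under invented parameters (the sigmoid gain, thresholds, and weights below are illustrative assumptions, not the paper's network):

```python
import math

def run_region(w_rec, I_pulse, dt=0.1, T=200.0, tau=10.0):
    """Single-population rate model r' = (-r + f(w_rec*r + I)) / tau
    with a sigmoid gain f. A transient input pulse is applied for the
    first 50 ms; the returned value is the rate long after it ends."""
    f = lambda x: 1.0 / (1.0 + math.exp(-4.0 * (x - 1.0)))
    r = 0.0
    for i in range(int(T / dt)):
        I = I_pulse if i * dt < 50.0 else 0.0
        r += dt * (-r + f(w_rec * r + I)) / tau
    return r

# Weak recurrence: activity decays back to baseline after the pulse.
weak = run_region(w_rec=0.5, I_pulse=2.0)
# Strong (learning-induced) recurrence: the high-rate state persists,
# i.e. the region self-sustains without external drive.
strong = run_region(w_rec=2.5, I_pulse=2.0)
```

    Increasing w_rec plays the role of the learning-induced structural heterogeneity: it creates a second, high-activity attractor that outlives the input.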

    Learning in a Unitary Coherent Hippocampus


    Learning from humans: combining imitation and deep reinforcement learning to accomplish human-level performance on a virtual foraging task

    We develop a method to learn bio-inspired foraging policies using human data. We conduct an experiment in which humans are virtually immersed in an open-field foraging environment and are trained to collect the highest amount of rewards. A Markov Decision Process (MDP) framework is introduced to model the human decision dynamics. Then, Imitation Learning (IL) based on maximum likelihood estimation is used to train neural networks (NNs) that map observed states to human decisions. The results show that passive imitation substantially underperforms humans. We further refine the human-inspired policies via Reinforcement Learning (RL), using on-policy algorithms that are more suitable for learning from pre-trained networks. We show that the combination of IL and RL can match human results and that good performance strongly depends on an egocentric representation of the environment. The developed methodology can be used to efficiently learn policies for unmanned vehicles that have to solve missions in an open-field environment. Comment: 24 pages, 15 figures
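    The IL step, maximum-likelihood fitting of a policy to observed state-action pairs, amounts to behavioural cloning. A minimal sketch with a logistic policy follows; the two-feature states and toy demonstrations are invented for illustration and are not the paper's task or architecture:

```python
import math

def policy(w, b, s):
    """Probability pi(a=1 | s) under a logistic policy."""
    return 1.0 / (1.0 + math.exp(-(w[0] * s[0] + w[1] * s[1] + b)))

def train_bc(data, lr=0.5, epochs=200):
    """Behavioural cloning: maximise the log-likelihood of observed
    (state, action) pairs by stochastic gradient ascent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for s, a in data:
            g = a - policy(w, b, s)   # gradient of log-likelihood w.r.t. logit
            w[0] += lr * g * s[0]
            w[1] += lr * g * s[1]
            b += lr * g
    return w, b

# Toy demonstrations: the human chooses a=1 when feature 0 dominates
# the state, and a=0 when feature 1 dominates.
demos = [((1.0, 0.0), 1), ((0.9, 0.1), 1), ((0.0, 1.0), 0), ((0.1, 0.8), 0)]
w, b = train_bc(demos)
```

    The same maximum-likelihood objective generalises directly to a softmax output layer of a neural network over the human's discrete actions.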

    Recognition without identification, erroneous familiarity, and déjà vu

    Déjà vu is characterized by the recognition of a situation concurrent with the awareness that this recognition is inappropriate. Although some forms of déjà vu resolve in favor of the inappropriate recognition and therefore have behavioral consequences, typical déjà vu experiences resolve in favor of the awareness that the sensation of recognition is inappropriate. The resultant lack of behavioral modification associated with typical déjà vu means that clinicians and experimenters rely heavily on self-report when observing the experience. In this review, we focus on recent déjà vu research. We consider issues facing neuropsychological, neuroscientific, and cognitive experimental frameworks attempting to explore and experimentally generate the experience. In doing this, we suggest the need for more experimentation and a more cautious interpretation of research findings, particularly as many techniques being used to explore déjà vu are in the early stages of development.

    The role of ongoing dendritic oscillations in single-neuron dynamics

    The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as temporally local, near-instantaneous mappings from the current input of the cell to its current output, brought about by somatic summation of dendritic contributions that are generated in spatially localized functional compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations, and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought.
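    The weakly-coupled-oscillator analysis the abstract refers to can be previewed with the simplest possible case: two phase oscillators with sine coupling. This is a Kuramoto-style caricature, not the paper's cable-theoretic derivation; the frequencies and coupling strength below are arbitrary:

```python
import math

def simulate_pair(w1, w2, K, dt=0.001, steps=20000):
    """Euler-integrate two phase oscillators with sine coupling:
    dtheta_i/dt = w_i + K * sin(theta_j - theta_i).
    Returns the final phase difference in [0, 2*pi)."""
    th1, th2 = 0.0, 2.0          # start well out of phase
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return (th2 - th1) % (2 * math.pi)

# Two 8 Hz oscillators with strong mutual coupling converge to the
# in-phase locked state (phase difference near 0).
diff = simulate_pair(2 * math.pi * 8.0, 2 * math.pi * 8.0, K=5.0)
locked = min(diff, 2 * math.pi - diff)
```

    The phase difference phi = theta2 - theta1 obeys phi' = (w2 - w1) - 2K*sin(phi), so with equal frequencies the in-phase state phi = 0 is the stable fixed point, a toy analogue of the phase-locking states derived in the paper for oscillating dendritic compartments.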

    Assumptions behind grammatical approaches to code-switching: when the blueprint is a red herring

    Many of the so-called ‘grammars’ of code-switching (CS) are based on various underlying assumptions, e.g. that informal speech can be adequately or appropriately described in terms of ‘grammar’; that deep, rather than surface, structures are involved in code-switching; that one ‘language’ is the ‘base’ or ‘matrix’; and that constraints derived from existing data are universal and predictive. We question these assumptions on several grounds. First, ‘grammar’ is arguably distinct from the processes driving speech production. Second, the role of grammar is mediated by the variable, poly-idiolectal repertoires of bilingual speakers. Third, in many instances of CS the notion of a ‘base’ system is either irrelevant, or fails to explain the facts. Fourth, sociolinguistic factors frequently override ‘grammatical’ factors, as evidence from the same language pairs in different settings has shown. No principles proposed to date account for all the facts, and it seems unlikely that ‘grammar’, as conventionally conceived, can provide definitive answers. We conclude that rather than seeking universal, predictive grammatical rules, research on CS should focus on the variability of bilingual grammars.

    Grid Cells, Place Cells, and Geodesic Generalization for Spatial Reinforcement Learning

    Reinforcement learning (RL) provides an influential characterization of the brain's mechanisms for learning to make advantageous choices. An important problem, though, is how complex tasks can be represented in a way that enables efficient learning. We consider this problem through the lens of spatial navigation, examining how two of the brain's location representations—hippocampal place cells and entorhinal grid cells—are adapted to serve as basis functions for approximating value over space for RL. Although much previous work has focused on these systems' roles in combining upstream sensory cues to track location, revisiting these representations with a focus on how they support this downstream decision function offers complementary insights into their characteristics. Rather than localization, the key problem in learning is generalization between past and present situations, which may not match perfectly. Accordingly, although neural populations collectively offer a precise representation of position, our simulations of navigational tasks verify the suggestion that RL gains efficiency from the more diffuse tuning of individual neurons, which allows learning about rewards to generalize over longer distances given fewer training experiences. However, work on generalization in RL suggests the underlying representation should respect the environment's layout. In particular, although it is often assumed that neurons track location in Euclidean coordinates (that a place cell's activity declines “as the crow flies” away from its peak), the relevant metric for value is geodesic: the distance along a path, around any obstacles. We formalize this intuition and present simulations showing how Euclidean, but not geodesic, representations can interfere with RL by generalizing inappropriately across barriers. Our proposal that place and grid responses should be modulated by geodesic distances suggests novel predictions about how obstacles should affect spatial firing fields, which provides a new viewpoint on data concerning both spatial codes.
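    The Euclidean-versus-geodesic distinction is easy to make concrete: on a gridworld with a barrier, the geodesic (along-path) distance between two cells can greatly exceed their straight-line distance. A small sketch follows; the grid and barrier layout are invented for illustration:

```python
import math
from collections import deque

def geodesic_distances(grid, start):
    """Breadth-first search over 4-connected open cells (grid value 0),
    returning path-length distances from `start`; cells marked 1 are
    impassable barriers."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

# A wall of 1s separates two cells that are close "as the crow flies".
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
dist = geodesic_distances(grid, (0, 0))
euclid = math.hypot(0 - 0, 0 - 2)   # 2.0, straight through the wall
geo = dist[(0, 2)]                  # 6 steps around it
```

    A Euclidean place-cell basis would generalise value across the wall (the two cells look close), whereas a geodesic basis respects the detour; the former is the inappropriate generalisation the paper's simulations demonstrate.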

    Evaluation of the Oscillatory Interference Model of Grid Cell Firing through Analysis and Measured Period Variance of Some Biological Oscillators

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators is expected to maintain a stable grid for approximately t = 5µ³/(4πσ)² seconds, where µ is the mean period of an oscillator in seconds and σ² is its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and in layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex, and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit-level effects that might reduce oscillator variability. Further implications for grid cell models are discussed.
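    The stability criterion t = 5µ³/(4πσ)² is straightforward to evaluate numerically. A small helper, with example values chosen purely for illustration (a roughly 8 Hz theta-band oscillator; these are not measurements from the paper):

```python
import math

def stability_time(mu, sigma2):
    """Expected time (s) a pair of oscillators maintains a stable grid:
    t = 5 * mu**3 / (4 * pi * sigma)**2, with mu the mean period (s)
    and sigma2 the period variance (s^2)."""
    sigma = math.sqrt(sigma2)
    return 5.0 * mu**3 / (4.0 * math.pi * sigma)**2

# Illustrative values: mean period 125 ms (8 Hz), variance 1e-4 s^2.
t = stability_time(0.125, 1e-4)     # well under one second
```

    Note that t scales with the cube of the mean period and inversely with the period variance, which is why the noisy biological oscillators examined in the paper fall so far short of the minutes-long stability of recorded grid cells.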

    Neural models that convince: Model hierarchies and other strategies to bridge the gap between behavior and the brain.

    Computational modeling of the brain holds great promise as a bridge from brain to behavior. To fulfill this promise, however, it is not enough for models to be 'biologically plausible': models must be structurally accurate. Here, we analyze what this entails for so-called psychobiological models, models that address behavior as well as brain function in some detail. Structural accuracy may be supported by (1) a model's a priori plausibility, which comes from a reliance on evidence-based assumptions, (2) fitting existing data, and (3) the derivation of new predictions. All three sources of support require modelers to be explicit about the ontology of the model, and require the existence of data constraining the modeling. For situations in which such data are only sparsely available, we suggest a new approach. If several models are constructed that together form a hierarchy of models, higher-level models can be constrained by lower-level models, and low-level models can be constrained by behavioral features of the higher-level models. Modeling the same substrate at different levels of representation, as proposed here, thus has benefits that exceed the merits of each model in the hierarchy on its own

    An analysis of waves underlying grid cell firing in the medial entorhinal cortex

    Layer II stellate cells in the medial entorhinal cortex (MEC) express hyperpolarisation-activated cyclic-nucleotide-gated (HCN) channels that allow for rebound spiking via an I_h current in response to hyperpolarising synaptic input. A computational modelling study by Hasselmo [2013 Neuronal rebound spiking, resonance frequency and theta cycle skipping may contribute to grid cell firing in medial entorhinal cortex. Phil. Trans. R. Soc. B 369: 20120523] showed that an inhibitory network of such cells can support periodic travelling waves with a period that is controlled by the dynamics of the I_h current. Hasselmo has suggested that these waves can underlie the generation of grid cells, and that the known difference in I_h resonance frequency along the dorsal to ventral axis can explain the observed size and spacing between grid cell firing fields. Here we develop a biophysical spiking model within a framework that allows for analytical tractability. We combine the simplicity of integrate-and-fire neurons with a piecewise linear caricature of the gating dynamics for HCN channels to develop a spiking neural field model of MEC. Using techniques primarily drawn from the field of nonsmooth dynamical systems we show how to construct periodic travelling waves, and in particular the dispersion curve that determines how wave speed varies as a function of period. This exhibits a wide range of long wavelength solutions, reinforcing the idea that rebound spiking is a candidate mechanism for generating grid cell firing patterns. Importantly, we develop a wave stability analysis to show how the maximum allowed period is controlled by the dynamical properties of the I_h current. Our theoretical work is validated by numerical simulations of the spiking model in both one and two dimensions.
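    The rebound-spiking mechanism can be caricatured with a single leaky integrate-and-fire unit plus a piecewise-linear h-variable, in the spirit of (but much simpler than) the spiking neural field model above; all parameters below are invented for illustration:

```python
def simulate_rebound(I_ext, dt=0.1, T=400.0):
    """Leaky integrate-and-fire cell with a piecewise-linear caricature
    of an HCN (I_h) current: h activates while v is hyperpolarised, and
    on release of inhibition the lingering I_h drives rebound spikes."""
    v, h = 0.0, 0.0
    v_th, v_reset, g_h = 1.0, 0.0, 3.0
    tau_v, tau_h = 10.0, 100.0            # ms
    spikes = []
    for i in range(int(T / dt)):
        t = i * dt
        I = I_ext if t < 300.0 else 0.0   # hyperpolarising pulse, then release
        h_inf = max(0.0, -v)              # piecewise-linear activation curve
        h += dt * (h_inf - h) / tau_h
        v += dt * (-v + g_h * h + I) / tau_v
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

# Inhibition (I_ext < 0) builds up h; the cell spikes only after release.
spikes = simulate_rebound(I_ext=-2.0)
```

    The delay from release to the rebound spike is set by the h-variable's kinetics, a toy analogue of the abstract's point that the I_h dynamics control the travelling-wave period.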