    Design of continuous attractor networks with monotonic tuning using a symmetry principle

    Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. The reproduction of these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic, rather than a bell-shaped, function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks that have a connectivity matrix with Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that, for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter.
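
    A minimal sketch of the symmetry idea (a linear caricature with assumed parameters, not the paper's construction): a circulant, hence Toeplitz, connectivity matrix built from a cosine kernel places the eigenvalue 1 on exactly two Fourier modes, so the linear rate dynamics dr/dt = -r + W r has an entire plane of fixed points, i.e. a continuous attractor.

    import numpy as np

    # Circulant -- hence Toeplitz -- connectivity: the weight between two
    # neurons depends only on the difference of their preferred angles.
    N = 100
    theta = 2 * np.pi * np.arange(N) / N
    W = (2.0 / N) * np.cos(theta[:, None] - theta[None, :])

    # The cosine kernel gives eigenvalue 1 on the cos/sin modes, 0 elsewhere.
    eig = np.linalg.eigvalsh(W)
    print("largest eigenvalues:", np.round(eig[-3:], 6))  # -> [0. 1. 1.]

    def relax(r, steps=5000, dt=0.01):
        # Linear rate dynamics dr/dt = -r + W r: modes with eigenvalue 1 are
        # neutral (a continuum of fixed points), all others decay.
        for _ in range(steps):
            r = r + dt * (-r + W @ r)
        return r

    # Random initial states relax onto the plane spanned by cos(theta) and
    # sin(theta); the component inside the plane is preserved (the memory).
    rng = np.random.default_rng(0)
    for _ in range(3):
        r = relax(rng.normal(size=N))
        print("stored phase:", np.arctan2(r @ np.sin(theta), r @ np.cos(theta)))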

    Optogenetic perturbations reveal the dynamics of an oculomotor integrator

    Many neural systems can store short-term information in persistently firing neurons. Such persistent activity is believed to be maintained by recurrent feedback among neurons. This hypothesis has been fleshed out in detail for the oculomotor integrator (OI), for which the so-called “line attractor” network model can explain a large set of observations. Here we show that there is a plethora of such models, distinguished by the relative strength of recurrent excitation and inhibition. In each model, the firing rates of the neurons relax toward the persistent activity states. The dynamics of relaxation can be quite different, however, and depend on the levels of recurrent excitation and inhibition. To identify the correct model, we directly measure these relaxation dynamics by performing optogenetic perturbations in the OI of zebrafish expressing halorhodopsin or channelrhodopsin. We show that instantaneous, inhibitory stimulations of the OI lead to persistent, centripetal eye position changes ipsilateral to the stimulation. Excitatory stimulations similarly cause centripetal eye position changes, yet only contralateral to the stimulation. These results show that the dynamics of the OI are organized around a central attractor state, the null position of the eyes, which stabilizes the system against random perturbations. Our results pose new constraints on the circuit connectivity of the system and provide new insights into the mechanisms underlying persistent activity.
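
    The relaxation dynamics at stake can be pictured with a toy caricature (not the paper's fitted model; all parameter values below are assumptions): reduce the integrator to its eye-position mode e, whose effective time constant tau / (1 - w) is set by how closely the recurrent gain w cancels the intrinsic leak. A brief perturbation then leaves a persistent offset that slowly relaxes, centripetally, toward the null position.

    import numpy as np

    # Eye-position mode of a leaky line-attractor model:
    #   tau * de/dt = -(1 - w) * e + I(t)
    tau = 0.1    # intrinsic neuronal time constant (s), assumed
    w = 0.995    # recurrent gain; w = 1 would be a perfect integrator
    dt, T = 1e-3, 20.0
    n = int(T / dt)

    e = np.zeros(n)
    e[0] = 10.0  # eye held at an eccentric position (deg)

    pulse = np.zeros(n)
    pulse[int(5 / dt):int(5.2 / dt)] = -2.0  # brief "optogenetic" push at t = 5 s

    for t in range(n - 1):
        e[t + 1] = e[t] + (dt / tau) * (-(1 - w) * e[t] + pulse[t])

    print(f"before pulse (t = 4.9 s): {e[int(4.9 / dt)]:+.2f} deg")
    print(f"after pulse  (t = 5.5 s): {e[int(5.5 / dt)]:+.2f} deg")  # persistent shift
    print(f"later        (t = 15 s) : {e[int(15 / dt)]:+.2f} deg")   # slow centripetal drift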

    Space, time and memory in the medial temporal lobe

    This thesis focuses on memory and the representation of space in the medial temporal lobe, their interaction, and their temporal structure. Chapter 1 briefly introduces the topic, with emphasis on the open questions that the subsequent chapters aim to address. Chapter 2 is dedicated to the issue of spatial memory in the medial entorhinal cortex. It investigates, from a theoretical perspective, the possibility of storing multiple independent maps in a recurrent network of grid cells. This work was conducted in collaboration with Remi Monasson, Alexis Dubreuil and Sophie Rosay and is published in (Spalla et al. 2019). Chapter 3 focuses on the problem of the dynamical update of the representation of space during navigation. It presents the results of the analysis of electrophysiological data, previously collected by Charlotte Boccara (Boccara et al., 2010), investigating the encoding of self-movement signals (speed and angular velocity of the head) in the parahippocampal region of rats. Chapter 4 addresses the problem of the temporal dynamics of memory retrieval, again from a computational point of view. A continuous attractor network model is presented, endowed with a mechanism that enables it to retrieve continuous temporal sequences. The dynamical behaviour of the system is investigated with analytical calculations and numerical simulations, and the storage capacity for dynamical memories is computed. Finally, chapter 5 discusses the meaning and the scope of the results presented and highlights possible future directions.

    A Moving Bump in a Continuous Manifold: A Comprehensive Study of the Tracking Dynamics of Continuous Attractor Neural Networks

    Understanding how the dynamics of a neural network is shaped by the network structure, and consequently how the network structure facilitates the functions implemented by the neural system, is at the core of using mathematical models to elucidate brain functions. This study investigates the tracking dynamics of continuous attractor neural networks (CANNs). Owing to the translational invariance of neuronal recurrent interactions, CANNs can hold a continuous family of stationary states. These states form a continuous manifold on which the neural system is neutrally stable. We systematically explore how this property facilitates the tracking performance of a CANN, which is believed to have clear correspondence with brain functions. Using the wave functions of the quantum harmonic oscillator as a basis, we demonstrate how the dynamics of a CANN is decomposed into different motion modes, corresponding to distortions in the amplitude, position, width or skewness of the network state. We then develop a perturbative approach that utilizes the dominating movement of the network's stationary states in the state space. This method allows us to approximate the network dynamics up to an arbitrary accuracy depending on the order of perturbation used. We quantify the distortions of a Gaussian bump during tracking and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable and on the reaction time for the network to catch up with an abrupt change in the stimulus.
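
    The tracking setup can be sketched with a standard threshold-linear ring network (a common CANN variant with assumed parameters, not necessarily the authors' exact model): translation-invariant weights hold a bump at any position, and a weakly tuned input moving at angular speed omega drags the bump along, trailing the stimulus by a finite lag.

    import numpy as np

    # Ring network: global inhibition J0 plus translation-invariant cosine
    # excitation J2, so a bump can sit at any angle.
    N, J0, J2 = 256, -2.0, 4.0
    theta = 2 * np.pi * np.arange(N) / N
    W = (J0 + J2 * np.cos(theta[:, None] - theta[None, :])) / N

    dt, tau = 0.01, 1.0
    omega, eps = 0.05, 0.2  # stimulus speed and tuning strength, assumed

    r = np.maximum(np.cos(theta), 0.0)  # seed a bump at theta = 0
    lags = []
    for step in range(40000):
        phi = (omega * dt * step) % (2 * np.pi)      # stimulus position
        I_ext = 1.0 + eps * np.cos(theta - phi)      # weakly tuned input
        r += (dt / tau) * (-r + np.maximum(W @ r + I_ext, 0.0))
        if step > 20000:                             # skip transients
            peak = np.angle(np.sum(r * np.exp(1j * theta)))  # bump position
            lags.append(np.angle(np.exp(1j * (phi - peak))))

    print(f"mean lag of bump behind stimulus: {np.mean(lags):.3f} rad")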

    Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron

    In this article the framework of Parisi's spontaneous replica symmetry breaking is reviewed and subsequently applied to the example of the statistical mechanical description of the storage properties of a McCulloch-Pitts neuron. The technical details are reviewed extensively, with regard to the wide range of systems where the method may be applied. Parisi's partial differential equation and related differential equations are discussed, and a Green function technique is introduced for the calculation of replica averages, the key to determining the averages of physical quantities. The ensuing graph rules involve only tree graphs, as appropriate for a mean-field-like model. The lowest-order Ward-Takahashi identity is recovered analytically and is shown to lead to the Goldstone modes in continuous replica symmetry breaking phases. The need for a replica symmetry breaking theory in the storage problem of the neuron has arisen from the thermodynamical instability of previously given solutions. Variational forms for the neuron's free energy are derived in terms of the order parameter function x(q), for different prior distributions of synapses. Analytically in the high-temperature limit and numerically in generic cases, various phases are identified, among them one similar to the Parisi phase in the Sherrington-Kirkpatrick model. Extensive quantities such as the error per pattern change only slightly with respect to the known unstable solutions, but there is a significant difference in the distribution of non-extensive quantities such as the synaptic overlaps and the pattern storage stability parameter. A simulation result is also reviewed and compared to the prediction of the theory.
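
    The storage problem itself is simple to state numerically (a sketch of the setup, not of the replica calculation; sizes are illustrative): a McCulloch-Pitts neuron with couplings J stores P = alpha * N random patterns xi^mu with target outputs sigma^mu when every stability parameter sigma^mu (J . xi^mu) / |J| is positive. The perceptron rule finds such couplings, with high probability, up to the Gardner capacity alpha_c = 2.

    import numpy as np

    rng = np.random.default_rng(1)

    def try_to_store(N, alpha, epochs=500):
        # P random patterns with random +/-1 target outputs.
        P = int(alpha * N)
        xi = rng.choice([-1.0, 1.0], size=(P, N))
        sigma = rng.choice([-1.0, 1.0], size=P)
        J = np.zeros(N)
        for _ in range(epochs):
            updated = False
            for mu in range(P):
                if sigma[mu] * (J @ xi[mu]) <= 0:  # stability not yet positive
                    J += sigma[mu] * xi[mu] / N    # perceptron update
                    updated = True
            if not updated:                        # all patterns stable
                return True
        return False

    for alpha in (0.5, 1.0, 1.5, 2.5):
        print(f"alpha = {alpha}: stored = {try_to_store(200, alpha)}")
    # Expected: True below alpha_c = 2, False above it.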

    Delay-dependent Stability of Genetic Regulatory Networks

    Genetic regulatory networks are biochemical reaction systems, consisting of a network of interacting genes and associated proteins. The dynamics of genetic regulatory networks contain many complex facets that require careful consideration during the modeling process. The classical modeling approach involves studying systems of ordinary differential equations (ODEs) that model biochemical reactions in a deterministic, continuous, and instantaneous fashion. In reality, the dynamics of these systems are stochastic, discrete, and subject to substantial delays. The first two complications are often successfully addressed by modeling regulatory networks using the Gillespie stochastic simulation algorithm (SSA), while the delayed behavior of biochemical events such as transcription and translation is often ignored because of its mathematically difficult nature. We develop techniques based on delay-differential equations (DDEs) and the delayed Gillespie SSA to study the effects of delays, in both continuous deterministic and discrete stochastic settings. Our analysis applies techniques from Floquet theory and advanced numerical analysis within the context of delay-differential equations, and we derive stability sensitivities for biochemical switches and oscillators across the constituent pathways, showing which pathways in the regulatory networks improve or worsen the stability of the system attractors. These delay sensitivities can be far from trivial, and we offer a computational framework validated across multiple levels of modeling fidelity. This work suggests that delays may play an important and previously overlooked role in providing robust dynamical behavior for certain genetic regulatory networks and, perhaps more importantly, may offer an accessible tuning parameter for robust bioengineering.
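
    The central phenomenon can be illustrated with the textbook delayed negative-feedback loop (illustrative parameters, not the dissertation's models): a protein that represses its own production with a transcription/translation delay tau is stable at its steady state for small tau, but past a critical delay the steady state loses stability through a Hopf bifurcation and the system oscillates, so the delay itself acts as a tuning parameter.

    import numpy as np

    # Delayed negative autoregulation:
    #   dp/dt = beta / (1 + (p(t - tau) / K)^n) - gamma * p(t)
    beta, K, n_hill, gamma = 10.0, 1.0, 4, 1.0
    dt, T = 1e-3, 60.0

    def oscillation_amplitude(tau):
        steps = int(T / dt)
        lag = int(tau / dt)
        p = np.empty(steps)
        p[:lag + 1] = 0.5                # constant history on [-tau, 0]
        for t in range(lag, steps - 1):  # forward Euler with a delay buffer
            dp = beta / (1 + (p[t - lag] / K) ** n_hill) - gamma * p[t]
            p[t + 1] = p[t] + dt * dp
        late = p[int(40 / dt):]          # discard transients
        return late.max() - late.min()

    for tau in (0.2, 2.0):
        print(f"tau = {tau}: late-time amplitude = {oscillation_amplitude(tau):.3f}")
    # Expected: near 0 for tau = 0.2 (stable), large for tau = 2.0 (oscillating).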

    Continuous attractors for dynamic memories

    Episodic memory has a dynamic nature: when we recall past episodes, we retrieve not only their content but also their temporal structure. The phenomenon of replay, in the hippocampus of mammals, offers a remarkable example of such temporal dynamics. However, most quantitative models of memory treat memories as static configurations, neglecting the temporal unfolding of the retrieval process. Here, we introduce a continuous attractor network model with a memory-dependent asymmetric component in the synaptic connectivity, which spontaneously breaks the equilibrium of the memory configurations and produces dynamic retrieval. A detailed analysis of the model with analytical calculations and numerical simulations shows that it can robustly retrieve multiple dynamical memories and that this feature is largely independent of the details of its implementation. By calculating the storage capacity, we show that the dynamic component does not impair memory capacity and can even enhance it in certain regimes.
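
    The key ingredient, an asymmetric component added to an otherwise symmetric connectivity, can be sketched on a toy ring network (assumed parameters, not the paper's full model): an antisymmetric term gamma * sin(dtheta) on top of a bump-forming kernel breaks the equilibrium of every memory configuration, so retrieval becomes a bump traveling along the attractor at a speed set by gamma.

    import numpy as np

    # Symmetric bump-forming kernel plus an antisymmetric drift term.
    N, J0, J2, gamma = 256, -2.0, 4.0, 0.5
    theta = 2 * np.pi * np.arange(N) / N
    dtheta = theta[:, None] - theta[None, :]
    W = (J0 + J2 * np.cos(dtheta) + gamma * np.sin(dtheta)) / N

    dt, tau = 0.01, 1.0
    r = np.maximum(np.cos(theta), 0.0)  # bump initialized at theta = 0
    positions = []
    for step in range(20000):
        r += (dt / tau) * (-r + np.maximum(W @ r + 1.0, 0.0))
        positions.append(np.angle(np.sum(r * np.exp(1j * theta))))

    # With gamma = 0 the bump stands still; with gamma > 0 it drifts at a
    # roughly constant speed: a dynamic memory rather than a static one.
    unwrapped = np.unwrap(positions)
    v = (unwrapped[-1] - unwrapped[10000]) / (10000 * dt)
    print(f"bump drift speed: {v:.3f} rad per unit time")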