
    Traveling waves of excitation in neural field models

    Field models provide an elegant mathematical framework to analyze large-scale patterns of neural activity. On the microscopic level, these models are usually based on either a firing-rate picture or integrate-and-fire dynamics. This article shows that in spite of the large conceptual differences between the two types of dynamics, both generate closely related plane-wave solutions. Furthermore, for a large group of models, estimates of the network connectivity derived from the speed of these plane waves depend only marginally on the assumed class of microscopic dynamics. We derive quantitative results about this phenomenon and discuss consequences for the interpretation of experimental data.
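    As a rough illustration of the kind of calculation involved, the sketch below integrates a one-dimensional firing-rate neural field and reads off the speed of an outward-propagating front. The exponential kernel, sigmoidal firing rate, and all parameter values are illustrative assumptions, not those of the article; the abstract's point is that such a speed measurement constrains the connectivity kernel largely independently of the microscopic dynamics.

```python
import numpy as np

# 1-D firing-rate neural field u_t = -u + w * f(u), integrated with forward Euler,
# used only to show how a propagating-front speed can be read off numerically.
# Kernel, firing-rate function, and parameters are illustrative, not the article's.

L, N, dt, T = 100.0, 1000, 0.01, 30.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

w = 0.5 * np.exp(-np.abs(x))                       # excitatory exponential kernel
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx       # kernel FFT for fast convolution
f = lambda u: 1.0 / (1.0 + np.exp(-20.0 * (u - 0.25)))  # steep sigmoidal rate

u = np.exp(-x**2)                                  # localized seed ignites a front
front = []
for step in range(int(T / dt)):
    conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u))))   # w * f(u)
    u += dt * (-u + conv)
    if step % 100 == 0:
        front.append(x[u > 0.5].max())             # rightmost supra-threshold point

speed = np.diff(front)[-5:].mean() / (100 * dt)    # late-time front speed
print("estimated front speed:", speed)
```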

    The Dynamics of Bimodular Continuous Attractor Neural Networks with Static and Moving Stimuli

    The brain achieves multisensory integration by combining the information received from different sensory inputs to yield inferences with higher speed or greater accuracy. We consider a bimodular neural network in which each module processes one modality of sensory input and interacts with the other. The dynamics of excitatory and inhibitory couplings between the two modules are studied with static and moving stimuli. The modules exhibit non-trivial interactive behaviors depending on the input strengths, their disparity and speed (for moving inputs), and the inter-modular couplings. They give rise to a family of models applicable to causal-inference problems in neuroscience. They also provide a model for the motion-bounce illusion experiment, yielding consistent results and predicting their robustness. Comment: 15 pages, 12 figures, journal paper.
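    A minimal toy of the setup described above: two ring networks ("modules"), each driven by a Gaussian stimulus at a different position and excited by the other module at the topographically matching location. The connectivity, rate function, and all parameters below are my own illustrative choices, not the model or parameter values of the paper; the printed bump positions give a crude read-out of how the two modules negotiate disparate inputs.

```python
import numpy as np

# Two coupled ring networks ("modules"), each driven by a Gaussian stimulus at a
# different position and excited by the other module at the matching location.
# Connectivity, rate function, and parameters are illustrative assumptions.

N = 200
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))   # circular distances

W_rec = (8.0 * np.exp(-d**2 / 0.5) - 2.0) / N      # local excitation, global inhibition
W_x = 1.5 * np.exp(-d**2 / 0.5) / N                # topographic cross-module excitation
f = lambda u: np.tanh(np.maximum(u, 0.0))          # saturating rectified rate
stim = lambda c, A: A * np.exp(-np.angle(np.exp(1j * (theta - c)))**2 / 0.2)

u1, u2 = np.zeros(N), np.zeros(N)
I1, I2 = stim(-0.4, 1.0), stim(+0.4, 1.0)          # disparate static stimuli
dt, tau = 0.01, 1.0
for _ in range(3000):
    r1, r2 = f(u1), f(u2)
    u1 += dt / tau * (-u1 + W_rec @ r1 + W_x @ r2 + I1)
    u2 += dt / tau * (-u2 + W_rec @ r2 + W_x @ r1 + I2)

# Population-vector read-out of each module's bump position.
print(np.angle(np.exp(1j * theta) @ f(u1)), np.angle(np.exp(1j * theta) @ f(u2)))
```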

    Scalar Reduction of a Neural Field Model with Spike Frequency Adaptation

    We study a deterministic version of a one- and two-dimensional attractor neural network model of hippocampal activity first studied by Itskov et al. (2011). We analyze the dynamics of the system on the ring and torus domains with an even periodized weight matrix, assuming weak and slow spike frequency adaptation and a weak stationary input current. On these domains, we find transitions from spatially localized stationary solutions ("bumps") to (periodically modulated) solutions ("sloshers"), as well as constant- and non-constant-velocity traveling bumps, depending on the relative strength of the external input current and adaptation. The weak and slow adaptation allows for a reduction of the system from a distributed partial integro-differential equation to a system of scalar Volterra integro-differential equations describing the movement of the centroid of the bump solution. Using this reduction, we show that on both domains sloshing solutions arise through an Andronov-Hopf bifurcation, and we derive a normal form for the Hopf bifurcation on the ring. We also show existence and stability of constant-velocity solutions on both domains using Evans functions. In contrast to existing studies, we assume a general weight matrix of Mexican-hat type in addition to a smooth firing-rate function. Comment: 60 pages, 22 figures.
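    For orientation only, here is a crude ring-network sketch of the mechanism the abstract analyzes: a localized bump whose slow adaptation variable can push it off its stationary position. The Mexican-hat kernel, sigmoidal rate, and the adaptation parameters are illustrative guesses; depending on them the bump stays put, sloshes, or travels, which is the kind of regime structure the paper treats rigorously through its scalar reduction.

```python
import numpy as np

# Ring network with a slow spike-frequency-adaptation variable a(x, t) that can
# destabilize a stationary activity bump.  Kernel, rate function, and the
# adaptation parameters (beta, tau_a) are illustrative guesses, not the paper's.

N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = np.angle(np.exp(1j * (x[:, None] - x[None, :])))
W = (np.exp(-d**2 / 0.3) - 0.5 * np.exp(-d**2 / 1.2)) * (2 * np.pi / N)  # Mexican hat

f = lambda u: 1.0 / (1.0 + np.exp(-25.0 * (u - 0.2)))   # steep sigmoidal rate
beta, tau_a, I0, dt = 0.5, 20.0, 0.05, 0.05             # weak, slow adaptation; weak input

u = np.exp(-x**2 / 0.3)        # initial bump centred at x = 0
a = np.zeros(N)
centroid = []
for step in range(20000):
    u += dt * (-u + W @ f(u) - beta * a + I0)
    a += dt / tau_a * (-a + f(u))
    if step % 200 == 0:
        centroid.append(np.angle(np.exp(1j * x) @ f(u)))  # bump centroid on the ring

# A steady drift of the centroid indicates a traveling bump; an oscillation, a "slosher".
print(np.unwrap(centroid)[-5:])
```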

    A Dynamical System Approach to modeling Mental Exploration

    The hippocampal-entorhinal complex plays an essential role within the brain in spatial navigation, mapping a spatial path onto a sequence of cells that fire action potentials. During rest or sleep, these sequences are replayed in either reverse or forward temporal order; in some cases, novel sequences occur that may represent paths not yet taken but connecting contiguous spatial locations. These sequences potentially play a role in the planning of future paths. In particular, mental exploration is needed to discover short-cuts or plan alternative routes.

    Hopfield proposed a two-dimensional planar attractor network as a substrate for mental exploration. He extended the concept of a line attractor, used for the ocular-motor apparatus, to a planar attractor that can memorize any spatial path and then recall this path from memory. Such a planar attractor contains an infinite number of fixed points of the dynamics, each fixed point corresponding to a spatial location. For symmetric connections in the network, the dynamics generally admits a Lyapunov energy function L. Movement through different fixed points is possible because of the continuous attractor structure. In this model, a key role is played by the evolution of a localized activation of the network, a "bump", that moves across this neural sheet that topographically represents space. For this to occur, the history of paths already taken is imprinted on the synaptic couplings between the neurons. Yet attractor dynamics would seem to preclude the bump from moving; hence, a mechanism that destabilizes the bump is required. The mechanism to destabilize such an activity bump and move it to other locations of the network involves an adaptation current that provides a form of delayed inhibition.

    Both a spin-glass and a graded-response approach are applied to investigate the dynamics of mental exploration mathematically. Simplifying the neural network proposed by Hopfield to a spin glass, I study the problem of recalling temporal sequences and explore an alternative proposal that relies on storing the correlation of network activity across time, adding a sequence transition term to the classical instantaneous correlation term during the learning of the synaptic couplings. The "adaptation current" is interpreted as a local field that can destabilize the equilibrium, causing the bump to move. We can also combine the adaptation and transition terms to show how the dynamics of exploration is affected. To obtain goal-directed searching, I introduce a weak external field associated with a rewarded location. We show how the bump trajectory then follows a suitable path to the target.

    For networks of graded-response neurons with weak external stimulation, amplitude equations known from pattern-formation studies in bio-chemico-physical systems are developed. This allows me to predict the modes of network activity that can be selected by an external stimulus and how these modes evolve. Using perturbation theory and coarse graining, the dynamical equations for the evolution of the system are reduced from many sets of nonlinear integro-differential equations, one for each neuron, to a single macroscopic equation. This equation, in particular close to the transition to pattern formation, takes the form of the Landau-Ginzburg equation. The parameters for the connections between the neurons are shown to be related to the parameters of the Landau-Ginzburg equation that governs the bump of activity.
The role of adaptation within this approximation is studied, which leads to the discovery that the macroscopic dynamical equation for the system has the same structure as the coupled equations used to describe the propagation of electrical activity within a single neuron, as given by the FitzHugh-Nagumo equations.
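For reference, these are the FitzHugh-Nagumo equations that the reduced macroscopic equation is compared to, integrated here with a simple forward-Euler scheme: the fast cubic variable v plays the role of the activity and the slow linear variable w that of adaptation. Parameter values are the usual textbook ones, chosen purely for illustration and not taken from the thesis.

```python
import numpy as np

# The FitzHugh-Nagumo system, integrated with forward Euler, to make the
# activator-inhibitor structure explicit: v is the fast activity-like variable,
# w the slow recovery/adaptation-like variable.  Parameters are the usual
# textbook values, chosen only for illustration.

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, T=200.0):
    v, w = -1.0, -0.5
    trace = []
    for _ in range(int(T / dt)):
        dv = v - v**3 / 3.0 - w + I      # fast cubic "activity" nullcline
        dw = eps * (v + a - b * w)       # slow linear "adaptation"
        v += dt * dv
        w += dt * dw
        trace.append((v, w))
    return np.array(trace)

traj = fitzhugh_nagumo()
print(traj[-5:])   # with I = 0.5 the fixed point is unstable and (v, w) orbits a limit cycle
```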