
    Mechanisms of Zero-Lag Synchronization in Cortical Motifs

    Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays. Of these, the phenomenon of "dynamical relaying" - a mechanism that relies on a specific network motif - has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, and contrary to a common belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, an account of its deeper dynamical mechanisms and a comparison with dynamics on other motifs are lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair - a "resonance pair" - plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons, and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free, or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain. Comment: 41 pages, 12 figures, and 11 supplementary figures.
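
    A minimal sketch of the kind of motif comparison described above, using delay-coupled Kuramoto phase oscillators rather than the paper's spiking or neural-mass models; the adjacency matrices, coupling strength, frequencies, and delay below are illustrative assumptions, intended only to show how zero-lag synchrony between the outer nodes of a dynamical-relaying motif can be probed against a common-driving motif under mild frequency mismatch.

        import numpy as np

        def simulate(adj, omegas, K=20.0, tau=0.01, dt=1e-4, T=2.0, seed=0):
            """Euler integration of delay-coupled Kuramoto phase oscillators.
            adj[i, j] = 1 means node i receives input from node j, delayed by tau."""
            rng = np.random.default_rng(seed)
            n = adj.shape[0]
            d = int(round(tau / dt))                       # delay in time steps
            steps = int(round(T / dt))
            theta = np.zeros((steps + d, n))
            theta[: d + 1] = rng.uniform(0, 2 * np.pi, n)  # constant random history
            for t in range(d, steps + d - 1):
                delayed = theta[t - d]
                coupling = np.array([np.sum(adj[i] * np.sin(delayed - theta[t, i]))
                                     for i in range(n)])
                theta[t + 1] = theta[t] + dt * (omegas + K * coupling)
            return theta[d:]

        omegas = 2 * np.pi * np.array([40.0, 41.0, 39.5])  # mildly mismatched frequencies (Hz)
        relaying = np.array([[0, 1, 0],                    # outer node 0 <-> relay 1 <-> outer node 2:
                             [1, 0, 1],                    # each outer node forms a reciprocal
                             [0, 1, 0]])                   # ("resonance") pair with the relay
        common = np.array([[0, 1, 0],                      # hub 1 drives nodes 0 and 2, no feedback
                           [0, 0, 0],
                           [0, 1, 0]])

        for name, adj in [("dynamical relaying", relaying), ("common driving", common)]:
            th = simulate(adj, omegas)
            lag = np.angle(np.exp(1j * (th[-5000:, 0] - th[-5000:, 2])))  # wrapped outer-node phase difference
            print(f"{name:18s}: mean |phase lag| of outer nodes = {np.abs(lag).mean():.3f} rad")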

    Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making

    Behavioural and neurophysiological studies in primates have increasingly shown the involvement of urgency signals during the temporal integration of sensory evidence in perceptual decision-making. Neuronal correlates of such signals have been found in the parietal cortex and, in separate studies, attention-induced gain modulation has been demonstrated in both excitatory and inhibitory neurons. Although previous computational models of decision-making have incorporated gain modulation, their abstract forms do not permit an understanding of the contribution of inhibitory gain modulation. Thus, the effects of co-modulating both excitatory and inhibitory neuronal gains on decision-making dynamics and behavioural performance remain unclear. In this work, we incorporate time-dependent co-modulation of the gains of both excitatory and inhibitory neurons into our previous biologically based decision circuit model. We base our computational study in the context of two classic motion-discrimination tasks performed in animals. Our model shows that by simultaneously increasing the gains of both excitatory and inhibitory neurons, a variety of the observed dynamic neuronal firing activities can be replicated. In particular, the model can exhibit winner-take-all decision-making behaviour with higher firing rates and within a significantly more robust model parameter range. It also exhibits short-tailed reaction time distributions even when operating near a dynamical bifurcation point. The model further shows that neuronal gain modulation can compensate for weaker recurrent excitation in a decision neural circuit, and support decision formation and storage. Higher neuronal gain is also suggested in the more cognitively demanding reaction-time version of the task than in the fixed-delay version. Using the exact temporal delays from the animal experiments, fast recruitment of gain co-modulation is shown to maximize reward rate, with a timescale that is surprisingly near the experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.
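
    A minimal sketch, not the authors' spiking circuit model: a reduced two-population firing-rate competition in which a shared, time-dependent gain G(t) multiplies each population's input-output transfer, loosely illustrating how a ramping gain can act as an urgency-like signal that pushes the circuit into winner-take-all behaviour. The connectivity, gain schedule, threshold, and all other parameter values are assumptions.

        import numpy as np

        def trial(coherence=0.1, g_max=0.6, tau_g=0.3, dt=1e-3, T=2.0, seed=None):
            """One simulated reaction-time trial; returns (reaction time, chosen population)."""
            rng = np.random.default_rng(seed)
            r = np.zeros(2)                              # firing rates of the two selective populations
            tau, J_self, J_cross = 0.02, 0.30, 0.45      # time constant, self-excitation, cross-inhibition
            I0, sigma, thresh = 0.35, 0.03, 1.0          # baseline drive, noise, decision threshold
            evidence = np.array([+coherence, -coherence])
            for step in range(int(T / dt)):
                t = step * dt
                G = 1.0 + g_max * (1.0 - np.exp(-t / tau_g))       # ramping gain ("urgency")
                I = (J_self * r - J_cross * r[::-1] + I0 + evidence
                     + sigma * np.sqrt(dt) * rng.standard_normal(2))
                r += dt / tau * (-r + G * np.maximum(I, 0.0))      # gain-modulated transfer
                if r.max() > thresh:
                    return t, int(np.argmax(r))
            return T, int(np.argmax(r))                            # no crossing within the trial

        rts = [trial(seed=s)[0] for s in range(200)]
        print(f"median simulated reaction time ~ {np.median(rts) * 1000:.0f} ms over 200 trials")

    Because the gain multiplies both the recurrent excitation and the cross-inhibition in this sketch, the winning population cannot reach the decision threshold until G(t) has grown sufficiently, which is one simple way to see how gain co-modulation can stand in for stronger recurrent excitation.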

    A view of Neural Networks as dynamical systems

    We consider neural networks from the point of view of dynamical systems theory. In this spirit we review recent results dealing with the following questions, addressed in the context of specific models: 1. Characterizing the collective dynamics; 2. Statistical analysis of spike trains; 3. Interplay between dynamics and network structure; 4. Effects of synaptic plasticity. Comment: Review paper, 51 pages, 10 figures. Submitted.

    Oscillations in routing and chaos


    Analyses at microscopic, mesoscopic, and mean-field scales

    Hippocampal activity during sleep or rest is characterized by sharp wave-ripples (SPW-Rs): transient (50–100 ms) periods of elevated neuronal activity modulated by a fast oscillation, the ripple (140–220 Hz). SPW-Rs have been linked to memory consolidation, but their generation mechanism remains unclear. Multiple potential mechanisms have been proposed, relying on excitation and/or inhibition as the main pacemaker. This thesis analyzes ripple oscillations in inhibitory network models at micro-, meso-, and macroscopic scales and elucidates how the ripple dynamics depends on the excitatory drive, the inhibitory coupling strength, and the noise model. First, an interneuron network under strong drive and strong, delayed coupling is analyzed. A theory is developed that captures the drift-mediated spiking dynamics in the mean-field limit. The ripple frequency, as well as the underlying dynamics of the membrane potential distribution, is approximated analytically as a function of the external drive and the network parameters. The theory explains why the ripple frequency decreases over the course of an event (intra-ripple frequency accommodation, IFA). Furthermore, numerical analysis shows that an alternative inhibitory ripple model, based on a transient ringing effect in a weakly coupled interneuron population, cannot account for IFA under biologically realistic assumptions. IFA can thus guide model selection and provides new support for strong, delayed inhibitory coupling as a mechanism for ripple generation. Finally, a recently proposed mesoscopic integration scheme is tested as a potential tool for the efficient numerical simulation of ripple dynamics in networks of finite size. This approach requires a switch of the noise model, from noisy input to stochastic output spiking mediated by a hazard function. It is demonstrated how the choice of hazard function affects the linear response of single neurons and therefore the ripple dynamics in a recurrent interneuron network.
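
    A minimal illustrative sketch, not the thesis's analytical model: an all-to-all inhibitory leaky integrate-and-fire network under strong common drive with a fixed synaptic delay, in which the delayed recurrent inhibition rhythmically silences the population and produces a fast, ripple-like population oscillation. All parameter values are assumptions, chosen only to place the rhythm roughly in the ripple band.

        import numpy as np

        rng = np.random.default_rng(1)
        N, dt, T = 200, 1e-4, 0.3                  # neurons, time step (s), duration (s)
        tau_m, V_th, V_reset = 0.01, 1.0, 0.0      # membrane time constant (s), threshold, reset
        I_drive = 3.0                              # strong, suprathreshold common drive
        J = 0.003                                  # inhibitory voltage kick per presynaptic spike
        delay_steps = int(round(0.0015 / dt))      # 1.5 ms synaptic delay
        sigma = 0.2                                # private noise strength

        steps = int(T / dt)
        V = rng.uniform(0.0, 1.0, N)
        delayed_spikes = np.zeros(steps + delay_steps)     # spike counts arriving after the delay
        pop_rate = np.zeros(steps)
        for t in range(steps):
            V += (dt / tau_m * (I_drive - V)
                  + sigma * np.sqrt(dt / tau_m) * rng.standard_normal(N)
                  - J * delayed_spikes[t])                 # delayed recurrent inhibition
            spiked = V >= V_th
            V[spiked] = V_reset
            delayed_spikes[t + delay_steps] += spiked.sum()
            pop_rate[t] = spiked.sum() / (N * dt)

        # crude spectral estimate of the network frequency (1 ms boxcar smoothing, 80-500 Hz band)
        rate = np.convolve(pop_rate - pop_rate.mean(), np.ones(10) / 10, mode="same")
        freqs = np.fft.rfftfreq(steps, dt)
        power = np.abs(np.fft.rfft(rate)) ** 2
        band = (freqs > 80) & (freqs < 500)
        print(f"dominant population frequency ~ {freqs[band][np.argmax(power[band])]:.0f} Hz")

    In this kind of sketch, lowering I_drive lengthens the recharge time between population volleys and slows the rhythm, which gives a qualitative intuition for why a waning drive over the course of an event would produce intra-ripple frequency accommodation in the strong-coupling regime.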

    Neural network mechanisms of working memory interference

    Our ability to memorize is at the core of our cognitive abilities. How could we effectively make decisions without considering memories of previous experiences? Broadly, our memories can be divided into two categories: long-term and short-term memories. Sometimes, short-term memory is also called working memory, and throughout this thesis I will use both terms interchangeably. As the names suggest, long-term memory is the memory you use when you remember concepts for a long time, such as your name or age, while short-term memory is the system you engage while choosing between different wines at the liquor store. As your attention jumps from one bottle to another, you need to hold in memory the characteristics of the previous ones to pick your favourite. By the time you pick your favourite bottle, you might remember the prices or grape types of the other bottles, but you are likely to forget all of those details an hour later at home, opening the wine in front of your guests. The overall goal of this thesis is to study the neural mechanisms that underlie working memory interference, as reflected in quantitative, systematic behavioral biases. Ultimately, the goal of each chapter, even when focused exclusively on behavioral experiments, is to nail down plausible neural mechanisms that can produce specific behavioral and neurophysiological findings. To this end, we use the bump-attractor model as our working hypothesis, which we often contrast with the synaptic working memory model. The work performed during this thesis is described here in 3 main chapters, encapsulating 5 broad goals. In Chapter 4.1, we test behavioral predictions of a bump-attractor network when used to store multiple items (1). Moreover, we connect two such networks to model feature binding through selectivity synchronization (2). In Chapter 4.2, we aim to clarify the mechanisms of working memory interference from previous memories (3), the so-called serial biases. These biases provide an excellent opportunity to contrast activity-based and activity-silent mechanisms, because both have been proposed as the underlying cause of those biases. In Chapter 4.3, armed with the same techniques used to seek evidence for activity-silent mechanisms, we test a prediction of the bump-attractor model with short-term plasticity (4). Finally, in light of the results from aim 4 and simple computer simulations, we reinterpret previous studies claiming evidence for activity-silent mechanisms (5).
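
    A minimal sketch of a bump-attractor ring network in the spirit of the working hypothesis described above; it is rate-based with saturating threshold-linear units, not the specific model of any chapter, and the connectivity kernel, gain, and noise values are illustrative assumptions. A localized bump of activity centred on the cued feature value is sustained after the cue is removed by local excitation and broad inhibition, and it drifts slowly under noise, one candidate source of memory interference.

        import numpy as np

        N = 180                                    # neurons tiling 0..360 deg of feature space
        prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)
        diff = prefs[:, None] - prefs[None, :]
        J_E, J_I, kappa = 10.0, 2.0, 0.2           # local excitation, global inhibition, kernel width
        W = J_E * np.exp((np.cos(diff) - 1.0) / kappa) - J_I

        def delay_trial(cue=np.pi, T=3.0, cue_off=0.5, dt=1e-3, tau=0.02, sigma=0.1, seed=0):
            """Cue for cue_off seconds, then a memory delay; returns the reported angle (rad)."""
            rng = np.random.default_rng(seed)
            r = np.zeros(N)
            for step in range(int(T / dt)):
                t = step * dt
                stim = 2.0 * np.exp((np.cos(prefs - cue) - 1.0) / kappa) if t < cue_off else 0.0
                I = W @ r / N + stim + sigma * np.sqrt(dt) * rng.standard_normal(N)
                r += dt / tau * (-r + np.clip(I, 0.0, 1.0))   # saturating threshold-linear units
            return np.angle(np.sum(r * np.exp(1j * prefs)))   # population-vector readout

        report = delay_trial()
        print(f"cue at 180.0 deg -> report after a 2.5 s delay: {np.degrees(report) % 360:.1f} deg")

    Adding short-term plasticity to the recurrent weights of such a sketch, or coupling two rings, is one way to start exploring the serial-bias and feature-binding questions raised above.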