
    Homeostatic plasticity and external input shape neural network dynamics

    In vitro and in vivo spiking activity clearly differ. Whereas networks in vitro develop strong bursts separated by periods of very little spiking activity, in vivo cortical networks show continuous activity. This is puzzling, considering that both networks presumably share similar single-neuron dynamics and plasticity rules. We propose that the defining difference between in vitro and in vivo dynamics is the strength of external input. In vitro, networks are virtually isolated, whereas in vivo every brain area receives continuous input. We analyze a model of spiking neurons in which the input strength, mediated by spike-rate homeostasis, determines the characteristics of the dynamical state. In more detail, our analytical and numerical results on various network topologies consistently show that under increasing input, homeostatic plasticity generates distinct dynamic states, from bursting to close-to-critical, reverberating, and irregular states. This implies that the dynamic state of a neural network is not fixed but can readily adapt to the input strength. Indeed, our results match experimental spike recordings in vitro and in vivo: the in vitro bursting behavior is consistent with a state generated by very low network input (< 0.1%), whereas in vivo activity suggests that on the order of 1% of recorded spikes are input-driven, resulting in reverberating dynamics. Importantly, this predicts that one can abolish the ubiquitous bursts of in vitro preparations and instead impose dynamics comparable to in vivo activity by exposing the system to weak long-term stimulation, thereby opening new paths to establish an in vivo-like assay in vitro for basic as well as neurological studies. Comment: 14 pages, 8 figures, accepted at Phys. Rev.
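The core trade-off described in this abstract, homeostasis balancing external input against recurrent gain, can be caricatured in a few lines. This is a sketch, not the paper's spiking model: the linear rate equation, the set-point `r_target`, and the learning rate `eta` are illustrative assumptions.

```python
# Minimal rate-model caricature: homeostatic plasticity adjusts the recurrent
# gain w until the mean rate matches a set-point, so weaker external input h
# forces a larger w, i.e. a network operating closer to the critical point w = 1.

def simulate(h, r_target=1.0, eta=0.01, steps=20000, dt=0.1):
    r, w = 0.0, 0.0
    for _ in range(steps):
        r += dt * ((w - 1.0) * r + h)      # linear rate dynamics
        w += eta * dt * (r_target - r)     # homeostasis: raise w while rate is low
        w = max(w, 0.0)
    return r, w

for h in (0.01, 0.1, 1.0):
    r, w = simulate(h)
    print(f"h={h}: rate={r:.2f}, recurrent gain w={w:.2f}")
```

At the fixed point w = 1 - h/r_target, so weak input (h → 0) pushes the gain toward the critical value w = 1, mirroring the bursting and close-to-critical states reported for isolated networks, while strong input yields a weakly coupled, input-driven regime.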

    Interacting Turing-Hopf Instabilities Drive Symmetry-Breaking Transitions in a Mean-Field Model of the Cortex: A Mechanism for the Slow Oscillation

    Electrical recordings of brain activity during the transition from wake to anesthetic coma show temporal and spectral alterations that are correlated with gross changes in the underlying brain state. Entry into anesthetic unconsciousness is signposted by the emergence of large, slow oscillations of electrical activity (≲1  Hz) similar to the slow waves observed in natural sleep. Here we present a two-dimensional mean-field model of the cortex in which slow spatiotemporal oscillations arise spontaneously through a Turing (spatial) symmetry-breaking bifurcation that is modulated by a Hopf (temporal) instability. In our model, populations of neurons are densely interlinked by chemical synapses, and by interneuronal gap junctions represented as an inhibitory diffusive coupling. To demonstrate cortical behavior over a wide range of distinct brain states, we explore model dynamics in the vicinity of a general-anesthetic-induced transition from “wake” to “coma.” In this region, the system is poised at a codimension-2 point where competing Turing and Hopf instabilities coexist. We model anesthesia as a moderate reduction in inhibitory diffusion, paired with an increase in inhibitory postsynaptic response, producing a coma state that is characterized by emergent low-frequency oscillations whose dynamics is chaotic in time and space. The effect of long-range axonal white-matter connectivity is probed with the inclusion of a single idealized point-to-point connection. We find that the additional excitation from the long-range connection can provoke seizurelike bursts of cortical activity when inhibitory diffusion is weak, but has little impact on an active cortex. Our proposed dynamic mechanism for the origin of anesthetic slow waves complements—and contrasts with—conventional explanations that require cyclic modulation of ion-channel conductances. 
We postulate that a similar bifurcation mechanism might underpin the slow waves of natural sleep, and comment on the possible consequences of chaotic dynamics for memory processing and learning.
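The competition between the two instabilities can be summarized in generic linear-stability terms. This is a textbook sketch, not the paper's specific cortical equations: for perturbations of the homogeneous steady state of the form $e^{\lambda t + i k x}$, with Jacobian $J$ and diffusion matrix $D$, the growth rates $\lambda(k)$ solve the dispersion relation

```latex
\det\bigl(J - k^{2}D - \lambda I\bigr) = 0
% Turing (spatial) instability: a real growth rate crosses zero at finite wavenumber
\operatorname{Re}\lambda(k_c) = 0,\quad \operatorname{Im}\lambda(k_c) = 0,\quad k_c > 0
% Hopf (temporal) instability: a complex-conjugate pair crosses zero at k = 0
\operatorname{Re}\lambda(0) = 0,\quad \operatorname{Im}\lambda(0) = \pm\,\omega_0 \neq 0
```

At a codimension-2 point both conditions hold simultaneously, which is the regime the model is tuned to near the wake-to-coma transition; reducing the inhibitory diffusion contribution to $D$ is what shifts the balance between the spatial and temporal modes in the abstract above.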

    The malleable brain: plasticity of neural circuits and behavior: A review from students to students

    One of the most intriguing features of the brain is its ability to be malleable, allowing it to adapt continually to changes in the environment. Specific neuronal activity patterns drive long-lasting increases or decreases in the strength of synaptic connections, referred to as long-term potentiation (LTP) and long-term depression (LTD) respectively. Such phenomena have been described in a variety of model organisms, which are used to study molecular, structural, and functional aspects of synaptic plasticity. This review originated from the first International Society for Neurochemistry (ISN) and Journal of Neurochemistry (JNC) Flagship School held in Alpbach, Austria (Sep 2016), and will use its curriculum and discussions as a framework to review some of the current knowledge in the field of synaptic plasticity. First, we describe the role of plasticity during development and the persistent changes of neural circuitry occurring when sensory input is altered during critical developmental stages. We then outline the signaling cascades resulting in the synthesis of new plasticity-related proteins, which ultimately enable sustained changes in synaptic strength. Going beyond the traditional understanding of synaptic plasticity conceptualized by LTP and LTD, we discuss system-wide modifications and recently unveiled homeostatic mechanisms, such as synaptic scaling. Finally, we describe the neural circuits and synaptic plasticity mechanisms driving associative memory and motor learning. Evidence summarized in this review provides a current view of synaptic plasticity in its various forms, offers new insights into the underlying mechanisms and behavioral relevance, and provides directions for future research in the field of synaptic plasticity.
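Multiplicative synaptic scaling, one of the homeostatic mechanisms the review covers, is easy to illustrate in a toy setting. The numbers and the linear activity proxy below are illustrative assumptions, not taken from the review: all of a neuron's input weights are multiplied by a common factor until its average drive returns to a set-point, which preserves the relative weight differences written by LTP and LTD.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=100)   # input weights of one neuron
x = rng.uniform(0.0, 1.0, size=100)   # mean presynaptic rates
w0 = w.copy()                         # keep the initial weights for comparison

target = 5.0
for _ in range(200):
    drive = float(w @ x)                           # proxy for average activity
    w = w * (1 + 0.1 * (target - drive) / target)  # multiplicative scaling step

print(f"drive after scaling: {w @ x:.2f}")
```

Because each update multiplies every weight by the same scalar, the ratio of any two weights is unchanged, which is the functional point of scaling as opposed to uniform additive adjustment.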

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, one in which a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
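The idea of homeostasis making the competition in sparse coding "fair" can be sketched with greedy matching pursuit over a random dictionary. This is an illustration of the principle, not the paper's exact algorithm: the dictionary size, the gain-update rule, and all constants are assumptions. A per-atom homeostatic gain biases atom selection toward under-used atoms, equalizing how often each atom participates.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 32))                 # dictionary: 64-dim inputs, 32 atoms
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

gain = np.ones(32)                            # homeostatic gains, one per atom
counts = np.zeros(32)                         # how often each atom was selected

def encode(x, n_active=5):
    """Greedy matching pursuit; gains modulate atom *selection* only."""
    residual = x.astype(float).copy()
    code = np.zeros(32)
    for _ in range(n_active):
        c = D.T @ residual
        k = int(np.argmax(np.abs(c) * gain))  # gain-biased competition
        code[k] += c[k]
        residual -= c[k] * D[:, k]            # subtract projection on chosen atom
        counts[k] += 1
    return code

for _ in range(500):                          # adapt gains on random inputs
    encode(rng.normal(size=64))
    p = counts / counts.sum()                 # empirical selection probabilities
    gain = np.exp(-(p - p.mean()) * 32)       # boost rarely selected atoms
```

Each pursuit step subtracts the residual's projection onto the chosen atom, so the reconstruction error never increases; the gains only decide which atom wins each round of the competition.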

    Connectivity reflects coding: A model of voltage-based spike-timing-dependent-plasticity with homeostasis

    Electrophysiological connectivity patterns in cortex often show a few strong connections in a sea of weak connections. In some brain areas a large fraction of strong connections are bidirectional; in others they are mainly unidirectional. To explain these connectivity patterns, we use a model of spike-timing-dependent plasticity in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential. The model describes several nonlinear effects in STDP experiments, as well as the voltage dependence of plasticity under voltage clamp and classical paradigms of LTP/LTD induction. We show that in a simulated recurrent network of spiking neurons our plasticity rule leads not only to receptive field development, but also to connectivity patterns that reflect the neural code: under temporal coding paradigms strong connections are predominantly unidirectional, whereas they are bidirectional under rate coding. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas.
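A voltage-based plasticity rule of the kind this abstract describes can be written schematically as two gated terms. The time constants, thresholds, and amplitudes below are illustrative, not the paper's fitted values: depression is triggered by presynaptic spikes while a slow average `u_bar` of the postsynaptic voltage exceeds `theta_minus`; potentiation requires a presynaptic trace together with an instantaneous voltage above `theta_plus`.

```python
dt = 0.1                                 # ms
tau_minus, tau_x = 10.0, 15.0            # trace time constants (ms)
A_ltd, A_ltp = 1e-4, 1e-4                # plasticity amplitudes
theta_minus, theta_plus = -70.0, -45.0   # voltage thresholds (mV)

def step(w, u, pre_spike, state):
    """One Euler step. u: postsynaptic voltage (mV); pre_spike in {0, 1}."""
    u_bar, x_bar = state
    u_bar += dt / tau_minus * (u - u_bar)            # low-pass voltage average
    x_bar += dt / tau_x * (pre_spike / dt - x_bar)   # presynaptic spike trace
    # depression: presynaptic spike while the averaged voltage is depolarized
    w -= dt * A_ltd * (pre_spike / dt) * max(u_bar - theta_minus, 0.0)
    # potentiation: presynaptic trace while the voltage is strongly depolarized
    w += dt * A_ltp * x_bar * max(u - theta_plus, 0.0) * max(u_bar - theta_minus, 0.0)
    return w, (u_bar, x_bar)
```

With a hyperpolarized postsynaptic cell both gates stay closed and presynaptic spikes leave the weight unchanged; pairing the same spikes with depolarization engages the plasticity terms, giving the voltage dependence the abstract refers to.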

    A roadmap to integrate astrocytes into Systems Neuroscience.

    Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to studying neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
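The analysis pipeline the roadmap advocates, dimensionality reduction followed by decoding, can be demonstrated end to end on synthetic data. Everything below is a toy: the "Ca2+ transients" are simulated, and the decoder is a simple nearest-centroid rule rather than anything proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_cells = 200, 50
labels = rng.integers(0, 2, n_trials)                       # two behavioral states
signal = np.outer(labels - 0.5, rng.normal(size=n_cells))   # state-dependent mode
data = signal + 0.3 * rng.normal(size=(n_trials, n_cells))  # noisy Ca2+ features

X = data - data.mean(axis=0)                   # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:3].T                               # project onto top-3 components

c0 = Z[labels == 0].mean(axis=0)               # class centroids in PCA space
c1 = Z[labels == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

When population activity carries a low-dimensional, state-dependent mode, as constructed here, the projection separates the states and the decoder recovers the label, which is the kind of evidence the roadmap calls for in real astrocytic recordings.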