
    Stability of neuronal networks with homeostatic regulation

    Neurons are equipped with homeostatic mechanisms that counteract long-term perturbations of their average activity and thereby keep neurons in a healthy and information-rich operating regime. While homeostasis is believed to be crucial for neural function, a systematic analysis of homeostatic control has largely been lacking. Here we analyse the conditions necessary for stable homeostatic control. We consider networks of neurons with homeostasis and show that homeostatic control that is stable for single neurons can destabilize activity in otherwise stable recurrent networks, leading to strong non-abating oscillations in the activity. This instability can be prevented by slowing down the homeostatic control: the stronger the network recurrence, the slower the homeostasis has to be. Next, we consider how non-linearities in the neural activation function affect these constraints. Finally, we consider the case in which homeostatic feedback is mediated via a cascade of multiple intermediate stages. Counter-intuitively, the addition of extra stages in the homeostatic control loop further destabilizes activity in single neurons and networks. Our theoretical framework for homeostasis thus reveals previously unconsidered constraints on homeostasis in biological networks, and identifies conditions that require the slow time constants of homeostatic regulation observed experimentally.
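    The constraint can be made concrete with a toy simulation; the sketch below is illustrative only (the equations, the two-stage homeostatic cascade, and all parameter values are assumptions, not the paper's model). A single rate unit with effective recurrence w is driven toward a target rate by a slow controller; fast homeostasis combined with strong recurrence yields growing oscillations, while slowing the homeostatic sensor restores stability.

```python
# Illustrative sketch, not the paper's model: a linear rate unit with
# recurrence w under a two-stage homeostatic feedback loop.
import numpy as np

def classify(w, tau_1, tau_2=1.0, tau_r=0.02, r0=5.0, I=1.0, T=300.0, dt=0.002):
    """Euler-integrate
         tau_r dr/dt = -(1 - w) r + g + I     (rate unit with recurrence w)
         tau_1 dm/dt = r0 - r                 (homeostatic sensor stage)
         tau_2 dg/dt = m - g                  (homeostatic effector stage)
       and report whether the activity settles or oscillates."""
    n = int(T / dt)
    r, m, g = r0, 0.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        dr = (-(1.0 - w) * r + g + I) / tau_r
        dm = (r0 - r) / tau_1
        dg = (m - g) / tau_2
        r += dt * dr; m += dt * dm; g += dt * dg
        if abs(r - r0) > 100.0:               # runaway oscillation
            return "unstable (growing oscillations)"
        trace[i] = r
    tail = trace[int(0.8 * n):]               # last 20% of the run
    return "stable" if tail.max() - tail.min() < 0.1 else "sustained oscillations"

for w in (0.0, 0.9):                          # weak vs. strong recurrence
    for tau_1 in (0.5, 50.0):                 # fast vs. slow homeostasis (s)
        print(f"w={w:.1f}, tau_1={tau_1:5.1f} s -> {classify(w, tau_1)}")
```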

    The malleable brain: plasticity of neural circuits and behavior: A review from students to students

    One of the most intriguing features of the brain is its ability to be malleable, allowing it to adapt continually to changes in the environment. Specific neuronal activity patterns drive long-lasting increases or decreases in the strength of synaptic connections, referred to as long-term potentiation (LTP) and long-term depression (LTD) respectively. Such phenomena have been described in a variety of model organisms, which are used to study molecular, structural, and functional aspects of synaptic plasticity. This review originated from the first International Society for Neurochemistry (ISN) and Journal of Neurochemistry (JNC) Flagship School held in Alpbach, Austria (Sep 2016), and will use its curriculum and discussions as a framework to review some of the current knowledge in the field of synaptic plasticity. First, we describe the role of plasticity during development and the persistent changes of neural circuitry occurring when sensory input is altered during critical developmental stages. We then outline the signaling cascades resulting in the synthesis of new plasticity-related proteins, which ultimately enable sustained changes in synaptic strength. Going beyond the traditional understanding of synaptic plasticity conceptualized by LTP and LTD, we discuss system-wide modifications and recently unveiled homeostatic mechanisms, such as synaptic scaling. Finally, we describe the neural circuits and synaptic plasticity mechanisms driving associative memory and motor learning. Evidence summarized in this review provides a current view of synaptic plasticity in its various forms, offers new insights into the underlying mechanisms and behavioral relevance, and provides directions for future research in the field of synaptic plasticity.

    Homeostatic plasticity and external input shape neural network dynamics

    In vitro and in vivo spiking activity clearly differ. Whereas networks in vitro develop strong bursts separated by periods of very little spiking activity, in vivo cortical networks show continuous activity. This is puzzling considering that both networks presumably share similar single-neuron dynamics and plasticity rules. We propose that the defining difference between in vitro and in vivo dynamics is the strength of external input. In vitro, networks are virtually isolated, whereas in vivo every brain area receives continuous input. We analyze a model of spiking neurons in which the input strength, mediated by spike rate homeostasis, determines the characteristics of the dynamical state. In more detail, our analytical and numerical results on various network topologies show consistently that under increasing input, homeostatic plasticity generates distinct dynamic states, from bursting, to close-to-critical, reverberating and irregular states. This implies that the dynamic state of a neural network is not fixed but can readily adapt to the input strength. Indeed, our results match experimental spike recordings in vitro and in vivo: the in vitro bursting behavior is consistent with a state generated by very low network input (< 0.1%), whereas in vivo activity suggests that on the order of 1% of recorded spikes are input-driven, resulting in reverberating dynamics. Importantly, this predicts that one can abolish the ubiquitous bursts of in vitro preparations, and instead impose dynamics comparable to in vivo activity, by exposing the system to weak long-term stimulation, thereby opening new paths to establish an in vivo-like assay in vitro for basic as well as neurological studies.
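    The input-dependence of the dynamical state can be sketched with a toy branching-style model (an illustration, not the published spiking model; the update rule, the homeostatic rule, and all parameters are assumptions). Homeostasis adjusts the recurrent coupling m so that the mean rate matches a target r*; since the stationary rate obeys r = h / (1 - m), the adapted coupling settles near m = 1 - h / r*: weak input pushes the network close to criticality and bursting, while stronger input yields smaller m and irregular activity.

```python
# Illustrative sketch: Poisson branching dynamics with slow rate homeostasis.
import numpy as np

rng = np.random.default_rng(0)
r_target = 1.0                                # target rate (events per step)

def simulate(h, steps=60_000, eta=1e-5):
    m, a = 0.9, 0                             # recurrent coupling, activity
    ms, rates = [], []
    for _ in range(steps):
        a = rng.poisson(m * a + h)            # recurrent + external drive
        m = min(max(m + eta * (r_target - a), 0.0), 1.2)   # slow homeostasis
        ms.append(m); rates.append(a)
    r = np.asarray(rates[steps // 2:], dtype=float)
    fano = r.var() / max(r.mean(), 1e-9)      # burstiness proxy (Fano factor)
    return float(np.mean(ms[steps // 2:])), fano

for h in (0.001, 0.01, 0.1, 1.0):             # external input per time step
    m_mean, fano = simulate(h)
    print(f"h={h:5.3f}: adapted m={m_mean:.3f} "
          f"(prediction {1.0 - h / r_target:.3f}), Fano={fano:7.1f}")
```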

    A Fast-Slow Analysis of the Dynamics of REM Sleep

    Waking and sleep states are regulated by the coordinated activity of a number of neuronal populations in the brainstem and hypothalamus whose synaptic interactions compose a sleep-wake regulatory network. Physiologically based mathematical models of the sleep-wake regulatory network contain mechanisms operating on multiple time scales, including relatively fast synaptic interactions between neuronal populations and much slower homeostatic and circadian processes that modulate sleep-wake temporal patterning. In this study, we exploit the naturally arising slow time scale of the homeostatic sleep drive in a reduced sleep-wake regulatory network model and use fast-slow analysis to investigate the dynamics of rapid eye movement (REM) sleep regulation. The network model consists of a reduced number of wake-, non-REM (NREM) sleep-, and REM sleep-promoting neuronal populations, with synaptic interactions reflecting the mutually inhibitory flip-flop conceptual model for sleep-wake regulation and the reciprocal interaction model for REM sleep regulation. Network dynamics regularly alternate between wake and sleep states as governed by the slow homeostatic sleep drive. By varying a parameter associated with the activation of the REM-promoting population, we cause REM dynamics during sleep episodes to vary from suppression to single activations to regular REM-NREM cycling, corresponding to changes in REM patterning induced by circadian modulation and observed in different mammalian species. We also use fast-slow analysis to explain complex effects on sleep-wake patterning of simulated experiments in which agonists and antagonists of different neurotransmitters are microinjected into specific neuronal populations participating in the sleep-wake regulatory network.
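    The fast-slow structure can be illustrated with a minimal sketch (not the authors' model: it keeps only a wake- and a sleep-promoting population and omits the REM-promoting one; equations and parameters are assumptions). The two populations form a mutually inhibitory flip-flop on the fast time scale, while a slow homeostatic drive h builds during wake and discharges during sleep, carrying the system around a hysteresis loop and producing alternating wake and sleep episodes.

```python
# Illustrative fast-slow sketch of a homeostatically driven sleep-wake flip-flop.
import numpy as np

def f(x, gain=4.0):
    """Sigmoidal firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def simulate(T=2000.0, dt=0.01, tau_fast=1.0, tau_h=200.0):
    W, S, h = 1.0, 0.0, 0.0                   # wake rate, sleep rate, sleep drive
    n = int(T / dt)
    wake = np.empty(n)
    for i in range(n):
        # Fast flip-flop: mutual inhibition; h excites the sleep population.
        dW = (-W + f(0.5 - 2.5 * S)) / tau_fast
        dS = (-S + f(3.0 * h - 2.5 * W)) / tau_fast
        # Slow homeostatic sleep drive: builds in wake, discharges in sleep.
        dh = ((1.0 - h) if W > S else -h) / tau_h
        W += dt * dW; S += dt * dS; h += dt * dh
        wake[i] = float(W > S)
    return wake, dt

wake, dt = simulate()
edges = np.flatnonzero(np.diff(wake))         # state-switch times (in steps)
durations = np.diff(np.concatenate(([0], edges, [wake.size - 1]))) * dt
print("alternating episode durations (a.u.):", np.round(durations, 1))
```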

    Regulation of neuronal excitability through pumilio-dependent control of a sodium channel gene

    Dynamic changes in synaptic connectivity and strength, which occur during both embryonic development and learning, have the tendency to destabilize neural circuits. To overcome this, neurons have developed a diversity of homeostatic mechanisms to maintain firing within physiologically defined limits. In this study, we show that activity-dependent control of mRNA for a specific voltage-gated Na+ channel [encoded by paralytic (para)] contributes to the regulation of membrane excitability in Drosophila motoneurons. Quantification of para mRNA, by real-time reverse-transcription PCR, shows that levels are significantly decreased in CNSs in which synaptic excitation is elevated, whereas, conversely, they are significantly increased when synaptic vesicle release is blocked. Quantification of mRNA encoding the translational repressor pumilio (pum) reveals a reciprocal regulation to that seen for para. Pumilio is sufficient to influence para mRNA. Thus, para mRNA is significantly elevated in a loss-of-function allele of pum (pum^bemused), whereas expression of a full-length pum transgene is sufficient to reduce para mRNA. In the absence of pum, increased synaptic excitation fails to reduce para mRNA, showing that Pum is also necessary for activity-dependent regulation of para mRNA. Analysis of voltage-gated Na+ current (INa) mediated by para in two identified motoneurons (termed aCC and RP2) reveals that removal of pum is sufficient to increase one of two separable INa components (persistent INa), whereas overexpression of a pum transgene is sufficient to suppress both components (transient and persistent). We show, through use of anemone toxin (ATX II), that alteration in persistent INa is sufficient to regulate membrane excitability in these two motoneurons.
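    The direction of the feedback loop can be summarized with a toy model (purely illustrative, not a quantitative model from the paper; equations and rate constants are assumptions): synaptic drive raises Pumilio, Pumilio represses para mRNA, and excitability tracks para, so the loop partially compensates for changes in drive.

```python
# Toy sketch of activity-dependent, Pumilio-mediated repression of para mRNA.
def run(drive, T=500.0, dt=0.01):
    pum, para = 1.0, 1.0
    for _ in range(int(T / dt)):
        activity = drive * para                 # excitability scales with para levels
        pum += dt * 0.1 * (activity - pum)      # elevated activity raises Pumilio
        para += dt * 0.05 * (1.0 - pum * para)  # Pumilio represses para mRNA
    return pum, para, drive * para

for drive in (0.5, 1.0, 2.0):                   # blocked, normal, elevated excitation
    pum, para, activity = run(drive)
    print(f"drive={drive:.1f}: pum={pum:.2f}, para mRNA={para:.2f}, activity={activity:.2f}")
```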

    Logarithmic distributions prove that intrinsic learning is Hebbian

    In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. Differences in connectivity (strongly recurrent cortex vs. feed-forward striatum and cerebellum), neurotransmitter (GABA in striatum vs. glutamate in cortex) or level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights but also intrinsic gains need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
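    The link between Hebbian (activity-proportional, hence multiplicative) updates and lognormal distributions can be seen in a toy simulation (a sketch, not the authors' model; parameters are assumptions): multiplicative updates accumulate in log-space and produce heavy-tailed, lognormal values, whereas additive updates produce symmetric, Gaussian-like ones.

```python
# Illustrative sketch: multiplicative vs. additive random updates.
import numpy as np

rng = np.random.default_rng(1)
n, steps, sigma = 20_000, 1_000, 0.03

w_mult = np.ones(n)                    # Hebbian-like, multiplicative updates
w_add = np.ones(n)                     # additive control
for _ in range(steps):
    noise = rng.normal(0.0, sigma, n)
    w_mult *= np.exp(noise - 0.5 * sigma**2)   # change proportional to w (mean-preserving)
    w_add += noise                             # change independent of w

def skew(x):
    """Third standardized moment: ~0 for a Gaussian, strongly positive for a lognormal."""
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"multiplicative: skew(w)={skew(w_mult):5.2f}, skew(log w)={skew(np.log(w_mult)):5.2f}")
print(f"additive:       skew(w)={skew(w_add):5.2f}")
```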