
    How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation

    This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks, and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. On this basis, we use the thermodynamic formalism from ergodic theory to show that Gibbs distributions are natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics. (Comment: 39 pages, 3 figures.)
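    As a rough illustration of how a Gibbs distribution can be constrained by empirical averages, the sketch below fits a pairwise maximum-entropy model P(s) ∝ exp(h·s + sᵀJs) to binary spike patterns by gradient ascent on the likelihood. This is not the gIF construction or the thermodynamic-formalism machinery of the paper, only a brute-force toy for a handful of neurons; the function name, the toy data, and all parameter values are invented for the example.

```python
import numpy as np
from itertools import product

def fit_pairwise_gibbs(patterns, n_steps=2000, lr=0.1):
    """Fit a pairwise maximum-entropy (Gibbs) model P(s) ~ exp(h.s + s'Js)
    so that model means and pairwise averages match those of `patterns`
    (binary array of shape (T, N)).  Brute force over all 2^N states,
    so only intended for small N."""
    T, N = patterns.shape
    emp_mean = patterns.mean(axis=0)                    # empirical <s_i>
    emp_pair = patterns.T @ patterns / T                # empirical <s_i s_j>

    states = np.array(list(product([0, 1], repeat=N)), dtype=float)
    h = np.zeros(N)
    J = np.zeros((N, N))

    for _ in range(n_steps):
        energy = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()                                    # Gibbs probabilities
        model_mean = p @ states                         # model <s_i>
        model_pair = (states * p[:, None]).T @ states   # model <s_i s_j>
        h += lr * (emp_mean - model_mean)               # gradient ascent on
        J += lr * (emp_pair - model_pair)               # the log-likelihood
        np.fill_diagonal(J, 0.0)
    return h, J

# Toy usage: random "spike trains" of 5 neurons over 2000 time bins.
rng = np.random.default_rng(0)
spikes = (rng.random((2000, 5)) < 0.2).astype(float)
h, J = fit_pairwise_gibbs(spikes)
print("fitted fields:", np.round(h, 2))
print("fitted couplings:\n", np.round(J, 2))
```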

    3,3′-Dinitrobisphenol A

    The title compound [systematic name: 2,2′-dinitro-4,4′-(propane-2,2-diyl)diphenol], C15H14N2O6, crystallizes with two molecules in the asymmetric unit. Both have a trans conformation for their OH groups, and in each, the two aromatic rings are nearly orthogonal, with dihedral angles of 88.30 (3) and 89.62 (2)°. The nitro groups are nearly in the planes of their attached benzene rings, with C—C—N—O torsion angles in the range 1.21 (17)–4.06 (17)°, and they each accept an intramolecular O—H⋯O hydrogen bond from their adjacent OH groups. One of the OH groups also forms a weak intermolecular O—H⋯O hydrogen bond.

    Mechanisms explaining transitions between tonic and phasic firing in neuronal populations as predicted by a low dimensional firing rate model

    Several firing patterns experimentally observed in neural populations have been successfully correlated with animal behavior. Population bursting, regarded here as a period of high firing rate followed by a period of quiescence, is typically observed in groups of neurons during behavior. Biophysical membrane-potential models of single-cell bursting involve at least three equations. Extending such models to study the collective behavior of neural populations involves thousands of equations and can be computationally very expensive. For this reason, low-dimensional population models that capture biophysical aspects of networks are needed. The present paper uses a firing-rate model to study mechanisms that trigger and stop transitions between tonic and phasic population firing. These mechanisms are captured by a two-dimensional system, which can potentially be extended to include interactions between different areas of the nervous system with a small number of equations. The typical behavior of midbrain dopaminergic neurons in the rodent is used as an example to illustrate and interpret our results. The model presented here can be used as a building block to study interactions between networks of neurons. This theoretical approach may help contextualize and understand the factors involved in regulating burst firing in populations and how it may modulate distinct aspects of behavior. (Comment: 25 pages, including references and appendices; 12 figures uploaded as a separate file.)
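    For a concrete feel for how a two-dimensional rate model can switch between tonic and phasic population firing, here is a hedged sketch of the generic motif: a fast, recurrently excited rate variable coupled to a slow adaptation variable. It is not the model of the paper; the sigmoid gain, all parameter values, and the `simulate_rate_model` function are illustrative assumptions.

```python
import numpy as np

def simulate_rate_model(I_drive, T=5000.0, dt=0.1,
                        w=10.0, b=20.0, tau_r=10.0, tau_a=500.0,
                        theta=3.0, k=0.5):
    """Euler-integrate a generic two-variable firing-rate model:
    a fast population rate r with recurrent excitation w and a slow
    adaptation variable a (time in ms).  A sketch of the
    rate-plus-slow-negative-feedback motif, not the paper's model."""
    F = lambda x: 1.0 / (1.0 + np.exp(-(x - theta) / k))   # population gain
    n = int(T / dt)
    r, a = 0.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        r += dt * (-r + F(w * r - a + I_drive)) / tau_r    # fast rate dynamics
        a += dt * (-a + b * r) / tau_a                     # slow adaptation
        trace[i] = r
    return trace

# With these ad-hoc parameters, weak drive gives a low steady rate, strong
# drive gives tonic high-rate firing, and intermediate drive gives a slow
# alternation between high-rate and quiescent episodes (phasic firing).
for I in (1.0, 5.0, 20.0):
    r = simulate_rate_model(I)
    tail = r[len(r) // 2:]
    print(f"I={I:4.1f}  mean rate={tail.mean():.2f}  "
          f"range={tail.min():.2f}-{tail.max():.2f}")
```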

    Robustness of Learning That Is Based on Covariance-Driven Synaptic Plasticity

    It is widely believed that learning is due, at least in part, to long-lasting modifications of the strengths of synapses in the brain. Theoretical studies have shown that a family of synaptic plasticity rules, in which synaptic changes are driven by covariance, is particularly useful for many forms of learning, including associative memory, gradient estimation, and operant conditioning. Covariance-based plasticity is, however, inherently sensitive: even a slight mistuning of the parameters of a covariance-based plasticity rule is likely to result in substantial changes in synaptic efficacies. Therefore, the biological relevance of covariance-based plasticity models is questionable. Here, we study the effects of mistuning parameters of the plasticity rule in a decision-making model in which synaptic plasticity is driven by the covariance of reward and neural activity. An exact covariance plasticity rule yields Herrnstein's matching law. We show that although the effect of slight mistuning of the plasticity rule on the synaptic efficacies is large, the behavioral effect is small. Thus, matching behavior is robust to mistuning of the parameters of the covariance-based plasticity rule. Furthermore, the mistuned covariance rule results in undermatching, which is consistent with experimentally observed behavior. These results substantiate the hypothesis that approximate covariance-based synaptic plasticity underlies operant conditioning. However, we show that mistuning of the mean subtraction makes behavior sensitive to mistuning of the properties of the decision-making network. Thus, there is a tradeoff between the robustness of matching behavior to changes in the plasticity rule and its robustness to changes in the properties of the decision-making network.
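    The covariance rule itself is easy to state in code. The toy below (a hypothetical two-choice task with baited rewards, a softmax policy, and running-average estimates of the means) applies the update Δw_i ∝ (R − ⟨R⟩)(a_i − ⟨a_i⟩) and then compares choice fractions with income fractions; with an exact covariance rule the two should roughly match, in the spirit of Herrnstein's matching law. None of the details below are taken from the paper's decision-making network.

```python
import numpy as np

def covariance_matching(n_trials=100_000, eta=0.01, beta=5.0,
                        bait_prob=(0.3, 0.1), seed=0):
    """Toy two-choice task with baited (concurrent-VI-like) rewards and a
    covariance plasticity rule  dw_i ~ (R - <R>)(a_i - <a_i>).
    Hypothetical setup: choices are drawn from a softmax over two
    scalar 'synaptic weights'."""
    rng = np.random.default_rng(seed)
    w = np.zeros(2)                  # effective synaptic weights, one per target
    a_bar = np.full(2, 0.5)          # running averages of the 'activities'
    r_bar = 0.0                      # running average of reward
    baited = np.zeros(2, dtype=bool)
    choices = np.zeros(2)
    income = np.zeros(2)

    for _ in range(n_trials):
        baited |= rng.random(2) < bait_prob            # targets get (re)baited
        p = np.exp(beta * w)
        p /= p.sum()                                   # softmax choice policy
        c = rng.choice(2, p=p)
        a = np.eye(2)[c]                               # 'activity': 1 for chosen target
        reward = float(baited[c])
        baited[c] = False                              # collect bait if present
        # covariance-driven update, with slow running estimates of the means
        w += eta * (reward - r_bar) * (a - a_bar)
        a_bar += 0.01 * (a - a_bar)
        r_bar += 0.01 * (reward - r_bar)
        choices[c] += 1
        income[c] += reward

    print("choice fractions:", choices / choices.sum())
    print("income fractions:", income / max(income.sum(), 1.0))

covariance_matching()
```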

    A discrete time neural network model with spiking neurons II. Dynamics with noise

    We provide rigorous and exact results characterizing the statistics of spike trains in a network of leaky integrate-and-fire neurons, where time is discrete and where neurons are subjected to noise, without restriction on the synaptic weights. We show the existence and uniqueness of an invariant measure of Gibbs type and discuss its properties. We also discuss Markovian approximations and relate them to the approaches currently used in computational neuroscience to analyse experimental spike train statistics. (Comment: 43 pages; revised version; to appear in Journal of Mathematical Biology.)
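    A minimal sketch of the model class in question (discrete time, leaky integrate-and-fire units, unrestricted synaptic weights, additive noise) is given below. It is meant only to make the setting concrete; the weight statistics, noise level, and other parameters are arbitrary choices, not the paper's.

```python
import numpy as np

def simulate_discrete_lif(n_neurons=10, n_steps=1000, gamma=0.8,
                          theta=1.0, sigma=0.2, drive=0.3, seed=0):
    """Minimal discrete-time leaky integrate-and-fire network with noise,
    in the spirit of the model class studied in the paper (all parameter
    values here are arbitrary).  Returns the binary spike raster."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.3, size=(n_neurons, n_neurons))  # unrestricted weights
    np.fill_diagonal(W, 0.0)
    V = rng.random(n_neurons)
    spikes = np.zeros((n_steps, n_neurons), dtype=int)
    for t in range(n_steps):
        Z = (V >= theta).astype(float)        # spike if threshold is crossed
        spikes[t] = Z
        # leak (with reset after a spike), recurrent input, drive and noise
        V = gamma * V * (1.0 - Z) + W @ Z + drive + sigma * rng.normal(size=n_neurons)
    return spikes

raster = simulate_discrete_lif()
print("mean firing probability per time step:", raster.mean())
```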

    Tag-Trigger-Consolidation: A Model of Early and Late Long-Term-Potentiation and Depression

    Changes in synaptic efficacies need to be long-lasting in order to serve as a substrate for memory. Experimentally, synaptic plasticity exhibits phases covering the induction of long-term potentiation and depression (LTP/LTD) during the early phase of synaptic plasticity, the setting of synaptic tags, a trigger process for protein synthesis, and a slow transition leading to synaptic consolidation during the late phase of synaptic plasticity. We present a mathematical model that describes these different phases of synaptic plasticity. The model explains a large body of experimental data on synaptic tagging and capture, cross-tagging, and the late phases of LTP and LTD. Moreover, the model accounts for the dependence of LTP and LTD induction on voltage and presynaptic stimulation frequency. The stabilization of potentiated synapses during the transition from early to late LTP occurs by protein synthesis dynamics that are shared by groups of synapses. The functional consequence of this shared process is that previously stabilized patterns of strong or weak synapses onto the same postsynaptic neuron are well protected against later changes induced by LTP/LTD protocols at individual synapses.
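    As a loose caricature of the tag / trigger / consolidation idea (synapse-specific tags, a protein-synthesis signal shared by a group of synapses, and slow transfer of early changes into a stable late component), one can write the toy below. It is emphatically not the model defined in the paper; every variable, threshold, and time constant here is a made-up placeholder.

```python
import numpy as np

def tag_and_capture(stimulus, dt=1.0, tau_early=3600.0, tau_tag=3600.0,
                    tag_threshold=0.5, protein_threshold=0.8,
                    tau_protein=3600.0, consolidation_rate=1e-3):
    """Caricature of synaptic tagging and capture for a group of synapses
    sharing one protein-synthesis signal.  `stimulus[t, i]` is the induced
    early weight change at synapse i at time step t (seconds).  A toy
    illustration of the tag / trigger / consolidation stages only."""
    n_steps, n_syn = stimulus.shape
    early = np.zeros(n_syn)     # early-phase (decaying) weight component
    late = np.zeros(n_syn)      # late-phase (stable) weight component
    tag = np.zeros(n_syn)       # synapse-specific tags
    protein = 0.0               # protein signal shared by all synapses

    for t in range(n_steps):
        early += stimulus[t]                                   # induction (early LTP/LTD)
        tag = np.where(np.abs(stimulus[t]) > tag_threshold, 1.0, tag)
        if np.abs(early).sum() > protein_threshold:            # strong overall induction
            protein = 1.0                                      # triggers protein synthesis
        # tagged synapses capture protein: early changes become late changes
        transfer = consolidation_rate * protein * tag * early * dt
        late += transfer
        early += -early * dt / tau_early - transfer
        tag -= tag * dt / tau_tag
        protein -= protein * dt / tau_protein
    return early, late

# Example: strong induction at synapse 0 (tagged, consolidated) versus
# weak induction at synapse 1 (below tag threshold, decays away).
stim = np.zeros((24 * 3600, 2))        # 24 hours at 1-second resolution
stim[10, 0] = 1.0
stim[600, 1] = 0.3
early, late = tag_and_capture(stim)
print("late-phase weights after 24 h:", np.round(late, 3))
```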

    Phenomenological models of synaptic plasticity based on spike timing

    Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing-dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
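    The simplest member of this model family, pair-based STDP with exponential windows, can be written with two synaptic traces, as in the sketch below. The parameter values are common textbook numbers rather than anything specific to this review.

```python
import numpy as np

def pair_based_stdp(pre_spikes, post_spikes, dt=1.0,
                    A_plus=0.01, A_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_init=0.5):
    """Classic pair-based STDP implemented with synaptic traces:
    pre-before-post potentiates, post-before-pre depresses, with
    exponential windows.  `pre_spikes` and `post_spikes` are binary
    arrays over time bins of width `dt` (ms)."""
    x = 0.0   # presynaptic trace (low-pass filtered pre spike train)
    y = 0.0   # postsynaptic trace (low-pass filtered post spike train)
    w = w_init
    for pre, post in zip(pre_spikes, post_spikes):
        x += -x * dt / tau_plus + pre
        y += -y * dt / tau_minus + post
        w += A_plus * x * post          # potentiation at each post spike
        w -= A_minus * y * pre          # depression at each pre spike
    return w

# Pre spike 10 ms before each post spike -> net potentiation.
t = np.arange(0, 1000, 1.0)
pre = np.isin(t, np.arange(50, 1000, 100)).astype(float)
post = np.isin(t, np.arange(60, 1000, 100)).astype(float)
print("weight after pairing:", pair_based_stdp(pre, post))
```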

    Noise Suppression and Surplus Synchrony by Coincidence Detection

    The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow spike synchrony to be modeled, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to closely time-locked spiking activity of pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes. In the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high afferent correlation, a qualitatively new picture arises in the presence of synchrony. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks.
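    To make "correlation transmission" concrete, the sketch below drives two leaky integrate-and-fire neurons with Poisson input, a fraction c of which comes from common afferents, and measures the correlation of their output spike counts. It is a crude simulation illustration only; the diffusion-approximation and pulse-coupling analytics of the paper are not reproduced, and all parameter values are ad hoc.

```python
import numpy as np

def lif_pair_correlation(c=0.3, rate=5000.0, T=100.0, dt=1e-3,
                         tau_m=20e-3, J=0.2e-3, v_th=20e-3, v_reset=0.0,
                         bin_width=20e-3, seed=0):
    """Drive two leaky integrate-and-fire neurons with Poisson input of
    total rate `rate` (Hz), a fraction `c` of which is shared (common
    afferents).  Returns the correlation coefficient of the output spike
    counts in bins of width `bin_width` seconds."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    common = rng.poisson(c * rate * dt, n)           # shared afferent spikes
    priv1 = rng.poisson((1 - c) * rate * dt, n)      # private to neuron 1
    priv2 = rng.poisson((1 - c) * rate * dt, n)      # private to neuron 2
    v = np.zeros(2)
    spikes = np.zeros((n, 2))
    for t in range(n):
        inputs = J * np.array([common[t] + priv1[t], common[t] + priv2[t]])
        v += dt * (-v) / tau_m + inputs              # leak plus pulse input
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = v_reset
    steps_per_bin = int(bin_width / dt)
    counts = spikes[: n - n % steps_per_bin]
    counts = counts.reshape(-1, steps_per_bin, 2).sum(axis=1)
    return np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]

for c in (0.1, 0.5, 0.9):
    print(f"input correlation {c:.1f} -> "
          f"output correlation {lif_pair_correlation(c):.2f}")
```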

    Burst-Time-Dependent Plasticity Robustly Guides ON/OFF Segregation in the Lateral Geniculate Nucleus

    Spontaneous retinal activity (known as “waves”) remodels synaptic connectivity to the lateral geniculate nucleus (LGN) during development. Analysis of retinal waves recorded with multielectrode arrays in mouse suggested that a cue for the segregation of functionally distinct (ON and OFF) retinal ganglion cells (RGCs) in the LGN may be a desynchronization in their firing, where ON cells precede OFF cells by one second. Using the recorded retinal waves as input, we explore timing-based plasticity rules for the evolution of synaptic weights with two different modeling approaches, to identify key features underlying ON/OFF segregation. First, we analytically derive a linear model for the evolution of ON and OFF weights, to understand how synaptic plasticity rules extract input firing properties to guide segregation. Second, we simulate postsynaptic activity with a nonlinear integrate-and-fire model to compare findings with the linear model. We find that spike-time-dependent plasticity, which modifies synaptic weights based on millisecond-long timing and order of pre- and postsynaptic spikes, fails to segregate ON and OFF retinal inputs in the absence of normalization. Implementing homeostatic mechanisms results in segregation, but only with carefully tuned parameters. Furthermore, extending spike integration timescales to match the second-long input correlation timescales always leads to ON segregation, because ON cells fire before OFF cells. We show that burst-time-dependent plasticity can robustly guide ON/OFF segregation in the LGN without normalization, by integrating pre- and postsynaptic bursts irrespective of their firing order and over second-long timescales. We predict that an LGN neuron will become ON- or OFF-responsive based on a local competition among the firing patterns of the neighboring RGCs connecting to it. Finally, we demonstrate consistency with ON/OFF segregation in ferret, despite differences in the firing properties of retinal waves. Our model suggests that diverse input statistics of retinal waves can be robustly interpreted by a burst-based rule, which underlies retinogeniculate plasticity across different species.
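    The following sketch caricatures an order-independent, second-timescale burst rule: burst onsets are detected from spike times, and pre/post bursts that fall within about a second of each other potentiate the weight regardless of which one came first, while unpaired presynaptic bursts depress it. This is an illustrative stand-in, not the burst-time-dependent plasticity rule defined in the paper; the burst-detection criterion and all amplitudes and time constants are assumptions.

```python
import numpy as np

def burst_times(spike_times, max_isi=0.1):
    """Group spikes into bursts (consecutive inter-spike intervals below
    `max_isi` seconds) and return the burst onset times."""
    spike_times = np.asarray(spike_times)
    if spike_times.size == 0:
        return np.array([])
    onsets = np.concatenate(([True], np.diff(spike_times) > max_isi))
    return spike_times[onsets]

def btdp_update(pre_spikes, post_spikes, w, A_plus=0.05, A_minus=0.02,
                tau=1.0, w_max=1.0):
    """Order-independent, second-timescale burst rule (a caricature of
    burst-time-dependent plasticity): pre/post burst onsets within ~tau
    seconds of each other potentiate the weight regardless of order;
    unpaired presynaptic bursts depress it."""
    pre_b = burst_times(pre_spikes)
    post_b = burst_times(post_spikes)
    for tp in pre_b:
        dt_min = np.min(np.abs(post_b - tp)) if post_b.size else np.inf
        if dt_min < tau:
            w += A_plus * np.exp(-dt_min / tau)   # overlapping bursts -> potentiation
        else:
            w -= A_minus                          # unpaired burst -> depression
    return float(np.clip(w, 0.0, w_max))

# The postsynaptic cell bursts ~0.8 s after each input burst; reversing the
# order would give the same result, because the rule only uses |dt|.
pre = np.concatenate([np.arange(0.0, 0.3, 0.05), np.arange(10.0, 10.3, 0.05)])
post = pre + 0.8
print("updated weight:", btdp_update(pre, post, w=0.5))
```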

    Synergistic effects of oncolytic reovirus and docetaxel chemotherapy in prostate cancer

    Reovirus type 3 Dearing (T3D) has demonstrated oncolytic activity in vitro, in in vivo murine models, and in early clinical trials. However, the true potential of oncolytic viruses may only be realized fully in combination with other modalities such as chemotherapy, targeted therapy and radiotherapy. In this study, we examine the oncolytic activity of reovirus T3D and chemotherapeutic agents against human prostate cancer cell lines, with particular focus on the highly metastatic cell line PC3 and the chemotherapeutic agent docetaxel. Docetaxel is the standard of care for metastatic prostate cancer and acts by disrupting the normal process of microtubule assembly and disassembly. Reoviruses have been shown to associate with microtubules and may require this association for efficient viral replication.