    Input-driven unsupervised learning in recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is an attractor neural network with Hebbian learning (e.g. the Hopfield model). The model's simplicity and the locality of its synaptic update rules come at the cost of a limited storage capacity, compared with the capacity achieved with supervised learning algorithms, whose biological plausibility is questionable. Here, we present an on-line learning rule for a recurrent neural network that achieves near-optimal performance without an explicit supervisory error signal and using only locally accessible information, and which is therefore biologically plausible. The fully connected network consists of excitatory units with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the patterns to be memorized are presented on-line as strong afferent currents, producing a bimodal distribution of the neurons' synaptic inputs ('local fields'). Synapses corresponding to active inputs are modified as a function of the position of the local field with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. An additional parameter of the model allows storage capacity to be traded for robustness, i.e. an increased size of the basins of attraction. We simulated a network of 1001 excitatory neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction: our results show that, for any given basin size, our network more than doubles the storage capacity compared with a standard Hopfield network. Our learning rule is consistent with available experimental data documenting how plasticity depends on firing rate. It predicts that at high enough firing rates, no potentiation should occur
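
    As a rough illustration of the three-threshold rule described in this abstract, here is a minimal Python sketch. The threshold values, learning rates, network size, and function name are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

def three_threshold_update(w, h, x_pre, theta_low, theta_mid, theta_high,
                           lr_pot=0.01, lr_dep=0.01, w_min=0.0, w_max=1.0):
    """Apply a three-threshold plasticity rule to a weight matrix w (post x pre).

    h is the vector of postsynaptic local fields, x_pre the binary input pattern.
    Only synapses from active presynaptic units are eligible for change; local fields
    above the highest or below the lowest threshold leave the synapse unchanged, and
    in between, the field is compared to the intermediate threshold to decide between
    potentiation and depression.
    """
    h = np.asarray(h)
    active = np.asarray(x_pre) > 0                     # presynaptic activity gates plasticity
    in_window = (h > theta_low) & (h < theta_high)     # plasticity window per postsynaptic unit
    potentiate = in_window & (h >= theta_mid)
    depress = in_window & (h < theta_mid)
    w = w + lr_pot * np.outer(potentiate, active) - lr_dep * np.outer(depress, active)
    return np.clip(w, w_min, w_max)

# Hypothetical usage with illustrative sizes and thresholds
rng = np.random.default_rng(0)
N = 100
w = rng.uniform(0.0, 1.0, size=(N, N))
pattern = rng.integers(0, 2, size=N)
fields = w @ pattern                                   # local fields produced by the pattern
w = three_threshold_update(w, fields, pattern,
                           theta_low=10.0, theta_mid=25.0, theta_high=40.0)
```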

    Irregular Persistent Activity Induced by Synaptic Excitatory Feedback

    Neurophysiological experiments on monkeys have reported highly irregular persistent activity during the performance of an oculomotor delayed-response task. These experiments show that during the delay period the coefficient of variation (CV) of interspike intervals (ISI) of prefrontal neurons is above 1, on average, and larger than during the fixation period. In the present paper, we show that this feature can be reproduced in a network in which persistent activity is induced by excitatory feedback, provided that (i) the post-spike reset is close enough to threshold, and (ii) synaptic efficacies are a non-linear function of the pre-synaptic firing rate. The non-linearity between pre-synaptic rate and effective synaptic strength is implemented by a standard short-term depression (STD) mechanism. First, we consider the simplest possible network with excitatory feedback: a fully connected homogeneous network of excitatory leaky integrate-and-fire neurons, studied using both numerical simulations and analytical techniques. The results are then confirmed in a network with selective excitatory neurons and inhibition. In both cases there is a large range of synaptic efficacies for which the firing statistics of single cells are similar to the experimental data
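
    To illustrate the kind of rate-to-strength non-linearity that standard short-term depression produces, here is a minimal sketch using the usual mean-field steady state of a Tsodyks-Markram-style resource variable. The parameter values are illustrative, not the paper's.

```python
def std_effective_strength(rate_hz, J=1.0, U=0.5, tau_rec=0.5):
    """Steady-state effective synaptic strength under standard short-term depression.

    Mean-field steady state of the depression (resource) variable:
        x* = 1 / (1 + U * r * tau_rec)
    so the efficacy J * U * x* saturates and decays roughly as 1/r at high
    presynaptic rates, giving a non-linear mapping from rate to strength.
    """
    return J * U / (1.0 + U * rate_hz * tau_rec)

for r in (1.0, 5.0, 20.0, 50.0):
    print(f"rate {r:5.1f} Hz -> effective strength {std_effective_strength(r):.3f}")
```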

    Multiple forms of working memory emerge from synapse-astrocyte interactions in a neuron-glia network model

    Persistent activity in populations of neurons, time-varying activity across a neural population, and activity-silent mechanisms carried by hidden internal states of the neural population have all been proposed as mechanisms of working memory (WM). Whether these mechanisms are mutually exclusive or can occur in the same neuronal circuit remains elusive, as do their biophysical underpinnings. While WM is traditionally regarded as depending purely on neuronal mechanisms, cortical networks also include astrocytes that can modulate neural activity. We propose and investigate a network model that includes both neurons and glia and show that glia-synapse interactions can lead to multiple stable states of synaptic transmission. Depending on parameters, these interactions can in turn lead to distinct patterns of network activity that can serve as substrates for WM
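
    The abstract does not specify the neuron-glia model, but the following generic toy illustrates the kind of bistability in synaptic transmission it appeals to: a single transmission-like variable with steep positive feedback has two stable fixed points, so a transient input can switch it between a low and a high state that then persists. All names, equations, and values are illustrative, not the paper's model.

```python
import numpy as np

def simulate_bistable_synapse(inputs, dt=0.01, tau=1.0, gain=8.0, theta=0.5):
    """Toy 1-D dynamics with positive feedback producing two stable transmission states.

    dp/dt = (-p + sigmoid(gain * (p - theta)) + input) / tau
    For sufficiently steep feedback (large gain) the system is bistable: a brief
    input pulse switches p from the low to the high stable state, and the new
    state persists after the pulse ends (an activity-silent-like trace).
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    p, trace = 0.1, []
    for inp in inputs:
        p += dt * (-p + sigmoid(gain * (p - theta)) + inp) / tau
        trace.append(p)
    return np.array(trace)

# A brief positive pulse switches the variable to its high state, which then persists.
drive = np.zeros(2000)
drive[200:300] = 0.6
trace = simulate_bistable_synapse(drive)
```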

    Supervised Associative Learning in Spiking Neural Network

    In this paper, we propose a simple supervised associative learning approach for spiking neural networks. In an excitatory-inhibitory network paradigm with Izhikevich spiking neurons, synaptic plasticity is implemented on excitatory-to-excitatory synapses and depends on both spike emission rates and spike timings. As a result of learning, the network is able to associate not only familiar stimuli but also novel stimuli, as observed through synchronised activity within the same subpopulation and between two associated subpopulations
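
    For readers unfamiliar with the Izhikevich neuron used in such networks, here is a minimal single-neuron sketch of its standard dynamics; the network wiring and the rate- and timing-dependent plasticity rule of the paper are not reproduced, and the drive value is an illustrative placeholder.

```python
import numpy as np

def simulate_izhikevich(I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate a single Izhikevich neuron driven by input current I (one value per ms).

    Standard regular-spiking parameters; uses the usual two-variable dynamics
        dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u),
    with reset v <- c, u <- u + d when v crosses 30 mV.
    Returns the membrane trace and the list of spike times (ms).
    """
    v, u = c, b * c
    vs, spikes = [], []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike: record and reset
            spikes.append(t * dt)
            v, u = c, u + d
        vs.append(v)
    return np.array(vs), spikes

# Example: constant drive for 500 ms produces regular spiking
v_trace, spike_times = simulate_izhikevich(np.full(500, 10.0))
```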

    Astrocytes: Orchestrating synaptic plasticity?

    Synaptic plasticity is the capacity of a preexisting connection between two neurons to change in strength as a function of neural activity. Because synaptic plasticity is the major candidate mechanism for learning and memory, the elucidation of its constituent mechanisms is of crucial importance to many aspects of normal and pathological brain function. In particular, a prominent aspect that remains debated is how plasticity mechanisms, which encompass a broad spectrum of temporal and spatial scales, come to play together in a concerted fashion. Here we review and discuss evidence pointing to a possible non-neuronal, glial candidate for such orchestration: the regulation of synaptic plasticity by astrocytes

    Physiological serum 25-hydroxyvitamin D concentrations are associated with improved thyroid function—observations from a community-based program

    Purpose: Vitamin D deficiency has been associated with an increased risk of hypothyroidism and autoimmune thyroid disease. Our aim was to investigate the influence of vitamin D supplementation on thyroid function and anti-thyroid antibody levels. Methods: We constructed a database that included 11,017 participants in a health and wellness program that provided vitamin D supplementation to target physiological serum 25-hydroxyvitamin D [25(OH)D] concentrations (>100 nmol/L). Participant measures were compared between entry to the program (baseline) and follow-up (12 ± 3 months later) using an intent-to-treat analysis. Further, a nested case-control design was utilized to examine differences in thyroid function over 1 year in hypothyroid individuals and euthyroid controls. Results: More than 72% of participants achieved serum 25(OH)D concentrations >100 nmol/L at follow-up, with 20% above 125 nmol/L. Hypothyroidism was detected in 2% (23% including subclinical hypothyroidism) of participants at baseline and 0.4% (or 6% with subclinical) at follow-up. Serum 25(OH)D concentrations ≥125 nmol/L were associated with a 30% reduced risk of hypothyroidism and a 32% reduced risk of elevated anti-thyroid antibodies. Hypothyroid cases were found to have higher mean serum 25(OH)D concentrations at follow-up, which was a significant positive predictor of improved thyroid function. Conclusion: The results of the current study suggest that optimal thyroid function might require serum 25(OH)D concentrations above 125 nmol/L. Vitamin D supplementation may offer a safe and economical approach to improve thyroid function and may provide protection from developing thyroid disease

    The electrum analyzer: Model checking relational first-order temporal specifications

    This paper presents the Electrum Analyzer, a free-software tool to validate and perform model checking of Electrum specifications. Electrum is an extension of Alloy that enriches its relational logic with LTL operators, thus simplifying the specification of dynamic systems. The Analyzer supports both automatic bounded model checking, with an encoding into SAT, and unbounded model checking, with an encoding into SMV. Instance, or counter-example, traces are presented back to the user in a unified visualizer. Features to speed up model checking are offered, including a decomposed parallel solving strategy and the extraction of symbolic bounds. Source code: https://github.com/haslab/Electrum. Video: https://youtu.be/FbjlpvjgMDA. Funded by the European Regional Development Fund (ERDF) through the Operational Programme for Competitiveness and Internationalisation (COMPETE2020) and by National Funds through the Portuguese funding agency, Fundação para a Ciência e a Tecnologia (FCT), within project POCI-01-0145-FEDER-016826, and by the French Research Agency project FORMEDICIS ANR-16-CE25-000

    Extracting non-linear integrate-and-fire models from experimental data using dynamic I–V curves

    The dynamic I–V curve method was recently introduced for the efficient experimental generation of reduced neuron models. The method extracts the response properties of a neuron while it is subject to a naturalistic stimulus that mimics in vivo-like fluctuating synaptic drive. The resulting history-dependent, transmembrane current is then projected onto a one-dimensional current–voltage relation that provides the basis for a tractable non-linear integrate-and-fire model. An attractive feature of the method is that it can be used in spike-triggered mode to quantify the distinct patterns of post-spike refractoriness seen in different classes of cortical neuron. The method is first illustrated using a conductance-based model and is then applied experimentally to generate reduced models of cortical layer-5 pyramidal cells and interneurons, in injected-current and injected-conductance protocols. The resulting low-dimensional neuron models—of the refractory exponential integrate-and-fire type—provide highly accurate predictions for spike times. The method therefore provides a useful tool for the construction of tractable models and rapid experimental classification of cortical neurons
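
    A minimal sketch of the core extraction step, assuming the membrane capacitance is known and that spike-adjacent samples have already been excluded; function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def dynamic_iv_curve(v, i_inj, dt, capacitance, v_bins):
    """Estimate a dynamic I-V curve from a voltage trace and the injected current.

    The instantaneous transmembrane (ionic) current is taken as
        I_ion(t) = I_inj(t) - C * dV/dt,
    and is then averaged within voltage bins to give a one-dimensional
    current-voltage relation, which can subsequently be fitted with an
    exponential integrate-and-fire form.
    """
    dvdt = np.gradient(v, dt)
    i_ion = i_inj - capacitance * dvdt
    bin_idx = np.digitize(v, v_bins)
    centers, means = [], []
    for k in range(1, len(v_bins)):
        mask = bin_idx == k                        # samples whose voltage falls in bin k
        if mask.any():
            centers.append(0.5 * (v_bins[k - 1] + v_bins[k]))
            means.append(i_ion[mask].mean())
    return np.array(centers), np.array(means)
```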