
    Homeostatic plasticity improves signal propagation in continuous time recurrent neural networks

    Continuous-time recurrent neural networks (CTRNNs) are potentially an excellent substrate for the generation of adaptive behaviour in artificial autonomous agents. However, node saturation effects in these networks can leave them insensitive to input and stop signals from propagating. Node saturation is related to the problems of hyper-excitation and quiescence in biological nervous systems, which are thought to be avoided through the existence of homeostatic plasticity mechanisms. Analogous mechanisms are implemented here in a variety of CTRNN architectures and are shown to increase node sensitivity and improve signal propagation, with implications for robotics. These results lend support to the view that homeostatic plasticity may prevent quiescence and hyper-excitation in biological nervous systems.
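    A minimal sketch of the ingredients described above, assuming Euler integration of a standard CTRNN and a simple illustrative rule that nudges each node's bias whenever its firing rate leaves a homeostatic band; the paper's exact plasticity mechanism and parameters may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, w, theta, tau, ext_in, dt=0.01):
    """One Euler step of a continuous-time recurrent neural network."""
    firing = sigmoid(y + theta)
    dy = (-y + w @ firing + ext_in) / tau
    return y + dt * dy, firing

def homeostatic_bias_update(theta, firing, low=0.2, high=0.8, eta=0.001):
    """Nudge each bias so the node's firing rate stays inside [low, high]."""
    return theta + eta * ((firing < low).astype(float) - (firing > high).astype(float))

rng = np.random.default_rng(0)
n = 10
w = rng.normal(0, 2.0, (n, n))      # strong weights: saturation is likely without homeostasis
theta = rng.normal(0, 1.0, n)
y, tau = np.zeros(n), np.ones(n)

for t in range(5000):
    ext_in = np.zeros(n)
    ext_in[0] = np.sin(0.01 * t)    # drive one node and let the signal propagate
    y, firing = ctrnn_step(y, w, theta, tau, ext_in)
    theta = homeostatic_bias_update(theta, firing)

print("firing rates after adaptation:", np.round(firing, 2))
```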

    Sensitivity and stability: A signal propagation sweet spot in a sheet of recurrent centre crossing neurons

    In this paper, we demonstrate that signal propagation across a laminar sheet of recurrent neurons is maximised when two conditions are met. First, neurons must be in the so-called centre crossing configuration. Second, the network’s topology and weights must be such that the network comprises strongly coupled nodes, yet lies within the weakly coupled regime. We develop tools from linear stability analysis with which to describe this regime, and use them to examine the apparent tension between the sensitivity and instability of centre crossing networks.
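    The first condition has a compact expression: for a logistic-sigmoid CTRNN, a node is centre-crossing when its bias places the midpoint of its output range at zero net input, i.e. theta_i = -0.5 * sum_j w_ij. A minimal sketch, with the paper's sheet topology and coupling-strength analysis left aside:

```python
import numpy as np

def centre_crossing_biases(w):
    """Biases placing each node at the most sensitive point of its sigmoid:
    zero net input when every presynaptic node sits at the midpoint (0.5)
    of its output range, i.e. theta_i = -0.5 * sum_j w[i, j]."""
    return -0.5 * w.sum(axis=1)     # w[i, j]: weight from node j onto node i

rng = np.random.default_rng(1)
w = rng.normal(0, 1.5, (25, 25))    # a small fully connected sheet, for illustration
theta = centre_crossing_biases(w)
print(np.round(theta[:5], 2))
```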

    Homeostatic plasticity and external input shape neural network dynamics

    In vitro and in vivo spiking activity clearly differ. Whereas networks in vitro develop strong bursts separated by periods of very little spiking activity, in vivo cortical networks show continuous activity. This is puzzling considering that both networks presumably share similar single-neuron dynamics and plasticity rules. We propose that the defining difference between in vitro and in vivo dynamics is the strength of external input. In vitro, networks are virtually isolated, whereas in vivo every brain area receives continuous input. We analyze a model of spiking neurons in which the input strength, mediated by spike rate homeostasis, determines the characteristics of the dynamical state. In more detail, our analytical and numerical results on various network topologies show consistently that under increasing input, homeostatic plasticity generates distinct dynamic states, from bursting to close-to-critical, reverberating, and irregular states. This implies that the dynamic state of a neural network is not fixed but can readily adapt to the input strength. Indeed, our results match experimental spike recordings in vitro and in vivo: the in vitro bursting behavior is consistent with a state generated by very low network input (< 0.1%), whereas in vivo activity suggests that on the order of 1% of recorded spikes are input-driven, resulting in reverberating dynamics. Importantly, this predicts that one can abolish the ubiquitous bursts of in vitro preparations, and instead impose dynamics comparable to in vivo activity by exposing the system to weak long-term stimulation, thereby opening new paths to establish an in vivo-like assay in vitro for basic as well as neurological studies. Comment: 14 pages, 8 figures, accepted at Phys. Rev.
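    A toy sketch of the mechanism, assuming a single-population stochastic model with Poisson spiking and a homeostatic rule that scales the recurrent gain toward a target rate; the paper's spiking-neuron models and network topologies are more detailed. Weak external drive pushes the self-organised gain toward one (bursty, close-to-critical dynamics), while strong drive keeps it low (irregular dynamics).

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 1000, 1e-3                  # neurons, time step in seconds
target_rate = 1.0                   # Hz per neuron, homeostatic set point
eta = 0.02                          # homeostatic adaptation rate

def self_organised_gain(ext_rate, steps=200_000):
    """Let rate homeostasis tune the recurrent gain g under a given external
    drive (Hz per neuron) and return the gain it settles at."""
    g, rate_est, prev_spikes = 0.5, target_rate, 0
    for _ in range(steps):
        lam = ext_rate * n * dt + g * prev_spikes         # expected spikes this step
        spikes = rng.poisson(lam)
        prev_spikes = spikes
        rate_est += dt * (spikes / (n * dt) - rate_est)   # ~1 s running rate estimate
        g = min(max(g + eta * dt * (target_rate - rate_est), 0.0), 0.9999)
    return g

for ext in (0.01, 0.1, 1.0):
    print(f"external input {ext} Hz -> recurrent gain {self_organised_gain(ext):.2f}")
```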

    Improving equilibrium propagation without weight symmetry through Jacobian homeostasis

    Equilibrium propagation (EP) is a compelling alternative to the backpropagation of error algorithm (BP) for computing gradients of neural networks on biological or analog neuromorphic substrates. Still, the algorithm requires weight symmetry and infinitesimal equilibrium perturbations, i.e., nudges, to estimate unbiased gradients efficiently. Both requirements are challenging to implement in physical systems. Yet, whether and how weight asymmetry affects its applicability is unknown because, in practice, it may be masked by biases introduced through the finite nudge. To address this question, we study generalized EP, which can be formulated without weight symmetry, and analytically isolate the two sources of bias. For complex-differentiable non-symmetric networks, we show that the finite nudge does not pose a problem, as exact derivatives can still be estimated via a Cauchy integral. In contrast, weight asymmetry introduces bias, resulting in low task performance due to poor alignment of EP's neuronal error vectors compared to BP. To mitigate this issue, we present a new homeostatic objective that directly penalizes functional asymmetries of the Jacobian at the network's fixed point. This homeostatic objective dramatically improves the network's ability to solve complex tasks such as ImageNet 32x32. Our results lay the theoretical groundwork for studying and mitigating the adverse effects of imperfections of physical networks on learning algorithms that rely on the substrate's relaxation dynamics.
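    A minimal sketch of the kind of penalty described, assuming a small recurrent network h = tanh(W h + b) and taking the homeostatic objective to be the squared Frobenius norm of the antisymmetric part of the state-to-state Jacobian at the fixed point; the generalized-EP setting and exact objective in the paper are richer than this.

```python
import numpy as np

def fixed_point(W, b, iters=200):
    h = np.zeros(W.shape[0])
    for _ in range(iters):                  # relax the dynamics h <- tanh(W h + b)
        h = np.tanh(W @ h + b)
    return h

def jacobian_asymmetry(W, b):
    """Homeostatic penalty: squared norm of the antisymmetric part of the
    state-to-state Jacobian diag(1 - h^2) W at the network's fixed point."""
    h = fixed_point(W, b)
    J = (1.0 - h**2)[:, None] * W
    return 0.5 * np.sum((J - J.T) ** 2)

rng = np.random.default_rng(3)
W = rng.normal(0, 0.1, (50, 50))            # deliberately non-symmetric weights
b = rng.normal(0, 0.1, 50)
print("asymmetry penalty:", jacobian_asymmetry(W, b))
```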

    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning, a flip side becomes apparent. Learning slows down with increasing population size since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule where the learning of a stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
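    A toy sketch of the setting, assuming stochastic binary neurons, a majority-vote population decision, and a REINFORCE-style per-neuron update gated by the global reward and additionally scaled by a population-response signal; the paper's derived rule and its neurotransmitter-based encoding are only indicated schematically here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_inputs, eta = 100, 20, 0.01
W = rng.normal(0, 0.1, (n_neurons, n_inputs))
x = rng.normal(0, 1, n_inputs)                  # a single stimulus; desired decision: +1

for trial in range(500):
    p = 1.0 / (1.0 + np.exp(-W @ x))            # per-neuron spike probabilities
    s = (rng.random(n_neurons) < p).astype(float)
    decision = 1.0 if s.mean() > 0.5 else -1.0  # population (majority) response
    R = 1.0 if decision == 1.0 else -1.0        # global reinforcement signal
    pop = abs(s.mean() - 0.5) * 2.0             # population-response signal: commitment strength
    # reward-gated, population-modulated, otherwise Hebbian-like weight update
    W += eta * R * (1.0 + pop) * np.outer(s - p, x)

final_p = 1.0 / (1.0 + np.exp(-W @ x))
print(f"mean firing probability after training (should approach 1): {final_p.mean():.3f}")
```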

    Homeostatic Activity-Dependent Tuning of Recurrent Networks for Robust Propagation of Activity.

    Developing neuronal networks display spontaneous bursts of action potentials that are necessary for circuit organization and tuning. While spontaneous activity has been shown to instruct map formation in sensory circuits, it is unknown whether it plays a role in the organization of motor networks that produce rhythmic output. Using computational modeling, we investigate how recurrent networks of excitatory and inhibitory neuronal populations assemble to produce robust patterns of unidirectional and precisely timed propagating activity during organism locomotion. One example is provided by the motor network in Drosophila larvae, which generates propagating peristaltic waves of muscle contractions during crawling. We examine two activity-dependent models, which tune weak network connectivity based on spontaneous activity patterns: a Hebbian model, where coincident activity in neighboring populations strengthens connections between them; and a homeostatic model, where connections are homeostatically regulated to maintain a constant level of excitatory activity based on spontaneous input. The homeostatic model successfully tunes network connectivity to generate robust activity patterns with appropriate timing relationships between neighboring populations. These timing relationships can be modulated by the properties of spontaneous activity, suggesting its instructive role for generating functional variability in network output. In contrast, the Hebbian model fails to produce the tight timing relationships between neighboring populations required for unidirectional activity propagation, even when additional assumptions are imposed to constrain synaptic growth. These results argue that homeostatic mechanisms are more likely than Hebbian mechanisms to tune weak connectivity based on spontaneous input in a recurrent network for rhythm generation and robust activity propagation. SIGNIFICANCE STATEMENT: How are neural circuits organized and tuned to maintain stable function and produce robust output? This task is especially difficult during development, when circuit properties change in response to variable environments and internal states. Many developing circuits exhibit spontaneous activity, but its role in the synaptic organization of motor networks that produce rhythmic output is unknown. We studied a model motor network that, when appropriately tuned, generates propagating activity as during crawling in Drosophila larvae. Based on experimental evidence of activity-dependent tuning of connectivity, we examined plausible mechanisms by which appropriate connectivity emerges. Our results suggest that activity-dependent homeostatic mechanisms are better suited than Hebbian mechanisms for organizing motor network connectivity, and highlight an important difference from sensory areas. This work was supported by Cambridge Overseas Research Fund, Trinity College, and Swartz Foundation to J.G. and Wellcome Trust VIP funding to J.F.E. through Program Grant WT075934 to Michael Bate and Matthias Landgraf. J.G. is also supported by Burroughs-Wellcome Fund Career Award at the Scientific Interface. This is the final version of the article. It first appeared from the Society for Neuroscience via https://doi.org/10.1523/JNEUROSCI.2511-15.201
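    A toy sketch of the homeostatic idea, assuming a one-dimensional chain of rate-based excitatory populations whose feedforward couplings grow until spontaneous events drive each downstream population to a target activity level; the published model additionally includes inhibitory populations, spiking detail, and the timing relationships discussed above.

```python
import numpy as np

rng = np.random.default_rng(5)
n_seg = 8                          # populations along the chain (body segments)
w = np.full(n_seg - 1, 0.1)        # weak initial coupling from segment i to i+1
target, eta = 0.8, 0.05            # desired downstream activity and learning rate

for episode in range(2000):
    act = np.zeros(n_seg)
    act[0] = rng.uniform(0.5, 1.5)              # spontaneous event entering the chain
    for i in range(n_seg - 1):
        act[i + 1] = np.tanh(w[i] * act[i])     # activity propagating segment to segment
    # homeostatic rule: each connection grows (or shrinks) until its target
    # population reaches the desired activity level on average
    w += eta * (target - act[1:]) * act[:-1]
    w = np.clip(w, 0.0, 5.0)

print("tuned couplings:", np.round(w, 2))
print("final activity profile:", np.round(act, 2))
```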

    Homeostatic plasticity for single node delay-coupled reservoir computing

    © 2015 Massachusetts Institute of Technology. Supplementing a differential equation with delays results in an infinite-dimensional dynamical system. This property provides the basis for a reservoir computing architecture, where the recurrent neural network is replaced by a single nonlinear node, delay-coupled to itself. Instead of the spatial topology of a network, subunits in the delay-coupled reservoir are multiplexed in time along one delay span of the system. The computational power of the reservoir is contingent on this temporal multiplexing. Here, we learn optimal temporal multiplexing by means of a biologically inspired homeostatic plasticity mechanism. Plasticity acts locally and changes the distances between the subunits along the delay, depending on how responsive these subunits are to the input. After analytically deriving the learning mechanism, we illustrate its role in improving the reservoir's computational power. To this end, we investigate, first, the increase of the reservoir's memory capacity. Second, we predict a NARMA-10 time series, showing that plasticity reduces the normalized root-mean-square error by more than 20%. Third, we discuss plasticity's influence on the reservoir's input-information capacity, the coupling strength between subunits, and the distribution of the readout coefficients.
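    A minimal sketch of the underlying architecture, assuming equidistant virtual nodes, a leaky-tanh node with neighbour coupling along the delay line, and a ridge-regression readout on a short memory task; the homeostatic rule that adapts the spacing of the virtual nodes is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_virtual = 50                                   # virtual nodes along one delay span
mask = rng.choice([-1.0, 1.0], n_virtual)        # input mask multiplexing u(t) in time

def reservoir_states(u, fb=0.7, couple=0.3, in_scale=0.5):
    """Drive the delay loop with input sequence u; one row of states per input."""
    x = np.zeros(n_virtual)
    states = np.zeros((len(u), n_virtual))
    for t, ut in enumerate(u):
        prev = x.copy()                          # values one full delay span ago
        for k in range(n_virtual):
            neighbour = x[k - 1] if k > 0 else prev[-1]   # inertia couples adjacent virtual nodes
            x[k] = np.tanh(fb * prev[k] + couple * neighbour + in_scale * mask[k] * ut)
        states[t] = x
    return states

# Toy memory task: linearly read out the input from three steps in the past.
u = rng.uniform(-1, 1, 2000)
X = reservoir_states(u)
y = np.roll(u, 3)
W_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(n_virtual), X.T @ y)
print("training NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y))
```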

    Bidirectional Learning in Recurrent Neural Networks Using Equilibrium Propagation

    Neurobiologically plausible learning algorithms for recurrent neural networks that can perform supervised learning are a neglected area of study. Equilibrium propagation is a recent synthesis of several ideas in biological and artificial neural network research that uses a continuous-time, energy-based neural model with a local learning rule. However, despite dealing with recurrent networks, equilibrium propagation has only been applied to discriminative categorization tasks. This thesis generalizes equilibrium propagation to bidirectional learning with asymmetric weights. Simultaneously learning the discriminative as well as generative transformations for a set of data points and their corresponding category labels, bidirectional equilibrium propagation utilizes recurrence and weight asymmetry to share related but non-identical representations within the network. Experiments on an artificial dataset demonstrate the ability to learn both transformations, as well as the ability for asymmetric-weight networks to generalize their discriminative training to the untrained generative task.

    Sparse Coding with a Somato-Dendritic Rule

    © 2020 Elsevier Ltd. All rights reserved. This manuscript is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Licence http://creativecommons.org/licenses/by-nc-nd/4.0/. Cortical neurons are silent most of the time. This sparse activity is energy efficient, and the resulting neural code has favourable properties for associative learning. Most neural models of sparse coding use some form of homeostasis to ensure that each neuron fires infrequently. But homeostatic plasticity acting on a fast timescale may not be biologically plausible, and could lead to catastrophic forgetting in embodied agents that learn continuously. We set out to explore whether inhibitory plasticity could play that role instead, regulating both the population sparseness and the average firing rates. We put the idea to the test in a hybrid network where rate-based dendritic compartments integrate the feedforward input, while spiking somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic plasticity is not strictly required for regulating sparseness: inhibitory plasticity can have the same effect, although that mechanism comes with its own stability-plasticity dilemma. Going beyond point neuron models, the network illustrates how a learning rule can make use of dendrites and compartmentalised inputs; it also suggests a functional interpretation for clustered somatic inhibition in cortical neurons. Peer reviewed
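    A simplified rate-based analogue of the competition described above, assuming a Foldiak-style network in which feedforward "dendritic" weights learn by a normalising Hebbian rule while plastic lateral inhibition and adaptive thresholds enforce sparse, decorrelated activity; the paper's two-compartment units with spiking somas are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_in, n_units, p_target = 64, 32, 0.1            # p_target: desired fraction of active units
W = rng.normal(0, 0.1, (n_units, n_in))          # feedforward ("dendritic") weights
V = np.zeros((n_units, n_units))                 # lateral inhibitory weights
thresh = np.zeros(n_units)
eta_w, eta_v, eta_t = 0.02, 0.02, 0.02

def respond(x, iters=20):
    """Settle the network: feedforward drive minus recurrent inhibition."""
    y = np.zeros(n_units)
    for _ in range(iters):
        y = (W @ x - V @ y - thresh > 0).astype(float)
    return y

for step in range(5000):
    x = (rng.random(n_in) < 0.2).astype(float)   # toy binary input pattern
    y = respond(x)
    W += eta_w * y[:, None] * (x[None, :] - W)   # Hebbian learning on the active units
    V += eta_v * (np.outer(y, y) - p_target**2)  # inhibitory plasticity: decorrelate units
    np.fill_diagonal(V, 0.0)
    V = np.clip(V, 0.0, None)                    # inhibition stays non-negative
    thresh += eta_t * (y - p_target)             # keeps each unit near the target rate

test = (rng.random((100, n_in)) < 0.2).astype(float)
print("mean active fraction:", np.mean([respond(x).mean() for x in test]))
```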

    Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. Comment: (Under review)
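    A compact sketch of the contrastive-divergence structure the method builds on, assuming standard CD-1 with Gibbs-sampled binary units and no bias terms; in the event-driven variant these sampling steps are replaced by the stochastic dynamics of I&F neurons and the weight update is carried out online by modulated STDP.

```python
import numpy as np

rng = np.random.default_rng(8)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

n_vis, n_hid, eta = 784, 128, 0.01
W = rng.normal(0, 0.01, (n_vis, n_hid))

def cd1_update(v_data):
    """One data-driven phase and one free-running (reconstruction) phase."""
    h_data = sample(sigmoid(v_data @ W))          # hidden sample given the data
    v_model = sample(sigmoid(h_data @ W.T))       # reconstructed visibles
    h_model = sigmoid(v_model @ W)
    # Hebbian in the data phase, anti-Hebbian in the model phase: the sign flip
    # that the global modulation of STDP realises in the spiking network
    return eta * (np.outer(v_data, h_data) - np.outer(v_model, h_model))

v = (rng.random(n_vis) < 0.3).astype(float)       # stand-in for one binarised MNIST digit
W += cd1_update(v)
```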