    Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks

    A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, in which learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e., up- or down-regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging but important problem of catastrophic forgetting.
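    The abstract does not give implementation details; the following is a minimal, hypothetical sketch of the general idea, assuming node coordinates, a Gaussian falloff of the diffusing modulator, and a per-connection learning-rate scaling. The names `modulation` and `modulated_update`, and all parameter values, are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative sketch of spatially diffusing neuromodulation (assumptions, not the
# paper's implementation): each hidden node has a 2-D position, and a task-specific
# "source" releases a modulatory signal that decays with distance, scaling the
# local learning rate so each task's learning is concentrated in one spatial cluster.

rng = np.random.default_rng(0)

n_hidden = 16
positions = rng.uniform(0.0, 1.0, size=(n_hidden, 2))   # node coordinates
weights = rng.normal(0.0, 0.1, size=(n_hidden,))        # one toy weight per hidden node

def modulation(source_xy, positions, sigma=0.2):
    """Gaussian falloff of the neuromodulator concentration around a source."""
    d2 = np.sum((positions - source_xy) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def modulated_update(grad, source_xy, base_lr=0.05):
    """Scale the per-connection learning rate by local modulator concentration."""
    m = modulation(source_xy, positions)
    return base_lr * m * grad

# Task A's source sits in one corner, task B's in the opposite corner, so learning
# for each task is localized to a different group of nodes and connections.
task_sources = {"A": np.array([0.1, 0.1]), "B": np.array([0.9, 0.9])}
fake_grad = rng.normal(size=n_hidden)
weights -= modulated_update(fake_grad, task_sources["A"])
```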

    Toward a dynamical systems analysis of neuromodulation

    This work presents some first steps toward a more thorough understanding of the control systems employed in evolutionary robotics. In order to choose an appropriate architecture or to construct an effective novel control system, we need insights into what makes control systems successful, robust, evolvable, etc. Here we present analysis intended to shed light on this type of question as it applies to a novel class of artificial neural networks that include a neuromodulatory mechanism: GasNets. We begin by instantiating a particular GasNet subcircuit responsible for tuneable pattern generation and thought to underpin the attractive property of “temporal adaptivity”. Rather than work within the GasNet formalism, we develop an extension of the well-known FitzHugh-Nagumo equations. The continuous nature of our model allows us to conduct a thorough dynamical systems analysis and to draw parallels between this subcircuit and beating/bursting phenomena reported in the neuroscience literature. We then proceed to explore the effects of different types of parameter modulation on the system dynamics. We conclude that while there are key differences between the gain modulation used in the GasNet and alternative schemes (including threshold modulation of more traditional synaptic input), both approaches are able to produce tuneable pattern generation. While it appears, at least in this study, that the GasNet’s gain modulation may not be crucial to pattern generation, we go on to suggest some possible advantages it could confer.
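    For reference, here is a minimal numerical sketch of the standard FitzHugh-Nagumo equations with a multiplicative gain on the input current standing in for gain modulation. The paper's actual extension and parameter choices are not reproduced; the parameter values and the `fhn_trajectory` helper below are illustrative assumptions.

```python
import numpy as np

# Standard FitzHugh-Nagumo model, Euler-integrated. A gain factor on the input
# current (gain * I) is a stand-in for gain modulation: sweeping the gain can move
# the system between quiescence and sustained oscillation (tuneable pattern
# generation). Parameter values are conventional textbook choices, not the paper's.

def fhn_trajectory(gain, I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for t in range(steps):
        dv = v - v ** 3 / 3.0 - w + gain * I   # fast (voltage-like) variable
        dw = eps * (v + a - b * w)             # slow recovery variable
        v += dt * dv
        w += dt * dw
        vs[t] = v
    return vs

quiet = fhn_trajectory(gain=0.2)   # low gain: settles to a stable fixed point
burst = fhn_trajectory(gain=1.0)   # higher gain: limit-cycle (oscillatory) behavior
print(quiet[-2000:].std(), burst[-2000:].std())
```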

    Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance

    Neuromodulatory receptors in presynaptic position have the ability to suppress synaptic transmission for seconds to minutes when fully engaged, which effectively alters the synaptic strength of a connection. Much work on neuromodulation has rested on the assumption that these effects are uniform at every neuron. However, there is considerable evidence to suggest that presynaptic regulation may in effect be synapse-specific. This would define a second "weight modulation" matrix, which reflects presynaptic receptor efficacy at a given site. Here we explore functional consequences of this hypothesis. By analyzing and comparing the weight matrices of networks trained on different aspects of a task, we identify the potential for a low-complexity "modulation matrix", which allows the network to switch between differently trained subtasks while retaining general performance characteristics for the task. This means that a given network can adapt itself to different task demands by regulating its release of neuromodulators. Specifically, we suggest that (a) a network can provide optimized responses for related classification tasks without the need to train entirely separate networks, and (b) a network can blend a "memory mode", which aims at reproducing memorized patterns, and a "novelty mode", which aims to facilitate classification of new patterns. We relate this work to the known effects of neuromodulators on brain-state-dependent processing.
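    A toy sketch of the hypothesized arrangement, assuming the effective weight of each connection is the base weight scaled element-wise by a separate modulation matrix; the layer size, matrices, and blending scheme below are illustrative assumptions, not the trained networks analyzed in the paper.

```python
import numpy as np

# Sketch: effective weights = base weights scaled element-wise by a "modulation
# matrix" M that stands in for presynaptic receptor efficacy. Swapping or blending
# M switches the same base network between sub-task configurations without
# retraining the base weights.

rng = np.random.default_rng(1)
W = rng.normal(0.0, 1.0, size=(4, 4))           # base synaptic weights

M_task_a = np.ones((4, 4))                      # full transmission everywhere
M_task_b = np.where(rng.random((4, 4)) < 0.3,   # some synapses presynaptically suppressed
                    0.2, 1.0)

def forward(x, W, M):
    """One layer with presynaptically modulated effective weights."""
    return np.tanh((W * M) @ x)

x = rng.normal(size=4)
print(forward(x, W, M_task_a))
print(forward(x, W, M_task_b))

# Blending "modes": interpolate the modulation matrices rather than the weights.
alpha = 0.5
M_blend = alpha * M_task_a + (1 - alpha) * M_task_b
print(forward(x, W, M_blend))
```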

    Ageing, plasticity, and cognitive reserve in connectionist networks

    Neurocomputational modeling has suggested that a range of mechanisms can lead to reductions in functional plasticity across development (Thomas & Johnson, 2006). In this paper, we consider whether ageing might also produce a reduction in plasticity. Marchman’s (1993) model of damage and recovery in past-tense formation was extended to incorporate the two main proposals for implementing effects of ageing: altered neuromodulation and connection loss. Simulations showed that ageing did reduce plasticity (as assessed by the system’s ability to recover from damage), but that effects were modulated by (a) the mechanism used to implement ageing, (b) problem type, and (c) pre-existing levels of cognitive reserve.
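    As a rough illustration of the two ageing manipulations named above (not Marchman's past-tense model itself), one could approximate altered neuromodulation by a reduced effective learning rate and connection loss by random pruning; everything below is an assumed toy setup.

```python
import numpy as np

# Toy approximation of the two ageing mechanisms from the abstract, applied to a
# generic weight matrix: (1) altered neuromodulation, modelled here as a smaller
# effective learning rate / gain, and (2) random, permanent connection loss.
# Plasticity could then be probed by how well the network relearns after damage.

rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.5, size=(20, 20))

def age_by_neuromodulation(lr, factor=0.5):
    """Reduced neuromodulation approximated as a shrunken learning rate."""
    return lr * factor

def age_by_connection_loss(W, p_loss=0.2):
    """Remove a random fraction of connections (set them to zero)."""
    mask = rng.random(W.shape) >= p_loss
    return W * mask

lr_young, lr_old = 0.1, age_by_neuromodulation(0.1)
W_old = age_by_connection_loss(W)
print(lr_old, np.mean(W_old == 0.0))   # reduced plasticity knob, fraction of lost connections
```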

    Dopaminergic Regulation of Neuronal Circuits in Prefrontal Cortex

    Neuromodulators, like dopamine, have considerable influence on the processing capabilities of neural networks. This has, for instance, been shown in the working memory functions of prefrontal cortex, which may be regulated by altering the dopamine level. Experimental work provides evidence on the biochemical and electrophysiological actions of dopamine receptors, but there are few theories concerning their significance for computational properties (ServanPrintzCohen90; Hasselmo94). We point to experimental data on neuromodulatory regulation of temporal properties of excitatory neurons and depolarization of inhibitory neurons, and suggest computational models employing these effects. Changes in membrane potential may be modelled by the firing threshold, and temporal properties by a parameterization of neuronal responsiveness according to the preceding spike interval. We apply these concepts to two examples using spiking neural networks. In the first case, there is a change in the input synchronization of neuronal groups, which leads to changes in the formation of synchronized neuronal ensembles. In the second case, the threshold of interneurons influences lateral inhibition, and the switch from a winner-take-all network to a parallel feedforward mode of processing. Both concepts are interesting for the modeling of cognitive functions and may have explanatory power for behavioral changes associated with dopamine regulation.
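    The second example can be caricatured with a toy rate-based layer rather than the spiking networks used in the paper: dopamine's depolarization of interneurons is abstracted as a change in an interneuron threshold that gates global lateral inhibition. All names and parameter values below are assumptions for illustration.

```python
import numpy as np

# Toy rate-based sketch (not the paper's spiking model): a single global interneuron
# fires once summed excitation exceeds its threshold theta_inh. A low threshold
# engages strong lateral inhibition (winner-take-all-like competition that suppresses
# weaker inputs); a high threshold leaves the units to relay their inputs in parallel.

def layer_response(x, theta_inh, inh_gain=2.0, steps=300, dt=0.1):
    x = np.asarray(x, dtype=float)
    r = np.zeros_like(x)
    for _ in range(steps):
        inh = inh_gain * max(r.sum() - theta_inh, 0.0)   # global interneuron drive
        r += dt * (-r + np.maximum(x - inh, 0.0))        # leaky rate dynamics
    return r

x = [1.0, 0.8, 0.3]
print(layer_response(x, theta_inh=0.0))    # low threshold: competitive, weak inputs suppressed
print(layer_response(x, theta_inh=10.0))   # high threshold: roughly parallel feedforward relay
```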

    State-Dependent and -Independent Effects of Dialyzing Excitatory Neuromodulator Receptor Antagonists into the Ventral Respiratory Column

    Unilateral dialysis of the broad-spectrum muscarinic receptor antagonist atropine (50 mM) into the ventral respiratory column [(VRC), including the pre-Bötzinger complex region] of awake goats increased pulmonary ventilation (V̇i) and breathing frequency (f), conceivably due to local compensatory increases in serotonin (5-HT) and substance P (SP) measured in effluent mock cerebrospinal fluid (mCSF). In contrast, unilateral dialysis of a triple cocktail of antagonists to muscarinic (atropine; 5 mM), neurokinin-1, and 5-HT receptors does not alter V̇i or f, but increases local SP. Herein, we tested the hypotheses that 1) local compensatory 5-HT and SP responses to 50 mM atropine dialyzed into the VRC of goats would not differ between anesthetized and awake states; and 2) bilateral dialysis of the triple cocktail of antagonists into the VRC of awake goats would not alter V̇i or f, but would increase local excitatory neuromodulators. Through microtubules implanted into the VRC of goats, probes were inserted to dialyze mCSF alone (time control), 50 mM atropine, or the triple cocktail of antagonists. We found 1) equivalent increases in local 5-HT and SP with 50 mM atropine dialysis during wakefulness compared with isoflurane anesthesia, but V̇i and f only increased while awake; and 2) dialysis of the triple cocktail of antagonists increased V̇i, f, 5-HT, and SP.

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
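    One recurring ingredient of EPANNs, sketched here in a generic, hedged form (not any specific system from the review), is a parameterized Hebbian plasticity rule whose coefficients are part of the genome, so that evolution shapes how the network learns during its lifetime; the rule form and the `plastic_update` helper below are illustrative assumptions.

```python
import numpy as np

# Generic evolvable plasticity rule often used in EPANN-style work (illustrative):
# delta_w = eta * (A*pre*post + B*pre + C*post + D), where eta, A, B, C, D are
# genome parameters tuned by evolution, and the weight change itself happens
# during the individual's lifetime in response to experience.

def plastic_update(w, pre, post, genome):
    """Apply one step of the parameterized Hebbian rule to a single connection."""
    eta, A, B, C, D = genome
    return w + eta * (A * pre * post + B * pre + C * post + D)

rng = np.random.default_rng(3)
genome = rng.normal(0.0, 0.5, size=5)   # one candidate individual's plasticity genes
w = 0.1
for _ in range(10):                     # lifetime experience reshapes the weight
    pre, post = rng.random(), rng.random()
    w = plastic_update(w, pre, post, genome)
print(w)
```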