
    ψ-type stability of reaction–diffusion neural networks with time-varying discrete delays and bounded distributed delays

    In this paper, the ψ-type stability and robust ψ-type stability of reaction–diffusion neural networks (RDNNs) with Dirichlet boundary conditions, time-varying discrete delays, and bounded distributed delays are investigated. First, we analyze the ψ-type stability and robust ψ-type stability of RDNNs with time-varying discrete delays by means of ψ-type functions combined with inequality techniques, and put forward several ψ-type stability criteria for the considered networks. Additionally, models of RDNNs with bounded distributed delays are established, and sufficient conditions guaranteeing their ψ-type stability and robust ψ-type stability are given. Lastly, two examples are provided to confirm the effectiveness of the derived results.
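    To fix ideas, a common formulation of ψ-type stability from the literature is sketched below; the paper's precise assumptions on the ψ-type function and on the norm may differ. Here ψ : [0, ∞) → [1, ∞) is nondecreasing with ψ(0) = 1 and ψ(t) → ∞ as t → ∞, φ is the initial state on the delay interval, and u* is the equilibrium. The network is ψ-type stable if there exist constants M ≥ 1 and ε > 0 such that

        \| u(t,\cdot) - u^{*} \| \;\le\; M \,\| \phi - u^{*} \|\, \psi(t)^{-\varepsilon}, \qquad t \ge 0.

    Choosing ψ(t) = e^t recovers exponential stability, while ψ(t) = 1 + t gives polynomial (power-rate) stability, which is why a single ψ-type criterion covers several classical notions at once.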

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to leverage a wide variety of dynamics in digital hardware far more effectively, and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
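    The NEF's dynamics principle has a concrete recipe: to realize a linear system dx/dt = Ax + Bu through a first-order lowpass synapse with time constant τ, the recurrent connection implements τA + I and the input connection implements τB; the Delay Network then chooses A and B from a low-order approximation of a pure continuous-time delay. The following is a minimal illustrative sketch of the recipe using the open-source Nengo package (an integrator, not the thesis code; neuron counts and time constants are arbitrary choices):

        # Minimal NEF-style integrator in Nengo (illustrative sketch only).
        # dx/dt = A x + B u with A = 0, B = 1; through a lowpass synapse with
        # time constant tau, the recurrent transform is tau*A + 1 and the
        # input transform is tau*B.
        import nengo

        tau = 0.1          # synaptic time constant (s)
        A, B = 0.0, 1.0    # a pure integrator

        with nengo.Network() as model:
            stim = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)   # step input
            x = nengo.Ensemble(n_neurons=200, dimensions=1)        # spiking state
            nengo.Connection(stim, x, transform=tau * B, synapse=tau)
            nengo.Connection(x, x, transform=tau * A + 1.0, synapse=tau)
            probe = nengo.Probe(x, synapse=0.01)

        with nengo.Simulator(model) as sim:
            sim.run(1.0)   # the decoded value ramps during the step, then holds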

    A neural network model of adaptively timed reinforcement learning and hippocampal dynamics

    A neural model is described of how adaptively timed reinforcement learning occurs. The adaptive timing circuit is suggested to exist in the hippocampus, and to involve convergence of dentate granule cells on CA3 pyramidal cells, and NMDA receptors. This circuit forms part of a model neural system for the coordinated control of recognition learning, reinforcement learning, and motor learning, whose properties clarify how an animal can learn to acquire a delayed reward. Behavioral and neural data are summarized in support of each processing stage of the system. The relevant anatomical sites are in thalamus, neocortex, hippocampus, hypothalamus, amygdala, and cerebellum. Cerebellar influences on motor learning are distinguished from hippocampal influences on adaptive timing of reinforcement learning. The model simulates how damage to the hippocampal formation disrupts adaptive timing, eliminates attentional blocking, and causes symptoms of medial temporal amnesia. It suggests how normal acquisition of subcortical emotional conditioning can occur after cortical ablation, even though extinction of emotional conditioning is retarded by cortical ablation. The model simulates how increasing the duration of an unconditioned stimulus increases the amplitude of emotional conditioning, but does not change adaptive timing; and how an increase in the intensity of a conditioned stimulus "speeds up the clock", but an increase in the intensity of an unconditioned stimulus does not. Computer simulations of the model fit parametric conditioning data, including a Weber law property and an inverted U property. Both primary and secondary adaptively timed conditioning are simulated, as are data concerning conditioning using multiple interstimulus intervals (ISIs), gradually or abruptly changing ISIs, partial reinforcement, and multiple stimuli that lead to time-averaging of responses. Neurobiologically testable predictions are made to facilitate further tests of the model. Air Force Office of Scientific Research (90-0175, 90-0128); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-87-16960); Office of Naval Research (N00014-91-J-4100).

    Biologically inspired evolutionary temporal neural circuits

    Biological neural networks have always motivated the creation of new artificial neural networks, and in this case a new autonomous temporal neural network system. Among the more challenging problems of temporal neural networks are the design and incorporation of short- and long-term memories as well as the choice of network topology and training mechanism. In general, delayed copies of network signals can form short-term memory (STM), providing a limited temporal history of events similar to FIR filters, whereas the synaptic connection strengths as well as delayed feedback loops (IIR circuits) can constitute longer-term memories (LTM). This dissertation introduces a new general evolutionary temporal neural network framework (GETnet) through automatic design of arbitrary neural networks with STM and LTM. GETnet is a step towards the realization of general intelligent systems that need minimal or no human intervention and can be applied to a broad range of problems. GETnet utilizes nonlinear moving-average/autoregressive nodes and sub-circuits that are trained by enhanced gradient descent and evolutionary search over architecture, synaptic delay, and synaptic weight spaces. The mixture of Lamarckian and Darwinian evolutionary mechanisms facilitates the Baldwin effect and speeds up the hybrid training. The ability to evolve arbitrary adaptive time-delay connections enables GETnet to find novel answers to many classification and system identification tasks expressed in the general form of desired multidimensional input and output signals. Simulations using the Mackey-Glass chaotic time series and fingerprint perspiration-induced temporal variations are given to demonstrate the above capabilities of GETnet.
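    The node type described above can be pictured as a tapped-delay-line unit: delayed input taps form the moving-average (FIR/STM) part and delayed feedback taps form the autoregressive (recurrent/LTM) part. Below is a small illustrative Python sketch of such a node; the tanh nonlinearity, the specific delays, and the weights are assumptions for the example, not GETnet's actual implementation:

        # Illustrative nonlinear moving-average/autoregressive (NARMA-style) node
        # with delay taps on its input and on its own past output.
        import numpy as np

        def narma_node(x, in_delays, in_weights, fb_delays, fb_weights):
            """y[t] = tanh( sum_k a_k * x[t - d_k] + sum_j b_j * y[t - e_j] ).

            Delayed input taps act as short-term memory (FIR part); delayed
            feedback taps act as longer-term memory (recurrent part)."""
            y = np.zeros_like(x, dtype=float)
            for t in range(len(x)):
                s = sum(a * x[t - d] for a, d in zip(in_weights, in_delays) if t - d >= 0)
                s += sum(b * y[t - e] for b, e in zip(fb_weights, fb_delays) if t - e >= 0)
                y[t] = np.tanh(s)
            return y

        # Example: a single pulse produces weighted echoes of its recent history.
        x = np.zeros(50)
        x[5] = 1.0
        y = narma_node(x, in_delays=[0, 2, 5], in_weights=[0.8, 0.5, 0.3],
                       fb_delays=[1], fb_weights=[0.6])

    In GETnet's setting, both the delays and the weights of such a node would be subject to the hybrid evolutionary/gradient search described in the abstract.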

    Passivity Analysis of Markovian Jumping Neural Networks with Leakage Time-Varying Delays


    Hardware Learning in Analogue VLSI Neural Networks


    A neural network model of normal and abnormal learning and memory consolidation

    The amygdala and hippocampus interact with thalamocortical systems to regulate cognitive-emotional learning, and lesions of amygdala, hippocampus, thalamus, and cortex have different effects depending on the phase of learning when they occur. In examining eyeblink conditioning data, several questions arise: Why is the hippocampus needed for trace conditioning, where there is a temporal gap between the conditioned stimulus offset and the onset of the unconditioned stimulus, but not needed for delay conditioning, where stimuli temporally overlap and co-terminate? Why do amygdala lesions made before or immediately after training decelerate conditioning, while those made later have no impact on conditioned behavior? Why do thalamic lesions degrade trace conditioning more than delay conditioning? Why do hippocampal lesions degrade recent learning but not temporally remote learning? Why do cortical lesions degrade temporally remote learning and cause amnesia, but not recent or post-lesion learning? How is temporally graded amnesia caused by ablation of medial prefrontal cortex? How are mechanisms of motivated attention and the emergent state of consciousness linked during conditioning? How do neurotrophins, notably Brain-Derived Neurotrophic Factor (BDNF), influence memory formation and consolidation? A neural model, called neurotrophic START, or nSTART, proposes answers to these questions. The nSTART model synthesizes and extends key principles, mechanisms, and properties of three previously published brain models of normal behavior. These three models describe aspects of how the brain can learn to categorize objects and events in the world; how the brain can learn the emotional meanings of such events, notably rewarding and punishing events, through cognitive-emotional interactions; and how the brain can learn to adaptively time attention paid to motivationally important events, and when to respond to these events, in a context-appropriate manner. The model clarifies how hippocampal adaptive timing mechanisms and BDNF may bridge the gap between stimuli during trace conditioning and thereby allow thalamocortical and corticocortical learning to take place and be consolidated. The simulated data arise as emergent properties of several brain regions interacting together. The model overcomes problems of alternative memory models, notably models wherein memories that are initially stored in hippocampus move to the neocortex during consolidation.

    Contributions of synaptic filters to models of synaptically stored memory

    The question of how neural systems encode memories in one shot, without immediately disrupting previously stored information, has puzzled theoretical neuroscientists for years, and it is the central topic of this thesis. Previous attempts at this topic have proposed that synapses probabilistically update in response to plasticity-inducing stimuli to effectively delay the degradation of old memories in the face of ongoing memory storage. Indeed, experiments have shown that synapses do not immediately respond to plasticity-inducing stimuli, since these must be presented many times before synaptic plasticity is expressed. Such a delay could be due to the stochastic nature of synaptic plasticity, or perhaps because induction signals are integrated before overt strength changes occur. The latter approach has previously been applied to control fluctuations in neural development by low-pass filtering induction signals before plasticity is expressed. In this thesis we consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals to a threshold before expressing plasticity. We report novel recall dynamics and considerable improvements in memory lifetimes over a prominent model of synaptically stored memory. With integrating synapses the memory trace initially rises before reaching a maximum and then falls. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. Furthermore, we find that integrating synapses possess natural timescales that can be used to consider the transition to late-phase plasticity under spaced repetition patterns known to lead to optimal storage conditions. We find that threshold-crossing statistics differentiate between massed and spaced memory repetition patterns. However, isolated integrative synapses obtain an insufficient statistical sample to detect the stimulation pattern within a few memory repetitions. We extend the model to consider the cooperation of well-known intracellular signalling pathways in detecting storage conditions by utilizing the profile of postsynaptic depolarization. We find that neuron-wide signalling and local synaptic signals can be combined to detect optimal storage conditions that lead to stable forms of plasticity in a synapse-specific manner. These models can be further extended to consider heterosynaptic and neuromodulatory interactions for late-phase plasticity.
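    The core mechanism, synapses that integrate induction signals to a threshold before expressing a strength change, can be illustrated with a toy simulation. The sketch below uses binary strengths, a symmetric integer counter per synapse, and a simple overlap-based recall signal; these simplifications and parameter values are assumptions for illustration rather than the thesis model, but they reproduce the qualitative rise-then-fall of the memory trace described above:

        # Toy "integrate-to-threshold" synapse model (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        N, theta, steps = 10_000, 4, 200   # synapses, expression threshold, storage events

        strength = rng.choice([-1, +1], size=N)   # expressed binary strengths
        counter = np.zeros(N, dtype=int)          # hidden integrators of induction signals
        target = rng.choice([-1, +1], size=N)     # pattern of the tracked memory

        signal = []
        for step in range(steps):
            # Store the tracked memory once, then keep storing unrelated memories.
            induction = target if step == 0 else rng.choice([-1, +1], size=N)
            counter += induction
            flip = np.abs(counter) >= theta       # threshold reached: express plasticity
            strength[flip] = np.sign(counter[flip])
            counter[flip] = 0
            signal.append(np.mean(strength * target))   # recall signal for the tracked memory

        # Because expression is delayed by the threshold, the recall signal grows over
        # the first storage events after step 0 and only then decays, rather than
        # peaking immediately at the moment of storage.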