
    Timing and Causality in the Generation of Learned Eyelid Responses

    The cerebellum-red nucleus-facial motoneuron (Mn) pathway has been reported as being involved in the proper timing of classically conditioned eyelid responses. This special type of associative learning serves as a model of event timing for studying the role of the cerebellum in dynamic motor control. Here, we have re-analyzed the firing activities of cerebellar posterior interpositus (IP) neurons and orbicularis oculi (OO) Mns in alert behaving cats during classical eyeblink conditioning, using a delay paradigm. The aim was to revisit the hypothesis that the IP neurons (IPns) can be considered a neuronal phase-modulating device supporting OO Mn firing with an emergent timing mechanism and an explicit correlation code during learned eyelid movements. Optimized experimental and computational tools allowed us to determine the different causal relationships (temporal order and correlation code) during and between trials. These intra- and inter-trial timing strategies, spanning from the sub-second range (millisecond timing) to longer-lasting ranges (interval timing), expand the functional domain of cerebellar timing beyond motor control. Interestingly, the results supported the above-mentioned hypothesis. The causal inferences were influenced by the precise motor and pre-motor spike timing in the cause-effect interval, and, in addition, the timing of the learned responses depended on cerebellar–Mn network causality. Furthermore, the timing of conditioned responses (CRs) depended upon the probability of simulated causal conditions in the cause-effect interval and not on the mere duration of the inter-stimulus interval. In this work, the close relation between timing and causality was verified. It could thus be concluded that the firing activities of IPns may be related more to the proper performance of ongoing CRs (i.e., the proper timing as a consequence of the pertinent causality) than to their generation and/or initiation.
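    As a rough illustration of the temporal-order analysis this abstract refers to, the sketch below computes a simple cross-correlogram between two spike trains (here, an IP-like and an OO-like train). The spike times, lag window, and bin width are hypothetical and are not taken from the study; this is only a sketch of the general technique, not the authors' computational tools.

```python
# Minimal sketch (not the authors' code): estimating temporal order between two
# spike trains with a cross-correlogram. Spike times are hypothetical, in seconds.
import numpy as np

def cross_correlogram(ip_spikes, oo_spikes, max_lag=0.05, bin_width=0.001):
    """Histogram of (OO spike time - IP spike time) lags within +/- max_lag."""
    lags = []
    for t in ip_spikes:
        nearby = oo_spikes[(oo_spikes >= t - max_lag) & (oo_spikes <= t + max_lag)]
        lags.extend(nearby - t)
    bins = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, edges = np.histogram(lags, bins=bins)
    return counts, edges

# Hypothetical data: the "IP" train leads the "OO" train by ~10 ms on average.
rng = np.random.default_rng(0)
ip = np.sort(rng.uniform(0, 10, 500))
oo = np.sort(ip + 0.010 + rng.normal(0, 0.002, ip.size))

counts, edges = cross_correlogram(ip, oo)
peak_lag = edges[np.argmax(counts)]
print(f"peak lag ~ {peak_lag * 1000:.1f} ms (positive lag: IP activity leads OO activity)")
```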

    Dopamine and the Temporal Dependence of Learning and Memory

    Animal behavior is largely influenced by the seeking out of rewards and the avoidance of punishments. Positive or negative reinforcements, like a food reward or a painful shock, impart meaningful valence onto sensory cues in the animal's environment. The ability of animals to form associations between a sensory cue and a rewarding or punishing reinforcement permits them to adapt their future behavior to maximize reward and minimize punishment. Animals rely on the timing of events to infer the causal relationships between cues and outcomes: sensory cues that precede a painful shock in time become associated with its onset and are imparted with negative valence, whereas cues that follow the shock in time are instead associated with its cessation and imparted with positive valence. While the temporal requirements for associative learning have been well characterized at the behavioral level, the molecular and circuit mechanisms for this temporal sensitivity remain incompletely understood. Using the simple architecture of the mushroom body, an olfactory associative learning center in Drosophila, I examined how the relative timing of olfactory inputs and dopaminergic reinforcement signals is encoded at the molecular, synaptic, and circuit level to give rise to learned odor associations. I show that in Drosophila, opposing olfactory associations can be formed and updated on a trial-by-trial basis depending on the temporal relationship between an odor cue and dopaminergic reinforcement during conditioning. Additionally, both negative and positive reinforcements can each instruct appetitive and aversive olfactory associations: odors preceding a negative reinforcement or following a rewarding reinforcement acquire an aversive valence, while odors instead following a negative reinforcement or preceding a rewarding reinforcement become attractive. Furthermore, functional imaging revealed that synapses within the mushroom body are bidirectionally modulated depending on the temporal ordering of odor and dopaminergic reinforcement, leading to synaptic depression when an odor precedes dopaminergic activity and synaptic facilitation when dopaminergic activity instead precedes an odor. Through the synchronous recording of neural activity and behavior, I found that the bidirectional regulation of synaptic transmission within the mushroom body directly correlates with the emergence of learned olfactory behaviors. This temporal sensitivity arises from two dopamine receptors, DopR1 and DopR2, that couple to distinct second messengers and direct either synaptic depression or potentiation. Loss of either receptor renders the synapses of the mushroom body capable of only unidirectional plasticity and prevents the behavioral flexibility of writing opposing associations depending on the temporal structure of conditioning. Together, these results reveal how the distinct intracellular signaling pathways of two dopamine receptors can detect the order of events within an associative learning circuit to instruct opposing forms of synaptic and behavioral plasticity, providing a mechanism for animals to use both the onset and offset of a reinforcement signal to instruct distinct associations. Additionally, this bidirectional modulation allows animals to flexibly update olfactory associations on a trial-by-trial basis when temporal relationships are altered, permitting them to contend with a complex and changing sensory world.
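    The timing-dependent sign flip described above can be illustrated with a toy update rule for a single mushroom-body synapse. The sketch below is a deliberately simplified assumption: the weight variable, learning rate, and exponential timing kernel are invented for illustration and do not come from the dissertation's model; only the sign convention (depression when the odor precedes dopamine, facilitation when dopamine precedes the odor) follows the text.

```python
# Illustrative toy rule (not the dissertation's model): the sign of plasticity at a
# Kenyon-cell-to-output synapse depends on whether odor activity precedes or follows
# the dopamine reinforcement. Parameters (lr, tau) and the kernel are assumptions.
import math

def update_weight(w, t_odor, t_dopamine, lr=0.2, tau=15.0):
    """Depress if the odor precedes dopamine (forward pairing), facilitate otherwise."""
    dt = t_dopamine - t_odor            # arbitrary time units; positive = odor first
    kernel = math.exp(-abs(dt) / tau)   # weaker effect for larger temporal separations
    if dt > 0:
        return w - lr * kernel          # DopR1-like pathway: synaptic depression
    elif dt < 0:
        return w + lr * kernel          # DopR2-like pathway: synaptic facilitation
    return w

w = 1.0
w = update_weight(w, t_odor=0.0, t_dopamine=5.0)    # odor first -> depression (aversive)
print(w)
w = update_weight(w, t_odor=10.0, t_dopamine=5.0)   # dopamine first -> facilitation (appetitive)
print(w)
```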

    Recurrent Neural Network with Human Simulator Based Virtual Reality


    Motion representation with spiking neural networks for grasping and manipulation

    Nature has used millions of years of evolution to produce adaptive physical systems with efficient control strategies. In contrast to conventional robotics, a human does not simply plan a movement and execute it; rather, a combination of several control loops work together to move the arm and grasp an object with the hand. Research on humanoid and biologically inspired robots is producing complex kinematic structures and intricate actuator and sensor systems. These systems are difficult to control and to program, and the classical methods of robotics cannot always exploit their strengths optimally. Neuroscientific research has made great progress in understanding the various brain regions and their corresponding functions. Nevertheless, most models are based on large-scale simulations that focus on reproducing connectivity and statistical neuronal activity. This leaves a gap in applying different paradigms to validate brain mechanisms and learning principles and to develop functional models for controlling robots. One promising paradigm is event-based computation with spiking neural networks (SNNs). SNNs focus on the biological aspects of neurons and replicate how they operate. They are designed for spike-based communication and enable the study of brain mechanisms for learning by means of neuronal plasticity. Spike-based communication exploits highly parallelized hardware optimizations through neuromorphic chips, which allow low energy consumption and fast local operations. In this work, several SNNs are presented for performing motion control for manipulation and grasping tasks with a robot arm and an anthropomorphic hand. They are based on biologically inspired functional models of the human brain. A motor primitive is mapped onto the robot kinematics in a parametric way, using an activation parameter and a mapping function. The topology of the SNN mirrors the kinematic structure of the robot. The robot is controlled via the Joint Position Interface. To model complex movements and behaviors, the primitives are arranged in different layers of a hierarchy. This allows primitives to be combined and parameterized, and simple primitives to be reused for different movements. There are different activation mechanisms for the parameter that drives a motor primitive: voluntary, rhythmic, and reflex-like. In addition, new motor primitives can be learned either online or offline. A movement can either be modeled as a function or learned by imitating human execution. The SNNs can be integrated into other control systems or combined with other SNNs. Computing the inverse kinematics or validating configurations for planning is not required, since the motor-primitive space contains only feasible movements and no invalid configurations.
    The following scenarios were considered for the evaluation: pointing at different targets, following a trajectory, executing rhythmic or repetitive movements, executing reflexes, and grasping simple objects. In addition, the arm and hand models are combined and extended to model multi-legged locomotion as a further use case of the motor-primitive control architecture. As applications for an arm (3 DoFs), the generation of pointing movements and perception-driven reaching for targets were modeled. To generate pointing movements, a base primitive that points at the center of a plane was combined offline with four correction primitives that generate a new trajectory. For perception-driven reaching, three primitives are combined online using a target signal. As applications for a five-finger hand (9 DoFs), individual finger activations and soft grasping with compliant control were modeled. The grasping movements are modeled with motor primitives in a hierarchy, where the finger primitives represent the synergies between the joints and the hand primitives represent the different affordances for coordinating the fingers. For each finger, two reflexes are added: one to activate or stop the movement on contact, and one to activate the compliant control. This approach offers enormous flexibility, since motor primitives can be reused, parameterized, and combined in different ways. New primitives can be defined or learned. An important aspect of this work is that, in contrast to deep learning and end-to-end learning methods, no extensive datasets are needed to learn new movements. By using motor primitives, the same modeling approach can be used for different robots by redefining the mapping of the primitives onto the robot kinematics. The experiments show that motor primitives can simplify motor control for manipulation, grasping, and locomotion. SNNs for robotics applications are still a matter of debate: there is no state-of-the-art learning algorithm, there is no framework comparable to those for deep learning, and parameterizing SNNs is an art. Nevertheless, robotics applications, such as manipulation and grasping, can provide benchmarks and realistic scenarios for validating neuroscientific models. Moreover, robotics can exploit the possibilities of event-based computation with SNNs and neuromorphic hardware. A physical replication of a biological system, implemented entirely with SNNs and evaluated on real robots, can provide new insights into how humans perform motor control and sensory processing and how these can be applied in robotics. Model-free motion controllers inspired by the mechanisms of the human brain can improve robot programming by making control more adaptive and flexible.
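    As a rough illustration of the primitive-combination idea described above (a base primitive blended offline with correction primitives and mapped to joint positions), the sketch below uses hypothetical primitive vectors and activation values. It is not the thesis implementation and deliberately omits the spiking-network layer and the Joint Position Interface; it only shows the weighted superposition in joint space.

```python
# Minimal illustrative sketch (not the thesis code): combining a base motor primitive
# with weighted correction primitives into joint-position targets for a 3-DoF arm.
# All primitive vectors and activations below are hypothetical.
import numpy as np

# Each primitive maps an activation in [0, 1] to a joint-space displacement (rad).
base_primitive = np.array([0.6, -0.3, 0.2])          # e.g. "point at plane centre"
correction_primitives = np.array([
    [ 0.1,  0.0,  0.0],    # shift right
    [-0.1,  0.0,  0.0],    # shift left
    [ 0.0,  0.1, -0.05],   # shift up
    [ 0.0, -0.1,  0.05],   # shift down
])

def joint_targets(base_activation, correction_activations):
    """Weighted superposition of primitives, giving joint positions for the robot."""
    q = base_activation * base_primitive
    q += correction_activations @ correction_primitives
    return q

# Activate the base primitive fully and blend in two corrections.
q = joint_targets(1.0, np.array([0.3, 0.0, 0.5, 0.0]))
print(q)   # joint-position targets that a position-controlled arm could track
```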

    SpiNNaker - A Spiking Neural Network Architecture

    Twenty years in conception and fifteen in construction, the SpiNNaker project has delivered the world’s largest neuromorphic computing platform, incorporating over a million ARM mobile-phone processors and capable of modelling spiking neural networks on the scale of a mouse brain in biological real time. This machine, hosted at the University of Manchester in the UK, is freely available under the auspices of the EU Flagship Human Brain Project. This book tells the story of the origins of the machine, its development and its deployment, and the immense software development effort that has gone into making it openly available and accessible to researchers and students the world over. It also presents exemplar applications, from ‘Talk’, a SpiNNaker-controlled robotic exhibit at the Manchester Art Gallery as part of ‘The Imitation Game’, a set of works commissioned in 2016 in honour of Alan Turing, through to a way to solve hard computing problems using stochastic neural networks. The book concludes with a look to the future, and the SpiNNaker-2 machine which is yet to come.

    Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake. The workshop was held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control, and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making

    Fault Detection and Isolation In Gas Turbine Engines

    Aircraft engines are complex systems that require high reliability and adequate monitoring to ensure flight safety and performance. Moreover, timely maintenance requires intelligent capabilities and functionalities for the detection and diagnosis of anomalies and faults. In this thesis, fault diagnosis in aircraft jet engines is investigated by using intelligent-based methodologies. Two different artificial neural network schemes are introduced for this purpose. The first fault detection and isolation (FDI) scheme for an aircraft jet engine is based on the multiple model approach and utilizes dynamic neural networks (DNN). Towards this end, multiple DNNs are constructed to learn the nonlinear dynamics of the aircraft jet engine. Each DNN represents a specific operating mode of the healthy or the faulty conditions of the jet engine. An inherent challenge in fault diagnosis systems is that their performance can be excessively reduced under sensor fault and sensor degradation conditions (such as drift and noise). This thesis proposes the use of data validation and sensor fault detection to improve the performance of the overall fault diagnosis system. In this regard, the concept of nonlinear principal component analysis (NPCA) is exploited by using autoassociative neural networks. The second FDI scheme is developed by using autoassociative neural networks (ANN). A parallel bank of ANNs is proposed to diagnose sensor faults as well as component faults in the aircraft jet engine. Unlike most FDI techniques, the proposed solution accomplishes sensor fault and component fault detection and isolation simultaneously within a unified diagnostic framework. In both proposed FDI approaches, criteria for performing the fault diagnosis of the jet engines are established by using the residuals generated from the difference between each network output and the measured jet-engine output, together with the selection of a proper threshold for each network. The fault diagnosis task consists of determining the time as well as the location of a fault occurrence in the presence of disturbances and measurement noise. The simulation results presented demonstrate and illustrate the effective performance of the proposed neural network-based FDI strategies.
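    The residual-plus-threshold logic described above can be sketched as follows. This is a hedged simplification, not the thesis code: the mode names, thresholds, and model outputs are hypothetical, and the bank of dynamic neural networks is replaced here by precomputed predictions.

```python
# Hedged sketch of residual-based fault isolation with a bank of mode models
# (in the thesis, each mode is represented by a trained neural network).
import numpy as np

def isolate_fault(measured, model_outputs, thresholds):
    """Return the modes whose mean residual stays below their threshold, best first.

    measured:      (T, n) array of jet-engine measurements over T time steps
    model_outputs: dict mode_name -> (T, n) array predicted by that mode's model
    thresholds:    dict mode_name -> scalar residual threshold for that mode
    """
    matches = []
    for mode, predicted in model_outputs.items():
        residual = np.mean(np.abs(measured - predicted))   # average residual magnitude
        if residual < thresholds[mode]:
            matches.append((mode, residual))
    return sorted(matches, key=lambda m: m[1])

# Hypothetical example with two modes and two sensors.
T = 100
truth = np.cumsum(np.random.default_rng(1).normal(0, 0.01, (T, 2)), axis=0)
outputs = {"healthy": truth + 0.005, "compressor_fault": truth + 0.5}
print(isolate_fault(truth, outputs, {"healthy": 0.05, "compressor_fault": 0.05}))
```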

    Application of neural network in control of a ball-beam balancing system

    Neural networks can be considered massively parallel distributed processing systems with the potential for ever-improving performance through dynamic learning. The power of neural networks lies in their ability to learn and to store knowledge. Neural networks purport to represent, or simulate in a simplified way, the processes that occur in the human brain. The ability to learn is one of the main advantages that make neural networks so attractive. Their successful application in the fields of speech analysis, pattern recognition, and machine vision provides constant encouragement for research into applying neural network techniques to engineering problems. One of the less investigated areas is control engineering. In control engineering, neural networks can be used to handle multiple input and output variables, nonlinear functions, and delayed feedback at high speed. The ability of neural networks to control engineering processes without prior knowledge of the system dynamics is very appealing to researchers and engineers in the field. The present work concerns the application of neural network techniques to control a simple ball-beam balancing system. The ball-beam system is an inherently unstable system, in which the ball tends to move to the end of the beam. The task is to control the system so that the ball can be balanced at any location on the beam within a short period of time, while the beam is kept in a horizontal position. The state of the art of neural networks and their application in control engineering has been reviewed. Computer simulation of the control system has been performed, using both the conventional Bass-Gura method (chapter 3) and the neural network method. In the conventional method the system equations were established using the Lagrangian variational principle, and the Euler method has been used to integrate the equations of motion.
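    As a rough sketch of the conventional simulation route mentioned above (a state-feedback gain obtained by pole placement, applied to the ball-beam model and integrated with the Euler method), the snippet below uses a standard linearized ball-beam model. The sign conventions, pole locations, and gain values are assumptions for illustration; this is neither the thesis's Bass-Gura design nor its neural-network controller.

```python
# Minimal sketch: forward-Euler simulation of a linearized ball-beam model under
# full state feedback. The model and desired poles are illustrative assumptions.
import numpy as np

g = 9.81
a = 5.0 * g / 7.0   # rolling-ball linearization: r_ddot = -a * theta (sign convention assumed)

def ball_beam_derivs(state, u):
    """State x = [r, r_dot, theta, theta_dot]; input u = beam angular acceleration."""
    r, r_dot, theta, theta_dot = state
    return np.array([r_dot, -a * theta, theta_dot, u])

# Pole placement by coefficient matching: with u = -K x, the closed-loop characteristic
# polynomial is s^4 + k4 s^3 + k3 s^2 - a k2 s - a k1. Placing all poles at -2 gives
# (s + 2)^4 = s^4 + 8 s^3 + 24 s^2 + 32 s + 16, hence:
K = np.array([-16.0 / a, -32.0 / a, 24.0, 8.0])

dt = 0.001
state = np.array([0.2, 0.0, 0.0, 0.0])   # ball starts 0.2 m from the beam centre

for _ in range(10000):                    # 10 s of simulated time
    u = -K @ state                        # full state feedback
    state = state + dt * ball_beam_derivs(state, u)   # forward-Euler step

print(state)   # ball position and beam angle should have settled near zero
```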
