    A Symbiotic Brain-Machine Interface through Value-Based Decision Making

    BACKGROUND: In the development of Brain-Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily living. The number and scope of the learning tasks encountered during interaction with the environment, as well as the pattern of brain activity, are expected to vary over time. These conditions, in addition to neural reorganization, make decoding neural commands for BMIs challenging. We have developed a new BMI framework in which a computational agent symbiotically decodes users' intended actions by utilizing both motor commands and goal information taken directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). METHODOLOGY: The control architecture was based on Actor-Critic learning, a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that the Nucleus Accumbens (NAcc) contains a rich representation of goal information, in the form of a prediction of the probability of earning reward, which can be translated with high precision into an evaluative feedback for adapting the decoder. Simulated neural control experiments showed that the system maintained high performance in decoding neural motor commands during novel tasks and in the presence of reorganization of the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of the rat brain and implemented a full closed-loop system in which robot actions were decoded from single-unit activity in M1 using an evaluative feedback estimated from NAcc. CONCLUSIONS: Our results suggest that adapting the BMI decoder with an evaluative feedback extracted directly from the brain is a viable solution to the problem of operating BMIs in changing environments with dynamic neural signals. During closed-loop control, the agent was able to solve a reaching task by capturing the interdependency of action and reward in the brain.
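
    As a concrete illustration of the adaptation scheme described above, the following minimal sketch implements a softmax actor whose decoder weights are updated by a scalar evaluative signal standing in for the reward prediction estimated from NAcc. The dimensions, variable names, and the binary form of the feedback are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_m1_units, n_actions = 32, 4  # assumed dimensions, for illustration only

    def select_action(W, m1_rates):
        # Softmax actor: map an M1 firing-rate vector to a discrete robot action.
        logits = W @ m1_rates
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return rng.choice(n_actions, p=p), p

    def adapt_decoder(W, m1_rates, action, p, nacc_feedback, lr=0.05):
        # Policy-gradient-style update gated by the evaluative signal:
        # nacc_feedback is +1 when NAcc activity predicts reward, -1 otherwise
        # (here it is simply a scalar supplied by the caller, not decoded neurally).
        grad = -np.outer(p, m1_rates)
        grad[action] += m1_rates
        W += lr * nacc_feedback * grad  # in-place update of the actor weights

    W = rng.normal(0.0, 0.1, (n_actions, n_m1_units))
    rates = rng.poisson(5, n_m1_units).astype(float)
    a, p = select_action(W, rates)
    adapt_decoder(W, rates, a, p, nacc_feedback=+1.0)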

    Benchmarking Cerebellar Control

    Cerebellar models have long been advocated as viable models for robot dynamics control. Building on increasing insight into, and knowledge of, the biological cerebellum, many models have been greatly refined, and some computational models have emerged with useful properties with respect to robot dynamics control. On the application side, however, the picture is entirely different. Not only is there no robot on the market that uses anything remotely connected with cerebellar control, but even in research labs most testbeds for cerebellar models are restricted to toy problems. Such applications hardly ever exceed the complexity of a 2-DoF simulated robot arm, a task that is hardly representative of the field of robotics or related to realistic applications. In order to bring the amalgamation of the two fields forward, we advocate the use of a set of robotics benchmarks on which existing and new computational cerebellar models can be comparatively tested. It is clear that the traditional approach to solving robot dynamics loses ground as the complexity of robotic structures advances; there is a need for adaptive methods that can compete with what traditional control methods achieve on traditional robots. In this paper we lay out the successes and problems in the fields of cerebellar modelling and robot dynamics control. By analyzing the common ground, a set of benchmarks is suggested that may serve as typical robot applications for cerebellar models.
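
    To make the baseline concrete, the sketch below computes the inverse dynamics of the 2-DoF planar arm mentioned above, i.e. the torques a cerebellar model would have to learn to supply; the link parameters are arbitrary assumptions. The authors' point is that proper benchmarks should go well beyond this level of complexity.

    import numpy as np

    # Assumed link parameters (masses, lengths, centres of mass, inertias)
    # for a planar 2-DoF arm; values are arbitrary, for illustration only.
    m1, m2 = 1.0, 0.8
    l1, lc1, lc2 = 0.5, 0.25, 0.2
    I1, I2, g = 0.05, 0.03, 9.81

    def inverse_dynamics(q, dq, ddq):
        # Torques tau = M(q) ddq + C(q, dq) dq + G(q) for the standard
        # two-link planar arm model.
        c2 = np.cos(q[1])
        M = np.array([
            [m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2,
             m2*(lc2**2 + l1*lc2*c2) + I2],
            [m2*(lc2**2 + l1*lc2*c2) + I2,
             m2*lc2**2 + I2],
        ])
        h = m2 * l1 * lc2 * np.sin(q[1])
        C = np.array([-h*dq[1]*(2*dq[0] + dq[1]), h*dq[0]**2])
        G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q[0]) + m2*lc2*g*np.cos(q[0]+q[1]),
                      m2*lc2*g*np.cos(q[0]+q[1])])
        return M @ ddq + C + G

    tau = inverse_dynamics(np.array([0.3, 0.6]), np.array([0.1, -0.2]), np.array([0.5, 0.5]))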

    Annotated Bibliography: Anticipation


    Synaptic Learning for Neuromorphic Vision - Processing Address Events with Spiking Neural Networks

    The brain outperforms conventional computer architectures in terms of energy efficiency, robustness, and adaptability. These aspects are also important for emerging technologies. It is therefore worthwhile to investigate which biological processes enable the brain to compute and how they can be realized in silicon. Drawing inspiration from how the brain computes requires a paradigm shift relative to conventional computer architectures. Indeed, the brain consists of nerve cells, called neurons, that are connected to one another via synapses and form self-organized networks. Neurons and synapses are complex dynamical systems governed by biochemical and electrical reactions. As a consequence, they can base their computations only on local information. In addition, neurons communicate with one another through short electrical pulses, so-called spikes, which travel across synapses. Computational neuroscientists attempt to model these computations with spiking neural networks. When implemented on dedicated neuromorphic hardware, spiking neural networks can, like the brain, perform fast, energy-efficient computations. Until recently, the benefits of this technology were limited by the lack of functional methods for programming spiking neural networks. Learning is a paradigm for programming spiking neural networks in which neurons organize themselves into functional networks. As in the brain, learning in neuromorphic hardware is based on synaptic plasticity. Synaptic plasticity rules characterize weight updates in terms of information available locally at the synapse. Learning therefore happens continuously and online, while sensory input is streamed into the network. Conventional deep neural networks are usually trained by gradient descent. The constraints imposed by biological learning dynamics, however, prevent the use of conventional backpropagation to compute the gradients. For example, continuous updates preclude the synchronous alternation between forward and backward phases. Moreover, memory constraints prevent the history of neural activity from being stored in the neuron, so that methods such as backpropagation-through-time are not possible. Novel solutions to these problems were proposed by computational neuroscientists within the time frame of this thesis. In this thesis, spiking neural networks are developed to solve visuomotor neurorobotics tasks. Indeed, biological neural networks originally evolved to control the body. Robotics thus provides the artificial body for the artificial brain. On the one hand, this work contributes to current efforts to understand the brain by providing challenging closed-loop benchmarks, similar to what the biological brain is confronted with. On the other hand, new ways of solving traditional robotics problems based on brain-inspired paradigms are presented. The research proceeds in two steps. First, promising synaptic plasticity rules are identified and compared on real-world event-based vision benchmarks. Second, novel methods for mapping visual representations to motor commands are presented. Neuromorphic visual sensors represent an important step toward brain-inspired paradigms. Unlike conventional cameras, these sensors emit address events that correspond to local changes in light intensity. The event-based paradigm enables energy-efficient and fast image processing, but requires the derivation of new asynchronous algorithms. Spiking neural networks represent a subset of asynchronous algorithms that are inspired by the brain and well suited to neuromorphic hardware technology. In close collaboration with computational neuroscientists, successful methods for learning spatio-temporal abstractions from the address-event representation are reported. It is shown that top-down synaptic plasticity rules, derived to optimize an objective function, outperform bottom-up rules based solely on observations of the brain. With this insight, a new synaptic plasticity rule called "Deep Continuous Local Learning" is introduced, which currently achieves the state of the art on event-based vision benchmarks. This rule was jointly derived, implemented, and evaluated during a stay at the University of California, Irvine. In the second part of this thesis, the visuomotor loop is closed by mapping the learned visual representations to motor commands. Three approaches to obtaining a visuomotor mapping are discussed: manual coupling, reward coupling, and minimization of the prediction error. It is shown how these approaches, implemented as synaptic plasticity rules, can be used to learn simple strategies and movements. This work paves the way for integrating brain-inspired computational paradigms into the field of robotics. It is even predicted that advances in neuromorphic technologies and plasticity rules will enable the development of high-performance learning robots with low energy consumption.
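
    The address-event paradigm and local plasticity described above can be illustrated with a minimal event-driven layer of leaky integrate-and-fire neurons. The sensor size, the toy event stream, and the simple STDP-like potentiation rule are assumptions for illustration; this is not the thesis's "Deep Continuous Local Learning" rule.

    import numpy as np

    # Toy address-event stream (timestamp_us, x, y, polarity); values fabricated.
    events = [(10, 3, 4, 1), (12, 3, 5, 0), (30, 7, 1, 1), (31, 7, 2, 1)]

    H = W_SENSOR = 8                      # assumed 8x8 event sensor
    n_in, n_out = H * W_SENSOR * 2, 16    # two polarity channels in, 16 neurons out
    w = np.random.default_rng(1).uniform(0.0, 0.2, (n_out, n_in))

    v = np.zeros(n_out)       # membrane potentials
    trace = np.zeros(n_in)    # presynaptic eligibility traces
    tau_v, tau_tr, thresh = 20.0, 40.0, 1.0
    t_prev = 0

    for t, x, y, pol in events:
        dt = t - t_prev
        t_prev = t
        v *= np.exp(-dt / tau_v)            # event-driven leak between events
        trace *= np.exp(-dt / tau_tr)
        i = (pol * H + y) * W_SENSOR + x    # flatten the event address to an index
        v += w[:, i]                        # integrate the incoming event
        trace[i] += 1.0
        spiking = v >= thresh
        w[spiking] += 0.01 * trace          # local STDP-like potentiation
        v[spiking] = 0.0                    # reset neurons that spiked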

    Sensorimotor representation learning for an "active self" in robots: A model survey

    Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people, and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sense the location of our limbs during movement, be aware of other objects and agents, and control our body parts in order to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes of the mechanisms underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and of robotics models of the self, and we compare these models with their human counterparts. Finally, we analyse what is missing from these robotics models and propose a theoretical computational framework that aims to allow the emergence of a sense of self in artificial agents by developing sensory representations through self-exploration.
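
    As a minimal illustration of developing sensory representations through self-exploration, the sketch below learns a linear forward model of unknown body dynamics by motor babbling. The dynamics, dimensions, and learning rate are illustrative assumptions, not the framework proposed in the survey.

    import numpy as np

    rng = np.random.default_rng(0)

    def body(state, command):
        # Stand-in for the robot's unknown sensorimotor dynamics (assumed).
        return 0.5*state + 0.3*command + rng.normal(0.0, 0.01, state.shape)

    # Linear forward model: predict the next sensory state from [state; command].
    A = rng.normal(0.0, 0.1, (2, 4))
    state = np.zeros(2)
    for _ in range(1000):
        command = rng.uniform(-1.0, 1.0, 2)      # motor babbling
        x = np.concatenate([state, command])
        nxt = body(state, command)
        A += 0.05 * np.outer(nxt - A @ x, x)     # delta rule on prediction error
        state = nxt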

    Computational Properties of the Hippocampus Increase the Efficiency of Goal-Directed Foraging through Hierarchical Reinforcement Learning

    The mammalian brain is thought to use a version of Model-Based Reinforcement Learning (MBRL) to guide "goal-directed" behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or to learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, "forward sweeps" through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces the behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks.
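
    A toy version of the "forward sweep" idea: depth-limited lookahead through imagined trajectories in a known model. The corridor world and parameters below are assumptions for illustration; in the paper's framework, a hierarchical abstraction of space is what keeps the required sweep depth short.

    import numpy as np

    # Toy 1-D corridor with a goal state; the "model" is the known step function.
    n_states, goal, gamma = 10, 9, 0.9
    actions = (-1, +1)

    def step(s, a):
        return min(max(s + a, 0), n_states - 1)

    def sweep_value(s, depth):
        # Forward sweep: depth-limited lookahead through imagined trajectories,
        # a simplified stand-in for hippocampal forward sweeps.
        if s == goal:
            return 1.0
        if depth == 0:
            return 0.0
        return gamma * max(sweep_value(step(s, a), depth - 1) for a in actions)

    # Greedy policy from sweeps of depth 8 at every state.
    policy = [max(actions, key=lambda a: sweep_value(step(s, a), 8)) for s in range(n_states)]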

    Multiscale computation and dynamic attention in biological and artificial intelligence

    Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in both neuroscience and AI has made progress toward understanding architectures that achieve this. Insight into biological computations comes from phenomena such as decision inertia, habit formation, information search, risky choices, and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate cortex and dorsolateral prefrontal cortex) that can represent and modulate across scales, both through top-down control processes and through local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks, with fixed scalings, to attention, transformers, dynamic convolutions, and consciousness priors, which modulate scale to the input and increase scale breadth. The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence, as well as highlighting innovations in, and differences between, the future of biological and artificial intelligence.
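
    The input-dependent weighting that distinguishes attention from fixed-scale convolution can be shown in a few lines. This is standard scaled dot-product self-attention, with toy data assumed for illustration: the weights over positions are computed from the input itself, so the effective integration window adapts per query rather than being fixed.

    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax over positions
        return w @ V

    # Toy sequence: 6 positions, 4-dim features (random, for illustration).
    X = np.random.default_rng(0).normal(size=(6, 4))
    out = attention(X, X, X)   # self-attention over the sequence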