Contribution of Cerebellar Sensorimotor Adaptation to Hippocampal Spatial Memory
Complementing its primary role in motor control, cerebellar learning also has a bottom-up influence on cognitive functions, where high-level representations build up from elementary sensorimotor memories. In this paper we examine the cerebellar contribution to both procedural and declarative components of spatial cognition. To do so, we model a functional interplay between the cerebellum and the hippocampal formation during goal-oriented navigation. We reinterpret and complete existing genetic behavioural observations by means of quantitative accounts that cross-link synaptic plasticity mechanisms, single-cell and population coding properties, and behavioural responses. In contrast to earlier hypotheses positing a purely procedural impact of cerebellar adaptation deficits, our results suggest a cerebellar involvement in high-level aspects of behaviour. In particular, we propose that cerebellar learning mechanisms may influence hippocampal place fields by contributing to the path integration process. Our simulations predict differences in place-cell discharge properties between normal mice and L7-PKCI mutant mice lacking long-term depression at cerebellar parallel fibre-Purkinje cell synapses. On the behavioural level, these results suggest that, by influencing the accuracy of hippocampal spatial codes, cerebellar deficits may impact the exploration-exploitation balance during spatial navigation.
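Path integration, as invoked in this abstract, amounts to dead reckoning over self-motion signals. The following minimal sketch illustrates the idea; the function name and step encoding are our own illustration, not the paper's model:

```python
import math

def integrate_path(start_xy, heading_deg, steps):
    """Dead-reckoning sketch of path integration: accumulate
    self-motion estimates (turn, step length) into a position
    estimate. Each step is a (turn_deg, distance) pair."""
    x, y = start_xy
    heading = math.radians(heading_deg)
    for turn_deg, distance in steps:
        heading += math.radians(turn_deg)
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

# One unit step forward, then a 90-degree left turn and another unit step.
print(integrate_path((0.0, 0.0), 0.0, [(0.0, 1.0), (90.0, 1.0)]))  # approximately (1.0, 1.0)
```

In the model, inaccuracies in this accumulated estimate are what would degrade the spatial code; the sketch shows only the noise-free update.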
Adaptive networks for robotics and the emergence of reward anticipatory circuits
Currently the central challenge facing evolutionary robotics is to determine how best to extend the range and complexity of behaviour supported by evolved neural systems. Implicit in the work described in this thesis is the idea that this might best be achieved through devising neural circuits (tractable to evolutionary exploration) that exhibit complementary functional characteristics. We concentrate on two problem domains: locomotion and sequence learning. For locomotion we compare the use of GasNets and other adaptive networks. For sequence learning we introduce a novel connectionist model inspired by the role of dopamine in the basal ganglia (commonly interpreted as a form of reinforcement learning). This connectionist approach relies upon a new neuron model inspired by notions of energy-efficient signalling. Two reward-adaptive circuit variants were investigated. These were applied respectively to two learning problems: first, where action sequences are required to take place in a strict order, and secondly, where action sequences are robust to intermediate arbitrary states. We conclude the thesis by proposing a formal model of functional integration, encompassing locomotion and sequence learning, extending ideas proposed by W. Ross Ashby. A general model of the adaptive replicator is presented, incorporating subsystems that are tuned to continuous variation and discrete or conditional events. Comparisons are made with W. Ross Ashby's model of ultrastability and his ideas on adaptive behaviour. This model is intended to support our assertion that GasNets (and similar networks) and reward-adaptive circuits of the type presented here are intrinsically complementary. In conclusion we present some ideas on how the co-evolution of GasNet and reward-adaptive circuits might lead to significant improvements in the synthesis of agents capable of exhibiting complex adaptive behaviour.
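The dopamine-inspired reinforcement mechanism mentioned in this abstract is commonly formalized as a temporal-difference reward-prediction error. The following tabular TD(0) sketch is a generic textbook illustration, not the thesis's actual neuron model:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta plays the role of the dopamine-like
    reward-prediction error that drives the value update."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# Three-state chain 0 -> 1 -> 2, with reward only on reaching state 2.
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(200):
    td0_update(V, 0, 0.0, 1)
    td0_update(V, 1, 1.0, 2)
print(V[1] > V[0] > 0.0)  # True: earlier states inherit discounted value
```

The prediction-error signal `delta` is the quantity usually identified with phasic dopamine activity in the basal-ganglia interpretation the abstract refers to.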
Fusion of virtual reality and brain-machine interfaces for the assessment and rehabilitation of patients with spinal cord injury
This thesis centres on the use of new technologies (Brain-Machine Interfaces and Virtual Reality). The first part of the thesis describes the definition and application of a set of metrics to assess the functional state of patients with spinal cord injury, in the context of a virtual reality system for upper-limb rehabilitation. The goal of this first study is to demonstrate that virtual reality, combined with inertial sensors, can be used to rehabilitate and assess simultaneously. Fifteen patients with spinal cord injury completed three sessions with the Toyra virtual reality system, and the defined set of metrics was applied to the recordings obtained from the inertial sensors. Correlations were found between some of the defined metrics and some of the clinical scales frequently used in the context of rehabilitation.
In the second part of the thesis, virtual feedback was combined with a Functional Electrical Stimulator (FES), both controlled by a Brain-Machine Interface (BMI), to develop a new kind of therapeutic approach for these patients. The system was used by four patients with spinal cord injury who attempted to move their hands. This intention simultaneously triggered the FES and the virtual feedback, closing the patients' hands and presenting them with an additional source of feedback to complement the therapy. According to the reviewed state of the art, this work is the first to integrate BMI, FES, and virtual reality as a therapy for patients with spinal cord injury. Promising clinical results were obtained for the four patients after five therapy sessions with the system, with good accuracy levels across the different sessions (79.13% on average).
In the third part of the thesis, a new metric was defined to study changes in brain connectivity in patients with spinal cord injury, incorporating information about neural interactions between different areas. The goal of this study was to extract clinically relevant information from EEG activity during BMI-based therapies.
Synaptic Learning for Neuromorphic Vision - Processing Address Events with Spiking Neural Networks
The brain outperforms conventional computer architectures in terms of energy efficiency, robustness, and adaptability. These aspects are also important for new technologies. It is therefore worth investigating which biological processes enable the brain to compute and how they can be realized in silicon. Drawing inspiration from how the brain computes requires a paradigm shift compared to conventional computer architectures. Indeed, the brain consists of nerve cells, called neurons, that are connected to each other via synapses and form self-organized networks.
Neurons and synapses are complex dynamical systems governed by biochemical and electrical reactions. As a consequence, they can base their computations only on local information. In addition, neurons communicate with each other through short electrical pulses, so-called spikes, which travel across synapses.
Computational neuroscientists attempt to model these computations with spiking neural networks. When implemented on dedicated neuromorphic hardware, spiking neural networks can, like the brain, perform fast, energy-efficient computations. Until recently, the benefits of this technology were limited by the lack of functional methods for programming spiking neural networks. Learning is a paradigm for programming spiking neural networks in which neurons organize themselves into functional networks.
As in the brain, learning in neuromorphic hardware is based on synaptic plasticity. Synaptic plasticity rules characterize weight updates in terms of information locally available at the synapse. Learning thus happens continuously and online, while sensory input is streamed into the network.
Conventional deep neural networks are usually trained by gradient descent. However, the constraints imposed by biological learning dynamics prevent the use of conventional backpropagation to compute the gradients. For example, continuous updates preclude the synchronous alternation between forward and backward phases. Moreover, memory constraints prevent the history of neural activity from being stored inside the neuron, so that procedures such as backpropagation-through-time are not possible. Novel solutions to these problems were proposed by computational neuroscientists within the time frame of this thesis.
In this thesis, spiking neural networks are developed to solve visuomotor neurorobotics tasks. Indeed, biological neural networks originally evolved to control the body. Robotics thus provides the artificial body for the artificial brain. On the one hand, this work contributes to current efforts to understand the brain by providing challenging closed-loop benchmarks, similar to what the biological brain faces. On the other hand, new ways of solving traditional robotics problems based on brain-inspired paradigms are presented. The research proceeds in two steps. First, promising synaptic plasticity rules are identified and compared on real-world event-based vision benchmarks. Second, novel methods for mapping visual representations onto motor commands are presented. Neuromorphic vision sensors represent an important step towards brain-inspired paradigms. Unlike conventional cameras, these sensors emit address events that correspond to local changes in light intensity. The event-based paradigm enables energy-efficient and fast visual processing, but requires the derivation of new asynchronous algorithms. Spiking neural networks represent a subset of asynchronous algorithms that are inspired by the brain and suitable for neuromorphic hardware technology. In close collaboration with computational neuroscientists, successful methods for learning spatio-temporal abstractions from the address-event representation are reported. It is shown that top-down synaptic plasticity rules, derived to optimize an objective function, outperform bottom-up rules based solely on observations of the brain.
With this insight, a new synaptic plasticity rule called "Deep Continuous Local Learning" is introduced, which currently achieves the state of the art on event-based vision benchmarks. This rule was jointly derived, implemented, and evaluated during a research stay at the University of California, Irvine.
In the second part of this thesis, the visuomotor loop is closed by mapping the learned visual representations onto motor commands. Three approaches to obtaining a visuomotor mapping are discussed: manual coupling, reward coupling, and minimization of the prediction error. It is shown how these approaches, implemented as synaptic plasticity rules, can be used to learn simple strategies and movements. This work paves the way towards the integration of brain-inspired computational paradigms into the field of robotics. It is even predicted that advances in neuromorphic technologies and plasticity rules will enable the development of high-performance, low-energy learning robots.
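As a concrete illustration of a weight update driven only by information locally available at the synapse, as described in this abstract, here is a generic pair-based spike-timing-dependent plasticity (STDP) rule; the time constant and learning rates are arbitrary illustrative values, not those used in the thesis:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic spike (dt > 0), depress otherwise.
    The update depends only on local spike-timing information,
    with an exponential window of width tau_ms."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(10.0) > 0)   # True: pre before post -> potentiation
print(stdp_dw(-10.0) < 0)  # True: post before pre -> depression
```

Rules of this bottom-up kind are the baseline against which the thesis compares top-down rules derived from an objective function.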
GPU Computing for Cognitive Robotics
This thesis presents the first investigation of the impact of GPU computing on cognitive robotics by providing a series of novel experiments in the area of action and language acquisition in humanoid robots and computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which was until recently provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which renders them relatively inefficient when it comes to high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be used by most cognitive robotics models, as they tend to be inherently parallel. Various interesting and insightful cognitive models have been developed to address important scientific questions concerning action-language acquisition and computer vision. While they have provided us with important scientific insights, their complexity and scale of application have not improved much over the last years. The experimental tasks as well as the scale of these models are often minimised to avoid excessive training times that grow exponentially with the number of neurons and the training data. This impedes further progress and the development of complex neurocontrollers that would take cognitive robotics research a step closer to the ultimate goal of creating intelligent machines. This thesis presents several cases where the application of GPU computing to cognitive robotics algorithms resulted in the development of large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein.
European Commission Seventh Framework Programme
Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009
Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence, by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In
recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.
Simulating sensorimotor systems with cortical topology
Abstract not available. Includes bibliographical references.
Network Models of the Lateral Intraparietal Area
The monkey lateral intraparietal area (LIP) is involved in visual attention and eye movements. It has traditionally been studied using extracellular recording, where often a single neuron is recorded at a time. Thus we have a wealth of correlational knowledge of what LIP neurons do, but not how or why, i.e. we do not know the circuit mechanisms and functions of the observed LIP activity. In this thesis, we have aimed to uncover the circuit mechanisms underlying LIP activity by building tightly constrained computational models.
In Part 1, we found that during two versions of a delayed-saccade task, beneath similar population average firing patterns across time lie radically different network dynamics. When neurons are not influenced by stimuli outside their receptive fields (RFs), dynamics of the high-dimensional LIP network lie predominantly in one multi-neuronal dimension, as predicted by an earlier model. However, when activity is suppressed by stimuli outside the RF, LIP dynamics markedly deviate from a single dimension. The conflicting results can be reconciled if two LIP local networks, each dominated by a single multi-neuronal activity pattern, are suppressively coupled to each other. These results demonstrate the low dimensionality of LIP local dynamics and suggest active involvement of LIP recurrent circuitry in surround suppression and, more generally, in processing attentional and movement priority and in related cognitive functions.
In Part 2, we examine the mechanisms of learning in LIP. When monkeys learn to group visual stimuli into arbitrary categories, LIP neurons become category-selective. Surprisingly, the representations of learned categories are overwhelmingly biased: while different categories are behaviorally equivalent, nearly all LIP neurons in a given animal prefer the same category. We propose that Hebbian plasticity, at the synapses to LIP from prefrontal cortex and from lower sensory areas, could lead to the development of biased representations. In our model, LIP category selectivity arises due to competition between inputs encoding different categories, and bias develops due to excitatory lateral interactions among LIP neurons. This model reproduces the different levels of category selectivity and bias observed in multiple experiments. Our results suggest that the connectivity of LIP allows it to learn the behavioral importance of stimuli in order to guide attention.
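The Hebbian mechanism proposed here, in which selectivity emerges from correlated pre- and postsynaptic activity, can be sketched with an Oja-stabilized Hebbian update; the rule and its parameters are a generic textbook form, not the model actually fit in the thesis:

```python
def hebbian_step(w, x, eta=0.1):
    """Oja-stabilized Hebbian update: coincident pre/post activity
    strengthens a synapse, while the decay term -y^2*w keeps the
    weight vector bounded instead of growing without limit."""
    y = sum(wi * xi for wi, xi in zip(w, x))  # postsynaptic activity
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Repeated exposure to one input pattern makes the weights selective for it.
w = [0.5, 0.5]
for _ in range(100):
    w = hebbian_step(w, [1.0, 0.0])
print(w[0] > 0.9 and abs(w[1]) < 0.1)  # True
```

In the full model, the same correlation-driven growth, combined with lateral excitation among LIP neurons, is what lets one category's inputs win the competition network-wide and produce the observed bias.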
Localist representation can improve efficiency for detection and counting
Almost all representations have both distributed and localist aspects, depending upon what properties of the data are being considered. With noisy data, features represented in a localist way can be detected very efficiently, and in binary representations they can be counted more efficiently than those represented in a distributed way. Brains operate in noisy environments, so the localist representation of behaviourally important events is advantageous, and fits what has been found experimentally. Distributed representations require more neurons to perform as efficiently, but they do have greater versatility.
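The detection-and-counting contrast claimed here can be made concrete with binary vectors; this toy comparison is our illustration, not the paper's:

```python
def count_localist(records, unit_index):
    """Localist code: each event has a dedicated unit, so counting
    occurrences reduces to summing a single bit per record."""
    return sum(r[unit_index] for r in records)

def count_distributed(records, pattern, threshold):
    """Distributed code: every bit of the stored pattern must be
    compared, and noisy matches need a similarity threshold."""
    def overlap(r):
        return sum(1 for a, b in zip(r, pattern) if a == b)
    return sum(1 for r in records if overlap(r) >= threshold)

records = [(1, 0, 0), (0, 1, 0), (1, 0, 0)]
print(count_localist(records, 0))                # 2: event A occurred twice
print(count_distributed(records, (1, 0, 0), 3))  # 2: same count, more work per record
```

The localist count touches one component per record, while the distributed count compares all components against the pattern, which is the efficiency gap the abstract describes.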
Towards Comprehensive Foundations of Computational Intelligence
Abstract. Although computational intelligence (CI) covers a vast variety of different methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity-based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate the creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction, and to the generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitive solving of such problems is presented. Throughout the paper, neurocognitive inspirations are frequently used and are especially important in the modeling of higher cognitive functions. Promising directions such as liquid and laminar computing are identified and many open problems are presented.