18 research outputs found
Saccade learning with concurrent cortical and subcortical basal ganglia loops
The basal ganglia are a central structure involved in multiple cortical and
subcortical loops, some of which are believed to be responsible for saccade
target selection. We study here how the specific structural relationships of
these saccadic loops affect the ability to learn spatial and feature-based
tasks.
We propose a model of saccade generation with reinforcement learning
capabilities, based on our previous basal ganglia and superior colliculus
models. It is structured around the interactions of two parallel cortico-basal
loops and one tecto-basal loop. The two cortical loops separately process
spatial and non-spatial information to select targets concurrently. The
subcortical loop makes the final target selection leading to the production of
the saccade. With respect to reward maximization, these loops may work in
concert or interfere with one another. The interactions between these loops
and their learning capabilities are tested on different saccade tasks.
The results show that this model correctly learns basic target selection based
on different criteria (spatial or not). Moreover, the model reproduces and
explains training-dependent express saccades toward targets selected on a
spatial criterion.
Finally, the model predicts that in the absence of prefrontal control, the
spatial loop should dominate.
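The concurrent-loop idea can be illustrated with a minimal sketch in Python. The class names, the delta-rule updates and the toy task below are our assumptions for illustration, not the published model:

```python
import random

# Hypothetical sketch: two parallel "cortical" loops each learn values for
# candidate saccade targets -- one over spatial positions, one over visual
# features. A final "tectal" stage sums their evidence and picks the target.

class Loop:
    def __init__(self, alpha=0.1):
        self.q = {}          # value table: cue -> estimated reward
        self.alpha = alpha   # learning rate

    def value(self, cue):
        return self.q.get(cue, 0.0)

    def update(self, cue, reward):
        # Delta rule: move the estimate toward the observed reward.
        self.q[cue] = self.value(cue) + self.alpha * (reward - self.value(cue))

def select_saccade(targets, spatial_loop, feature_loop):
    # Each target is (position, feature); the final stage combines the
    # evidence from both loops and selects the best target.
    return max(targets, key=lambda t: spatial_loop.value(t[0]) + feature_loop.value(t[1]))

spatial, feature = Loop(), Loop()
targets = [("left", "red"), ("right", "green")]
# Train: only the red target is rewarded.
for _ in range(50):
    pos, feat = random.choice(targets)
    r = 1.0 if feat == "red" else 0.0
    spatial.update(pos, r)
    feature.update(feat, r)
print(select_saccade(targets, spatial, feature))  # the rewarded target wins
```

Because both loops update in parallel on the same reward signal, they can either agree (as here) or, with conflicting spatial and feature reward contingencies, compete for the final selection.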
Design of a biologically inspired navigation system for the Psikharpax rodent robot
This work presents the development and implementation of a biologically inspired navigation system on the autonomous Psikharpax rodent robot. Our system comprises two independent navigation strategies: a taxon expert and a planning expert. The presented navigation system allows the robot to learn the optimal strategy in each situation by relying upon a strategy selection mechanism.
A biologically inspired meta-control navigation system for the Psikharpax rat robot
A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks had been validated before in simulations, but the capacity of the model to work on a real robotic platform had not been tested. This paper presents our work on the implementation, on the Psikharpax robot, of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and of a strategy selection meta-controller. We show how our robot can memorize the optimal strategy in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a formerly experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Such a brain-inspired meta-controller may also provide an advance for learning architectures in robotics.
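A context-dependent strategy selector of this kind can be sketched in a few lines. The contexts, strategy names and greedy update rule here are illustrative assumptions, not the published meta-controller:

```python
# Minimal sketch of a context-dependent strategy selector: Q-values are
# indexed by (context, strategy), so returning to a known context
# immediately restores the previously learned strategy preference.

class MetaController:
    def __init__(self, strategies, alpha=0.2):
        self.strategies = strategies
        self.alpha = alpha
        self.q = {}  # (context, strategy) -> estimated reward

    def select(self, context):
        # Greedy choice; a real controller would also explore.
        return max(self.strategies, key=lambda s: self.q.get((context, s), 0.0))

    def update(self, context, strategy, reward):
        key = (context, strategy)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)

mc = MetaController(["taxon", "planning"])
# Context A rewards the cue-guided taxon strategy, context B the planner.
for _ in range(30):
    mc.update("A", "taxon", 1.0); mc.update("A", "planning", 0.0)
    mc.update("B", "planning", 1.0); mc.update("B", "taxon", 0.2)
print(mc.select("A"), mc.select("B"))
```

Because preferences are stored per context rather than overwritten globally, a context switch does not erase what was learned elsewhere, which is the adaptivity the abstract describes.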
Differences in gaze anticipation for locomotion with and without vision
Previous experimental studies have shown a spontaneous anticipation of the locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness, but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue indicating the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. Whole-body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that, both in darkness and in light, horizontal gaze anticipates the orientation of the head, which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference is that in light there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self-motion, stabilize the perception of space during locomotion, and simulate the future trajectory, regardless of the visual condition.
Development of the vibrissal system of the Psikharpax rat robot and contribution to the fusion of its visual, auditory and tactile capabilities
To perceive the environment through multiple sensory modalities is an ability essential to an animal's survival. Understanding how these modalities operate and how they are integrated into a single representation is a major issue for neuroscience, as well as for the design of autonomous robots. The rat, for example, relies heavily on its whiskers to recognize textures or shapes, and even to evaluate the size of an aperture. This sensory modality, although widely studied in biology, has generated few research efforts in robotics. Audition and vision also provide a great deal of information about the environment, and these three sensory modalities turn out to be highly complementary. One structure known to integrate them is the superior colliculus, a subcortical area common to almost all vertebrates. Acting as an attentional system, this structure allows the detection of relevant stimuli and the generation of orienting behavior toward them while ignoring others. The aim of this work is to implement these sensory modalities (tactile, auditory and visual) on the rat robot Psikharpax and to take the relevant biological knowledge into account in order to integrate them into a multi-sensory percept. We first developed an artificial whisker system allowing texture recognition on a robotic platform. We demonstrated that two apparently opposed biological theories about the encoding of tactile information may, in fact, be complementary. We then collaborated on the development of a binaural auditory system making sound-source localization and separation possible. We demonstrated that the mechanism we used for texture recognition with whiskers can also be used for sound recognition. We also developed a visual attentional system integrating neuromimetic models of the superior colliculus and basal ganglia with reinforcement learning capabilities. This model includes subcortical and cortical loops allowing for the learning of spatial and non-spatial features.
We demonstrated that this system was able to generate saccades toward rewarding targets. Finally, we extended this attentional model to the tactile and auditory modalities and demonstrated its ability to produce multi-sensory integration. We also used this model on a mobile robotic platform to control orienting behavior toward multi-sensory targets associated with a reward. We conclude that this model makes it possible to handle multi-sensory stimuli in a way robust enough to be used on a mobile robot. It also generates several testable predictions.
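The multi-sensory integration principle at work in the superior colliculus can be illustrated with a toy sketch. The 1-D map, Gaussian cue shapes and additive fusion below are our illustrative assumptions, not the thesis model:

```python
import numpy as np

# Toy sketch of multi-sensory integration on a 1-D "collicular" map: each
# modality contributes a saliency map, the maps are summed, and the
# orientation target is the location of the combined peak.

def gaussian_map(n, center, width=2.0, gain=1.0):
    x = np.arange(n)
    return gain * np.exp(-0.5 * ((x - center) / width) ** 2)

n = 32
visual = gaussian_map(n, center=10, gain=0.6)
auditory = gaussian_map(n, center=11, gain=0.5)
tactile = np.zeros(n)  # no tactile stimulus in this example

combined = visual + auditory + tactile
target = int(np.argmax(combined))
# Weak visual and auditory cues at nearby locations reinforce each other,
# so the combined peak exceeds either unimodal map alone: a spatially
# coincident multi-sensory pair wins the orientation competition.
```

A selection mechanism such as the basal ganglia model described above would then read out this combined map to trigger the orienting movement.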
An Integrated Neuromimetic Model of the Saccadic Eye Movements for the Psikharpax Robot.
We propose an integrated model of the saccadic circuitry involved in target selection and motor command. It includes the superior colliculus and the basal ganglia in both cortical and subcortical loops. This model has spatial and feature-based learning capabilities, which are demonstrated on various saccade tasks on a robotic platform. The results show that it is possible to learn to select saccades based on spatial information, feature-based information, or combinations of both, without the need to explicitly pre-define eye-movement strategies.
Rapid morphological exploration with the Poppy humanoid platform.
In this paper we discuss the motivation and challenges raised by the desire to treat morphology as an experimental variable on real robotic platforms, while allowing reproducibility and diffusion in the scientific community. In this context, we present an alternative design and production methodology that we have applied to the conception of Poppy, the first complete 3D-printed open-source and open-hardware humanoid robot. Robust and accessible, it allows quick and easy exploration of the fabrication, programming and experimentation of various robotic morphologies. Both hardware and software are open source, and a web platform allows interdisciplinary contributions, sharing and collaborations. Finally, we conduct an experiment exploring the impact of four different foot morphologies on the robot's dynamics when it takes a footstep. We show that such an experiment can easily be achieved and shared in a couple of days at almost no cost.
Behaviors obtained with quasimetric and dynamic programming methods with different discount factors.
<p>Starting from the initial stable state, both methods lead to the objective but with different trajectories.</p>
Quasi-distances and Value function for example 2A.
<p>Quasi-distances and Value function for example 2A.</p>
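The contrast between quasi-distances and discounted value functions in the figures above can be sketched on a small example. The chain world, reward scheme and update counts below are our own construction for illustration, not the paper's code:

```python
# On a chain of states with a single rewarded goal, compare the discounted
# value function obtained by value iteration with the quasi-distance
# (undiscounted cost-to-goal). The value decays geometrically with distance
# to the goal, while the quasi-distance grows linearly.

def value_iteration(n, goal, gamma, sweeps=200):
    # States 0..n-1, deterministic left/right moves, value 1.0 at the goal.
    v = [0.0] * n
    for _ in range(sweeps):
        new = []
        for s in range(n):
            if s == goal:
                new.append(1.0)
                continue
            neighbors = [s2 for s2 in (s - 1, s + 1) if 0 <= s2 < n]
            new.append(gamma * max(v[s2] for s2 in neighbors))
        v = new
    return v

def quasi_distance(n, goal):
    # Minimal number of unit-cost steps to the goal (a quasi-metric:
    # symmetric here, but in general it need not be).
    return [abs(s - goal) for s in range(n)]

v = value_iteration(6, goal=5, gamma=0.9)
d = quasi_distance(6, goal=5)
```

With a discount factor close to 1 the greedy policies of the two quantities coincide, but a small discount flattens the value function far from the goal, which is one way trajectories obtained with the two methods can differ.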