233 research outputs found
Autonomous learning and reproduction of complex sequences: a multimodal architecture for bootstrapping imitation games
This paper introduces a control architecture for learning complex sequences of gestures with autonomous robots. The architecture is designed to exploit the robot's internal sensory-motor dynamics, generated by visual, proprioceptive, and predictive information, in order to provide intuitive behaviors for natural interactions with humans.
A Developmental Approach for low-level Imitations
Historically, many authors in psychology and robotics have tended to separate "true imitation" and its related high-level mechanisms, which seem to be exclusive to human adults, from the low-level imitations or "mimicries" observed in babies and primates. Similarly, classical research assumes that an imitative artificial system must be able to build a model of the demonstrator's geometry in order to reproduce the movements of each joint precisely. Conversely, we argue that if imitation is viewed as part of a developmental course, then (1) an artificial developing system does not need to build any internal model of the other to perform real-time, low-level imitations of human movements, despite the correspondence problem between human and robot, and (2) a simple sensory-motor loop can be the basis of multiple heterogeneous imitative behaviors that the literature often explains with different models.
From Visuo-Motor Development to Low-level Imitation
We present the first stages of the developmental course of a robot using vision and a 5-degree-of-freedom robotic arm. During an exploratory behavior, the robot learns visuo-motor control of its mechanical arm. We show how a simple neural network architecture, combining elementary vision, a self-organizing algorithm, and dynamical neural fields, is able to learn and use proper associations between vision and arm movements, even though the problem is ill-posed (a 2-D to 3-D mapping, with mechanical redundancy between joints). Highlighting the generic aspect of this architecture, we show, as a robotic result, that it serves as a basis for simple gestural imitations of humans. Finally, we show how the imitative mechanism carries the developmental course forward, allowing the acquisition of increasingly complex behavioral capabilities.
Neurobiologically Inspired Mobile Robot Navigation and Planning
After a short review of biologically inspired navigation architectures, mainly relying on models of the hippocampal anatomy, or at least of some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type, “transition cells”, which encompasses traditional “place cells”.
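The planning role of transition cells can be illustrated with a deliberately simplified sketch (the places, transitions, and graph-search formulation below are illustrative assumptions; the biological model relies on neural activation dynamics rather than explicit search): each experienced transition between two places becomes an edge, and planning selects a sequence of transitions toward a goal.

```python
from collections import defaultdict, deque

# Hypothetical experienced transitions between places A..E
transitions = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "D")]

graph = defaultdict(list)
for src, dst in transitions:
    graph[src].append(dst)

def plan(start, goal):
    # breadth-first search: shortest sequence of transition cells to the goal
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return list(zip(path, path[1:]))   # the transitions to trigger
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan("A", "D"))   # -> [('A', 'B'), ('B', 'C'), ('C', 'D')]
```

The point of encoding transitions rather than places is visible here: the plan is directly a sequence of actions (A→B, B→C, ...), whereas a plan over place cells alone would still need a separate mechanism to choose the movement linking two places.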
A synchrony based approach for human robot interaction
Since psychologists consider synchrony an important parameter of social interaction, we hypothesize that during social interaction people focus their attention on regions of interest where the visual stimuli are synchronized with their inner dynamics. We then assume that a mechanism able to detect synchrony between the internal dynamics of a robot and external visual stimuli can be used as a starting point for human-robot interaction. Inspired by human psychological and neurobiological data, we propose a synchrony-based neural network architecture capable of selecting the robot's interaction partner and of locating the focus of attention.
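A minimal sketch of such a synchrony detector, under purely illustrative assumptions (a sinusoidal inner rhythm, three candidate image regions summarized by a scalar motion signal, zero-lag correlation as the synchrony measure; the paper's neural architecture is not reproduced here): the region whose motion best correlates with the robot's inner dynamics is selected as the interaction partner.

```python
import numpy as np

fs, dur, f_inner = 50.0, 4.0, 1.5          # sample rate (Hz), window (s), inner rhythm (Hz)
t = np.arange(0, dur, 1 / fs)
inner = np.sin(2 * np.pi * f_inner * t)    # the robot's inner dynamics

rng = np.random.default_rng(1)
regions = {                                 # motion signals of three image regions
    "partner":  np.sin(2 * np.pi * f_inner * t + 0.3) + 0.2 * rng.standard_normal(t.size),
    "passerby": np.sin(2 * np.pi * 0.4 * t) + 0.2 * rng.standard_normal(t.size),
    "static":   0.2 * rng.standard_normal(t.size),
}

def synchrony(signal):
    # zero-lag normalized correlation with the inner rhythm
    return abs(np.corrcoef(inner, signal)[0, 1])

focus = max(regions, key=lambda name: synchrony(regions[name]))
print(focus)   # the region synchronized with the inner rhythm wins
```

Note that the partner is selected even though its motion is phase-shifted and noisy; only the shared rhythm matters, which is what makes synchrony a cheap bootstrapping cue for selecting whom to interact with.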
Robot recognizing vowels in a multimodal way
This paper presents a sensory-motor architecture, based on a neural network, that allows a robot to recognize vowels in a multimodal way through human mimicry. The robot autonomously learns to associate its internal state with a human's vowel, as an infant would when learning to recognize vowels, and learns to associate congruent information.
Learning to Synchronously Imitate Gestures Using Entrainment Effect
Synchronisation and coordination are omnipresent and essential in human interactions. Because of their unavoidable and unintentional character, these phenomena could be the consequence of a low-level mechanism: a driving force originating from external stimuli, called the entrainment effect. In light of its importance in interaction, and aiming to define new human-robot interactions, we propose to model this entrainment to highlight its efficiency for gesture learning during imitative games and for reducing computational complexity. We put forward the capacity for adaptation offered by the entrainment effect. We then present a neural model for gesture learning by imitation using the entrainment effect, applied to a NAO robot interacting with a human partner.
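The entrainment effect can be sketched with a single driven phase oscillator (a textbook Adler/Kuramoto-style model under illustrative assumptions; the frequencies and coupling constant below are arbitrary, and this is not the paper's neural model): the robot's internal rhythm is pulled toward the human's gesture rhythm until the two phase-lock.

```python
import math

omega_robot = 2 * math.pi * 1.0   # robot's natural gesture rhythm (1.0 Hz)
omega_stim  = 2 * math.pi * 1.2   # human's gesture rhythm (1.2 Hz)
K, dt = 4.0, 0.001                # coupling strength, Euler step (s)

phi_r, phi_s = 0.0, math.pi       # start fully out of phase
for _ in range(20000):            # 20 s of simulated interaction
    phi_s += omega_stim * dt
    # the external stimulus acts as a driving force on the robot's phase
    phi_r += (omega_robot + K * math.sin(phi_s - phi_r)) * dt

# phase-locking: the stationary phase lag predicted by this model
predicted = math.asin((omega_stim - omega_robot) / K)
observed = (phi_s - phi_r) % (2 * math.pi)
print(abs(observed - predicted) < 0.05)   # -> True
```

This is the adaptive capacity the abstract refers to: no explicit estimation of the partner's tempo is needed, since the coupling term alone drags the robot onto the human's rhythm whenever the frequency mismatch stays within the locking range (|Δω| < K).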
A visual language model for scene recognition
We describe a method that uses a graph language modeling approach for image retrieval and image categorization. Since photographic images are 2-D data, we first use image regions (mapped to automatically induced concepts) and then spatial relationships between these regions to build a complete graph representation of the image. Our method handles different scenarios, in which isolated images or groups of images are used for training or testing. The results obtained on an image categorization problem show (a) that the procedure for automatically inducing concepts from an image is effective, and (b) that using spatial relationships, in addition to concepts, to represent image content helps improve classifier accuracy. This approach extends the language modeling approach to information retrieval to the problem of graph-based image retrieval and categorization, without relying on image annotations.
Neural fields as a tool for representing visual information
We present an application of neural fields to the control of a robot, which we used in the context of a learning-by-imitation problem [5]. Neural fields were used for the motor commands, for the internal representation of perceived motion in the environment, and for selecting targets to follow. Using neural fields gave us a tracking behavior with continuous temporal dynamics combined with a capacity for bifurcation.
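The field dynamics behind such tracking can be illustrated with a minimal 1-D Amari-type neural field (all parameters below, the ring topology, and the Mexican-hat kernel are illustrative assumptions, not the cited implementation): a localized stimulus makes a single activity bump emerge at the stimulated position, which is the competition/selection mechanism used for choosing a target.

```python
import numpy as np

N = 100
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)                                      # ring topology
w = 2.0 * np.exp(-d**2 / 18.0) - 1.0 * np.exp(-d**2 / 162.0)  # Mexican-hat kernel

u = np.full(N, -2.0)                                          # field potentials
h, tau, dt = -2.0, 10.0, 1.0                                  # rest level, time constant, step
f = lambda v: 1.0 / (1.0 + np.exp(-v))                        # sigmoid firing rate

def step(stim_pos):
    # Euler step of  tau * du/dt = -u + h + I + w * f(u)
    global u
    ds = np.minimum(np.abs(x - stim_pos), N - np.abs(x - stim_pos))
    I = 3.0 * np.exp(-ds**2 / 8.0)                            # localized visual input
    u = u + dt / tau * (-u + h + I + w @ f(u))

for _ in range(300):
    step(30)
print(int(np.argmax(u)))   # a single bump centred on the stimulated position
```

The local excitation / long-range inhibition kernel is what produces both properties mentioned above: the bump follows a slowly moving stimulus continuously, yet when a stronger stimulus appears elsewhere the field can bifurcate, collapsing one bump and re-forming at the new position.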