
    A framework integrating statistical and social cues to teach a humanoid robot new skills

    Bringing robots into homes as collaborative partners presents various challenges to human-robot interaction. Robots will need to interact with untrained users in environments that were originally designed for humans. Unlike their industrial counterparts, humanoid robots cannot be preprogrammed with an initial set of behaviours. They should adapt their skills to a huge range of possible tasks without requiring changes to the environments and tools to fit their needs. The rise of these humanoids implies an inherent social dimension to this technology, where end-users should be able to teach new skills to these robots in an intuitive manner, relying only on their experience in teaching new skills to other human partners. Our research aims at designing a generic Robot Programming by Demonstration (RPD) framework based on a probabilistic representation of the task constraints, which allows the integration of information from cross-situational statistics and from various social cues such as joint attention or vocal intonation. This paper presents our ongoing research towards user-friendly human-robot teaching systems that would speed up the skill transfer process.
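    To make the idea of a probabilistic task-constraint representation concrete, here is a minimal, hedged sketch (not the framework described above): demonstrated task variables are summarized by a single Gaussian whose per-dimension variance encodes how strongly each variable is constrained, with samples re-weighted by a social-cue saliency such as joint attention. The function name, the weighting scheme, and the toy data are assumptions for illustration only.

```python
# Sketch only: task constraints as a cue-weighted Gaussian over demonstrations.
import numpy as np

def fit_constraint(demos, cue_weights):
    """demos: (N, D) demonstrated task variables.
    cue_weights: (N,) saliency weights derived from social cues (assumed given)."""
    w = cue_weights / cue_weights.sum()
    mean = (w[:, None] * demos).sum(axis=0)
    centered = demos - mean
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', centered, centered)).sum(axis=0)
    return mean, cov  # low variance along a dimension ~ strongly constrained variable

# Toy example: positions observed across situations; the second demonstration is
# up-weighted because the teacher highlighted it (e.g. via joint attention).
demos = np.array([[0.30, 0.10], [0.31, 0.40], [0.29, 0.80]])
weights = np.array([1.0, 2.0, 1.0])
mu, sigma = fit_constraint(demos, weights)
print(mu, np.diag(sigma))  # small variance on dim 0 -> constrained; large on dim 1 -> free
```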

    Off-line simulation inspires insight: a neurodynamics approach to efficient robot task learning

    There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning, to robustly represent sequential information from single task demonstrations, with slower, weight-based learning during internal simulations, to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders, together with the correction of initial prediction errors, allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. The work was funded by FCT - Fundação para a Ciência e Tecnologia, through the PhD Grants SFRH/BD/48529/2008 and SFRH/BD/41179/2007, the Project NETT: Neural Engineering Transformative Technologies, EU-FP7 ITN (nr. 289146), and the FCT Research Center CMAT (PEst-OE/MAT/UI0013/2014).
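    For readers unfamiliar with dynamic neural fields, the sketch below simulates a one-dimensional Amari-type field in which a transient, localized input leaves a self-sustained activation peak, i.e. the kind of fast activation-based memory the abstract refers to. The kernel shape, parameter values, and discretization are illustrative assumptions and are not taken from the ARoS model.

```python
# Hedged sketch of a 1-D Amari-type dynamic neural field with activation-based memory.
import numpy as np

def amari_step(u, stimulus, kernel, h=-2.0, dt=0.05, tau=1.0, beta=5.0):
    """One Euler step of tau * du/dt = -u + h + stimulus + recurrent interaction."""
    f = 1.0 / (1.0 + np.exp(-beta * u))      # firing-rate nonlinearity
    return u + dt / tau * (-u + h + stimulus + kernel @ f)

size = 100
pos = np.arange(size)
d = pos[:, None] - pos[None, :]
# Local excitation, broader inhibition ("Mexican hat") interaction kernel.
kernel = 6.0 * np.exp(-d**2 / (2 * 3.0**2)) - 3.0 * np.exp(-d**2 / (2 * 9.0**2))

u = np.full(size, -2.0)                       # field starts at its resting level
bump_input = np.where(np.abs(pos - 40) < 5, 5.0, 0.0)
for _ in range(200):                          # transient input (e.g. an observed subtask)
    u = amari_step(u, bump_input, kernel)
for _ in range(400):                          # input removed: peak is self-sustained
    u = amari_step(u, np.zeros(size), kernel)
print(u.argmax(), u.max() > 0)                # peak persists near position 40 -> True
```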

    What is the Teacher's Role in Robot Programming by Demonstration? - Toward Benchmarks for Improved Learning

    Robot programming by demonstration (RPD) covers methods by which a robot learns new skills through human guidance. We present an interactive, multimodal RPD framework using active teaching methods that places the human teacher in the robot's learning loop. Two experiments are presented in which observational learning is first used to demonstrate a manipulation skill to a HOAP-3 humanoid robot by using motion sensors attached to the teacher's body. Then, by putting the robot through the motion, the teacher incrementally refines the robot's skill by moving its arms manually, providing the appropriate scaffolds to reproduce the action. An incremental teaching scenario is proposed based on insights from various fields addressing developmental, psychological, and social issues related to teaching mechanisms in humans. Based on this analysis, different benchmarks are suggested to further evaluate the setup.

    Survey: Robot Programming by Demonstration

    Robot PbD started about 30 years ago and has grown in importance during the past decade. The rationale for moving from purely preprogrammed robots to very flexible, user-based interfaces for training a robot to perform a task is three-fold. First and foremost, PbD, also referred to as imitation learning, is a powerful mechanism for reducing the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution by either starting the search from the observed good solution (a local optimum) or, conversely, by eliminating from the search space what is known to be a bad solution. Imitation learning is thus a powerful tool for enhancing and accelerating learning in both animals and artifacts. Second, imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated. Imitation learning is thus a "natural" means of interacting with a machine that would be accessible to lay people. And third, studying and modeling the coupling of perception and action, which is at the core of imitation learning, helps us to understand the mechanisms by which the self-organization of perception and action could arise during development. The reciprocal interaction of perception and action could explain how competence in motor control can be grounded in the rich structure of perceptual variables and, vice versa, how the processes of perception can develop as a means to create successful actions. The promises of PbD were thus multiple. On the one hand, one hoped that it would make learning faster, in contrast to tedious reinforcement learning or trial-and-error methods. On the other hand, one expected that the methods, being user-friendly, would enhance the application of robots in human daily environments. Recent progress in the field, which we review in this chapter, shows that the field has taken a leap forward over the past decade toward these goals, and that these promises may be fulfilled very soon.

    Incremental Local Online Gaussian Mixture Regression for Imitation Learning of Multiple Tasks

    Gaussian Mixture Regression has been shown to be a powerful and easy-to-tune regression technique for imitation learning of constrained motor tasks in robots. Yet, current formulations are not suited when one wants a robot to learn, incrementally and online, a variety of new context-dependent tasks whose number and complexity are not known at programming time, and when the demonstrator is not allowed to tell the system when a new task is introduced (rather, the system should infer this from the continuous sensorimotor context). In this paper, we show that this limitation can be addressed by introducing an Incremental, Local and Online variation of Gaussian Mixture Regression (ILO-GMR), which allows a simulated robot to learn new motor tasks incrementally and online by modelling them locally as dynamical systems, and to use the sensorimotor context to cope with the absence of categorical information, both during demonstrations and when the system is asked to reproduce a task. Moreover, we integrate a complementary statistical technique which allows the system to incrementally learn various tasks that can be intrinsically defined in different frames of reference, which we call framings, without the need to tell the system which particular framing should be used for each task: this is inferred automatically by the system.
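    As background for the regression technique the abstract builds on, the sketch below shows plain batch Gaussian Mixture Regression (not the incremental ILO-GMR variant itself): a GMM is fitted on joint (input, output) data, and the output is predicted by conditioning the mixture on the input. The toy data, component count, and function name are assumptions for illustration.

```python
# Minimal batch GMR sketch: condition a fitted GMM on the input dimensions.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

def gmr(gmm, x, in_dim):
    """Conditional mean E[y | x] under a GMM fitted on joint (x, y) data."""
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w * multivariate_normal.pdf(x, m[:in_dim], c[:in_dim, :in_dim])
                  for w, m, c in zip(priors, means, covs)])
    h /= h.sum()                                          # responsibilities of each component
    y = 0.0
    for hk, m, c in zip(h, means, covs):
        reg = c[in_dim:, :in_dim] @ np.linalg.inv(c[:in_dim, :in_dim])
        y += hk * (m[in_dim:] + reg @ (x - m[:in_dim]))   # per-component conditional mean
    return y

# Toy 1-D demonstration data: y = sin(x) plus noise; regress y from x.
x = np.linspace(0, 2 * np.pi, 200)
data = np.column_stack([x, np.sin(x) + 0.05 * np.random.randn(200)])
gmm = GaussianMixture(n_components=5, covariance_type='full').fit(data)
print(gmr(gmm, np.array([1.5]), in_dim=1))  # roughly sin(1.5) ~ 1.0
```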