122 research outputs found

    What is the Teacher's Role in Robot Programming by Demonstration? - Toward Benchmarks for Improved Learning

    Robot programming by demonstration (RPD) covers methods by which a robot learns new skills through human guidance. We present an interactive, multimodal RPD framework that uses active teaching methods to place the human teacher in the robot's learning loop. Two experiments are presented in which observational learning is first used to demonstrate a manipulation skill to a HOAP-3 humanoid robot, using motion sensors attached to the teacher's body. Then, putting the robot through the motion, the teacher incrementally refines the robot's skill by moving its arms manually, providing the appropriate scaffolds to reproduce the action. An incremental teaching scenario is proposed, based on insights from various fields addressing developmental, psychological, and social issues related to teaching mechanisms in humans. Based on this analysis, different benchmarks are suggested for further evaluation of the setup.

    Stochastic Gesture Production and Recognition Model for a Humanoid Robot

    Robot Programming by Demonstration (PbD) aims at developing adaptive and robust controllers that enable the robot to learn new skills by observing and imitating a human demonstration. While the vast majority of PbD work has focused on systems that learn a specific subset of tasks, our work explores the problem of recognition, generalization, and reproduction of tasks in a unified mathematical framework. The approach abstracts away from the task and dataset at hand to tackle the general issue of learning which features are the relevant ones to imitate. In this paper, we present an application of this framework to determining the optimal strategy for reproducing arbitrary gestures. The model is tested and validated on a humanoid robot, using recordings of the kinematics of the demonstrator's arm motion. The hand path and joint angle trajectories are encoded in Hidden Markov Models. The system uses the optimal prediction of the models to generate the reproduction of the motion.
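The recognition step described above scores an observed sequence against each trained model and selects the best-matching one. A minimal sketch of that idea, using the standard HMM forward algorithm with discrete observations (all model parameters below are illustrative toy values, not taken from the paper):

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: probability of a discrete observation
    sequence under an HMM (pi: initial, A: transitions, B: emissions)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Two toy 2-state HMMs standing in for two gesture classes
# (hypothetical parameters for illustration only).
pi = np.array([0.6, 0.4])
A1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # slow state changes
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # fast state changes
B  = np.array([[0.8, 0.2], [0.3, 0.7]])   # shared emission model

obs = [0, 0, 0, 1, 1]                      # quantized motion features
scores = [forward_likelihood(pi, A, B, obs) for A in (A1, A2)]
recognized = int(np.argmax(scores))        # index of the best-matching model
```

In practice the trajectories are continuous, so Gaussian emission densities would replace the discrete emission matrix, but the scoring logic is the same.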

    A framework integrating statistical and social cues to teach a humanoid robot new skills

    Bringing robots into homes as collaborative partners presents various challenges for human-robot interaction. Robots will need to interact with untrained users in environments originally designed for humans. Unlike their industrial counterparts, humanoid robots cannot be preprogrammed with an initial set of behaviours. They should adapt their skills to a huge range of possible tasks without requiring changes to the environments and tools to fit their needs. The rise of these humanoids implies an inherent social dimension to this technology, where end-users should be able to teach new skills to these robots in an intuitive manner, relying only on their experience in teaching new skills to other human partners. Our research aims at designing a generic Robot Programming by Demonstration (RPD) framework based on a probabilistic representation of the task constraints, which allows information from cross-situational statistics to be integrated with various social cues such as joint attention or vocal intonation. This paper presents our ongoing research towards user-friendly human-robot teaching systems that would speed up the skill transfer process.
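The combination of cross-situational statistics with a social cue such as joint attention can be sketched as simple evidence accumulation: a word-object pair gains weight each time the two co-occur, and more weight when the object is under joint attention. The episodes, object names, and the boost factor below are hypothetical illustrations, not the paper's actual model:

```python
from collections import defaultdict

def cross_situational_scores(situations, attention_boost=2.0):
    """Accumulate word-object co-occurrence evidence across teaching
    situations; an object under joint attention gets extra weight."""
    scores = defaultdict(float)
    for words, objects, attended in situations:
        for w in words:
            for o in objects:
                scores[(w, o)] += attention_boost if o == attended else 1.0
    return scores

# Hypothetical teaching episodes: (spoken words, visible objects, attended object)
episodes = [
    (["cup"], ["cup", "ball"], "cup"),
    (["cup"], ["cup", "box"], "cup"),
    (["ball"], ["cup", "ball"], "ball"),
]
scores = cross_situational_scores(episodes)
# the referent of "cup" is the object with the most accumulated evidence
best = max(["cup", "ball", "box"], key=lambda o: scores[("cup", o)])
```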

    PDA Interface for Humanoid Robots

    To fulfill a need for natural, user-friendly means of interacting with and reprogramming toy and humanoid robots, a growing trend of robotics research investigates the integration of methods for gesture recognition and natural speech processing. Unfortunately, efficient methods for speech and vision processing remain computationally expensive and, thus, cannot be easily exploited on cost- and size-limited platforms. Personal Digital Assistants (PDAs) are ideal low-cost platforms to provide simple speech- and vision-based communication for a robot. This paper investigates the use of PDA interfaces to provide multi-modal means of interacting with humanoid robots. We present PDA applications in which the robot can track and imitate the user's arm and head motions, and can learn a simple vocabulary to label objects and actions by associating the user's verbal utterances with the user's gestures. The PDA applications are tested on two humanoid platforms: a mini doll-shaped robot, Robota, used as an educational toy with children, and DB, a full-body humanoid robot with 30 degrees of freedom.

    Learning of Gestures by Imitation in a Humanoid Robot

    In this chapter, we explore the issue of encoding, recognizing, generalizing and reproducing arbitrary gestures. We address one major and generic issue, namely how to discover the essence of a gesture, i.e. how to find a representation of the data that encapsulates only the key aspects of the gesture and discards the intrinsic variability across people's motions. The model is tested and validated in a humanoid robot, using kinematics data of human motion. In order for the robot to learn new skills by imitation, it must be endowed with the ability to generalize over multiple demonstrations. To achieve this, the robot must encode multivariate time-dependent data in an efficient way. Principal Component Analysis and Hidden Markov Models are used to reduce the dimensionality of the dataset and to extract the primitives of the motion. The model takes inspiration from a recent trend of research that aims at defining a formal mathematical framework for imitation learning. In particular, it stresses that the observed elements of a demonstration, and the organization of these elements, should be described stochastically to yield a robust robotic application. It bears similarities with theoretical models of animal imitation and, at the same time, offers a probabilistic description of the data that is more suitable for a real-world application.
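The dimensionality-reduction step can be illustrated with plain PCA via the SVD: correlated motion channels driven by one underlying primitive collapse onto a single latent dimension. The synthetic "joint angle" data below is an illustrative stand-in, not the chapter's recorded kinematics:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the k principal directions
    (centered SVD), returning latent coordinates and the basis."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

# Synthetic trajectories: 3 correlated channels driven by one
# underlying motion primitive, plus a little sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
primitive = np.sin(2 * np.pi * t)
X = np.column_stack([1.0 * primitive, 0.5 * primitive, -0.8 * primitive])
X += 0.01 * rng.standard_normal(X.shape)

latent, basis = pca_project(X, 1)
# one component captures nearly all variance of the correlated channels
explained = latent.var(axis=0).sum() / (X - X.mean(0)).var(axis=0).sum()
```

The latent trajectory, rather than the raw channels, is what a downstream HMM would then encode.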

    On Learning the Statistical Representation of a Task and Generalizing it to Various Contexts

    This paper presents an architecture for generically solving the problem of extracting the relevant features of a given task in a programming-by-demonstration framework, and the problem of generalizing the acquired knowledge to various contexts. We validate the architecture in a series of experiments, where a human demonstrator teaches a humanoid robot simple manipulatory tasks. Extracting the relevant features of the task is solved in a two-step process of dimensionality reduction. First, the combined joint angle and hand path motions are projected into a generic latent space, represented by a Gaussian Mixture Model (GMM) spreading across the spatial dimensions of the motion. Second, the temporal variation of the latent representation of the motion is encoded in a Hidden Markov Model (HMM). This two-step probabilistic encoding provides a measure of the spatio-temporal correlations across the different modalities collected by the robot, which determines a metric of imitation performance. A generalization of the demonstrated trajectories is then performed using Gaussian Mixture Regression (GMR). Finally, to generalize skills across contexts, we formally compute the trajectory that optimizes the metric, given the new context and the robot's specific body constraints.
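The GMR step conditions a joint Gaussian mixture over (input, output) on a query input and returns the expected output. A minimal scalar sketch, with two hand-set components standing in for a GMM actually fitted to (time, position) demonstration data (all parameters are illustrative):

```python
import numpy as np

def gmr(x, weights, means, covs):
    """Gaussian Mixture Regression: condition a joint GMM over (x, y)
    on a query x (both scalar here) to predict E[y | x]."""
    resp, cond_means = [], []
    for w, m, S in zip(weights, means, covs):
        # responsibility of this component for the query input
        px = np.exp(-0.5 * (x - m[0]) ** 2 / S[0, 0]) / np.sqrt(2 * np.pi * S[0, 0])
        resp.append(w * px)
        # conditional mean of y given x within this component
        cond_means.append(m[1] + S[1, 0] / S[0, 0] * (x - m[0]))
    resp = np.array(resp) / np.sum(resp)
    return float(resp @ np.array(cond_means))

# Two illustrative components of a (time, position) mixture.
weights = [0.5, 0.5]
means = [np.array([0.25, 0.0]), np.array([0.75, 1.0])]
covs = [np.array([[0.02, 0.0], [0.0, 0.01]])] * 2

y0 = gmr(0.25, weights, means, covs)   # near the first component: y close to 0
y1 = gmr(0.75, weights, means, covs)   # near the second component: y close to 1
```

Sweeping the query over time yields a smooth generalized trajectory that blends the demonstrations.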

    Deforming the Maxwell-Sim Algebra

    The Maxwell algebra is a non-central extension of the Poincaré algebra, in which the momentum generators no longer commute, but satisfy $[P_\mu, P_\nu] = Z_{\mu\nu}$. The charges $Z_{\mu\nu}$ commute with the momenta, and transform tensorially under the action of the angular momentum generators. If one constructs an action for a massive particle, invariant under these symmetries, one finds that it satisfies the equations of motion of a charged particle interacting with a constant electromagnetic field via the Lorentz force. In this paper, we explore the analogous constructions where one starts instead with the ISim subalgebra of Poincaré, this being the symmetry algebra of Very Special Relativity. It admits an analogous non-central extension, and we find that a particle action invariant under this Maxwell-Sim algebra again describes a particle subject to the ordinary Lorentz force. One can also deform the ISim algebra to DISim$_b$, where $b$ is a non-trivial dimensionless parameter. We find that the motion described by an action invariant under the corresponding Maxwell-DISim algebra is that of a particle interacting via a Finslerian modification of the Lorentz force.
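The equations of motion referred to above are the ordinary Lorentz-force law, $\dot{\mathbf{v}} = (q/m)(\mathbf{E} + \mathbf{v} \times \mathbf{B})$ for constant fields. A numerical sketch of that law (a plain explicit-Euler integration, not the paper's algebraic construction):

```python
import numpy as np

def integrate_lorentz(q_over_m, E, B, v0, dt=1e-3, steps=5000):
    """Integrate dv/dt = (q/m)(E + v x B) for constant fields
    with a simple explicit Euler scheme (illustrative only)."""
    v = np.array(v0, dtype=float)
    x = np.zeros(3)
    for _ in range(steps):
        v += dt * q_over_m * (E + np.cross(v, B))
        x += dt * v
    return x, v

# Pure magnetic field along z: the magnetic force does no work,
# so the speed is conserved (up to integration error) and the
# particle circles in the x-y plane.
x, v = integrate_lorentz(1.0, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                         v0=[1.0, 0.0, 0.0])
speed = np.linalg.norm(v)
```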

    Discriminative and Adaptive Imitation in Uni-Manual and Bi-Manual Tasks

    This paper addresses the problems of what to imitate and how to imitate in simple uni- and bi-manual manipulatory tasks. To solve the what-to-imitate issue, we use a probabilistic method, based on Hidden Markov Models, to extract the relative importance of reproducing either the gesture or the specific hand path in a given task. This allows us to determine a metric of imitation performance. To solve the how-to-imitate issue, we compute the trajectory that optimizes the metric, given the robot's body constraints. We validate the methods in a series of experiments in which a human demonstrator kinesthetically teaches a humanoid robot how to manipulate simple objects.
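The intuition behind such an imitation metric is that features which stay consistent across demonstrations matter more for the task. A toy sketch using inverse-variance weighting, a common heuristic standing in here for the paper's HMM-based analysis (the demonstration values are invented for illustration):

```python
import numpy as np

# Several demonstrations of the same trajectory feature set:
# rows = demonstrations, columns = features (e.g. hand path vs joint angle).
demos = np.array([
    [0.50, 0.10],
    [0.52, 0.80],
    [0.49, 0.40],
])

# Features that vary little across demonstrations are deemed
# task-relevant and weighted more heavily.
var = demos.var(axis=0)
w = 1.0 / (var + 1e-6)

def imitation_cost(candidate, target, w):
    """Weighted squared deviation: the metric a reproduction minimizes."""
    return float(np.sum(w * (candidate - target) ** 2))

target = demos.mean(axis=0)
good = imitation_cost(np.array([0.50, 0.90]), target, w)  # matches the consistent feature
bad  = imitation_cost(np.array([0.90, 0.43]), target, w)  # violates it
```

A reproduction that respects the low-variance feature scores far better, even if it deviates on the inconsistent one.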
