
    A nonparametric Bayesian approach toward robot learning by demonstration

    In recent years, many authors have considered the application of machine learning methodologies to robot learning by demonstration. Gaussian mixture regression (GMR) is one of the most successful methodologies used for this purpose. A major limitation of GMR models concerns the automatic selection of the proper number of model states, i.e., the number of model component densities. Existing methods, including likelihood- or entropy-based criteria, usually tend to yield noisy model size estimates while imposing heavy computational requirements. Recently, Dirichlet process (infinite) mixture models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori. Motivated by this, and to resolve the aforementioned issues of GMR-based methods for robot learning by demonstration, in this paper we introduce a nonparametric Bayesian formulation of the GMR model, the Dirichlet process GMR model. We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally investigate its efficacy as a robot learning by demonstration methodology, considering a number of demanding robot learning by demonstration scenarios.
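
    For readers unfamiliar with GMR itself, the NumPy sketch below shows the standard conditioning step that underlies both the conventional and the Dirichlet process variants: a joint Gaussian mixture over input and output is conditioned on a query input to obtain the expected output. It is a minimal illustration under assumed variable names, not code from the paper, and it omits the Dirichlet process prior and the variational inference that constitute the paper's contribution.

    # Minimal Gaussian mixture regression (GMR) sketch -- illustrative only.
    # Assumes a GMM over joint vectors [t; y] with mixing weights `pi`, means
    # `mu` (K, D) and covariances `sigma` (K, D, D) fitted beforehand, e.g.
    # with EM; names are placeholders, not taken from the cited paper.
    import numpy as np
    from scipy.stats import multivariate_normal

    def gmr(t_query, pi, mu, sigma, d_in):
        """Condition the joint GMM on the input t_query and return E[y | t]."""
        K = len(pi)
        h = np.zeros(K)                          # responsibility of each component
        y_k = np.zeros((K, mu.shape[1] - d_in))  # conditional mean per component
        for k in range(K):
            mu_t, mu_y = mu[k, :d_in], mu[k, d_in:]
            s_tt = sigma[k][:d_in, :d_in]
            s_yt = sigma[k][d_in:, :d_in]
            h[k] = pi[k] * multivariate_normal.pdf(t_query, mean=mu_t, cov=s_tt)
            # conditional mean of component k: mu_y + S_yt S_tt^{-1} (t - mu_t)
            y_k[k] = mu_y + s_yt @ np.linalg.solve(s_tt, t_query - mu_t)
        h /= h.sum()
        return h @ y_k                           # mixture of conditional means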

    Modeling of human movement for the generation of humanoid robot motion

    Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms developed to derive their motion have also become progressively more complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings some ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods like local optimization and task-based inverse kinematics towards more realistic, human-like solutions. First, we look at dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo-playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to it. By allowing a user to steer the head of a humanoid, we develop a control method to generate deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between "invariants" in neuroscience and "kinematic tasks" in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized such that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans. The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2 of LAAS-CNRS. The general contribution of this thesis is in showing that while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates the combination of these methods with the systematic and formal observation of human behavior.
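
    To illustrate what a "stack of tasks" resolved through task-based inverse kinematics typically looks like, the sketch below gives the classic prioritized (nullspace-projected) velocity-level solution for a list of tasks in decreasing priority. It is a generic, hedged illustration of that standard technique, not the controller developed in the thesis; the function name, damping term and interface are assumptions.

    # Prioritized velocity-level inverse kinematics (stack of tasks) sketch.
    # Each task is a pair (J, xdot): task Jacobian and desired task velocity.
    # Lower-priority tasks are solved in the nullspace of higher-priority ones.
    import numpy as np

    def stack_of_tasks_velocity(tasks, n_dof, damping=1e-6):
        """tasks: list of (J, xdot) in decreasing priority; returns qdot."""
        qdot = np.zeros(n_dof)
        P = np.eye(n_dof)          # projector onto nullspace of higher-priority tasks
        for J, xdot in tasks:
            JP = J @ P
            # damped pseudo-inverse to stay well-behaved near singularities
            JP_pinv = JP.T @ np.linalg.inv(JP @ JP.T + damping * np.eye(J.shape[0]))
            qdot = qdot + JP_pinv @ (xdot - J @ qdot)
            P = P - JP_pinv @ JP   # shrink the nullspace by the task just solved
        return qdot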

    Towards gestural understanding for intelligent robots

    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012. A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make their life easier and more enjoyable. Nowadays smartphones are probably the most typical instances of such systems. Another class of systems that is getting increasing attention are intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots. This book deals with vision-based approaches for gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps – hand detection, hand tracking, and trajectory-based gesture recognition – a separate Chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures when incorporating context such as what object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for gesture understanding and is addressed explicitly in a separate Chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches to incorporate them for gestural understanding are reviewed. Example approaches for both context types provide a deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized human-robot interaction quality. The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. Promising research targeted at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last Chapter, completing this book, as this research direction may be highly influential for creating future gesture understanding systems.
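
    As a concrete, deliberately simple example of the trajectory-based gesture recognition step mentioned above, the sketch below matches an observed hand trajectory against labeled templates with dynamic time warping and assigns the nearest label. This is a generic textbook technique offered for illustration; it is not one of the book's example methods, and the names and the 1-nearest-neighbour setup are assumptions.

    # Trajectory-based gesture recognition via dynamic time warping (DTW).
    # Illustrative only; `templates` maps a gesture label to a reference
    # trajectory of shape (m, d), and trajectories are sequences of hand positions.
    import numpy as np

    def dtw_distance(a, b):
        """DTW distance between two trajectories a (n, d) and b (m, d)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def classify_gesture(trajectory, templates):
        """Assign the label of the nearest template under DTW distance."""
        return min(templates, key=lambda lbl: dtw_distance(trajectory, templates[lbl]))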

    Learning control policies from constrained motion

    Many everyday human skills can be framed in terms of performing some task subject to constraints imposed by the task or the environment. Constraints are usually unobservable and frequently change between contexts. In this thesis, we explore the problem of learning control policies from data containing variable, dynamic and non-linear constraints on motion. We show that an effective approach for doing this is to learn the unconstrained policy in a way that is consistent with the constraints. We propose several novel algorithms for extracting these policies from movement data, where observations are recorded under different constraints. Furthermore, we show that, by doing so, we are able to learn representations of movement that generalise over constraints and can predict behaviour under new constraints. In our experiments, we test the algorithms on systems of varying size and complexity, and show that the novel approaches give significant improvements in performance compared with standard policy learning approaches that are naive to the effect of constraints. Finally, we illustrate the utility of the approaches for learning from human motion capture data and transferring behaviour to several robotic platforms.
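
    To make the idea of learning an unconstrained policy "in a way that is consistent with the constraints" more tangible, the sketch below fits a linear policy so that its projection onto each observed (constrained) movement direction reproduces that observation. It is a heavily simplified reconstruction under the assumptions of a linear policy and velocity-level observations; the thesis's actual algorithms and representations are considerably more general.

    # Simplified sketch: fit a linear policy pi(x) = W x such that its projection
    # onto each observed constrained velocity u matches that observation.
    # Illustrative reconstruction of the "consistency" idea, not the thesis's code.
    import numpy as np

    def fit_consistent_policy(X, U, reg=1e-6):
        """X: (N, d) states, U: (N, d) observed constrained velocities (non-zero)."""
        N, d = X.shape
        A = reg * np.eye(d * d)
        b = np.zeros(d * d)
        for x, u in zip(X, U):
            P = np.outer(u, u) / (u @ u)       # projection onto the observed direction
            M = np.kron(P, x.reshape(1, -1))   # so that M @ vec(W) = P @ W @ x
            A += M.T @ M
            b += M.T @ u
        w = np.linalg.solve(A, b)
        return w.reshape(d, d)                 # policy matrix W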

    Statistical Learning by Imitation of Competing Constraints in Joint Space and Task Space

    We present a probabilistic architecture for generically solving the problem of extracting the task constraints through a Programming by Demonstration (PbD) framework and for generalizing the acquired knowledge to various situations. In previous work, we proposed an approach based on Gaussian Mixture Regression (GMR) to find a controller for the robot that reproduces the statistical characteristics of a movement in joint space and in task space through Lagrange optimization. In this paper, we develop an alternative procedure to handle constraints in joint space and in task space simultaneously by directly combining the probabilistic representation of the task constraints with a solution to Jacobian-based inverse kinematics. The method is validated in manipulation tasks with two 5-DOF Katana robotic arms displacing a set of objects.
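
    A common textbook way to realize the combination of probabilistic joint-space and task-space constraints with Jacobian-based inverse kinematics is a covariance-weighted least-squares blend of the two reference velocities, where each covariance from the probabilistic model weights how strictly its constraint is followed. The sketch below gives that standard closed form; the function name, damping term and interface are assumptions for illustration, not the controller implemented in the paper.

    # Covariance-weighted blend of a joint-space target (qdot_j, Sigma_j) and a
    # task-space target (xdot_t, Sigma_t), e.g. as produced by GMR in each space.
    import numpy as np

    def blended_velocity(J, qdot_j, Sigma_j, xdot_t, Sigma_t, damping=1e-8):
        """Return qdot minimizing
           (qdot - qdot_j)^T Sigma_j^{-1} (qdot - qdot_j)
         + (J qdot - xdot_t)^T Sigma_t^{-1} (J qdot - xdot_t)."""
        Wj = np.linalg.inv(Sigma_j)            # joint-space precision (weight)
        Wt = np.linalg.inv(Sigma_t)            # task-space precision (weight)
        A = Wj + J.T @ Wt @ J + damping * np.eye(J.shape[1])
        b = Wj @ qdot_j + J.T @ Wt @ xdot_t
        return np.linalg.solve(A, b)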

    Parametric Human Movements: Learning, Synthesis, Recognition, and Tracking


    A Quantum-Statistical Approach Toward Robot Learning by Demonstration

    Statistical machine learning approaches have been at the epicenter of the ongoing research work in the field of robot learning by demonstration over the past few years. One of the most successful methodologies used for this purpose is Gaussian mixture regression (GMR). In this paper, we propose an extension of GMR-based learning by demonstration models that incorporates concepts from the field of quantum mechanics. Indeed, conventional GMR models are formulated under the notion that all the observed data points can be assigned to a distinct number of model states (mixture components). In this paper, we reformulate GMR models, introducing quantum states constructed by superposing conventional GMR states by means of linear combinations. The resulting quantum statistics-inspired mixture regression algorithm is subsequently applied to obtain a novel robot learning by demonstration methodology, offering a significantly increased quality of regenerated trajectories at computational costs comparable with current state-of-the-art trajectory-based robot learning by demonstration approaches. We experimentally demonstrate the efficacy of the proposed approach.

    Survey: Robot Programming by Demonstration

    Robot PbD started about 30 years ago and has grown in importance during the past decade. The rationale for moving from purely preprogrammed robots to very flexible user-based interfaces for training the robot to perform a task is three-fold. First and foremost, PbD, also referred to as imitation learning, is a powerful mechanism for reducing the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution by either starting the search from the observed good solution (local optimum), or conversely, by eliminating from the search space what is known to be a bad solution. Imitation learning is, thus, a powerful tool for enhancing and accelerating learning in both animals and artifacts. Second, imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated. Imitation learning is thus a "natural" means of interacting with a machine that would be accessible to lay people. And third, studying and modeling the coupling of perception and action, which is at the core of imitation learning, helps us to understand the mechanisms by which the self-organization of perception and action could arise during development. The reciprocal interaction of perception and action could explain how competence in motor control can be grounded in a rich structure of perceptual variables and, vice versa, how the processes of perception can develop as a means to create successful actions. The promises of PbD were thus multiple. On the one hand, one hoped that it would make learning faster, in contrast to tedious reinforcement learning or trial-and-error learning. On the other hand, one expected that the methods, being user-friendly, would enhance the application of robots in human daily environments. Recent progress in the field, which we review in this chapter, shows that the field has made a leap forward during the past decade toward these goals and that these promises may be fulfilled very soon.

    Robot Learning from Demonstration in Robotic Assembly: A Survey

    Learning from demonstration (LfD) has been used to help robots implement manipulation tasks autonomously, in particular, to learn manipulation behaviors from observing the motion executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is placed on how to demonstrate the example behaviors to the robot in assembly operations, and how to extract the manipulation features for robot learning and generating imitative behaviors. Diverse metrics are analyzed to evaluate the performance of robot imitation learning. Specifically, the application of LfD in robotic assembly is a focal point of this paper.