
    Programming by Demonstration on Riemannian Manifolds

    This thesis presents a Riemannian approach to Programming by Demonstration (PbD). It generalizes an existing PbD method from Euclidean manifolds to Riemannian manifolds. In this abstract, we review the objectives, methods and contributions of the presented approach.

    OBJECTIVES. PbD aims at providing a user-friendly method for skill transfer between human and robot. It enables a user to teach a robot new tasks using few demonstrations. In order to surpass simple record-and-replay, methods for PbD need to 'understand' what to imitate; they need to extract the functional goals of a task from the demonstration data. This is typically achieved through the application of statistical methods. The variety of data encountered in robotics is large. Typical manipulation tasks involve position, orientation, stiffness, force and torque data. These data are not solely Euclidean. Instead, they originate from a variety of manifolds, curved spaces that are only locally Euclidean. Elementary operations, such as summation, are not defined on manifolds. Consequently, standard statistical methods are not well suited to analyze demonstration data that originate from non-Euclidean manifolds. In order to effectively extract what to imitate, methods for PbD should take into account the underlying geometry of the demonstration manifold; they should be geometry-aware. Successful task execution does not solely depend on the control of individual task variables. By controlling variables individually, a task might fail when one is perturbed and the others do not respond. Task execution also relies on couplings among task variables. These couplings describe functional relations which are often called synergies. In order to understand what to imitate, PbD methods should be able to extract and encode synergies; they should be synergetic. In unstructured environments, it is unlikely that tasks are found in the same scenario twice. The circumstances under which a task is executed, the task context, are more likely to differ each time it is executed. Task context not only varies during task execution, it also varies while learning and recognizing tasks. To be effective, a robot should be able to learn, recognize and synthesize skills in a variety of familiar and unfamiliar contexts; this can be achieved when its skill representation is context-adaptive.

    THE RIEMANNIAN APPROACH. In this thesis, we present a skill representation that is geometry-aware, synergetic and context-adaptive. The presented method is probabilistic; it assumes that demonstrations are samples from an unknown probability distribution. This distribution is approximated using a Riemannian Gaussian Mixture Model (GMM). Instead of using the 'standard' Euclidean Gaussian, we rely on the Riemannian Gaussian, a distribution akin to the Gaussian but defined on a Riemannian manifold. A Riemannian manifold is a manifold, a curved space which is locally Euclidean, that provides a notion of distance. This notion is essential for statistical methods, as such methods rely on a distance measure. Examples of Riemannian manifolds in robotics are: the Euclidean space, which is used for spatial data, forces or torques; the spherical manifolds, which can be used for orientation data defined as unit quaternions; and Symmetric Positive Definite (SPD) manifolds, which can be used to represent stiffness and manipulability. The Riemannian Gaussian is intrinsically geometry-aware. Its definition is based on the geometry of the manifold, and therefore takes into account the manifold curvature. In robotics, the manifold structure is often known beforehand. In the case of PbD, it follows from the structure of the demonstration data. Like the Gaussian distribution, the Riemannian Gaussian is defined by a mean and covariance. The covariance describes the variance and correlation among the state variables. These can be interpreted as local functional couplings among state variables: synergies. This makes the Riemannian Gaussian synergetic. Furthermore, information encoded in multiple Riemannian Gaussians can be fused using the Riemannian product of Gaussians. This feature allows us to construct a probabilistic context-adaptive task representation.

    CONTRIBUTIONS. In particular, this thesis presents a generalization of existing methods of PbD, namely GMM-GMR and TP-GMM. This generalization involves the definition of the Maximum Likelihood Estimate (MLE), Gaussian conditioning and the Gaussian product for the Riemannian Gaussian, and the definition of Expectation Maximization (EM) and Gaussian Mixture Regression (GMR) for the Riemannian GMM. In this generalization, we contributed by proposing to use parallel transport for Gaussian conditioning. Furthermore, we presented a unified approach to solve the aforementioned operations using a Gauss-Newton algorithm. We demonstrated how synergies, encoded in a Riemannian Gaussian, can be transformed into synergetic control policies using standard methods for the Linear Quadratic Regulator (LQR). This is achieved by formulating the LQR problem in a (Euclidean) tangent space of the Riemannian manifold. Finally, we demonstrated how the context-adaptive Task-Parameterized Gaussian Mixture Model (TP-GMM) can be used for context inference, the ability to extract context from demonstration data of known tasks. Our approach is the first attempt at context inference within the TP-GMM framework. Although effective, we showed that it requires further improvements in terms of speed and reliability. The efficacy of the Riemannian approach is demonstrated in a variety of scenarios. In shared control, the Riemannian Gaussian is used to represent the control intentions of a human operator and an assistive system. Doing so, the properties of the Gaussian can be employed to mix their control intentions. This yields shared-control systems that continuously re-evaluate and assign control authority based on input confidence. The context-adaptive TP-GMM is demonstrated in a pick-and-place task with changing pick and place locations, a box-taping task with changing box sizes, and a trajectory tracking task typically found in industry.
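
    All of these operations follow the same lift-average-retract pattern built on the manifold's exponential and logarithm maps. As a loose illustration of the Riemannian MLE described above, the sketch below iteratively estimates the mean of data on the unit sphere (the manifold of unit quaternions); the function names, tolerances and toy data are our own, not the thesis's code.

        import numpy as np

        def sphere_log(mu, x):
            # Logarithmic map on the unit sphere: lift x into the tangent space at mu.
            d = x - np.dot(mu, x) * mu
            norm = np.linalg.norm(d)
            theta = np.arccos(np.clip(np.dot(mu, x), -1.0, 1.0))
            return theta * d / norm if norm > 1e-12 else np.zeros_like(x)

        def sphere_exp(mu, v):
            # Exponential map: retract the tangent vector v back onto the sphere.
            theta = np.linalg.norm(v)
            if theta < 1e-12:
                return mu
            return np.cos(theta) * mu + np.sin(theta) * v / theta

        def riemannian_mean(points, iters=20, tol=1e-9):
            # MLE of the Riemannian Gaussian mean: iterate lift, average, retract.
            mu = points[0] / np.linalg.norm(points[0])
            for _ in range(iters):
                v = np.mean([sphere_log(mu, x) for x in points], axis=0)
                mu = sphere_exp(mu, v)
                if np.linalg.norm(v) < tol:
                    break
            return mu

        # Toy data: noisy unit quaternions clustered around a reference orientation.
        rng = np.random.default_rng(0)
        ref = np.array([1.0, 0.0, 0.0, 0.0])
        data = np.array([q / np.linalg.norm(q)
                         for q in ref + 0.1 * rng.standard_normal((50, 4))])
        print(riemannian_mean(data))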

    A model-based approach to robot kinematics and control using discrete factor graphs with belief propagation

    Much recent research in robotics has shifted its focus from traditional, narrowly specified industrial tasks to investigations of new types of robots and alternative ways of controlling them. In this paper, we describe the development of a generic method based on factor graphs to model robot kinematics. We focused on the kinematics aspect of robot control because it provides a fast and systematic solution for the robot agent to move in a dynamic environment. We developed neurally inspired factor graph models that can be applied to two different robotic systems: a mobile platform and a robotic arm. We also demonstrated that we can extend the static model of the robotic arm into a dynamic model useful for imitating natural movements of a human hand. We tested our methods in a simulation environment as well as in scenarios involving real robots. The experimental results demonstrate the flexibility of our proposed methods in terms of remodeling and learning, which enabled the modeled robot to perform reliably during the execution of given tasks.
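
    To make the factor-graph approach concrete, here is a minimal discrete sketch in the spirit of the paper, though not its actual model: a planar two-link arm whose discretized joint variables are tied together by a single task-space factor, with sum-product message passing yielding a belief over each joint.

        import numpy as np

        # Joint variables of a planar 2-link arm, discretized into bins.
        bins = np.linspace(-np.pi, np.pi, 72)
        l1 = l2 = 1.0
        target = np.array([1.2, 0.8])

        # Task factor f(q1, q2): how close the end effector lands to the target.
        Q1, Q2 = np.meshgrid(bins, bins, indexing="ij")
        ex = l1 * np.cos(Q1) + l2 * np.cos(Q1 + Q2)   # forward kinematics
        ey = l1 * np.sin(Q1) + l2 * np.sin(Q1 + Q2)
        factor = np.exp(-((ex - target[0])**2 + (ey - target[1])**2) / (2 * 0.05**2))

        # Sum-product messages: marginalize the factor over the other variable
        # (incoming variable messages are uniform in this tiny graph).
        m_f_to_q1 = factor @ np.ones_like(bins)        # sum over q2 for each q1
        m_f_to_q2 = factor.T @ np.ones_like(bins)      # sum over q1 for each q2
        belief_q1 = m_f_to_q1 / m_f_to_q1.sum()
        belief_q2 = m_f_to_q2 / m_f_to_q2.sum()

        # Note: per-variable argmaxes need not form a consistent joint configuration;
        # a joint argmax over the factor would give the MAP pose.
        print("most likely q1:", bins[np.argmax(belief_q1)])
        print("most likely q2:", bins[np.argmax(belief_q2)])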

    Interactive Learning of Probabilistic Decision Making by Service Robots with Multiple Skill Domains

    This thesis makes a contribution to autonomous service robots, centered around two aspects. The first is modeling decision making in the face of incomplete information, built on top of a service robot's diverse basic skills. The second, based on such a model, is investigating how to transfer complex decision-making knowledge into the system. Interactive learning, naturally from both demonstrations by human teachers and from interaction with objects, yields decision-making models the robot can apply.

    Learning of Generalized Manipulation Strategies in Service Robotics

    This thesis makes a contribution to autonomous robotic manipulation. Its core is a novel constraint-based representation of manipulation tasks suitable for flexible online motion planning. Interactive learning from natural human demonstrations is combined with parallelized optimization to enable efficient learning of complex manipulation tasks from limited training data. Prior planning results are automatically encoded into the model to reduce planning time and solve the correspondence problem.

    On the role of gestures in human-robot interaction

    This thesis investigates the gestural interaction problem, and in particular the use of gestures for human-robot interaction. The lack of a clear problem statement and of a common terminology has resulted in a fragmented field of research where building upon prior work is rare. The research presented in this thesis therefore aims to lay the foundation that helps the community build a more homogeneous research field. The main contributions of this thesis are twofold: (i) a taxonomy for defining gestures; and (ii) an engineering definition of the gestural interaction problem. These contributions result in a schema for representing the existing literature in a more organic way, helping future researchers to identify existing technologies and applications, supported by an extensive literature review. Furthermore, the defined problem has been studied in two of its specializations: (i) direct control and (ii) teaching of a robotic manipulator, which led to the development of technological solutions for gesture sensing, detection and classification that may also be applied in other contexts.

    Encoding Multiple Sensor Data for Robotic Learning Skills from Multimodal Demonstration

    Learning a task such as pushing something, where constraints on both position and force have to be satisfied, is usually difficult for a collaborative robot. In this work, we propose a multimodal teaching-by-demonstration system that enables the robot to perform this kind of task. The basic idea is to transfer the adaptation of multimodal information from a human tutor to the robot by taking into account multiple sensor signals (i.e., motion trajectories, stiffness, and force profiles). The human tutor's stiffness is estimated from surface electromyography (EMG) signals of the limb, recorded during the demonstration phase. The force profiles in Cartesian space are collected from a force/torque sensor mounted between the robot endpoint and the tool. Subsequently, a hidden semi-Markov model (HSMM) is used to encode the multiple signals in a unified manner. The correlations between position and the other three control variables (i.e., velocity, stiffness and force) are encoded with separate HSMMs. Based on the estimated HSMM parameters, Gaussian mixture regression (GMR) is then used to generate the expected control variables. The learned variables are further mapped into a joint-space impedance controller through inverse kinematics for the reproduction of the task. Comparative tests conducted on a Baxter robot verify the effectiveness of our approach.
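
    The regression step can be illustrated compactly. The following sketch implements plain Gaussian mixture regression over a joint input-output model, conditioning each component on the query input and blending the results by responsibility; it uses a GMM rather than the paper's HSMM, and the toy parameters are invented for illustration.

        import numpy as np

        def gmr(x, priors, means, covs, in_dim):
            # Conditional expectation E[y | x] under a joint GMM over (x, y).
            h, y_cond = [], []
            for pi_k, mu, S in zip(priors, means, covs):
                mu_x, mu_y = mu[:in_dim], mu[in_dim:]
                Sxx, Syx = S[:in_dim, :in_dim], S[in_dim:, :in_dim]
                diff = x - mu_x
                # Responsibility of this component for the query input.
                w = pi_k * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff))
                w /= np.sqrt(np.linalg.det(2 * np.pi * Sxx))
                h.append(w)
                # Conditional mean of the output under this component.
                y_cond.append(mu_y + Syx @ np.linalg.solve(Sxx, diff))
            h = np.array(h) / sum(h)
            return sum(w * y for w, y in zip(h, y_cond))

        # Toy joint model: 1-D position -> 1-D force, two components.
        priors = [0.5, 0.5]
        means = [np.array([0.0, 1.0]), np.array([1.0, 3.0])]
        covs = [np.array([[0.10, 0.05], [0.05, 0.20]])] * 2
        print(gmr(np.array([0.5]), priors, means, covs, in_dim=1))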

    Automating iterative tasks with programming by demonstration

    Programming by demonstration is an end-user programming technique that allows people to create programs by showing the computer examples of what they want done. Users do not need specialised programming skills. Instead, they instruct the computer by demonstrating examples, much as they might show another person how to do the task. Programming by demonstration empowers users to create programs that perform tedious and time-consuming computer chores. However, it is not in widespread use; it is instead confined to research applications that end users never see. This makes it difficult to evaluate programming-by-demonstration tools and techniques. This thesis claims that domain-independent programming by demonstration can be made available in existing applications and used by end users to automate iterative tasks. It is supported by Familiar, a domain-independent, AppleScript-based programming-by-demonstration tool embodying standard machine learning algorithms. Familiar is designed for end users, so it works in the existing applications that they regularly use. The assertion that programming by demonstration can be made available in existing applications is validated by identifying the relevant platform requirements and a range of platforms that meet them. A detailed scrutiny of AppleScript highlights problems with the architecture and with many implementations, and yields a set of guidelines for designing applications that support programming by demonstration. An evaluation shows that end users are capable of using programming by demonstration to automate iterative tasks. However, the subjects tended to prefer other tools, choosing Familiar only when the alternatives were unsuitable or unavailable. Familiar's inferencing is evaluated on an extensive set of examples, highlighting the tasks it can perform and the functionality it requires.
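
    As a toy illustration of the kind of inference such a tool performs (not Familiar's actual algorithm), the sketch below extrapolates the next iteration of a demonstrated command by detecting numeric parameters that advance by a constant step between two examples.

        import re

        def predict_next(demo1, demo2):
            # Find numeric parameters and the constant step between the two demos.
            nums1 = [int(n) for n in re.findall(r"\d+", demo1)]
            nums2 = [int(n) for n in re.findall(r"\d+", demo2)]
            steps = [b - a for a, b in zip(nums1, nums2)]
            # Extrapolate each parameter one step further past the second demo.
            template = re.sub(r"\d+", "{}", demo2)
            return template.format(*[n + s for n, s in zip(nums2, steps)])

        print(predict_next("open row 1 of table, copy cell 2",
                           "open row 2 of table, copy cell 2"))
        # -> open row 3 of table, copy cell 2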

    Semantic Robot Programming for Taskable Goal-Directed Manipulation

    Autonomous robots have the potential to assist people to be more productive in factories, homes, hospitals, and similar environments. Unlike traditional industrial robots that are pre-programmed for particular tasks in controlled environments, modern autonomous robots should be able to perform arbitrary user-desired tasks. Thus, it is beneficial to provide pathways to enable users to program an arbitrary robot to perform an arbitrary task in an arbitrary world. Advances in robot Programming by Demonstration (PbD) have made it possible for end-users to program robot behavior for performing desired tasks through demonstrations. However, it still remains a challenge for users to program robot behavior in a generalizable, performant, scalable, and intuitive manner. In this dissertation, we address the problem of robot programming by demonstration in a declarative manner by introducing the concept of Semantic Robot Programming (SRP). In SRP, we focus on addressing the following challenges for robot PbD: 1) generalization across robots, tasks, and worlds, 2) robustness under partial observations of cluttered scenes, 3) efficiency in task performance as the workspace scales up, and 4) feasible and intuitive modalities of interaction for end-users to demonstrate tasks to robots.

    Through SRP, our objective is to enable an end-user to intuitively program a mobile manipulator by providing a workspace demonstration of the desired goal scene. We use a scene graph to semantically represent conditions on the current and goal states of the world. To estimate the scene graph given raw sensor observations, we bring together discriminative object detection and generative state estimation for the inference of object classes and poses. The proposed scene estimation method outperformed the state of the art in cluttered scenes. With SRP, we successfully enabled users to program a Fetch robot to set up a kitchen tray on a cluttered tabletop in 10 different start and goal settings.

    To scale SRP up from the tabletop, we propose Contextual-Temporal Mapping (CT-Map) for semantic mapping of large-scale scenes given streaming sensor observations. We model the semantic mapping problem via a Conditional Random Field (CRF), which accounts for spatial dependencies between objects. Over time, object poses and inter-object spatial relations can vary due to human activities. To deal with such dynamics, CT-Map maintains the belief over object classes and poses across an observed environment. We present CT-Map semantically mapping cluttered rooms with robustness to perceptual ambiguities, demonstrating higher accuracy on object detection and 6 DoF pose estimation compared to a state-of-the-art neural-network-based object detector and commonly adopted 3D registration methods.

    Towards SRP at the building scale, we explore notions of Generalized Object Permanence (GOP) for robots to search for objects efficiently. We state the GOP problem as the prediction of where an object can be located when it is not being directly observed by a robot. We model object permanence via a factor graph inference model, with factors representing long-term memory, short-term memory, and common sense knowledge over inter-object spatial relations. We propose the Semantic Linking Maps (SLiM) model to maintain the belief over object locations while accounting for object permanence through a CRF. Based on the belief maintained by SLiM, we present a hybrid object search strategy that enables the Fetch robot to actively search for objects on a large scale, with a higher search success rate and less search time compared to state-of-the-art search methods.

    PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155073/1/zengzhen_1.pd
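
    The scene-graph representation at the heart of SRP can be sketched as follows; the data layout and relation names are our own illustration, not the dissertation's code. A goal is expressed as a set of inter-object relations, so checking whether the estimated scene satisfies the goal reduces to a subset test.

        from dataclasses import dataclass, field

        @dataclass
        class SceneGraph:
            objects: dict = field(default_factory=dict)   # name -> estimated 6-DoF pose
            relations: set = field(default_factory=set)   # (subject, relation, object)

            def add(self, name, pose):
                self.objects[name] = pose

            def relate(self, subj, rel, obj):
                self.relations.add((subj, rel, obj))

            def satisfies(self, goal):
                # The goal holds if every goal relation is present in this scene.
                return goal.relations <= self.relations

        current = SceneGraph()
        current.add("tray", (0.5, 0.0, 0.75, 0.0, 0.0, 0.0))
        current.add("mug", (0.2, 0.3, 0.75, 0.0, 0.0, 0.0))
        current.relate("mug", "on", "table")

        goal = SceneGraph()
        goal.relate("mug", "on", "tray")

        print(current.satisfies(goal))   # False: the mug still has to be moved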

    An Analysis-Driven Rapid Design Process for Cyber-Physical Systems
