
    Robot skill learning through human demonstration and interaction

    Nowadays, robots are increasingly involved in more complex and less structured tasks, so new approaches to fast robot skill acquisition are highly desirable. This research aims to develop an overall framework for robot skill learning through human demonstration and interaction. Through low-level demonstration and interaction with humans, the robot can learn basic skills, which are treated as primitive actions. In high-level learning, complex skills demonstrated by the human are automatically translated into skill scripts that the robot executes. This dissertation summarizes my major research activities in robot skill learning. First, a framework for Programming by Demonstration (PbD) with reinforcement learning for human-robot collaborative manipulation tasks is described. With this framework, the robot can learn low-level skills such as collaborating with a human to lift a table successfully and efficiently. Second, to develop a high-level skill acquisition system, we explore the use of a 3D sensor to recognize human actions. A Kinect-based action recognition system is implemented which considers both object/action dependencies and sequential constraints. Third, we extend the action recognition framework by fusing information from multimodal sensors, which enables recognition of fine assembly actions. Fourth, a Portable Assembly Demonstration (PAD) system is built which automatically generates skill scripts from human demonstration. Each skill script includes the object type, the tool, the action used, and the assembly state. Finally, the generated skill scripts are executed by a dual-arm robot. The proposed framework was experimentally evaluated.
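
    As a concrete illustration of the skill-script representation described above, here is a minimal sketch of what one step of such a script could look like; the field names and example values are hypothetical, not the dissertation's actual schema.

```python
# Hypothetical skill-script step: the four elements the PAD system extracts
# (object type, tool, action, assembly state), in an illustrative schema.
from dataclasses import dataclass

@dataclass
class SkillStep:
    obj: str             # type of the object being manipulated
    tool: str            # tool used for the action ("none" if bare-handed)
    action: str          # primitive action label, e.g. "pick" or "screw"
    assembly_state: str  # state of the assembly after this step

# An example script a dual-arm robot could execute step by step:
script = [
    SkillStep("bolt", "none", "pick", "bolt_in_hand"),
    SkillStep("bolt", "screwdriver", "screw", "bolt_fastened"),
]
```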

    Activity Recognition With Machine Learning in Manual Grinding

    Grasping for the Task: Human Principles for Robot Hands

    The significant advances made in the design and construction of anthropomorphic robot hands endow them with prehensile abilities approaching those of humans. However, using these powerful hands with the same level of expertise that humans display is a major challenge for robots. Traditional approaches use fingertip (precision) or enveloping (power) methods to generate the best force-closure grasps; however, this ignores the variety of prehensile postures available to the hand and the larger context of arm action. This thesis explores a paradigm for grasp formation based on generating oppositional pressure within the hand, which has been proposed as a functional basis for grasping in humans (MacKenzie and Iberall, 1994). A set of opposition primitives encapsulates the hand's ability to generate oppositional forces. The oppositional intention encoded in a primitive serves as a guide to match the hand to the object, quantify its functional ability, and relate this to the arm. In this thesis we leverage the properties of opposition primitives both to interpret grasps formed by humans and to construct grasps for a robot considering the larger context of arm action. In the first part of the thesis we examine the hypothesis that hand representation schemes based on opposition are correlated with hand function. We propose hand parameters describing oppositional intention and compare these with commonly used methods such as joint angles, joint synergies, and shape features. We expect that opposition-based parameterizations, which take an interaction-based perspective of a grasp, are able to discriminate between grasps that are similar in shape but different in functional intent. We test this hypothesis using qualitative assessments of the precision and power capabilities found in existing grasp taxonomies. The next part of the thesis presents a general method to recognize the oppositional intention manifested in human grasp demonstrations. A data glove instrumented with tactile sensors provides the raw information regarding hand configuration and interaction force. For a grasp combining several cooperating oppositional intentions, hand surfaces can be simultaneously involved in multiple oppositional roles. We characterize the low-level interactions between different surfaces of the hand based on the captured interaction forces and the reconstructed hand surface geometry, and subsequently use this to separate out and prioritize the multiple, possibly overlapping oppositional intentions present in the demonstrated grasp. We evaluate our method on several human subjects across a wide range of hand functions. The last part of the thesis applies the properties encoded in opposition primitives to optimize task performance of the arm, for tasks where the arm assumes the dominant role. For these tasks, choosing the strongest power grasp available (in a force-closure sense) may constrain the arm to a sub-optimal configuration. Weaker grasp components impose fewer constraints on the hand and can therefore explore a wider region of the object-relative pose space. We take advantage of this to find good arm configurations from a task perspective. The final hand-arm configuration is obtained by trading off overall robustness in the grasp against the ability of the arm to perform the task. We validate our approach, using the tasks of cutting, hammering, screw-driving, and opening a bottle cap, for both human and robotic hand-arm systems.
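
    As an interpretive sketch of the opposition idea (not the method implemented in the thesis), the snippet below scores how strongly two hand-surface patches oppose each other given their reconstructed geometry and tactile force readings; the scoring rule, names, and numbers are all assumptions.

```python
# Illustrative opposition score between two hand-surface patches: patches
# oppose when their contact normals point toward each other along the axis
# joining them, and the usable oppositional force is limited by the weaker
# contact. An assumed reading of opposition, not the thesis implementation.
import numpy as np

def opposition_score(p1, n1, f1, p2, n2, f2):
    """p*: patch centers, n*: outward unit normals, f*: force magnitudes."""
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)   # candidate opposition axis
    alignment = max(0.0, np.dot(n1, axis)) * max(0.0, np.dot(n2, -axis))
    return alignment * min(f1, f2)

# Thumb and index pads facing each other score high; patches on the same
# side of an object score near zero.
thumb = (np.array([0.0, 0.00, 0.0]), np.array([0.0,  1.0, 0.0]), 2.0)
index = (np.array([0.0, 0.05, 0.0]), np.array([0.0, -1.0, 0.0]), 1.5)
print(opposition_score(*thumb, *index))          # -> 1.5 (strong opposition)
```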

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots seamlessly interact with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many problems arise in interactive robotic applications that require a higher, contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position, and orientation of objects, the viewpoint of the observer, etc. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model, and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations such as the size, position, and orientation of objects in the environment using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations using statistical subspace clustering methods and exploiting the coordination patterns in the latent space. To adapt the models on the fly and/or teach new manipulation skills online with streaming data, we formulate a non-parametric, scalable online sequence clustering algorithm with Bayesian non-parametric mixture models to avoid the model selection problem while ensuring tractability under small-variance asymptotics. We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from the demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on one hand and provides assistance in performing remote manipulation tasks under varying environmental situations on the other. The assistance is formulated under time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place of an object with obstacle avoidance, hot-stabbing (a specialized underwater task akin to a peg-in-hole task), screwdriver target snapping, and tracking a carabiner in as few as 4-8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while catering for the environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays.
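
    To make the pipeline concrete, here is a minimal sketch (not the thesis code) of its tracking stage: a step-wise reference built from a sequence of Gaussian state centers, as a hidden semi-Markov model would produce, is followed with a finite-horizon discrete-time linear quadratic tracking controller on double-integrator dynamics. The dynamics, horizon, gains, and state centers are illustrative assumptions.

```python
# Minimal LQT sketch: track a step-wise sequence of state centers (a stand-in
# for an HSMM's generated state sequence) with a finite-horizon discrete-time
# linear quadratic tracking controller. All parameters are illustrative.
import numpy as np

dt, T, dim = 0.01, 300, 2                       # time step, horizon, workspace dim
A = np.block([[np.eye(dim), dt * np.eye(dim)],
              [np.zeros((dim, dim)), np.eye(dim)]])  # state x = [pos, vel]
B = np.vstack([0.5 * dt**2 * np.eye(dim), dt * np.eye(dim)])

# Step-wise reference: each (assumed) state center held for a fixed duration.
centers = np.array([[0.0, 0.0], [0.3, 0.5], [0.6, 0.2]])
ref = np.repeat(centers, T // len(centers), axis=0)[:T]
mu = np.hstack([ref, np.zeros_like(ref)])       # desired velocity is zero

Q = np.diag([1e3, 1e3, 1.0, 1.0])               # track position, damp velocity
R = 1e-2 * np.eye(dim)                          # control effort penalty

# Backward Riccati recursion with a linear term d for the tracking reference.
P = [None] * (T + 1); d = [None] * (T + 1)
P[T], d[T] = Q, -Q @ mu[T - 1]
for t in range(T - 1, -1, -1):
    K = np.linalg.solve(R + B.T @ P[t + 1] @ B, B.T)
    Acl = A - B @ K @ P[t + 1] @ A              # closed-loop dynamics
    P[t] = Q + A.T @ P[t + 1] @ Acl
    d[t] = Acl.T @ d[t + 1] - Q @ mu[t]

# Forward pass: feedback on the state plus feedforward from the reference.
x = np.zeros(2 * dim)
for t in range(T):
    K = np.linalg.solve(R + B.T @ P[t + 1] @ B, B.T)
    u = -K @ (P[t + 1] @ A @ x + d[t + 1])
    x = A @ x + B @ u                           # converges toward each center
```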

    Smart Technologies for Precision Assembly

    This open access book constitutes the refereed post-conference proceedings of the 9th IFIP WG 5.5 International Precision Assembly Seminar, IPAS 2020, held virtually in December 2020. The 16 revised full papers and 10 revised short papers presented together with 1 keynote paper were carefully reviewed and selected from numerous submissions. The papers address topics such as assembly design and planning; assembly operations; assembly cells and systems; human centred assembly; and assistance methods in assembly.