28 research outputs found

    Modelling and Interactional Control of a Multi-fingered Robotic Hand for Grasping and Manipulation.

    PhD
    In this thesis, the synthesis of a grasping and manipulation controller for the Barrett hand, an archetypal example of a multi-fingered robotic hand, is investigated in detail. This synthesis involves not only the dynamic modelling of the robotic hand but also the control of the joint and workspace dynamics, as well as the interaction of the hand with the object it is grasping and the environment it is operating in. Grasping and manipulation of an object by a robotic hand is always challenging due to the uncertainties associated with the non-linearities of the robot dynamics, the unknown location and stiffness parameters of objects which are not structured in any sense, and the unknown contact mechanics during the interaction between the hand’s fingers and the object. To address these challenges, the fundamental task is to establish the mathematical model of the robot hand, model the body dynamics of the object, and establish the contact mechanics between the hand and the object. A Lagrangian-based mathematical model of the Barrett hand is developed for controller implementation. A physical SimMechanics-based model of the Barrett hand is also developed in the MATLAB/Simulink environment. A computed torque controller and an adaptive sliding mode controller are designed for the hand, and their performance is assessed both in the joint space and in the workspace. Stability analysis of the controllers is carried out before developing the control laws. Higher-order sliding mode controllers are developed for position control under the assumption that uncertainties are present; these controllers also enhance performance by reducing chattering of the control torques applied to the robot hand. A contact model is developed for the Barrett hand as its fingers grasp the object in the operating environment. The contact forces during the simulated interaction of the fingers with the object were monitored for objects with different stiffness values.
Position- and force-based impedance controllers are developed to optimise the contact force. To deal with the unknown stiffness of the environment, adaptation is implemented by identifying the impedance. An evolutionary algorithm is also used to estimate the desired impedance parameters of the dynamics of the coupled robot and compliant object. A Newton-Euler-based model is developed for the rigid object body. A grasp map and a hand Jacobian are defined for the Barrett hand grasping an object. A fixed contact model with friction is considered for grasping and manipulation control. The compliant dynamics of the coupled Barrett hand and object are developed, and the control problem is defined in terms of the contact force. An adaptive control framework is developed and implemented for different grasps and manipulation trajectories of the Barrett hand. The adaptive controller is developed in two stages: first, the unknown robot and object dynamics are estimated; second, the contact force is computed from the estimated dynamics. The stability of the controllers is ensured by applying Lyapunov’s direct method.
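The computed torque scheme mentioned in the abstract can be sketched for a single joint. This is a minimal illustration with assumed pendulum parameters, not the thesis's Barrett hand model: the controller cancels the known dynamics and imposes a PD law on the tracking error.

```python
import numpy as np

# Computed-torque control for one pendulum-like joint (a minimal sketch;
# the thesis applies the same structure to the full multi-fingered hand).
# Assumed plant: I*qdd + b*qd + m*g*l*sin(q) = tau
m, l, b, g = 0.5, 0.2, 0.05, 9.81   # hypothetical link parameters
I = m * l**2

def computed_torque(q, qd, q_des, kp=100.0, kd=20.0):
    """tau = I*qdd_ref + C + G, with qdd_ref from a PD law on the error."""
    qdd_ref = kp * (q_des - q) - kd * qd          # desired acceleration
    return I * qdd_ref + b * qd + m * g * l * np.sin(q)

def simulate(q0, q_des, dt=1e-3, steps=3000):
    q, qd = q0, 0.0
    for _ in range(steps):
        tau = computed_torque(q, qd, q_des)
        qdd = (tau - b * qd - m * g * l * np.sin(q)) / I  # plant dynamics
        qd += qdd * dt
        q += qd * dt
    return q

final = simulate(q0=0.0, q_des=1.0)   # converges to the 1 rad setpoint
```

With exact model cancellation the closed loop is linear and critically damped here (kd^2 = 4*kp), which is why the joint settles without overshoot; model mismatch is precisely what motivates the sliding mode and adaptive variants the abstract describes.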

    A Study on the Flexibility and Versatility of Multi-fingered Robotic Hand Systems

    Waseda University degree number: Shin 7683. Waseda University.

    Grasp Stability Analysis with Passive Reactions

    Despite decades of research, robotic manipulation systems outside of highly-structured industrial applications are still far from ubiquitous. Perhaps particularly curious is the fact that there appears to be a large divide between the theoretical grasp modeling literature and the practical manipulation community. Specifically, it appears that the most successful approaches to tasks such as pick-and-place or grasping in clutter are those that have opted for simple grippers or even suction systems instead of dexterous multi-fingered platforms. We argue that the reason for the success of these simple manipulation systems is what we call passive stability: passive phenomena due to non-backdrivable joints or underactuation allow for robust grasping without complex sensor feedback or controller design. While these effects are being leveraged to great effect, it appears the practical manipulation community lacks the tools to analyze them. In fact, we argue that the traditional grasp modeling theory assumes a complexity that most robotic hands do not possess and is therefore of limited applicability to the robotic hands commonly used today. We discuss these limitations of the existing grasp modeling literature and set out to develop our own tools for the analysis of passive effects in robotic grasping. We show that problems of this kind are difficult to solve due to the non-convexity of the Maximum Dissipation Principle (MDP), which is part of the Coulomb friction law. We show that for planar grasps the MDP can be decomposed into a number of piecewise convex problems, which can be solved efficiently.
We show that the number of these piecewise convex problems is quadratic in the number of contacts and develop a polynomial-time algorithm for their enumeration. Thus, we present the first polynomial-runtime algorithm for the determination of passive stability of planar grasps. For the spatial case we present the first grasp model that captures passive effects due to non-backdrivable actuators and underactuation. Formulating the grasp model as a Mixed Integer Program, we illustrate that a consequence of omitting the maximum dissipation principle from this formulation is the introduction of solutions that violate energy conservation laws and are thus unphysical.
We propose a physically motivated iterative scheme to mitigate this effect and thus provide the first algorithm that allows for the determination of passive stability for spatial grasps with both fully actuated and underactuated robotic hands. We verify the accuracy of our predictions with experimental data and illustrate practical applications of our algorithm. We build upon this work and describe a convex relaxation of the Coulomb friction law and a successive hierarchical tightening approach that allows us to find solutions to the exact problem, including the maximum dissipation principle. It is the first grasp stability method that allows for the efficient solution of the passive stability problem to arbitrary accuracy. The generality of our grasp model allows us to solve a wide variety of problems, such as the computation of optimal actuator commands. This makes our framework a valuable tool for practical manipulation applications. Our work is relevant beyond robotic manipulation, as it applies to the stability of any assembly of rigid bodies with frictional contacts, unilateral constraints, and externally applied wrenches. Finally, we argue that with the advent of data-driven methods, as well as the emergence of a new generation of highly sensorized hands, there are opportunities for the application of the traditional grasp modeling theory to fields such as robotic in-hand manipulation through model-free reinforcement learning. We present a method that applies traditional grasp models to maintain quasi-static stability throughout a nominally model-free reinforcement learning task. We suggest that such methods can potentially reduce the sample complexity of reinforcement learning for in-hand manipulation.
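The classical convex grasp model that the thesis contrasts with its MDP-aware analysis can be sketched as a linear feasibility problem: each planar contact force is a nonnegative combination of its two friction-cone edges, and equilibrium holds if those forces can balance the external wrench. This is a hedged illustration of the standard model only; it omits the non-convex maximum dissipation principle that the thesis's algorithms handle.

```python
import numpy as np
from scipy.optimize import linprog

# Equilibrium check for a planar grasp under the classical convex friction
# model (a sketch of the traditional theory, NOT the thesis's passive
# stability algorithm, which additionally enforces maximum dissipation).

def grasp_equilibrium(points, normals, w_ext, mu=0.5):
    """True if friction-cone forces at `points` can balance wrench w_ext."""
    cols = []
    for p, n in zip(points, normals):
        n = np.asarray(n, float) / np.linalg.norm(n)
        t = np.array([-n[1], n[0]])               # in-plane tangent
        for edge in (n + mu * t, n - mu * t):     # friction-cone edges
            fx, fy = edge
            tau = p[0] * fy - p[1] * fx           # planar torque about origin
            cols.append([fx, fy, tau])
    A = np.array(cols).T                          # wrench matrix, 3 x 2n
    # Feasibility LP: A @ alpha = -w_ext with alpha >= 0
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=-np.asarray(w_ext),
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res.success

# Two antipodal contacts on an object resisting gravity: feasible.
ok = grasp_equilibrium(points=[(-1.0, 0.0), (1.0, 0.0)],
                       normals=[(1.0, 0.0), (-1.0, 0.0)],
                       w_ext=(0.0, -1.0, 0.0), mu=0.5)
```

Note that this formulation treats contact forces as free optimization variables; the thesis's point is precisely that real hands with non-backdrivable or underactuated joints cannot realize arbitrary force distributions, which this classical check ignores.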

    Incorporating Human Expertise in Robot Motion Learning and Synthesis

    With the exponential growth of robotics and the fast development of robots' advanced cognitive and motor capabilities, one can start to envision humans and robots jointly working together in unstructured environments. Yet, for that to be possible, robots need to be programmed for such complex scenarios, which demands significant domain knowledge in robotics and control. One viable approach to enabling robots to acquire skills in a more flexible and efficient way is to give them the capability to autonomously learn from human demonstrations and expertise through interaction. Such a framework helps to make the creation of skills in robots more social and less demanding in terms of programming and robotics expertise. Yet, current imitation learning approaches suffer from significant limitations, particularly regarding the flexibility and efficiency of representing, learning, and reasoning about motor tasks. This thesis addresses this problem by exploring cost-function-based approaches to learning robot motion control, perception, and the interplay between them. To begin with, the thesis proposes an efficient probabilistic algorithm to learn an impedance controller to accommodate motion contacts. The learning algorithm is able to incorporate important domain constraints, e.g., about force representation and decomposition, which are non-trivial to handle with standard techniques. Compliant handwriting motions are developed on an articulated robot arm and a multi-fingered hand. This work provides a flexible approach to learning robot motion that conforms to both task and domain constraints. Furthermore, the thesis also contributes techniques to learn from and reason about demonstrations with partial observability. The proposed approach combines inverse optimal control and ensemble methods, yielding tractable learning of cost functions with latent variables. Two task priors are further incorporated.
The first, a human kinematics prior, results in a model which synthesizes rich and believable dynamical handwriting. The second prior enforces dynamics on the latent variable and facilitates real-time human intention recognition and online motion adaptation in collaborative robot tasks. Finally, the thesis establishes a link between control and perception modalities. This work offers an analysis that bridges inverse optimal control and deep generative models, as well as a novel algorithm that learns cost features and embeds the modal-coupling prior. This work contributes an end-to-end system for synthesizing arm joint motion from letter image pixels. The results highlight its robustness against noisy and out-of-sample sensory inputs. Overall, the proposed approach endows robots with the potential to reason about diverse unstructured data, which is nowadays pervasive but hard to process for current imitation learning approaches.
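The core idea of fitting an impedance controller to demonstration data can be illustrated very simply: if forces, position errors, and velocity errors are recorded, stiffness and damping fall out of a regression. This sketch uses plain least squares on synthetic data; the thesis's actual algorithm is probabilistic and enforces domain constraints on force decomposition, which this omits.

```python
import numpy as np

# Least-squares identification of impedance parameters from (synthetic)
# demonstration data. Assumed model: f = K*(x_des - x) + D*(xd_des - xd).
# All numbers here are illustrative, not from the thesis.
rng = np.random.default_rng(0)
K_true, D_true = 80.0, 6.0

e = rng.uniform(-0.05, 0.05, 200)        # demonstrated position errors (m)
ed = rng.uniform(-0.5, 0.5, 200)         # demonstrated velocity errors (m/s)
f = K_true * e + D_true * ed + rng.normal(0.0, 0.01, 200)  # measured forces

Phi = np.column_stack([e, ed])           # regressor matrix [e, e_dot]
K_hat, D_hat = np.linalg.lstsq(Phi, f, rcond=None)[0]
```

A probabilistic treatment (as in the thesis) would additionally give posterior uncertainty over (K, D) and allow constraints such as nonnegative stiffness to be imposed, which unconstrained least squares cannot guarantee.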

    An adaptive framework for changing-contact robot manipulation

    Many robot manipulation tasks require the robot to make and break contact with other objects in the environment. The interaction dynamics of such tasks vary markedly before and after contact. They are also strongly influenced by the nature and physical properties of the objects involved, i.e., by factors such as type of contact, surface friction, and applied force. Many industrial assembly tasks and human manipulation tasks, e.g., peg insertion, stacking, and screwing, are instances of such `changing-contact' manipulation tasks. In such tasks, the interaction dynamics is discontinuous when the robot makes or breaks contact but smooth at other times, making it a piecewise continuous dynamical system. The discontinuities experienced by a robot during such tasks can be harmful to the robot and/or object. Designing a framework for smooth online control of changing-contact manipulation tasks is a challenging open problem. To complete any manipulation task without data-intensive pre-training, the robot has to plan a motion trajectory, and execute this trajectory accurately and smoothly. Many methods have been developed for the former part of the problem in the form of planners that compute a suitable trajectory while considering relevant motion constraints and environmental obstacles. This thesis focuses on the relatively less-explored latter (i.e., plan execution) part of the problem in the context of changing-contact manipulation tasks. It does so by developing an adaptive, task-space, hybrid control framework that enables efficient, smooth, and accurate following of any given motion trajectory in the presence of piecewise continuous interaction dynamics. The framework makes three key contributions. The first contribution of this thesis addresses the problem of controlling a robot performing continuous-contact tasks in the presence of smoothly-changing environment dynamics. 
Specifically, we provide a task-space control framework that incrementally models and predicts the end-effector wrenches, and uses the discrepancies between the predicted and measured values to revise the predictive (forward) model and to achieve smooth trajectory tracking by adapting the impedance parameters of a force-motion controller. The second contribution of the thesis expands our framework to handle interaction dynamics that can be discontinuous due to the making and breaking of contacts or due to discrete changes in the environment. We formulate the piecewise continuous interaction dynamics of the robot as a hybrid dynamical system with previously unknown discrete dynamic modes. We propose a corresponding hybrid framework that incrementally identifies new or existing modes, and adapts the parameters of the dynamics models within each such mode to provide smooth and accurate tracking of the target motion trajectory. The third contribution of the thesis focuses on handling contact changes and reducing discontinuities in the interaction dynamics during mode transitions. Specifically, we develop a framework with a contact anticipation model that incrementally and probabilistically updates its estimates of when contact changes occur due to making or breaking contact, or changes in the properties of objects. The estimated contact positions are used to guide a transition to (and from) special `transition phase' controllers whose parameters are adapted online to minimise discontinuities (i.e., to minimise spikes in force, jerk, etc.) in the regions of anticipated contacts. The stated contributions and each part of the framework are grounded and evaluated in simulation and on a physical robot performing illustrative changing-contact manipulation tasks on a tabletop. We experimentally compare our framework with some baselines to demonstrate the importance of building an incremental, adaptive framework for such tasks. 
In particular, we compare our controller for continuous-contact tasks with representative baselines in the adaptive control literature, and demonstrate the benefits of an incrementally-updated predictive (forward) model. We also experimentally evaluate the ability of our hybrid framework to accurately identify and model the dynamics of discrete dynamic (contact) modes, and justify the need for online updates by comparing against the performance of state-of-the-art offline methods for hybrid dynamical systems. Finally, we evaluate the ability of our framework to accurately estimate contact positions and minimise discontinuities in the interaction dynamics in motion trajectories involving multiple contact changes.
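The discrepancy-driven adaptation loop described above can be sketched as follows. This is an illustrative update rule with assumed gains and a scalar wrench, not the thesis's actual adaptation law: the forward model is a simple exponential filter, and the stiffness is softened in proportion to the prediction error so the robot becomes compliant where its model is wrong (e.g., at an unexpected contact).

```python
import numpy as np

# Sketch of discrepancy-driven impedance adaptation (hypothetical gains;
# the thesis's forward-model class and adaptation law are not shown here).

class AdaptiveImpedance:
    def __init__(self, k=300.0, k_min=50.0, k_max=500.0, eta=40.0):
        self.k, self.k_min, self.k_max, self.eta = k, k_min, k_max, eta
        self.w_hat = 0.0                  # incremental wrench prediction

    def step(self, w_meas, alpha=0.1):
        err = abs(w_meas - self.w_hat)    # predicted-vs-measured discrepancy
        self.w_hat += alpha * (w_meas - self.w_hat)   # revise forward model
        # Soften stiffness where the model is wrong (unexpected contact):
        self.k = np.clip(self.k - self.eta * err, self.k_min, self.k_max)
        return self.k

ctrl = AdaptiveImpedance()
for w in [0.0, 0.0, 5.0, 5.0, 5.0]:       # a sudden contact wrench appears
    k = ctrl.step(w)                      # stiffness drops toward k_min
```

A full implementation would also let the stiffness recover once the forward model has absorbed the new contact mode, and would run per task-space axis rather than on a scalar.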

    Imitation Learning of Motion Coordination in Robots: A Dynamical System Approach

    The ease with which humans coordinate all their limbs is fascinating. Such simplicity is the result of a complex process of motor coordination, i.e. the ability to resolve the biomechanical redundancy in an efficient and repeatable manner. Coordination enables a wide variety of everyday human activities, from filling a glass with water to pair figure skating. Therefore, it is highly desirable to endow robots with similar skills. Despite the apparent diversity of coordinated motions, all of them share a crucial similarity: these motions are dictated by underlying constraints. The constraints shape the formation of the coordination patterns between the different degrees of freedom. Coordination constraints may take a spatio-temporal form; for instance, during bimanual object reaching or while catching a ball on the fly. They may also relate to the dynamics of the task; for instance, when one applies a specific force profile to carry a load. In this thesis, we develop a framework for teaching coordination skills to robots. Coordination may take different forms; here, we focus on teaching a robot intra-limb and bimanual coordination, as well as coordination with a human during physical collaborative tasks. We use tools from well-established domains of Bayesian semiparametric learning (Gaussian Mixture Models and Regression, Hidden Markov Models), nonlinear dynamics, and adaptive control. We take a biologically inspired approach to robot control. Specifically, we adopt an imitation learning perspective to skill transfer, which offers a seamless and intuitive way of capturing the constraints contained in natural human movements. As the robot is taught from motion data provided by a human teacher, we exploit evidence from human motor control that the temporal evolution of human motions may be described by dynamical systems. Throughout this thesis, we demonstrate that the dynamical system view on movement formation facilitates coordination control in robots. 
We explain how our framework for teaching coordination to a robot is built up, starting from intra-limb coordination and control, moving to bimanual coordination, and finally to physical interaction with a human. The dissertation opens with the discussion of learning discrete task-level coordination patterns, such as spatio-temporal constraints emerging between the two arms in bimanual manipulation tasks. The encoding of bimanual constraints occurs at the task level and proceeds through a discretization of the task as sequences of bimanual constraints. Once the constraints are learned, the robot utilizes them to couple the two dynamical systems that generate kinematic trajectories for the hands. Explicit coupling of the dynamical systems ensures accurate reproduction of the learned constraints, and proves to be crucial for successful accomplishment of the task. In the second part of this thesis, we consider learning one-arm control policies. We present an approach to extracting non-linear autonomous dynamical systems from kinematic data of arbitrary point-to-point motions. The proposed method aims to tackle the fundamental questions of learning robot coordination: (i) how to infer a motion representation that captures a multivariate coordination pattern between degrees of freedom and that generalizes this pattern to unseen contexts; (ii) whether the policy learned directly from demonstrations can provide robustness against spatial and temporal perturbations. Finally, we demonstrate that the developed dynamical system approach to coordination may go beyond kinematic motion learning. We consider physical interactions between a robot and a human in situations where they jointly perform manipulation tasks; in particular, the problem of collaborative carrying and positioning of a load. We extend the approach proposed in the second part of this thesis to incorporate haptic information into the learning process. 
As a result, the robot adapts its kinematic motion plan according to human intentions expressed through the haptic signals. Even after the robot has learned the task model, the human still remains a complex contact environment. To ensure robustness of the robot behavior in the face of the variability inherent to human movements, we wrap the learned task model in an adaptive impedance controller with automatic gain tuning. The techniques developed in this thesis have been applied to enable learning of unimanual and bimanual manipulation tasks on the robotics platforms HOAP-3, KATANA, and iCub, as well as to endow a pair of simulated robots with the ability to perform a manipulation task in physical collaboration.
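The robustness property that motivates the dynamical-system view above can be shown with a minimal autonomous DS. This is an illustrative linear system, not the thesis's learned GMM-based model: any negative-definite matrix A makes the target a globally stable attractor, so the generated trajectory converges from any start point and recovers automatically after spatial perturbations.

```python
import numpy as np

# Minimal autonomous dynamical-system motion generator: xdot = A (x - x*).
# With A negative definite, x* is a global attractor, so the motion plan
# is inherently robust to spatial perturbations (illustrative sketch only).

def ds_rollout(x0, target, A, dt=0.01, steps=1000):
    x = np.asarray(x0, float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * A @ (x - target)    # forward-Euler integration
        traj.append(x.copy())
    return np.array(traj)

A = np.array([[-2.0, 0.5],
              [-0.5, -2.0]])             # negative definite -> stable
traj = ds_rollout(x0=[1.0, -1.0], target=np.array([0.0, 0.0]), A=A)
```

Learning approaches in this family (e.g., the GMM/GMR-based systems the abstract mentions) replace the fixed linear field with a nonlinear one estimated from demonstrations while preserving exactly this stability guarantee.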