
    Exploitation of environmental constraints in human and robotic grasping

    This publication is freely accessible with the permission of the rights owner under an Alliance licence and a national licence (funded by the DFG, German Research Foundation).
    We investigate the premise that robust grasping performance is enabled by exploiting constraints present in the environment. These constraints, leveraged through motion in contact, counteract uncertainty in the state variables relevant to grasp success. Given this premise, grasping becomes a process of successive exploitation of environmental constraints until a successful grasp has been established. We present support for this view, found through the analysis of human grasp behavior and by demonstrating robust robotic grasping based on constraint-exploiting grasp strategies. Furthermore, we show that it is possible to design robotic hands with inherent capabilities for the exploitation of environmental constraints.

    Grasp planning with a soft reconfigurable gripper exploiting embedded and environmental constraints

    Grasping in unstructured environments requires highly adaptable and versatile hands, together with strategies to exploit their features and obtain robust grasps. This paper presents a method to grasp objects using a novel reconfigurable soft gripper with embodied constraints, the Soft ScoopGripper (SSG). The considered grasp strategy, called scoop grasp, exploits the SSG's features to perform robust grasps. The embodied constraint, i.e., a scoop, is used to slide between the object and a flat surface (e.g., a table or a wall) in contact with it. The fingers are first configured according to the object geometry and then used to establish reliable contact with it. Given the object to be grasped, the proposed grasp planner chooses the best configuration of the fingers and the scoop based on the object point cloud, and then suitably aligns the gripper to it.
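    The configuration-selection step described above can be sketched in a few lines: estimate the object's extents from its point cloud via a PCA-aligned bounding box, then pick the discrete finger configuration whose span best matches the object width. The `finger_spans` values and the width heuristic are hypothetical stand-ins; the actual SSG planner is considerably more involved.

```python
import numpy as np

def choose_gripper_configuration(points, finger_spans=(0.04, 0.08, 0.12)):
    """Pick the finger span (in metres) closest to the object's width,
    estimated from an N x 3 point cloud via a PCA-aligned bounding box.
    """
    centered = points - points.mean(axis=0)
    # Principal axes of the cloud give an object-aligned frame.
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ axes.T                # coordinates in that frame
    extents = np.ptp(proj, axis=0)          # size along each principal axis
    width = np.sort(extents)[1]             # middle extent ~ graspable width
    spans = np.asarray(finger_spans)
    best = spans[np.argmin(np.abs(spans - width))]
    return best, extents
```

    A planner would then align the scoop with the object's longest face before sliding it under the object.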

    Grasping with Soft Hands

    Despite some prematurely optimistic claims, the ability of robots to grasp general objects in unstructured environments still remains far behind that of humans. This is not solely caused by differences in the mechanics of hands: indeed, we show that human use of a simple robot hand (the Pisa/IIT SoftHand) can afford capabilities that are comparable to natural grasping. It is through the observation of such human-directed robot hand operations that we realized how fundamental the role of hand compliance is in everyday grasping and manipulation, where it is used to adapt to the shape of surrounding objects. Objects and environmental constraints are in turn used to functionally shape the hand, going beyond its nominal kinematic limits by exploiting structural softness. In this paper, we set out to study grasp planning for hands that are simple, in the sense of a low number of actuated degrees of freedom (one for the Pisa/IIT SoftHand), but soft, i.e., continuously deformable into an infinity of possible shapes through interaction with objects. After general considerations on the change of paradigm in grasp planning that this setting brings about with respect to classical rigid multi-dof grasp planning, we present a procedure to extract grasp affordances for the Pisa/IIT SoftHand through physically accurate numerical simulations. The selected grasps are then successfully tested in an experimental scenario.
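    The affordance-extraction procedure follows a sample-simulate-score pattern. The skeleton below illustrates only that ranking loop; the three callables (`sample_pose`, `simulate_closure`, `score_grasp`) are hypothetical placeholders for the actual pose sampler, the physically accurate closure simulation, and the grasp-quality metric.

```python
import numpy as np

def extract_grasp_affordances(sample_pose, simulate_closure, score_grasp,
                              n_candidates=100, top_k=5, seed=0):
    """Rank simulated grasp candidates and return the top_k best,
    as (score, pose) pairs sorted by decreasing score.
    """
    rng = np.random.default_rng(seed)
    scored = []
    for _ in range(n_candidates):
        pose = sample_pose(rng)           # candidate hand pose near the object
        contacts = simulate_closure(pose) # compliant hand closing in simulation
        scored.append((score_grasp(contacts), pose))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]
```

    The best-ranked poses are the ones carried over to the experimental validation stage.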

    Modeling and Simulation of Robotic Grasping in Simulink Through Simscape Multibody

    Grasping and dexterous manipulation remain fundamental challenges in robotics, especially when performed with multifingered robotic hands. Simulation tools to design and test grasp and manipulation control strategies are paramount to obtaining functional robotic manipulation systems. In this paper, we present a framework for modeling and simulating grasps in the Simulink environment by connecting SynGrasp, a well-established MATLAB toolbox for grasp simulation and analysis, and Simscape Multibody, a Simulink library that allows the simulation of physical systems. The proposed approach can be used to simulate the grasp dynamics in Simscape and then analyse the obtained grasps in SynGrasp. The devised functions and blocks can be easily customized to simulate different hands and objects.
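    A typical quantity computed in the analysis stage of such a pipeline is the grasp matrix, which maps the stacked contact forces from the dynamic simulation to the net wrench on the object. The sketch below is a generic illustration of that standard construction for hard-finger point contacts, not the SynGrasp API.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector: skew(v) @ f == v x f."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def grasp_matrix(contacts, com):
    """Grasp matrix for hard-finger point contacts.

    Maps the stacked 3-D contact forces to the 6-D wrench
    (force, torque) on the object about its center of mass.
    """
    blocks = []
    for c in contacts:
        r = np.asarray(c, dtype=float) - np.asarray(com, dtype=float)
        blocks.append(np.vstack([np.eye(3), skew(r)]))  # force part, torque part
    return np.hstack(blocks)   # shape (6, 3 * n_contacts)
```

    Properties of this matrix (rank, null space) are what grasp-analysis toolboxes inspect to assess force closure of a simulated grasp.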

    Constrained motion planning and execution for soft robots

    There are many reasons why a compliant robot is expected to perform better than a rigid one in interaction tasks, including limitation of interaction forces, resilience to modeling errors, robustness, naturalness of motion, and energy efficiency. Most of these reasons are apparent if one thinks of how the human body interacts with its environment. However, most of the work on robotic planning and control of interaction has traditionally been developed for rigid robot models. Indeed, planning and control for compliant robots can be substantially harder. In this thesis, I propose the point of view that the difficulties encountered in planning and control for soft robots are at least in part due to the fact that approaches previously developed for rigid robots are taken as a starting point and adapted. On the contrary, if new methods are designed that take compliance into account from the very beginning, the planning and control problems can be of comparable difficulty to, or even substantially simpler than, their rigid counterparts. I argue this thesis with two main examples.
    The first part of this thesis presents a new approach to integrating motion planning and control for robots in interaction. One of the peculiarities of interaction tasks is that the robot limbs and the environment form "closed kinematic chains". If rigid models are considered, the dynamics of robots in interaction become constrained, and Differential Algebraic Equations replace Ordinary Differential Equations, typically a much harder problem to deal with. However, in the thesis I show that this is not necessarily so. Indeed, consideration of compliance allows a more tractable mathematical model of interacting systems and the introduction of more sophisticated control approaches. Specifically, we present a novel geometric control scheme under which we achieve decoupled interaction control for constrained robot systems (i.e., position errors are made irrelevant to force control, and vice versa). Based on this result, it is possible to decouple the planning problem into two separate aspects. On one side, we make motion planning for the constrained system easier by relaxing the geometric constraint, i.e., replacing the lower-dimensional constraint manifold with a narrow but full-dimensional boundary layer. This allows us to plan motion using state-of-the-art methods, such as RRT*, on points within the boundary layer, which we can efficiently sample. On the other side, we control interaction forces, i.e., forces generated by displacements perpendicular to the tangent space of the constraint manifold. Thanks to the (locally) noninteracting control characteristic of our scheme, the two controllers can be applied separately and in sequence, so that the interaction force controller can correct for any discrepancies resulting from the boundary layer approximation used in the constrained position controller. The geometric noninteracting controller can be applied both in simulation, for planning, and in real time, for execution control. Moreover, while it does rely on a model of compliance in the system, it makes no assumption on the amount of compliance; in other words, it applies equally well to stiff but elastic robots. The final outcome of the two-stage planner is an effective (possibly optimal, from RRT*) trajectory that satisfies the constraint with arbitrarily good approximation, asymptotically rejecting perturbations coming from sampled displacements.
    The second part of this thesis studies grasp planning for hands that are simple, in the sense of a low number of actuated degrees of freedom, but soft, i.e., continuously deformable into an infinity of possible shapes through interaction with objects. Once again, the use of such "soft hands" brings about a change of paradigm in grasp planning with respect to classical rigid multi-dof grasp planning, one that only apparently makes the problem harder. In this thesis I show that, thanks to the correct combination of compliance and underactuation of soft hands, together with the set of all possible physical interactions between the hand, the object, and the environment, the grasping problem can be redefined. The new definition includes the possible combinations of hand-object functional interactions, which I refer to as "Enabling Constraints". The use of Enabling Constraints constitutes a rather new challenge for existing grasping algorithms: adaptation to totally or partially unknown scenes remains a difficult task, toward which only a few approaches have been investigated so far. In this thesis I present a first approach to the study of this novel kind of manipulation. It is based on an accurate simulation tool and starts from the considerations that hand compliance can be used to adapt to the shape of surrounding objects and that, rather than treating the environment as an obstacle to avoid, it can in turn be used to functionally shape the hand. I show that, thanks to this functionality, the problem of generating grasping postures for soft hands can be reduced to grasping the basic geometries (e.g., cylinders or boxes) into which the object's geometry can be decomposed.
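    The boundary-layer relaxation described in the first part can be illustrated with its sampling step: instead of sampling exactly on the lower-dimensional manifold g(q) = 0, one accepts any configuration with |g(q)| < eps, which a standard planner such as RRT* can then connect. The sketch below shows only this rejection-sampling step, under the assumption of a scalar constraint; tree extension and the interaction-force controller are omitted.

```python
import numpy as np

def sample_boundary_layer(constraint, lo, hi, eps, n_samples, seed=0):
    """Rejection-sample configurations inside the full-dimensional
    boundary layer |g(q)| < eps that relaxes the constraint manifold
    g(q) = 0, within the box [lo, hi] of configuration space.
    """
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n_samples:
        q = rng.uniform(lo, hi)
        if abs(constraint(q)) < eps:   # q lies in the thickened manifold
            samples.append(q)
    return np.array(samples)
```

    Because the layer has full dimension, uniform sampling hits it with nonzero probability, which is what makes off-the-shelf sampling-based planners applicable; the residual constraint violation (at most eps) is what the force controller then rejects.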

    DeepDynamicHand: A Deep Neural Architecture for Labeling Hand Manipulation Strategies in Video Sources Exploiting Temporal Information

    Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such extraordinary behavior through the design of deformable yet robust end-effectors. To this goal, the investigation of human behavior has become crucial to correctly inform the technological development of robotic hands that can successfully exploit environmental constraints as humans actually do. Among the different tools robotics can leverage to achieve this objective, deep learning has emerged as a promising approach for the study, and then the implementation, of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying the sequences of manipulation primitives underpinning action generation, e.g., during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes temporal information into account to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images, consisting of frames extracted from videos of grasping and manipulation tasks with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames.
    Experimental validation has been performed on two datasets of dynamic hand-centric strategies, in which subjects regularly interact with objects and the environment. The proposed architecture achieved very good classification accuracy on both datasets, reaching performance up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g., for the planning and control of soft anthropomorphic manipulators.
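    The CNN-plus-RNN labeling scheme can be sketched as a toy recurrent labeller: per-frame feature vectors (standing in for CNN embeddings of the detected hand crop) are passed through a vanilla RNN whose hidden state carries the history of previous frames, and each step emits an action class. The weights here are hypothetical random stand-ins, not a trained model.

```python
import numpy as np

def label_frames(features, w_in, w_rec, w_out):
    """Label each frame of a sequence with an action class.

    features : sequence of per-frame feature vectors
    w_in, w_rec, w_out : input, recurrent, and output weight matrices
    """
    hidden = np.zeros(w_rec.shape[0])
    labels = []
    for x in features:
        # Hidden state mixes the current frame with the action history.
        hidden = np.tanh(w_in @ x + w_rec @ hidden)
        logits = w_out @ hidden
        labels.append(int(np.argmax(logits)))   # predicted class for this frame
    return labels
```

    In the actual architecture the per-frame features come from a hand-detecting CNN and the recurrent stage is trained end-to-end on the supervisor-labeled intervals.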