    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. (Comment: 20 pages, 30 figures; submitted to IEEE Transactions on Robotics.)
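
    The sampling-and-ranking pipeline summarized above can be illustrated with a minimal Python sketch; the `sample_candidate_grasps` heuristic and the `scoring_model` callback below are hypothetical placeholders standing in for whatever learned grasp-quality predictor a given method uses, not anything taken from the survey itself.

```python
import numpy as np

# Minimal sketch of a data-driven sample-and-rank grasp pipeline.
# `scoring_model` stands in for any learned grasp-quality predictor.

def sample_candidate_grasps(object_cloud, n_candidates=100, rng=None):
    """Sample grasp poses anchored on the object surface (placeholder heuristic)."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(object_cloud), size=n_candidates)
    positions = object_cloud[idx]                       # approach points on the surface
    orientations = rng.normal(size=(n_candidates, 4))   # random unit quaternions
    orientations /= np.linalg.norm(orientations, axis=1, keepdims=True)
    return np.hstack([positions, orientations])         # (n, 7) candidate poses

def rank_grasps(candidates, scoring_model):
    """Score each candidate with the learned model and sort best-first."""
    scores = np.asarray([scoring_model(g) for g in candidates])
    order = np.argsort(-scores)
    return candidates[order], scores[order]
```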

    Whole-Hand Robotic Manipulation with Rolling, Sliding, and Caging

    Traditional manipulation planning and modeling rely on strong assumptions about contact. Specifically, it is common to assume that contacts are fixed and do not slide. This assumption ensures that objects are stably grasped during every step of the manipulation, to avoid ejection. However, it limits achievable manipulation to the feasible motion of the closed-loop kinematic chains formed by the object and fingers. To improve manipulation capability, it has been shown that relaxing contact constraints and allowing sliding can enhance dexterity. But in order to manipulate safely with shifting contacts, other safeguards must be used to protect against ejection. “Caging manipulation,” in which the object is geometrically trapped by the fingers, can be employed to guarantee that an object never leaves the hand, regardless of constantly changing contact conditions. Mechanical compliance and underactuated joint coupling, or carefully chosen design parameters, can be used to passively create a caging grasp that protects against accidental ejection while simultaneously manipulating with all parts of the hand. With passive ejection avoidance, hand control schemes can be made very simple while still accomplishing manipulation. In place of complex control, better design can be used to improve manipulation capability, by making smart choices about parameters such as phalanx length, joint stiffness, joint coupling schemes, finger frictional properties, and actuator mode of operation. I will present an approach for modeling fully actuated and underactuated whole-hand manipulation with shifting contacts, show results demonstrating the relationship between design parameters and manipulation metrics, and show how this can produce highly dexterous manipulators.
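
    The design-centric argument above, that choices such as phalanx length, joint stiffness, and friction largely determine manipulation capability, maps naturally onto a parameter-sweep study. The sketch below assumes a hypothetical `simulate_manipulation` routine that runs one hand design through a contact simulation and returns a scalar manipulation metric; it illustrates the idea, and is not the author's modeling approach.

```python
import itertools

# Hypothetical sweep over hand design parameters. `simulate_manipulation`
# is assumed to run a contact simulation for one design and return a
# scalar manipulation metric (e.g. reachable in-hand workspace volume).

PHALANX_LENGTHS = [0.04, 0.05, 0.06]   # metres
JOINT_STIFFNESSES = [0.1, 0.3, 0.5]    # Nm/rad
FRICTION_COEFFS = [0.3, 0.6, 0.9]

def sweep_designs(simulate_manipulation):
    results = []
    for length, stiffness, mu in itertools.product(
            PHALANX_LENGTHS, JOINT_STIFFNESSES, FRICTION_COEFFS):
        metric = simulate_manipulation(
            phalanx_length=length, joint_stiffness=stiffness, friction=mu)
        results.append(((length, stiffness, mu), metric))
    # Return the design that maximizes the chosen manipulation metric.
    return max(results, key=lambda r: r[1])
```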

    Quasi-static Soft Fixture Analysis of Rigid and Deformable Objects

    We present a sampling-based approach to reasoning about the caging-based manipulation of rigid and a simplified class of deformable 3D objects subject to energy constraints. Towards this end, we propose the notion of soft fixtures, extending earlier work on energy-bounded caging to include a broader set of energy function constraints and settings, such as gravitational and elastic potential energy of 3D deformable objects. Previous methods focused on establishing provably correct algorithms to compute lower bounds or analytically exact estimates of escape energy for a very restricted class of known objects with low-dimensional C-spaces, such as planar polygons. We instead propose a practical sampling-based approach that is applicable in higher-dimensional C-spaces but only produces a sequence of upper-bound estimates, which, however, appear to converge rapidly to the actual escape energy. We present 8 simulation experiments demonstrating the applicability of our approach to various complex quasi-static manipulation scenarios. Quantitative results indicate the effectiveness of our approach in providing upper-bound estimates of escape energy in quasi-static manipulation scenarios. Two real-world experiments also show that the computed normalized escape energy estimates appear to correlate strongly with the probability of escape of an object under randomized pose perturbation. (Comment: Paper submitted to ICRA 202)
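
    One way to picture the sampling-based upper bound described above is as a bottleneck search over sampled configurations: any collision-free path from the initial configuration out of the caging region certifies an upper bound equal to the highest potential energy crossed along it, minus the starting energy. The toy 2D grid version below is a simplified illustration under these assumptions, not the authors' implementation.

```python
import heapq
import numpy as np

# Toy sketch: upper-bound the escape energy of an object over a sampled 2D
# configuration grid. Any free path from the start cell to the grid boundary
# yields an upper bound equal to the highest potential it crosses minus the
# starting potential; a minimax (bottleneck) search finds the tightest such
# bound at this resolution.

def escape_energy_upper_bound(potential, start, free_mask):
    """potential: 2D array of potential energy (e.g. gravitational + elastic);
    start: (row, col) of the initial configuration;
    free_mask: 2D bool array, True where the configuration is collision-free."""
    rows, cols = potential.shape
    best = np.full((rows, cols), np.inf)
    best[start] = potential[start]
    frontier = [(potential[start], start)]
    while frontier:
        peak, (r, c) = heapq.heappop(frontier)
        if r in (0, rows - 1) or c in (0, cols - 1):   # reached the boundary: escape found
            return peak - potential[start]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and free_mask[nr, nc]:
                new_peak = max(peak, potential[nr, nc])
                if new_peak < best[nr, nc]:
                    best[nr, nc] = new_peak
                    heapq.heappush(frontier, (new_peak, (nr, nc)))
    return np.inf   # no escape path found at this sampling resolution
```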

    Grasping and Assembling with Modular Robots

    A wide variety of problems, from manufacturing to disaster response and space exploration, can benefit from robotic systems that can firmly grasp objects or assemble various structures, particularly in difficult, dangerous environments. In this thesis, we study two such problems, robotic grasping and assembly, using a modular robotic approach that brings versatility and robustness to both. First, this thesis develops a theoretical framework for grasping objects with customized effectors that have curved contact surfaces, with applications to modular robots. We present a collection of grasps and cages that can effectively restrain the mobility of a wide range of objects, including polyhedra. Each of the grasps or cages is formed by at most three effectors, and a stable grasp is obtained by simple motion planning and control. Based on this theory, we create a robotic system comprising a modular manipulator equipped with customized end-effectors and a software suite for planning and control of the manipulator. Second, this thesis presents efficient assembly planning algorithms for constructing planar target structures collectively with a collection of homogeneous mobile modular robots. The algorithms are provably correct and address arbitrary target structures that may include internal holes. The resulting assembly plan supports parallel assembly and guarantees easy accessibility, in the sense that a robot never has to pass through a narrow gap while approaching its target position. Finally, we extend the algorithms to address various symmetric patterns formed by a collection of congruent rectangles in the plane. The basic ideas in this thesis have broad applications in manufacturing (restraint), humanitarian missions (forming airfields on the high seas), and service robotics (grasping and manipulation).
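
    The accessibility property mentioned above can be illustrated with a simple reverse-order sketch: repeatedly peel off a grid cell that still has an empty neighbour, then assemble in the reverse order, so each module is placed next to free space. This is only an illustration of the general idea; it ignores the internal-hole and connectivity subtleties that the thesis algorithms actually handle, and the cell-based structure model is an assumption.

```python
# Illustrative sketch (not the thesis algorithm): derive an assembly order
# for a planar structure, given as a set of (x, y) grid cells, by peeling
# exposed cells and reversing the removal order.

def is_exposed(cell, occupied):
    """A cell is exposed if at least one 4-neighbour is unoccupied."""
    x, y = cell
    return any(n not in occupied
               for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))

def assembly_order(cells):
    remaining = set(cells)
    removal_order = []
    while remaining:
        cell = next(c for c in remaining if is_exposed(c, remaining))
        removal_order.append(cell)
        remaining.remove(cell)
    removal_order.reverse()   # place the last-removed cell first
    return removal_order
```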

    Grasp Stability Prediction for a Dexterous Robotic Hand Combining Depth Vision and Haptic Bayesian Exploration.

    Grasp stability prediction of unknown objects is crucial to enable autonomous robotic manipulation in an unstructured environment. Even if prior information about the object is available, real-time local exploration might be necessary to mitigate object modelling inaccuracies. This paper presents an approach to predict safe grasps of unknown objects using depth vision and a dexterous robot hand equipped with tactile feedback. Our approach does not assume any prior knowledge about the objects. First, an object pose estimate is obtained from RGB-D sensing; then, the object is explored haptically to maximise a given grasp metric. We compare two probabilistic methods (i.e., standard and unscented Bayesian Optimisation) against random exploration (i.e., uniform grid search). Our experimental results demonstrate that these probabilistic methods can provide confident predictions after a limited number of exploratory observations, and that unscented Bayesian Optimisation can find safer grasps, taking into account the uncertainty in robot sensing and grasp execution.
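
    The haptic exploration loop compared in the paper can be sketched as standard Gaussian-process Bayesian optimisation over a low-dimensional grasp-parameter space. The `evaluate_grasp_metric` callback and the parameter bounds below are illustrative assumptions, and this is plain expected-improvement Bayesian optimisation rather than the unscented variant the paper evaluates.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Plain GP Bayesian optimisation over grasp parameters (e.g. wrist offsets).
# `evaluate_grasp_metric` is a hypothetical callback that executes or
# simulates a grasp and returns a tactile grasp-quality score.

def bayes_opt_grasp(evaluate_grasp_metric, bounds, n_init=5, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, dim))
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    y = np.array([evaluate_grasp_metric(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        z = (mu - y.max()) / np.maximum(sigma, 1e-9)
        ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
        x_next = candidates[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate_grasp_metric(x_next))
    return X[np.argmax(y)], y.max()

# Example bounds for two wrist offsets: np.array([[-0.05, 0.05], [-0.05, 0.05]])
```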

    Autonomous task-based grasping for mobile manipulators

    A fully integrated grasping system for a mobile manipulator to grasp an unknown object of interest (OI) in an unknown environment is presented. The system autonomously scans its environment, models the OI, and plans and executes a grasp, while taking into account base pose uncertainty and obstacles on its way to the object. Due to inherent line-of-sight limitations in sensing, a single scan of the OI often does not reveal enough information to complete grasp analysis; as a result, our system autonomously builds a model of an object via multiple scans from different locations until a grasp can be performed. A volumetric next-best-view (NBV) algorithm is used to model an arbitrary object and terminates modelling when grasp poses are discovered on a partially observed object. Two key sets of experiments are presented: i) modelling and registration error in the OI point cloud model is reduced by selecting viewpoints with more scan overlap, and ii) model construction and grasping are successfully achieved under base pose uncertainty.

    A generalized algorithm is presented to discover grasp pose solutions for multiple grasp types for a multi-fingered mechanical gripper using sensed point clouds. The algorithm introduces two key ideas: 1) a histogram of finger contact normals is used to represent a grasp “shape” and guide a gripper orientation search in a histogram of object(s) surface normals, and 2) voxel grid representations of the gripper and object(s) are cross-correlated to match finger contact points, i.e. grasp “size”, and thereby discover a grasp pose. Constraints, such as collisions with neighbouring objects, are incorporated in the cross-correlation computation. Simulations and preliminary experiments show that 1) grasp poses for three grasp types are found in near real-time, 2) grasp pose solutions are consistent with respect to voxel resolution changes for both partial and complete point cloud scans, 3) a planned grasp pose is executed with a mechanical gripper, and 4) grasp overlap is presented as a feature to identify regions on a partial object model that are ideal for object transfer or for securing an object.
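
    The voxel-grid cross-correlation step described above, matching finger contact points (grasp "size") against the sensed object while penalising collisions with neighbouring objects, can be sketched with an FFT-based correlation over binary occupancy grids. The grid names and the collision-penalty weighting below are placeholder assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch: score candidate gripper translations (for one sampled orientation)
# by cross-correlating a voxelised finger-contact pattern with the object
# occupancy grid, while penalising overlap of the gripper body with
# neighbouring-obstacle voxels.

def grasp_offset_scores(object_grid, obstacle_grid,
                        gripper_contact_grid, gripper_body_grid,
                        collision_penalty=1e3):
    flip = lambda g: g[::-1, ::-1, ::-1]   # correlation = convolution with a flipped kernel
    contact = fftconvolve(object_grid, flip(gripper_contact_grid), mode="valid")
    collision = fftconvolve(obstacle_grid, flip(gripper_body_grid), mode="valid")
    return contact - collision_penalty * collision

# The arg-max of the returned score volume gives the best gripper translation;
# orientations would be handled by rotating the gripper grids and repeating.
```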