
    Model-free and learning-free grasping by Local Contact Moment matching

    This paper addresses the problem of grasping arbitrarily shaped objects, observed as partial point-clouds, without requiring models of the objects, physics parameters, training data, or other a priori knowledge. A grasp metric is proposed based on Local Contact Moment (LoCoMo). LoCoMo combines zero-moment shift features of both hand and object surface patches to determine local similarity. This metric is then used to search for a set of feasible grasp poses with associated grasp likelihoods. LoCoMo overcomes some limitations of both classical grasp planners and learning-based approaches. Unlike force-closure analysis, LoCoMo does not require knowledge of physical parameters such as friction coefficients, and avoids assumptions about fingertip contacts, instead enabling robust contact over large areas of the hand and object surfaces. Unlike more recent learning-based approaches, LoCoMo does not require training data and does not need any prototype grasp configurations to be taught by kinesthetic demonstration. We present results of real-robot experiments grasping 21 different objects, observed by a wrist-mounted depth camera. All objects are grasped successfully when presented to the robot individually. The robot also successfully clears cluttered heaps of objects by sequentially grasping and lifting objects until none remain.
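    As an illustration only (the abstract does not give LoCoMo's exact feature definition or similarity kernel; the neighbourhood radius, Gaussian kernel and sigma below are assumptions), a minimal Python sketch of scoring one hand/object patch pair with a zero-moment-style shift feature might look like this:

```python
import numpy as np

def zero_moment_shift(points, center, radius=0.01):
    """Distance between a query point and the centroid (zeroth moment)
    of its local neighbourhood, a simple flatness/curvature cue.
    Illustrative stand-in; not the paper's exact feature definition."""
    nbrs = points[np.linalg.norm(points - center, axis=1) < radius]
    if len(nbrs) == 0:
        return 0.0
    return float(np.linalg.norm(nbrs.mean(axis=0) - center))

def patch_similarity(shift_hand, shift_obj, sigma=0.005):
    """Gaussian similarity between hand and object patch features
    (kernel form and sigma are assumptions, not the paper's values)."""
    return float(np.exp(-((shift_hand - shift_obj) ** 2) / (2.0 * sigma ** 2)))

# Toy usage: score one candidate contact between a finger patch and an object patch.
rng = np.random.default_rng(0)
object_cloud = rng.normal(size=(500, 3)) * 0.05   # stand-in partial point-cloud
finger_cloud = rng.normal(size=(200, 3)) * 0.05   # stand-in finger surface patch
s_obj = zero_moment_shift(object_cloud, object_cloud[0])
s_hand = zero_moment_shift(finger_cloud, finger_cloud[0])
print("patch similarity:", patch_similarity(s_hand, s_obj))
```

    In a pipeline like the one described, such per-patch similarities would be aggregated over candidate grasp poses to rank them by grasp likelihood.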

    Dynamic Grasp Adaptation: From Humans to Robots

    The human hand is an amazing tool, demonstrated by its incredible motor capability and remarkable sense of touch. To enable robots to work in a human-centric environment, it is desirable to endow robotic hands with human-like capabilities for grasping and object manipulation. However, due to its inherent complexity and inevitable model uncertainty, robotic grasping and manipulation remains a challenge. This thesis focuses on grasp adaptation in the face of model and sensing uncertainties: given an object whose properties are not known with certainty (e.g., shape, weight and external perturbation) and a multi-fingered robotic hand, we aim to determine where to put the fingers and how the fingers should adaptively interact with the object using tactile sensing, in order to achieve either a stable grasp or a desired dynamic behaviour.

    A central idea in this thesis is object-centric dynamics: namely, that we express all control constraints in an object-centric representation. This simplifies computation and makes the control versatile with respect to the type of hand. This is an essential feature that distinguishes our work from other robust grasping work in the literature, where generating a static stable grasp for a given hand is usually the primary goal. In this thesis, grasp adaptation is a dynamic process that flexibly adapts the grasp to fit some purpose from the object's perspective, in the presence of a variety of uncertainties and/or perturbations. When building a grasp adaptation for a given situation, two key problems must be addressed: 1) choosing an initial grasp that is suitable for future adaptation, and, more importantly, 2) designing an adaptation strategy that can react adequately to achieve the desired behaviour of the grasped object.

    To address the first challenge (planning a grasp under shape uncertainty), we propose an approach that parameterizes the uncertainty in object shape using Gaussian Processes (GPs) and incorporates it as a constraint into contact-level grasp planning. To realize the planned contacts with different hands interchangeably, we further develop a probabilistic model to predict feasible hand configurations, including hand pose and finger joints, given only the desired contact points. The model is built using the concept of a Virtual Frame (VF) and is independent of the choice of hand frame and object frame. The performance of the proposed approach is validated on two different robotic hands, an industrial gripper (4-DOF Barrett hand) and a humanoid hand (16-DOF Allegro hand), manipulating objects of daily use with complex geometry and various textures (a spray bottle, a tea caddy, a jug and a bunny toy).

    In the second part of this thesis, we propose an approach to designing an adaptation strategy that ensures grasp stability in the presence of physical uncertainties of objects (object weight, friction at contacts and external perturbation). Based on an object-level impedance controller, we first design a grasp stability estimator in the object frame using grasp experience and tactile sensing. Once a grasp is predicted to be unstable during online execution, the grasp adaptation strategy is triggered to improve grasp stability, by either changing the stiffness at the finger level or relocating one fingertip to a better contact area.
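    The adaptation loop described in the second part can be illustrated with a small, hedged sketch: a placeholder linear stability score over tactile features stands in for the thesis's learned estimator, and the thresholds, gains and stiffness limits below are assumptions rather than values from the thesis.

```python
import numpy as np

def estimate_stability(tactile_forces, weights, bias=0.0):
    """Toy stand-in for a learned grasp-stability estimator:
    a linear score over tactile features (weights/bias are placeholders)."""
    return float(np.dot(weights, tactile_forces) + bias)

def adapt_grasp(stability, stiffness, min_stable=0.0, stiffness_step=5.0, stiffness_max=50.0):
    """If the grasp is predicted unstable, first raise finger-level stiffness;
    if stiffness is saturated, signal that a fingertip should be relocated.
    Thresholds and gains are illustrative assumptions."""
    if stability >= min_stable:
        return stiffness, "keep"
    if stiffness + stiffness_step <= stiffness_max:
        return stiffness + stiffness_step, "stiffen"
    return stiffness, "relocate_fingertip"

# Example: three tactile channels, a grasp drifting toward instability.
weights = np.array([0.4, 0.4, 0.2])
stiffness = 20.0
for forces in [np.array([1.0, 0.9, 1.1]), np.array([0.2, 0.1, 0.0])]:
    s = estimate_stability(forces, weights, bias=-0.5)
    stiffness, action = adapt_grasp(s, stiffness)
    print(f"stability={s:+.2f} -> action={action}, stiffness={stiffness}")
```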

    Learning-based robotic manipulation for dynamic object handling : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mechatronic Engineering at the School of Food and Advanced Technology, Massey University, Turitea Campus, Palmerston North, New Zealand

    Figures are re-used in this thesis with permission of their respective publishers or under a Creative Commons licence.

    Recent trends have shown that the lifecycles and production volumes of modern products are shortening. Consequently, many manufacturers subject to frequent change prefer flexible and reconfigurable production systems. Such schemes are often achieved by means of manual assembly, as conventional automated systems are perceived as lacking flexibility. Production lines that incorporate human workers are particularly common within consumer electronics and small appliances. Artificial intelligence (AI) is a possible avenue to achieve smart robotic automation in this context. In this research it is argued that a robust, autonomous object handling process plays a crucial role in future manufacturing systems that incorporate robotics, and is key to further closing the gap between manual and fully automated production.

    Novel object grasping is a difficult task, confounded by many factors including object geometry, weight distribution, friction coefficients and deformation characteristics. Sensing and actuation accuracy can also significantly impact manipulation quality. Another challenge is understanding the relationship between these factors, a specific grasping strategy, the robotic arm and the employed end-effector. Manipulation has been a central research topic within robotics for many years. Some works focus on design, i.e. specifying a gripper-object interface such that the effects of imprecise gripper placement and other confounding control-related factors are mitigated. Many universal robotic gripper designs have been considered, including 3-fingered grippers, anthropomorphic grippers, granular-jamming end-effectors and underactuated mechanisms. While such approaches have maintained some interest, contemporary works predominantly utilise machine learning in conjunction with imaging technologies and generic force-closure end-effectors. Neural networks that utilise supervised and unsupervised learning schemes with an RGB or RGB-D input make up the bulk of publications within this field. Though many solutions have been studied, automatically generating a robust grasp configuration for objects not known a priori remains an open-ended problem. An element of this issue is the lack of objective performance metrics to quantify the effectiveness of a solution; such metrics have traditionally driven the direction of community focus by highlighting gaps in the state of the art.

    This research employs monocular vision and deep learning to generate, and select from, a set of hypothesis grasps. A significant portion of this research relates to the process by which a final grasp is selected. Grasp synthesis is achieved by sampling the workspace using convolutional neural networks trained to recognise prospective grasp areas. Each potential pose is evaluated by the proposed method in conjunction with other input modalities, such as load cells and an alternate perspective. To overcome human bias and build upon traditional metrics, scores are established to objectively quantify the quality of an executed grasp trial. Learning frameworks that aim to maximise these scores are employed in the selection process to improve performance.

    The proposed methodology and associated metrics are empirically evaluated. A physical prototype system was constructed, employing a Dobot Magician robotic manipulator, vision enclosure, imaging system, conveyor, sensing unit and control system.
    Over 4,000 trials were conducted using 100 objects. Experimentation showed that robotic manipulation quality, quantified by a metric related to translational error, could be improved by 10.3% when selection was optimised for the proposed metrics. Trials further demonstrated a grasp success rate of 99.3% for known objects and 98.9% for objects for which a priori information is unavailable. For unknown objects, this equated to an improvement of approximately 10% relative to other similar methodologies in the literature. A 5.3% reduction in grasp rate was observed when the metrics were removed as selection criteria for the prototype system. The system operated at approximately 1 Hz when contemporary hardware was employed. Experimentation demonstrated that selecting a grasp pose based on the proposed metrics improved grasp rates by up to 4.6% for known objects and 2.5% for unknown objects, compared to selecting for grasp rate alone.

    This project was sponsored by the Richard and Mary Earle Technology Trust, the Ken and Elizabeth Powell Bursary and the Massey University Foundation. Without the financial support provided by these entities, it would not have been possible to construct the physical robotic system used for testing and experimentation. This research adds to the field of robotic manipulation, contributing to topics on grasp-induced error analysis, post-grasp error minimisation, grasp synthesis framework design and general grasp synthesis. Three journal publications and one IEEE Xplore paper have been published as a result of this research.
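    As a hedged sketch of the selection step only (the weighting scheme, the alpha parameter and the GraspHypothesis container are illustrative assumptions, not the thesis's learning framework), choosing among network-generated hypothesis grasps by combining predicted success with a proposed quality metric could look like this:

```python
from dataclasses import dataclass

@dataclass
class GraspHypothesis:
    pose: tuple            # (x, y, theta) in the robot workspace
    p_success: float       # network-predicted probability of a successful grasp
    metric_score: float    # post-grasp quality score (e.g. low translational error)

def select_grasp(hypotheses, alpha=0.7):
    """Pick the hypothesis maximising a weighted combination of predicted
    success and the quality metric (the weighting is an assumption)."""
    return max(hypotheses, key=lambda h: alpha * h.p_success + (1 - alpha) * h.metric_score)

# Toy usage with three candidate grasps.
candidates = [
    GraspHypothesis((0.10, 0.20, 0.0), p_success=0.95, metric_score=0.60),
    GraspHypothesis((0.12, 0.18, 1.3), p_success=0.90, metric_score=0.90),
    GraspHypothesis((0.08, 0.25, 0.7), p_success=0.80, metric_score=0.95),
]
best = select_grasp(candidates)
print("selected grasp pose:", best.pose)
```

    Selecting on a combined score rather than predicted success alone mirrors the reported finding that optimising for the proposed metrics improved grasp rates relative to selecting for grasp rate alone.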