
    Domain Randomization and Generative Models for Robotic Grasping

    Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and the real world. We find we can achieve a >90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects. We also demonstrate an 80% success rate on real-world grasp attempts despite having only been trained on random simulated objects.
    Comment: 8 pages, 11 figures. Submitted to the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018).
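
    As a rough illustration of the autoregressive idea summarised above (a sketch, not the authors' published architecture), the following factorises a grasp into discretised coordinates, each predicted from an image embedding together with the coordinates already sampled. All names, dimensions and the toy encoder are assumptions.

```python
# Sketch of an autoregressive grasp model (illustrative, not the paper's
# architecture): each grasp coordinate is discretised into bins and predicted
# conditioned on an image embedding plus the previously sampled coordinates.
import torch
import torch.nn as nn

class GraspAR(nn.Module):
    def __init__(self, img_feat_dim=128, n_coords=4, n_bins=32):
        super().__init__()
        # Toy depth-image encoder; any feature extractor could stand in here.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, img_feat_dim))
        # One classification head per grasp coordinate; head i also sees the
        # i previously sampled coordinates, giving the autoregressive chain
        # p(g1) p(g2|g1) ... p(gn|g1..gn-1).
        self.heads = nn.ModuleList(
            nn.Linear(img_feat_dim + i, n_bins) for i in range(n_coords))

    @torch.no_grad()
    def sample(self, img):
        """Sample one grasp per image as a vector of bin indices."""
        feat = self.encoder(img)
        coords = []
        for head in self.heads:
            prev = (torch.stack(coords, dim=1).float() if coords
                    else feat.new_zeros(feat.size(0), 0))
            logits = head(torch.cat([feat, prev], dim=1))
            coords.append(torch.distributions.Categorical(logits=logits).sample())
        return torch.stack(coords, dim=1)  # shape (batch, n_coords)

model = GraspAR()
grasp_bins = model.sample(torch.randn(1, 1, 64, 64))  # dummy depth image
print(grasp_bins)
```

    Because each coordinate is a categorical distribution conditioned on the ones before it, a chain like this can represent highly multimodal grasp distributions and still be sampled with one forward pass per coordinate.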

    Benchmarking anthropomorphic hands through grasping simulations

    In recent decades, the design of anthropomorphic hands has advanced greatly, improving both cosmesis and functionality. Experimentation, simulation, and combined approaches have been used in the literature to assess the effect of design alternatives (DAs) on the final performance of artificial hands. However, establishing standard benchmarks for grasping and manipulation is a recognized need within the robotics community. Experimental approaches are costly, time-consuming, and inconvenient in early design stages. Alternatively, computer simulation with the adaptation of metrics based on experimental benchmarks for anthropomorphic hands could be useful to evaluate and rank DAs. The aim of this study is to compare the anthropomorphism of the grasps performed with 28 DAs of the IMMA hand, developed by the authors, using either (i) the brute-force approach and grasp quality metrics proposed in previous works or (ii) a new simulation benchmark approach. The new methodology involves the generation of efficient grasp hypotheses and the definition of a new metric to assess stability and human likeness for the grasp types most frequently used in activities of daily living, pulp pinch and cylindrical grip, adapting the experimental Anthropomorphic Hand Assessment Protocol to the simulation environment. This new simulation benchmark, in contrast to the other approach, resulted in anthropomorphic and more realistic grasps for the expected use of the objects. Despite the inherent limitations of a simulation analysis, the proposed benchmark provides useful results for selecting optimal DAs in order to perform stable and anthropomorphic grasps.
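
    To make the benchmark idea concrete, here is a minimal sketch, under assumed definitions, of how a combined stability/human-likeness score might rank design alternatives. The weights, the exponential likeness mapping and the epsilon-quality placeholder are illustrative assumptions, not the paper's actual metric.

```python
# Hypothetical combined benchmark score: a weighted mix of a grasp stability
# term (e.g. a Ferrari-Canny epsilon quality, supplied by the simulator) and
# a human-likeness term. The weights and the exponential likeness mapping are
# assumptions, not the paper's actual definitions.
import numpy as np

def human_likeness(joint_angles, reference_angles):
    """Posture similarity in [0, 1] from RMS joint-angle error (radians)."""
    err = np.sqrt(np.mean((np.asarray(joint_angles)
                           - np.asarray(reference_angles)) ** 2))
    return float(np.exp(-err))  # 1 = identical posture, -> 0 as error grows

def benchmark_score(epsilon_quality, joint_angles, reference_angles,
                    w_stability=0.5):
    """Score one grasp hypothesis; higher is better."""
    likeness = human_likeness(joint_angles, reference_angles)
    return w_stability * epsilon_quality + (1.0 - w_stability) * likeness

# Rank two design alternatives on the same pulp-pinch grasp (made-up numbers)
print(benchmark_score(0.21, [0.4, 1.1, 0.9], [0.5, 1.0, 0.8]))  # close to human
print(benchmark_score(0.25, [0.9, 0.2, 1.6], [0.5, 1.0, 0.8]))  # less human-like
```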

    Movement Speed Models of Natural Grasp and Release Used for an Industrial Robot Equipped with a Gripper

    Abstract. In this paper, movement speed models for a robotic manipulator are presented, based on how the human hand operates when grasping and releasing an object. To develop the models, the movement coordinates of a human subject's hand were measured. The movement patterns were approximated piecewise over intervals using first- and second-degree functions, and the speeds were obtained by differentiating these functions. The models are presented in general form; to implement them for a particular robot, case-specific adjustments must be made.
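
    A minimal sketch of the described procedure, using synthetic data and assumed interval breakpoints: recorded hand positions are fitted piecewise with first- and second-degree polynomials, and each piece is differentiated analytically to obtain a speed model.

```python
# Illustrative reconstruction of the described procedure with synthetic data:
# fit recorded hand positions piecewise with first- and second-degree
# polynomials, then differentiate each piece to obtain the speed model.
import numpy as np

t = np.linspace(0.0, 1.0, 50)                                # time stamps [s]
x = 0.1 * t + 0.3 * t**2 + 0.005 * np.random.randn(t.size)   # hand position [m]

segments = [(0, 25, 2), (25, 50, 1)]             # (start, end, assumed degree)
for lo, hi, deg in segments:
    coeffs = np.polyfit(t[lo:hi], x[lo:hi], deg)  # position model on interval
    v = np.polyder(np.poly1d(coeffs))             # analytic derivative = speed
    print(f"t in [{t[lo]:.2f}, {t[hi - 1]:.2f}] s: speed coefficients {v.coeffs}")
```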

    Human and robot arm control using the minimum variance principle

    Many computational models of human upper limb movement successfully capture some features of human movement, but often lack a compelling biological basis. One that provides such a basis is Harris and Wolpert’s minimum variance model. In this model, the variance of the hand at the end of a movement is minimised, given that the controlling signal is subject to random noise with zero mean and standard deviation proportional to the signal’s amplitude. This criterion offers a consistent explanation for several movement characteristics. This work formulates the minimum variance model into a form suitable for controlling a robot arm. This implementation allows examination of the model properties, specifically its applicability to producing human-like movement. The model is subsequently tested in areas important to studies of human movement and robotics, including reaching, grasping, and action perception. For reaching, experiments show this formulation successfully captures the characteristics of movement, supporting previous results. Reaching is initially performed between two points, but complex trajectories are also investigated through the inclusion of via-points. The addition of a gripper extends the model, allowing production of trajectories for grasping an object. Using the minimum variance principle to derive digit trajectories, a quantitative explanation for the approach of digits to the object surface is provided. These trajectories also exhibit human-like spatial and temporal coordination between hand transport and grip aperture. The model’s predictive ability is further tested in the perception of human-demonstrated actions. Through integration with a system that performs perception using its motor system offline, in line with the motor theory of perception, the model is shown to correlate well with data on human perception of movement. These experiments investigate and extend the explanatory and predictive use of the model for human movement, and demonstrate that it can be suitably formulated to produce human-like movement on robot arms.
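
    The numerical sketch below illustrates the core of the minimum variance principle under simplifying assumptions (a 1-D double-integrator plant, an assumed horizon and hold window); it is not Harris and Wolpert's exact formulation or the thesis implementation. Endpoint-position variance accumulated over a post-movement hold window is minimised subject to reaching the target with zero velocity, which reduces to an equality-constrained quadratic programme.

```python
# Numerical sketch of the minimum variance principle for a 1-D reach under
# simplifying assumptions: a double-integrator "arm", control noise with
# standard deviation proportional to signal amplitude, endpoint-position
# variance summed over a post-movement hold window, and mean constraints of
# reaching the target with zero velocity. Plant and horizon are illustrative.
import numpy as np

dt, N, H, target = 0.01, 100, 30, 0.3   # step [s], move steps, hold steps, reach [m]
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([0.0, dt])                 # control enters as acceleration

# Phi[t, k] = sensitivity of position at step t to the control applied at step k
T = N + H
Phi = np.zeros((T + 1, N))
for k in range(N):
    x = B.copy()
    for t in range(k + 1, T + 1):
        Phi[t, k] = x[0]
        x = A @ x

# Hold-window position variance (up to the noise constant c^2) is a diagonal
# quadratic form in the control signal u, because the noise terms are independent
W = np.diag((Phi[N:T + 1, :] ** 2).sum(axis=0))

# Mean constraints: position = target and velocity = 0 at step N
C = np.zeros((2, N))
for k in range(N):
    C[:, k] = np.linalg.matrix_power(A, N - 1 - k) @ B
d = np.array([target, 0.0])

# Equality-constrained quadratic programme: solve the KKT system directly
KKT = np.block([[2 * W, C.T], [C, np.zeros((2, 2))]])
u = np.linalg.solve(KKT, np.concatenate([np.zeros(N), d]))[:N]
print("peak |u|:", np.abs(u).max(), "mean endpoint:", C @ u)
```

    Because the noise scales with the control signal, the optimiser avoids large commands where they would most corrupt the final position, which is the mechanism behind the smooth, human-like profiles the model produces.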

    Learning-based robotic manipulation for dynamic object handling : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mechatronic Engineering at the School of Food and Advanced Technology, Massey University, Turitea Campus, Palmerston North, New Zealand

    Figures are re-used in this thesis with permission of their respective publishers or under a Creative Commons licence.
    Recent trends have shown that the lifecycles and production volumes of modern products are shortening. Consequently, many manufacturers subject to frequent change prefer flexible and reconfigurable production systems. Such schemes are often achieved by means of manual assembly, as conventional automated systems are perceived as lacking flexibility. Production lines that incorporate human workers are particularly common within consumer electronics and small appliances. Artificial intelligence (AI) is a possible avenue to achieve smart robotic automation in this context. In this research it is argued that a robust, autonomous object handling process plays a crucial role in future manufacturing systems that incorporate robotics, and is key to further closing the gap between manual and fully automated production.
    Grasping novel objects is a difficult task, confounded by many factors including object geometry, weight distribution, friction coefficients and deformation characteristics. Sensing and actuation accuracy can also significantly impact manipulation quality. A further challenge is understanding the relationship between these factors, a specific grasping strategy, the robotic arm and the employed end-effector. Manipulation has been a central research topic within robotics for many years. Some works focus on design, i.e. specifying a gripper-object interface such that the effects of imprecise gripper placement and other confounding control-related factors are mitigated. Many universal robotic gripper designs have been considered, including 3-fingered grippers, anthropomorphic grippers, granular jamming end-effectors and underactuated mechanisms. While such approaches have maintained some interest, contemporary works predominantly utilise machine learning in conjunction with imaging technologies and generic force-closure end-effectors. Neural networks that utilise supervised and unsupervised learning schemes with an RGB or RGB-D input make up the bulk of publications within this field. Though many solutions have been studied, automatically generating a robust grasp configuration for objects not known a priori remains an open-ended problem. Part of this issue relates to a lack of objective performance metrics to quantify the effectiveness of a solution; such metrics have traditionally driven the direction of community focus by highlighting gaps in the state-of-the-art.
    This research employs monocular vision and deep learning to generate, and select from, a set of hypothesis grasps. A significant portion of this research relates to the process by which a final grasp is selected. Grasp synthesis is achieved by sampling the workspace using convolutional neural networks trained to recognise prospective grasp areas. Each potential pose is evaluated by the proposed method in conjunction with other input modalities, such as load-cells and an alternate camera perspective. To overcome human bias and build upon traditional metrics, scores are established to objectively quantify the quality of an executed grasp trial. Learning frameworks that aim to maximise these scores are employed in the selection process to improve performance.
    The proposed methodology and associated metrics were empirically evaluated. A physical prototype system was constructed, employing a Dobot Magician robotic manipulator, vision enclosure, imaging system, conveyor, sensing unit and control system. Over 4,000 trials were conducted utilising 100 objects. Experimentation showed that robotic manipulation quality could be improved by 10.3% when selecting to optimise for the proposed metrics, as quantified by a metric related to translational error. Trials further demonstrated a grasp success rate of 99.3% for known objects and 98.9% for objects for which a priori information is unavailable. For unknown objects, this equated to an improvement of approximately 10% relative to other similar methodologies in the literature. A 5.3% reduction in grasp rate was observed when the metrics were removed as selection criteria for the prototype system. The system operated at approximately 1 Hz when contemporary hardware was employed. Experimentation demonstrated that selecting a grasp pose based on the proposed metrics improved grasp rates by up to 4.6% for known objects and 2.5% for unknown objects, compared to selecting for grasp rate alone.
    This project was sponsored by the Richard and Mary Earle Technology Trust, the Ken and Elizabeth Powell Bursary and the Massey University Foundation. Without the financial support provided by these entities, it would not have been possible to construct the physical robotic system used for testing and experimentation. This research adds to the field of robotic manipulation, contributing to topics on grasp-induced error analysis, post-grasp error minimisation, grasp synthesis framework design and general grasp synthesis. Three journal publications and one IEEE Xplore paper have been published as a result of this research.
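
    As a hypothetical sketch of the selection step described above (the names, modalities and weights are assumptions, not the thesis's actual framework), candidate grasp poses proposed by a CNN are re-scored with additional modalities, and the pose with the best combined score is executed:

```python
# Illustrative grasp-selection step: fuse scores from several input
# modalities and pick the best candidate. All fields, weights and values
# here are assumptions for the sake of the sketch.
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    pose: tuple             # (x, y, theta) in the robot workspace
    cnn_score: float        # CNN confidence that the region is graspable
    alt_view_score: float   # agreement from an alternate camera perspective
    stability_score: float  # learned estimate of post-grasp stability

def combined_score(c: GraspCandidate, w=(0.5, 0.2, 0.3)) -> float:
    """Weighted fusion of modalities; in practice the weights would be tuned
    by a learning framework to maximise the proposed grasp-quality metrics."""
    return w[0] * c.cnn_score + w[1] * c.alt_view_score + w[2] * c.stability_score

def select_grasp(candidates):
    return max(candidates, key=combined_score)

best = select_grasp([
    GraspCandidate((0.31, 0.12, 1.57), 0.91, 0.60, 0.72),
    GraspCandidate((0.28, 0.15, 0.00), 0.84, 0.88, 0.80),
])
print(best.pose)
```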

    Object grasping and safe manipulation using friction-based sensing.

    This project provides a solution for slippage prevention in industrial robotic grippers, for the purpose of safe object manipulation. Slippage sensing is performed using novel friction-based sensors with customisable slippage sensitivity, complemented by an effective slippage prediction strategy. The outcome is a reliable and affordable slippage-prevention technology.
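
    A hypothetical control-loop sketch of how friction-based slip sensing could drive slippage prevention; the sensor interface, thresholds and gain are illustrative assumptions rather than the project's actual strategy.

```python
# Reactive slip-prevention loop (illustrative): when the friction-sensor
# signal indicates incipient slip, step up the grip force until the slip
# signal subsides or a safe maximum force is reached.
def prevent_slip(read_slip_signal, set_grip_force,
                 f0=5.0, f_max=40.0, slip_threshold=0.2, gain=1.5):
    """Return the final grip force [N] after stabilising the grasp."""
    force = f0
    set_grip_force(force)
    while force < f_max:
        slip = read_slip_signal()          # e.g. rate of micro-displacement
        if slip < slip_threshold:          # grasp is secure
            return force
        force = min(force * gain, f_max)   # escalate grip force
        set_grip_force(force)
    return force

# Example with stubbed hardware interfaces (made-up sensor readings)
readings = iter([0.8, 0.5, 0.1])
print(prevent_slip(lambda: next(readings), lambda f: None))
```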