    Benchmarking the Grasping Capabilities of the iCub Hand with the YCB Object and Model Set

    The letter reports an evaluation of the iCub's grasping capabilities, performed using the YCB Object and Model Set. The goal is to understand what kinds of objects the iCub's dexterous hand can grasp, and with what degree of robustness and flexibility, given the best possible control strategy. To this end, the robot's fingers are directly controlled by a human expert using a dataglove: in other words, the human brain is employed as the best possible controller. Through this technique, we provide a baseline for researchers who want to evaluate the performance of their grasping controllers. By using a widespread robotic platform and a publicly available set of objects, we believe that many researchers can directly benefit from this resource; moreover, what we propose is a general methodology for benchmarking grasping and manipulation that can be applied to any dexterous robotic hand.
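
    The letter does not detail the teleoperation mapping, but the core of such a dataglove setup is a calibrated per-joint retargeting from glove sensors to robot finger joints. Below is a minimal sketch under that assumption; the calibration values are placeholders, and read_glove / send_joint_positions are hypothetical stand-ins for the actual glove and robot interfaces.

    ```python
    import numpy as np

    # Illustrative per-joint calibration: glove readings recorded with the
    # operator's hand fully open and fully closed (values are placeholders).
    glove_open = np.array([0.10, 0.12, 0.08, 0.11, 0.09, 0.10, 0.13, 0.07, 0.10])
    glove_closed = np.array([0.90, 0.88, 0.92, 0.85, 0.91, 0.89, 0.87, 0.93, 0.90])

    # Robot joint limits for the corresponding finger joints, in degrees.
    joint_min = np.zeros(9)
    joint_max = np.full(9, 90.0)

    def retarget(glove_raw: np.ndarray) -> np.ndarray:
        """Map raw glove sensor values to robot finger joint commands."""
        # Normalize each sensor to [0, 1] over its calibrated range.
        alpha = (glove_raw - glove_open) / (glove_closed - glove_open)
        alpha = np.clip(alpha, 0.0, 1.0)
        # Linearly interpolate within the robot's joint limits.
        return joint_min + alpha * (joint_max - joint_min)

    # In a control loop one would read the glove and stream commands, e.g.:
    #   q_cmd = retarget(read_glove())      # read_glove() is hypothetical,
    #   send_joint_positions(q_cmd)         # as is send_joint_positions()
    ```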

    The Anthropomorphic Hand Assessment Protocol (AHAP)

    The progress in the development of anthropomorphic hands for robotic and prosthetic applications has not been matched by a parallel development of objective methods to evaluate their performance. The need for benchmarking in grasping research has been recognized by the robotics community as an important topic. In this study we present the Anthropomorphic Hand Assessment Protocol (AHAP) to address this need by providing a measure for quantifying the grasping ability of artificial hands and comparing hand designs. To this end, the AHAP uses 25 objects from the publicly available Yale-CMU-Berkeley Object and Model Set, thereby enabling replicability. It is composed of 26 postures/tasks involving grasping with the eight most relevant human grasp types and two non-grasping postures. The AHAP quantifies the anthropomorphism and functionality of artificial hands through a numerical Grasping Ability Score (GAS), sketched below. The AHAP was tested with different hands: the first version of the hand of the humanoid robot ARMAR-6, in three configurations resulting from the attachment of pads to the fingertips and palm, as well as two versions of the KIT Prosthetic Hand. The benchmark was used to demonstrate the improvements of these hands in aspects such as the grasping surface, the grasp force and the finger kinematics. The reliability, consistency and responsiveness of the benchmark were statistically analyzed, indicating that the AHAP is a powerful tool for evaluating and comparing different artificial hand designs.
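
    The abstract does not give the GAS formula; the sketch below assumes a simple aggregation in which each of the 26 postures/tasks receives a partial score (0, 0.5 or 1) and the GAS is the normalized sum, expressed as a percentage. The TaskResult structure and the scoring values are illustrative, not the protocol's.

    ```python
    from dataclasses import dataclass

    @dataclass
    class TaskResult:
        grasp_type: str   # one of the eight grasp types or a non-grasping posture
        score: float      # assumed scoring: 0.0 = fail, 0.5 = partial, 1.0 = success

    def grasping_ability_score(results: list[TaskResult]) -> float:
        """Aggregate per-task scores into a single GAS in [0, 100]."""
        if not results:
            raise ValueError("no task results")
        return 100.0 * sum(r.score for r in results) / len(results)

    # Example: over 26 postures/tasks, a hand that fully succeeds at 20,
    # partially at 4, and fails 2 scores (20 + 4 * 0.5) / 26 * 100 = 84.6.
    results = (
        [TaskResult("power", 1.0)] * 20
        + [TaskResult("precision", 0.5)] * 4
        + [TaskResult("lateral", 0.0)] * 2
    )
    print(f"GAS = {grasping_ability_score(results):.1f}")
    ```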

    Markerless visual servoing on unknown objects for humanoid robot platforms

    To precisely reach for an object with a humanoid robot, it is of central importance to have good knowledge of both the end-effector pose and the object's pose and shape. In this work we propose a framework for markerless visual servoing on unknown objects, which is divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo control commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories and sub-pixel precision.
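
    Part II follows the generic SMC predict/reweight/resample cycle. The sketch below runs that cycle on a 6D pose; the Gaussian likelihood around a noisy pose observation is only a stand-in for the paper's image-based likelihood (which scores particles against the stereo views of the hand), and all noise parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N = 500                    # number of particles
    PROCESS_STD = 0.01         # random-walk motion-model noise
    OBS_STD = 0.02             # stand-in measurement noise

    # Toy ground truth; particles start from a coarse initial guess around it.
    true_pose = np.array([0.10, -0.05, 0.30, 0.00, 0.20, 0.10])
    particles = true_pose + rng.normal(0.0, 0.05, size=(N, 6))
    weights = np.full(N, 1.0 / N)

    def reweight(particles, z):
        """Gaussian stand-in for the image-based likelihood; the log-domain
        max-subtraction trick avoids numerical underflow."""
        d2 = np.sum((particles - z) ** 2, axis=1)
        logw = -0.5 * d2 / OBS_STD**2
        return np.exp(logw - logw.max())

    def smc_step(particles, weights, z):
        # 1) Predict: propagate particles through a random-walk motion model.
        particles = particles + rng.normal(0.0, PROCESS_STD, particles.shape)
        # 2) Update: reweight each particle by the measurement likelihood.
        weights = weights * reweight(particles, z)
        weights /= weights.sum()
        # 3) Resample when the effective sample size degenerates.
        if 1.0 / np.sum(weights**2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        return particles, weights

    for _ in range(50):
        z = true_pose + rng.normal(0.0, OBS_STD, 6)   # simulated observation
        particles, weights = smc_step(particles, weights, z)

    print("pose estimate:", np.average(particles, axis=0, weights=weights))
    ```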

    Hybrid Architectures for Object Pose and Velocity Tracking at the Intersection of Kalman Filtering and Machine Learning

    The study of object perception algorithms is fundamental for the development of robotic platforms capable of planning and executing actions involving objects with high precision, reliability and safety. Indeed, this topic has been vastly explored in both the robotics and computer vision research communities using diverse techniques, ranging from classical Bayesian filtering to more modern Machine Learning techniques, and complementary sensing modalities such as vision and touch. Recently, the ever-growing availability of tools for synthetic data generation has substantially increased the adoption of Deep Learning for both 2D tasks, such as object detection and segmentation, and 6D tasks, such as object pose estimation and tracking. The proposed methods exhibit interesting performance on computer vision benchmarks and robotic tasks, e.g. using object pose estimation for grasp planning purposes. Nonetheless, they generally do not consider useful information connected with the physics of the object motion or the peculiarities and requirements of robotic systems. Examples are the necessity to provide well-behaved output signals for robot motion control and the possibility to integrate modelling priors on the motion of the object as well as algorithmic priors. These help exploit the temporal correlation of the object poses, handle pose uncertainties and mitigate the effect of outliers. Most of these concepts are considered in classical approaches, e.g. from the Bayesian and Kalman filtering literature, which, however, are not as powerful as Deep Learning in handling visual data. As a consequence, the development of hybrid architectures that combine the best features from both worlds is particularly appealing in a robotic setting. Motivated by these considerations, in this Thesis I aimed at devising hybrid architectures for object perception, focusing on the task of object pose and velocity tracking. The proposed architectures use Kalman filtering supported by state-of-the-art Deep Neural Networks to track the 6D pose and velocity of objects from images. The devised solutions exhibit state-of-the-art performance and increased modularity, and do not require training to implement the actual tracking behaviors. Furthermore, they can track even fast object motions despite the possibly non-negligible inference times of the adopted neural networks. Also, by relying on data-driven Kalman filtering, I explored a paradigm that makes it possible to track the state of systems that cannot easily be modeled analytically. Specifically, I used this approach to learn the measurement model of soft 3D tactile sensors and address the problem of tracking the sliding motion of hand-held objects.
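
    As a concrete picture of such a hybrid architecture, the sketch below runs a constant-velocity Kalman filter whose measurements come from a (here simulated) deep pose estimator: the motion prior smooths the network's noisy outputs and recovers the velocity as part of the state. Orientation handling and the actual network are omitted, and all matrices and noise levels are illustrative choices, not the Thesis's.

    ```python
    import numpy as np

    dt = 1.0 / 30.0  # camera frame period

    # Constant-velocity model over 3D position; orientation would be handled
    # analogously (e.g. with a quaternion-aware filter), omitted for brevity.
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # x' = x + v * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # DNN measures position only

    Q = np.diag([1e-4] * 3 + [1e-3] * 3)  # process noise
    R = np.diag([4e-4] * 3)               # DNN measurement noise (tuned/learned)

    x = np.zeros(6)                       # state: [position, velocity]
    P = np.eye(6)

    def kf_step(x, P, z):
        # Predict with the constant-velocity motion prior.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the neural network's pose measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        return x, P

    # A real system would call a deep pose estimator per frame; here we
    # simulate its noisy output for an object moving at constant velocity.
    rng = np.random.default_rng(1)
    true_pos, true_vel = np.zeros(3), np.array([0.2, 0.0, -0.1])
    for _ in range(100):
        true_pos = true_pos + true_vel * dt
        z = true_pos + rng.normal(0.0, 0.02, 3)   # simulated DNN measurement
        x, P = kf_step(x, P, z)

    print("position:", x[:3], "velocity:", x[3:])
    ```

    Running the predict step alone on frames where the network's output is not yet available is one way such a filter can bridge non-negligible inference times while still emitting a well-behaved signal for motion control.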

    Learning Deep Features for Robotic Inference from Physical Interactions

    In order to effectively handle multiple tasks that are not pre-defined, a robotic agent needs to automatically map its high-dimensional sensory inputs into useful features. As a solution, feature learning has empirically shown substantial improvements in obtaining representations that are generalizable to different tasks, compared to feature engineering approaches, but it requires a large amount of data and computational capacity. These challenges are specifically relevant in robotics due to the low signal-to-noise ratios inherent to robotic data, and to the cost typically associated with collecting this type of input. In this paper, we propose a deep probabilistic method based on Convolutional Variational Auto-Encoders (CVAEs) to learn visual features suitable for interaction and recognition tasks. We run our experiments on a self-supervised robotic sensorimotor dataset. Our data was acquired with the iCub humanoid and is based on a standard object collection, thus being readily extensible. We evaluated the learned features in terms of usability for 1) object recognition, 2) capturing the statistics of the effects, and 3) planning. In addition, where applicable, we compared the performance of the proposed architecture with other state-of-the-art models. These experiments demonstrate that our model is capable of capturing the functional statistics of action and perception (i.e., images) and performs better than existing baselines, without requiring millions of samples or any hand-engineered features.
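
    For concreteness, here is a minimal convolutional VAE of the kind described, in PyTorch; the layer sizes, latent dimension and 64x64 input resolution are illustrative choices, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvVAE(nn.Module):
        """Minimal convolutional VAE for 64x64 RGB frames (illustrative sizes)."""

        def __init__(self, latent_dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
            self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
            self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
            self.decoder = nn.Sequential(
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            # Reparameterization trick: sample z while keeping gradients.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(self.fc_dec(z)), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # Reconstruction term plus KL divergence to the standard normal prior.
        rec = F.binary_cross_entropy(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kld

    model = ConvVAE()
    x = torch.rand(8, 3, 64, 64)          # a batch of (simulated) camera frames
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    loss.backward()
    ```

    After training, the mean vector mu serves as the learned feature for downstream recognition or planning tasks.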