
    Haptic Exploration of Unknown Objects for Robust In-Hand Manipulation

    Human-like robot hands provide the flexibility to manipulate the variety of objects found in unstructured environments. Knowledge of object properties and of the motion trajectory is required, but often unavailable in real-world manipulation tasks. Although it is possible to grasp and manipulate unknown objects, an uninformed grasp leads to inferior stability, accuracy, and repeatability of the manipulation. A central challenge of in-hand manipulation in unstructured environments is therefore to acquire this information safely and efficiently. We propose an in-hand manipulation framework that assumes no prior information about the object or the motion; instead, it extracts the object properties through a novel haptic exploration procedure and learns the motion from demonstration using dynamical movement primitives. We evaluate our approach through unknown-object manipulation experiments with a human-like robot hand. The results show that haptic exploration significantly improves manipulation robustness and accuracy compared to the virtual spring framework, a baseline method widely used for grasping unknown objects.
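
    The abstract does not spell out the DMP formulation used; as a rough illustration, the sketch below implements a minimal one-dimensional discrete dynamical movement primitive that learns its forcing term from a single demonstrated trajectory and can replay the motion toward a new goal. All names and gain values (alpha, beta, n_basis) are illustrative assumptions, not taken from the paper.

```python
# A minimal 1-D discrete DMP sketch: a spring-damper transformation system
# plus a learned forcing term. Gains and basis parameters are assumptions.
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.alpha, self.beta, self.alpha_x = alpha, beta, alpha_x
        self.centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres over phase
        h = 1.0 / np.diff(self.centers) ** 2
        self.widths = np.append(h, h[-1])

    def _features(self, x):
        psi = np.exp(-self.widths * (x - self.centers) ** 2)
        return psi * x / psi.sum()

    def fit(self, y, dt):
        """Learn forcing-term weights from one demonstrated trajectory y(t)."""
        tau = len(y) * dt
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y)) * dt / tau)      # canonical phase
        f_target = tau**2 * ydd - self.alpha * (self.beta * (y[-1] - y) - tau * yd)
        Phi = np.stack([self._features(xi) for xi in x])
        self.weights, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)
        self.y0, self.g, self.tau = y[0], y[-1], tau

    def rollout(self, dt, goal=None):
        """Replay the learned motion, optionally generalizing to a new goal."""
        g = self.g if goal is None else goal
        y, yd, x, out = self.y0, 0.0, 1.0, []
        while x > 1e-3:
            f = self._features(x) @ self.weights
            ydd = (self.alpha * (self.beta * (g - y) - self.tau * yd) + f) / self.tau**2
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x / self.tau * dt
            out.append(y)
        return np.array(out)
```

    A demonstration would be encoded with dmp.fit(demo, dt) and generalized with dmp.rollout(dt, goal=new_goal); one such primitive per task dimension is the usual arrangement.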

    Learning by Demonstration and Robust Control of Dexterous In-Hand Robotic Manipulation Skills

    Dexterous robotic manipulation of unknown objects can open the way to novel tasks and applications of robots in semi-structured and unstructured settings, from advanced industrial manufacturing to the exploration of harsh environments. However, it is challenging for at least three reasons: the desired motion of the object might be too complex to describe analytically; precise models of the manipulated objects are not available; and the controller must simultaneously ensure both a robust grasp and an effective in-hand motion. To address these issues, we propose to learn in-hand robotic manipulation tasks from human demonstrations, using Dynamical Movement Primitives (DMPs), and to reproduce them with a robust compliant controller based on the Virtual Springs Framework (VSF), which employs real-time feedback of the contact forces measured on the robot fingertips. With this solution, the generalization capabilities of DMPs transfer successfully to the dexterous in-hand manipulation problem: we demonstrate this with real-world experiments of in-hand translation and rotation of unknown objects.
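
    The abstract does not give the VSF control law, so the following is only a minimal sketch of the general virtual-spring idea: each fingertip is pulled toward a reference point by a spring-damper, and the spring's rest position is shifted along the inward grasp normal until the measured fingertip force reaches a set-point. Gains, the force target, and the reference-update rule are all assumptions, not the actual VSF implementation.

```python
# A hedged sketch of a virtual-spring fingertip controller; gains, the force
# set-point, and the reference-update rule are illustrative assumptions.
import numpy as np

def virtual_spring_force(x, v, x_ref, k=200.0, d=10.0):
    """Cartesian force command for one fingertip (spring-damper law)."""
    return k * (x_ref - x) - d * v

def update_reference(x_ref, inward_normal, f_measured, f_target=2.0, gain=1e-4):
    """Shift the spring rest point along the inward grasp normal until the
    sensed contact force matches the desired grasp force (integral action)."""
    return x_ref + gain * (f_target - f_measured) * inward_normal
```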

    Comparing Single Touch to Dynamic Exploratory Procedures for Robotic Tactile Object Recognition


    Grasp Stability Prediction for a Dexterous Robotic Hand Combining Depth Vision and Haptic Bayesian Exploration

    Grasp stability prediction for unknown objects is crucial to enable autonomous robotic manipulation in unstructured environments. Even if prior information about the object is available, real-time local exploration might be necessary to mitigate object-modelling inaccuracies. This paper presents an approach to predicting safe grasps of unknown objects using depth vision and a dexterous robot hand equipped with tactile feedback. Our approach assumes no prior knowledge about the objects. First, an object pose estimate is obtained from RGB-D sensing; then, the object is explored haptically to maximise a given grasp metric. We compare two probabilistic methods (standard and unscented Bayesian Optimisation) against random exploration (a uniform grid search). Our experimental results demonstrate that the probabilistic methods provide confident predictions after a limited number of exploratory observations, and that unscented Bayesian Optimisation finds safer grasps by taking into account the uncertainty in robot sensing and grasp execution.
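
    As a sketch of the probabilistic exploration described here, the following shows a generic Bayesian-optimisation loop over a single grasp parameter with a Gaussian-process surrogate and the Expected Improvement acquisition. The grasp_metric stub, bounds, kernel, and acquisition choice are illustrative assumptions; the unscented variant the paper also evaluates is not reproduced.

```python
# A generic BO loop for grasp exploration: fit a GP to past (parameter, score)
# pairs, pick the next trial by Expected Improvement, execute, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best, xi=0.01):
    sigma = np.maximum(sigma, 1e-9)                     # avoid division by zero
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt_grasp(grasp_metric, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))
    y = np.array([grasp_metric(x[0]) for x in X])       # noisy grasp executions
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = np.linspace(*bounds, 200).reshape(-1, 1)
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, grasp_metric(x_next[0]))       # execute and score the grasp
    return X[np.argmax(y)], y.max()
```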

    Grasping Robot Integration and Prototyping: The GRIP Software Framework


    Speakers Raise their Hands and Head during Self-Repairs in Dyadic Conversations

    People often encounter difficulties in building shared understanding during everyday conversation. The most common symptom of these difficulties is the self-repair, when a speaker restarts, edits, or amends their utterance mid-turn. Previous work has focused on the verbal signals of self-repair, i.e. speech disfluencies (filled pauses, truncated words and phrases, word substitutions or reformulations), and computational tools now exist that can automatically detect these verbal phenomena. However, face-to-face conversation also exploits rich non-verbal resources, and previous research suggests that self-repairs are associated with distinct hand movement patterns. This paper extends those results by exploring the head and hand movements of both speakers and listeners using two motion parameters: height (vertical position) and 3D velocity. The results show that speech sequences containing self-repairs are distinguishable from fluent ones: speakers raise their hands and head more (and move more rapidly) during self-repairs. We obtain these results by analysing a corpus of 13 unscripted dialogues, and we discuss how these findings could support the creation of improved cognitive artificial systems for natural human-machine and human-robot interaction.
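
    The two motion parameters the study analyses, height and 3D velocity, are straightforward to compute from motion-capture traces. A minimal sketch, assuming a generic T × 3 position array and a fixed sampling rate (both placeholders):

```python
# Per-frame height and 3-D speed from a (T, 3) trace of hand or head
# positions; the sampling rate and axis convention are assumptions.
import numpy as np

def motion_features(positions, fps=100.0):
    """positions: (T, 3) array of x, y, z in metres. Returns per-frame
    height (z) and 3-D speed (magnitude of finite-difference velocity)."""
    height = positions[:, 2]
    velocity = np.gradient(positions, 1.0 / fps, axis=0)   # m/s per axis
    speed = np.linalg.norm(velocity, axis=1)
    return height, speed
```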

    A Soft Tactile Sensor Based on Magnetics and Hybrid Flexible-Rigid Electronics

    Tactile sensing is crucial for robots to manipulate objects successfully. However, integrating tactile sensors into robotic hands is still challenging, mainly due to the need to cover small, multi-curved surfaces with several components that must be miniaturized. In this paper, we report the design of a novel magnetic tactile sensor to be integrated into the robotic hand of the humanoid robot Vizzy. We designed and fabricated a flexible 4 × 2 matrix of silicon chips of magnetoresistive spin-valve sensors that, coupled with a single small magnet, can measure contact forces from 0.1 to 5 N at multiple locations over the surface of a robotic fingertip. This design is innovative with respect to previous work in the literature, and it is made possible by the careful engineering and miniaturization of the custom-made electronic components that we employ. In addition, we characterize the behavior of the sensor through a COMSOL simulation, which can be used to generate optimized designs for sensors with different geometries.
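
    The abstract does not describe how raw spin-valve readings are converted into forces. Purely as an illustration of per-taxel calibration against the stated 0.1 to 5 N range, here is a hypothetical polynomial mapping; the model form, degree, and clipping are assumptions, not the paper's characterisation.

```python
# Hypothetical per-taxel force estimation for a 4 x 2 magnetic tactile array:
# each sensor reading is mapped to a normal force through a polynomial fitted
# against a reference force sensor. Model and degree are assumptions.
import numpy as np

def fit_taxel_calibration(readings, forces, degree=3):
    """Fit one polynomial per taxel from paired (reading, reference force) data.
    readings: (N, 8) raw samples; forces: (N,) reference forces in newtons."""
    return [np.polynomial.Polynomial.fit(readings[:, i], forces, degree)
            for i in range(readings.shape[1])]

def estimate_forces(sample, calibrations, f_min=0.1, f_max=5.0):
    """Map one 8-element sensor sample to per-taxel forces in newtons,
    clipped to the sensor's stated 0.1-5 N measurement range."""
    f = np.array([cal(r) for cal, r in zip(calibrations, sample)])
    return np.clip(f, f_min, f_max)
```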

    Learning Deep Features for Robotic Inference from Physical Interactions

    In order to effectively handle multiple tasks that are not pre-defined, a robotic agent needs to automatically map its high-dimensional sensory inputs into useful features. Feature learning has empirically shown substantial improvements over feature engineering in obtaining representations that generalize across tasks, but it requires large amounts of data and computational capacity. These challenges are especially relevant in robotics, due to the low signal-to-noise ratios inherent to robotic data and the cost typically associated with collecting this type of input. In this paper, we propose a deep probabilistic method based on Convolutional Variational Auto-Encoders (CVAEs) to learn visual features suitable for interaction and recognition tasks. We run our experiments on a self-supervised robotic sensorimotor dataset, acquired with the iCub humanoid and based on a standard object collection, thus being readily extensible. We evaluate the learned features in terms of usability for 1) object recognition, 2) capturing the statistics of action effects, and 3) planning. In addition, where applicable, we compare the performance of the proposed architecture with other state-of-the-art models. These experiments demonstrate that our model captures the functional statistics of action and perception (i.e. images) and performs better than existing baselines, without requiring millions of samples or any hand-engineered features.
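
    As a sketch of the model family the paper builds on, below is a minimal convolutional VAE in PyTorch for 64 × 64 RGB images. Layer sizes, the latent dimension, and the loss weighting are illustrative, not the authors' architecture.

```python
# A minimal convolutional VAE: encoder to a diagonal Gaussian latent,
# reparameterised sample, transposed-convolution decoder, ELBO loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(                        # 3x64x64 -> 128x4x4
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(128, 128, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.fc_mu = nn.Linear(128 * 4 * 4, latent_dim)
        self.fc_logvar = nn.Linear(128 * 4 * 4, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 4 * 4)
        self.dec = nn.Sequential(                        # 128x4x4 -> 3x64x64
            nn.ConvTranspose2d(128, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        recon = self.dec(self.fc_dec(z).view(-1, 128, 4, 4))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    """Reconstruction term plus beta-weighted KL divergence to N(0, I)."""
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld
```

    After training, the encoder mean mu serves as the learned feature vector that downstream recognition or planning modules would consume.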

    Benchmarking the Grasping Capabilities of the iCub Hand with the YCB Object and Model Set

    This letter reports an evaluation of the iCub's grasping capabilities, performed using the YCB Object and Model Set. The goal is to understand what kinds of objects the iCub's dexterous hand can grasp, and with what degree of robustness and flexibility, given the best possible control strategy. To that end, the robot fingers are directly controlled by a human expert using a dataglove: in other words, the human brain is employed as the best possible controller. Through this technique, we provide a baseline for researchers who want to evaluate the performance of their grasping controllers. By using a widespread robotic platform and a publicly available set of objects, we believe that many researchers can directly benefit from this resource; moreover, what we propose is a general methodology for benchmarking grasping and manipulation that can be applied to any dexterous robotic hand.

    Affordances in Psychology, Neuroscience, and Robotics: A Survey

    The concept of affordances appeared in psychology in the late 1960s as an alternative perspective on the visual perception of the environment. Its revolutionary intuition was that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Over the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of affordance theory, then report the most significant supporting evidence from psychology and neuroscience, and finally review the most relevant applications of the concept in robotics.