Acquisition and Interpretation of 3-D Sensor Data from Touch
Acquisition of 3-D scene information has focused on either passive 2-D imaging methods (stereopsis, structure from motion, etc.) or 3-D range sensing methods (structured lighting, laser scanning, etc.). Little work has been done on using active touch sensing with a multi-fingered robotic hand to acquire scene descriptions, even though it is a well-developed human capability. Touch sensing differs from more passive sensing modalities such as vision in a number of ways. First, a multi-fingered robotic hand with touch sensors can probe, move, and change its environment; this imposes a level of control on the sensing that makes it typically more difficult to manage than traditional passive sensors, for which active control is not an issue. Second, touch sensing generates far less data than vision methods; this is especially intriguing in light of psychological evidence that humans can recover shape and a number of other object attributes very reliably using touch alone. Future robotic systems will need to use dextrous robotic hands for tasks such as grasping, manipulation, assembly, inspection, and object recognition. This paper describes our use of touch sensing as part of a larger system we are building for 3-D shape recovery and object recognition using touch and vision methods. It focuses on three exploratory procedures we have built to acquire and interpret sparse 3-D touch data: grasping by containment, planar surface exploration, and surface contour exploration. Experimental results for each of these procedures are presented.
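The planar-surface-exploration procedure above interprets sparse contact points. A minimal sketch of that idea, assuming the contacts are fit with a least-squares plane (the SVD formulation and the sample points are illustrative, not the paper's implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to sparse 3-D contact points.

    Returns (normal, d) with ||normal|| = 1 such that
    normal . p + d ~= 0 for points p on the plane.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the direction of least variance,
    # i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

# Hypothetical sparse touch samples lying on the plane z = 1.
pts = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1), (0.5, 0.3, 1)]
normal, d = fit_plane(pts)
```

With only a handful of contacts, a least-squares fit like this degrades gracefully as noisy touch readings are added, which is one reason plane fitting suits sparse tactile data.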
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL). Website: https://sites.google.com/view/more-than-a-feelin
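The "iteratively selecting the most promising actions" step above can be sketched as a sample-and-score loop, where a learned outcome predictor scores candidate adjustments and the best one is executed. The candidate parameterization and the toy scoring function below are assumptions for illustration, standing in for the paper's multimodal network:

```python
import random

def select_regrasp(score_fn, current_grasp, n_candidates=64, rng=None):
    """Greedy action selection: sample candidate grasp adjustments,
    score each with a predictor, and return the highest-scoring one.

    score_fn(grasp, action) -> predicted grasp-success score; here it
    is a stand-in for a learned action-conditional outcome model.
    """
    rng = rng or random.Random(0)
    best_action, best_score = None, float("-inf")
    for _ in range(n_candidates):
        # Hypothetical adjustment: small translation (dx, dy) in meters
        # and a small in-plane rotation dtheta in radians.
        action = (rng.uniform(-0.01, 0.01),
                  rng.uniform(-0.01, 0.01),
                  rng.uniform(-0.1, 0.1))
        s = score_fn(current_grasp, action)
        if s > best_score:
            best_action, best_score = action, s
    return best_action, best_score

# Toy predictor (hypothetical): prefers the smallest adjustment.
toy_score = lambda grasp, a: -(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
action, score = select_regrasp(toy_score, current_grasp=None)
```

In practice such a loop would be repeated after each executed adjustment, with fresh visuo-tactile observations fed back into the predictor.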
Proximity sensor for thin wire recognition and manipulation
In robotic grasping and manipulation, knowledge of a precise object pose is a key issue, and it becomes even more important as the objects, and hence the grasping areas, become smaller. This is the case in Deformable Linear Object manipulation applications, where the robot must autonomously work with thin wires whose pose and shape estimation can be difficult given the limited object size and possible occlusions. In such applications, a vision-based system may not be enough to obtain accurate pose and shape estimates. In this work the authors propose a Time-of-Flight pre-touch sensor, integrated with a previously designed tactile sensor, for accurate estimation of thin wire pose and shape. The paper presents the design and characterization of the proposed sensor. Moreover, a specific object scanning and shape detection algorithm is presented. Experimental results support the proposed methodology, showing good performance. Hardware designs and software applications are freely accessible to the reader.
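One simple way the scanning-and-detection step above could work is to sweep the Time-of-Flight sensor across the wire and, in each scan row, mark the reading that is clearly closer than the background as a wire point; the chained points then approximate the wire's shape. The median-background heuristic and the threshold are assumptions, not the authors' published algorithm:

```python
def detect_wire_points(scan_rows, threshold):
    """Detect a thin wire in a grid of ToF distance readings.

    For each scan row, take the column with the minimum distance as the
    wire candidate; accept it only if it is at least `threshold` closer
    than the background (estimated as the row median).  Returns a list
    of (row, col) points tracing the wire's shape.
    """
    points = []
    for r, row in enumerate(scan_rows):
        c = min(range(len(row)), key=row.__getitem__)
        background = sorted(row)[len(row) // 2]  # median distance as background
        if background - row[c] > threshold:
            points.append((r, c))
    return points

# Synthetic scan (hypothetical units): background at distance 50,
# wire at distance 10, drifting one column between rows.
rows = [[50] * 10, [50] * 10]
rows[0][2] = 10
rows[1][3] = 10
wire = detect_wire_points(rows, threshold=20)
```

Fitting a curve through the returned points would then yield the continuous wire shape needed for grasp planning.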
Editorial: Perceiving and Acting in the real world: from neural activity to behavior
The interaction between perception and action represents one of the pillars of human evolutionary success. Our interactions with the surrounding world involve a variety of behaviors, almost always including movements of the eyes and hands. Such actions rely on neural mechanisms that must process an enormous amount of information in order to generate appropriate motor commands. Yet, compared to the great advancements in the field of perception for cognition, the neural underpinnings of how we control our movements, as well as the interactions between perception and motor control, remain elusive. With this research topic we provide a framework for: 1) the perception of real objects and shapes using visual and haptic information, 2) the reference frames for action and perception, and 3) how perceived target properties are translated into goal-directed actions and object manipulation. The studies in this special issue employ a variety of methodologies that include behavioural kinematics, neuroimaging, transcranial magnetic stimulation, and patient cases. Here we provide a brief summary and commentary on the articles included in this research topic.