Dexterous manipulation of unknown objects using virtual contact points
The manipulation of unknown objects is a problem of special interest in robotics, since it is not always possible to have exact models of the objects with which the robot interacts. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information, obtained during the manipulation process by reasoning about the desired and real positions of the fingertips. The desired fingertip positions are not physically reachable, since they lie in the interior of the manipulated object; they are therefore virtual positions with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated and rotated without needing to know their shape or any other physical property.
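The virtual-contact-point idea above can be sketched as a control error between an intentionally unreachable fingertip target inside the object and the contact actually measured on its surface. This is a minimal illustration, assuming the paper's scheme reduces to such an error vector; the function name and return values are hypothetical, not the authors' exact formulation.

```python
import numpy as np

def virtual_contact_error(desired_fingertip, measured_contact):
    """Error between a virtual fingertip target placed inside the
    object and the real contact point reported by the tactile sensor.

    desired_fingertip: 3-vector, commanded (virtual) fingertip position.
    measured_contact:  3-vector, contact position on the object surface.
    Returns the error vector and its magnitude, which acts as a
    virtual penetration depth driving the grasp controller.
    """
    error = np.asarray(desired_fingertip, float) - np.asarray(measured_contact, float)
    return error, float(np.linalg.norm(error))

# Example: the virtual target sits 5 mm inside the object along -z,
# so the controller keeps pressing the fingertip against the surface.
err, depth = virtual_contact_error([0.0, 0.0, 0.010], [0.0, 0.0, 0.015])
```

Because the target is never reachable, the error never vanishes, which maintains contact force without requiring an object model.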
Dexterous In-Hand Manipulation of Slender Cylindrical Objects through Deep Reinforcement Learning with Tactile Sensing
Continuous in-hand manipulation is an important physical interaction skill,
where tactile sensing provides indispensable contact information to enable
dexterous manipulation of small objects. This work proposed a framework for
end-to-end policy learning with tactile feedback and sim-to-real transfer,
which achieved fine in-hand manipulation that controls the pose of a thin
cylindrical object, such as a long stick, to track various continuous
trajectories through multiple contacts of three fingertips of a dexterous robot
hand with tactile sensor arrays. We estimated the central contact position
between the stick and each fingertip from the high-dimensional tactile
information and showed that the learned policies achieved effective
manipulation performance with the processed tactile feedback. The policies were
trained with deep reinforcement learning in simulation and successfully
transferred to real-world experiments, using coordinated model calibration and
domain randomization. We evaluated the effectiveness of tactile information via
comparative studies and validated the sim-to-real performance through
real-world experiments.
Comment: 10 pages, 12 figures, submitted to Transactions on Mechatronics
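The abstract above describes estimating the central contact position between the stick and each fingertip from high-dimensional tactile array data. A common way to do this, sketched here as an assumption about the processing step rather than the paper's exact method, is a pressure-weighted centroid over the taxel grid; the taxel pitch value and function name are illustrative.

```python
import numpy as np

def contact_centroid(pressure, taxel_pitch=0.0034):
    """Pressure-weighted centroid of a tactile array reading.

    pressure:    2-D array of taxel pressure values.
    taxel_pitch: taxel spacing in metres (illustrative value).
    Returns (x, y) of the estimated contact centre in the sensor
    frame, or None when no taxel registers contact.
    """
    p = np.asarray(pressure, float)
    total = p.sum()
    if total <= 0:
        return None  # no contact detected
    rows, cols = np.indices(p.shape)
    y = (rows * p).sum() / total * taxel_pitch
    x = (cols * p).sum() / total * taxel_pitch
    return x, y

# A single active taxel at row 2, column 3 places the centroid there.
p = np.zeros((4, 4))
p[2, 3] = 1.0
xy = contact_centroid(p)
```

Collapsing the array to a single 2-D point is what makes the tactile feedback low-dimensional enough to feed into a reinforcement-learning policy.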
A system for programming and controlling a multisensor robotic hand
A system for programming and controlling a multisensor robotic hand (Utah-MIT Hand) is described. Using this system, a number of autonomous tasks have been implemented that are easily programmed and combine hand-arm actuation with force, position, and tactile sensing. The system is controlled at the software level by a programming language, DIAL, that provides an easy method for expressing the parallel operation of robotic devices. It also provides a convenient way to implement task-level scripts that can then be bound to particular sensors, actuators, and methods for accomplishing a generic grasping or manipulation task. Experiments using the system to pick up and pour from a pitcher, unscrew a lightbulb, and explore planar surfaces are presented.
Ground Robotic Hand Applications for the Space Program study (GRASP)
This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating any ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.
An Integrated System for Dextrous Manipulation
This paper describes an integrated system for dextrous manipulation using a Utah-MIT hand that allows one to look at the higher levels of control in a number of grasping and manipulation tasks. The system consists of low-level system primitives for grasping, integrated hand and robotic arm movement, tactile sensors mounted on the fingertips, sensing primitives that utilize joint position, tendon force, and tactile array feedback, and a high-level programming environment in which task-level scripts can be created for grasping and manipulation tasks. A number of grasping and manipulation tasks implemented with this system are described.
A survey of dextrous manipulation
Technical report. The development of mechanical end effectors capable of dextrous manipulation is a rapidly growing and quite successful field of research. It has in some sense put the focus on control issues, in particular, how to control these remarkably humanlike manipulators to perform the deft movement that we take for granted in the human hand. The kinematic and control issues surrounding manipulation research are clouded by more basic concerns, such as: what is the goal of a manipulation system, is the anthropomorphic or functional design methodology appropriate, and to what degree does the control of the manipulator depend on other sensory systems. This paper examines the potential of creating a general-purpose, anthropomorphically motivated, dextrous manipulation system. The discussion focuses on features of the human hand that permit its general usefulness as a manipulator. A survey of machinery designed to emulate these capabilities is presented. Finally, the tasks of grasping and manipulation are examined from the control standpoint to suggest a control paradigm which is descriptive, yet flexible and computationally efficient.
Optical Proximity Sensing for Pose Estimation During In-Hand Manipulation
During in-hand manipulation, robots must be able to continuously estimate the
pose of the object in order to generate appropriate control actions. The
performance of algorithms for pose estimation hinges on the robot's sensors
being able to detect discriminative geometric object features, but previous
sensing modalities are unable to make such measurements robustly. The robot's
fingers can occlude the view of environment- or robot-mounted image sensors,
and tactile sensors can only measure at the local areas of contact. Motivated
by fingertip-embedded proximity sensors' robustness to occlusion and ability to
measure beyond the local areas of contact, we present the first evaluation of
proximity sensor based pose estimation for in-hand manipulation. We develop a
novel two-fingered hand with fingertip-embedded optical time-of-flight
proximity sensors as a testbed for pose estimation during planar in-hand
manipulation. Here, the in-hand manipulation task consists of the robot moving
a cylindrical object from one end of its workspace to the other. We
demonstrate, with statistical significance, that proximity-sensor based pose
estimation via particle filtering during in-hand manipulation: a) exhibits 50%
lower average pose error than a tactile-sensor based baseline; b) empowers a
model predictive controller to achieve 30% lower final positioning error
compared to when using tactile-sensor based pose estimates.
Comment: 8 pages, 6 figures
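The particle-filter pose estimation from proximity measurements described above can be sketched in a minimal planar form: each fingertip time-of-flight sensor reports a range to the cylinder surface, particles over candidate cylinder centres are weighted by a Gaussian measurement likelihood, then resampled. The geometry, noise model, and all parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_update(particles, weights, sensor_pos, measured_range,
              radius=0.01, sigma=0.002):
    """One measurement update of a particle filter estimating the
    planar centre of a cylinder from a proximity (range) sensor.

    particles:      (N, 2) candidate cylinder-centre positions.
    weights:        (N,) particle weights.
    sensor_pos:     2-vector, known pose of the time-of-flight sensor.
    measured_range: measured distance from sensor to cylinder surface.
    Weights by a Gaussian range likelihood, then resamples; returns
    the resampled particles with uniform weights.
    """
    expected = np.linalg.norm(particles - sensor_pos, axis=1) - radius
    weights = weights * np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Particles spread around the true centre (0.05, 0.00).
particles = rng.normal([0.05, 0.0], 0.01, size=(500, 2))
weights = np.full(500, 1.0 / 500)
# Two sensors at known poses each report a range to the surface.
for sensor, r in [(np.array([0.0, 0.0]), 0.04),
                  (np.array([0.05, 0.05]), 0.04)]:
    particles, weights = pf_update(particles, weights, sensor, r)
estimate = particles.mean(axis=0)
```

Unlike tactile sensing, which only constrains the pose at the contact patches, each proximity reading constrains the centre to an annulus around the sensor, so a few fingertip sensors can localize the object without touching it.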