Haptic object recognition using a multi-fingered dextrous hand
The use of a dextrous, multifingered hand for high-level object recognition tasks is considered. The paradigm is model-based recognition in which the objects are modeled and recovered as superquadrics, which are shown to have a number of important attributes that make them well suited for such a task. Experiments have been performed to recover the shape of objects using sparse contact-point data from the hand, with promising results. The authors also propose an approach to using tactile data in conjunction with the dextrous hand to build a library of grasping and exploration primitives that can be used in recognizing and grasping more complex multipart objects.
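The superquadric model recovery described above can be illustrated with a minimal sketch: a superquadric is defined by an inside-outside function whose value is 1 on the surface, so sparse contact points can be fit by searching for parameters that drive that function toward 1 at every contact. The fixed exponents and coarse grid search below are simplifying assumptions for illustration, not the paper's actual fitting procedure.

```python
import numpy as np

def inside_outside(p, a, e1, e2):
    """Superquadric inside-outside function F(p); F == 1 on the surface."""
    x, y, z = np.abs(p[..., 0]), np.abs(p[..., 1]), np.abs(p[..., 2])
    return ((x / a[0]) ** (2 / e2) + (y / a[1]) ** (2 / e2)) ** (e2 / e1) \
           + (z / a[2]) ** (2 / e1)

def fit_scales(contacts, e1=1.0, e2=1.0):
    """Coarse grid search for axis scales minimizing sum (F - 1)^2.

    Real systems use nonlinear least squares over all 11+ superquadric
    parameters; exponents are held fixed here to keep the sketch short.
    """
    grid = np.linspace(0.5, 2.0, 16)
    best, best_err = None, np.inf
    for a1 in grid:
        for a2 in grid:
            for a3 in grid:
                err = np.sum((inside_outside(contacts, (a1, a2, a3), e1, e2) - 1.0) ** 2)
                if err < best_err:
                    best, best_err = (a1, a2, a3), err
    return best

# Synthetic sparse contacts on a unit sphere (stand-ins for fingertip data).
pts = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)
scales = fit_scales(pts)  # recovered axis scales should be near (1, 1, 1)
```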
Mapping haptic exploratory procedures to multiple shape representations
Research in human haptics has revealed a number of exploratory procedures (EPs) that are used in determining attributes of an object, particularly shape. This research has been used as a paradigm for building an intelligent robotic system that can perform shape recognition from touch sensing. In particular, a number of mappings between EPs and shape modeling primitives have been found. The choice of shape primitive for each EP is discussed, and results from experiments with a Utah-MIT dextrous hand system are presented. A vision algorithm to complement active touch sensing for the task of autonomous shape recovery is also presented.
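The EP-to-primitive mapping described above can be pictured as a simple lookup: each exploratory procedure produces data that is best captured by a particular shape representation. The assignments below are illustrative, drawn from the three EPs named in the companion touch-sensing work in this list, not the paper's exact table.

```python
# Illustrative EP -> shape-primitive lookup (assignments are assumptions).
EP_TO_PRIMITIVE = {
    "grasping_by_containment": "superquadric",  # coarse volumetric shape
    "planar_surface_exploration": "plane",      # face equations
    "surface_contour_exploration": "space_curve",  # edges and contours
}

def primitive_for(ep):
    """Return the shape primitive an EP's data is fit to, if known."""
    return EP_TO_PRIMITIVE.get(ep, "unknown")
```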
An Integrated System for Dextrous Manipulation
This paper describes an integrated system for dextrous manipulation using a Utah-MIT hand that allows one to look at the higher levels of control in a number of grasping and manipulation tasks. The system consists of a number of low-level system primitives for grasping, integrated hand and robotic arm movement, tactile sensors mounted on the fingertips, sensing primitives to utilize joint position, tendon force, and tactile array feedback, and a high-level programming environment that allows task-level scripts to be created for grasping and manipulation tasks. A number of grasping and manipulation tasks implemented with this system are described.
Acquisition and Interpretation of 3-D Sensor Data from Touch
Acquisition of 3-D scene information has focused on either passive 2-D imaging methods (stereopsis, structure from motion, etc.) or 3-D range sensing methods (structured lighting, laser scanning, etc.). Little work has been done in using active touch sensing with a multi-fingered robotic hand to acquire scene descriptions, even though it is a well-developed human capability. Touch sensing differs from other, more passive sensing modalities such as vision in a number of ways. First, a multi-fingered robotic hand with touch sensors can probe, move, and change its environment; this imposes a level of control on the sensing that typically makes it more difficult to use than traditional passive sensors, for which active control is not an issue. Secondly, touch sensing generates far less data than vision methods; this is especially intriguing in light of psychological evidence that shows humans can recover shape and a number of other object attributes very reliably using touch alone. Future robotic systems will need to use dextrous robotic hands for tasks such as grasping, manipulation, assembly, inspection, and object recognition. This paper describes our use of touch sensing as part of a larger system we are building for 3-D shape recovery and object recognition using touch and vision methods. It focuses on three exploratory procedures we have built to acquire and interpret sparse 3-D touch data: grasping by containment, planar surface exploration, and surface contour exploration. Experimental results for each of these procedures are presented.
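The planar-surface-exploration procedure named above can be sketched as a fitting step: given the sparse 3-D contact points the hand collects, recover the plane they lie on. A standard way to do this is total least squares via the SVD of the centered points; the hand and controller interface is omitted, and the points below stand in for probe contacts.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to sparse contact points; return (centroid, unit normal).

    Total least squares: the right singular vector with the smallest
    singular value of the centered point matrix is the plane normal.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Five synthetic contacts on the plane z = 0.
contacts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.5, 0)]
c, n = fit_plane(contacts)  # n should be (0, 0, +/-1)
```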
Haptic Perception with a Robot Hand: Requirements and Realization
This paper first discusses briefly some of the recent ideas of perceptual psychology on the human haptic system, particularly those of J. J. Gibson and of Klatzky and Lederman. Following this introduction, we present some of the requirements of robotic haptic sensing and the results of experiments using a Utah/MIT dexterous robot hand to derive geometric object information using active sensing.
Novel Tactile-SIFT Descriptor for Object Shape Recognition
Using a tactile array sensor to recognize an object often requires multiple touches at different positions. This process is prone to move or rotate the object, which inevitably increases the difficulty of object recognition. To cope with the unknown object movement, this paper proposes a new tactile-SIFT descriptor that extracts features based on gradients in the tactile image to represent objects, making the features invariant to object translation and rotation. The tactile-SIFT segments a tactile image into overlapping subpatches, each of which is represented using a dn-dimensional gradient vector, similar to the classic SIFT descriptor. Tactile-SIFT descriptors obtained from multiple touches form a dictionary of k words, and the bag-of-words method is then used to identify objects. The proposed method has been validated by classifying 18 real objects with data from an off-the-shelf tactile sensor. The parameters of the tactile-SIFT descriptor, including the dimension size dn and the number of subpatches sp, are studied. It is found that the optimal performance is obtained using an 8-D descriptor with three subpatches, taking both classification accuracy and time efficiency into consideration. By employing tactile-SIFT, a recognition rate of 91.33% has been achieved with a dictionary size of 50 clusters using only 15 touches.
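The pipeline described above can be sketched in miniature: cut each tactile image into overlapping subpatches, summarize each subpatch as a gradient-orientation histogram (the 8-D descriptor), quantize descriptors against a dictionary, and accumulate a bag-of-words histogram. The helper names, toy data, and two-word codebook below are illustrative assumptions; the paper builds its dictionary with clustering over many real touches.

```python
import numpy as np

def subpatch_descriptor(patch, bins=8):
    """8-D gradient-orientation histogram of one subpatch, L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    ang = np.arctan2(gy, gx)           # gradient orientation in [-pi, pi]
    mag = np.hypot(gx, gy)             # gradient magnitude as weight
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def bag_of_words(descriptors, dictionary):
    """Histogram of nearest-word indices over all touch descriptors."""
    words = [int(np.argmin(np.linalg.norm(dictionary - d, axis=1)))
             for d in descriptors]
    return np.bincount(words, minlength=len(dictionary))

# Toy example: two overlapping subpatches from one synthetic tactile image.
img = np.tile(np.arange(8.0), (8, 1))          # horizontal intensity ramp
patches = [img[0:4, 0:4], img[2:6, 2:6]]       # overlapping subpatches
descs = [subpatch_descriptor(p) for p in patches]
dictionary = np.stack([descs[0], np.zeros(8)])  # hypothetical 2-word codebook
hist = bag_of_words(descs, dictionary)          # both patches map to word 0
```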
A system for programming and controlling a multisensor robotic hand
A system for programming and controlling a multisensor robotic hand (Utah-MIT Hand) is described. Using this system, a number of easily programmed autonomous tasks combining hand-arm actuation with force, position, and tactile sensing have been implemented. The system is controlled at the software level by a programming language, DIAL, that provides an easy method for expressing the parallel operation of robotic devices. It also provides a convenient way to implement task-level scripts that can then be bound to particular sensors, actuators, and methods for accomplishing a generic grasping or manipulation task. Experiments using the system to pick up and pour from a pitcher, unscrew a lightbulb, and explore planar surfaces are presented.
iCLAP: Shape Recognition by Combining Proprioception and Touch Sensing
The work presented in this paper was partially supported by the Engineering and Physical Sciences Council (EPSRC) Grant (Ref: EP/N020421/1) and the King’s-China Scholarship Council Ph.D. scholarship
Manipulation primitives: A paradigm for abstraction and execution of grasping and manipulation tasks
Sensor-based reactive and hybrid approaches have proven a promising line of study for addressing imperfect knowledge in grasping and manipulation. However, reactive approaches are usually tightly coupled to a particular embodiment, making transfer of knowledge difficult.
This paper proposes a paradigm for modeling and execution of reactive manipulation actions, which makes knowledge transfer to different embodiments possible while retaining the reactive capabilities of the embodiments. The proposed approach extends the idea of control primitives coordinated by a state machine by introducing an embodiment-independent layer of abstraction. Abstract manipulation primitives constitute a vocabulary of atomic, embodiment-independent actions, which can be coordinated using state machines to describe complex actions. To obtain embodiment-specific models, the abstract state machines are automatically translated into embodiment-specific models, so that the full capabilities of each platform can be utilized.
The strength of the manipulation primitives paradigm is demonstrated by developing a set of corresponding embodiment-specific primitives for object transport, including a complex reactive grasping primitive. The robustness of the approach is experimentally studied by emptying a box filled with several unknown objects. The embodiment independence is studied by performing a manipulation task on two different platforms using the same abstract description.
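The core idea above, one abstract state machine over primitive names, translated per embodiment by binding each name to platform-specific code, can be sketched as follows. All state, primitive, and robot names are illustrative; a real system would dispatch to controllers rather than append to a log.

```python
# Abstract description: state -> (primitive name, next state); None terminates.
TRANSPORT = {
    "approach": ("move_to_object", "grasp"),
    "grasp":    ("reactive_grasp", "carry"),
    "carry":    ("move_to_goal", "release"),
    "release":  ("open_hand", None),
}

def run(machine, start, bindings, log):
    """Execute the abstract machine with embodiment-specific bindings."""
    state = start
    while state is not None:
        primitive, state = machine[state]
        bindings[primitive](log)   # dispatch to this embodiment's code

def make_bindings(robot):
    """Hypothetical 'translation' step: bind each abstract primitive name
    to a stand-in implementation for one embodiment."""
    return {name: (lambda log, n=name: log.append(f"{robot}:{n}"))
            for name in ("move_to_object", "reactive_grasp",
                         "move_to_goal", "open_hand")}

# Two embodiments execute the same abstract transport description.
trace_a, trace_b = [], []
run(TRANSPORT, "approach", make_bindings("armA"), trace_a)
run(TRANSPORT, "approach", make_bindings("armB"), trace_b)
```

The abstract layer fixes only the order of primitives; each platform supplies its own reactive implementations behind the shared names.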
An algorithm for segmenting range imagery
This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images, thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.
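Range-image segmentation of the kind described above is commonly done by grouping pixels whose depths vary smoothly and splitting regions at depth discontinuities. The region-growing sketch below illustrates that general idea under simple assumptions (4-connectivity, a fixed depth-jump threshold); it is not the report's actual algorithm and omits the multi-image fusion step.

```python
import numpy as np
from collections import deque

def segment(range_img, max_jump=0.5):
    """Label 4-connected regions whose neighboring depths differ <= max_jump."""
    h, w = range_img.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            next_label += 1                      # start a new region
            labels[sy, sx] = next_label
            q = deque([(sy, sx)])
            while q:                             # breadth-first region growth
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(range_img[ny, nx] - range_img[y, x]) <= max_jump):
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
    return labels

# Toy range image: a near surface (depth ~1) beside a far surface (depth ~5).
img = np.array([[1.0, 1.0, 5.0],
                [1.0, 1.1, 5.0],
                [1.0, 1.0, 5.1]])
labels = segment(img)  # two regions split at the depth discontinuity
```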