
    Improving grasping forces during the manipulation of unknown objects

    Many of the solutions proposed for the object manipulation problem rely on prior knowledge of the object's features. The approach proposed in this paper provides a simple geometrical method to securely manipulate an unknown object based only on tactile and kinematic information. The tactile and kinematic data obtained during the manipulation are used to recognize the object shape (at least the local object curvature), making it possible to improve the grasping forces when this information is added to the manipulation strategy. The approach has been fully implemented and tested using the Schunk Dexterous Hand (SDH2). Experimental results are shown to illustrate the efficiency of the approach.

    Dexterous manipulation of unknown objects using virtual contact points

    The manipulation of unknown objects is a problem of special interest in robotics, since exact models of the objects a robot interacts with are not always available. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information obtained during the manipulation process, by reasoning about the desired and actual positions of the fingertips. The desired fingertip positions are not physically reachable, since they lie in the interior of the manipulated object; they are therefore virtual positions with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated and rotated without the need to know their shape or any other physical property.
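    The core geometric idea of virtual contact points can be sketched as follows; `virtual_target` is a hypothetical helper, not the authors' code, and assumes the tactile sensor provides a contact position, an outward surface normal, and a chosen penetration depth:

    ```python
    import numpy as np

    def virtual_target(contact_pos, contact_normal, depth):
        """Place the fingertip target inside the object: push the measured
        contact point a small distance along the inward surface normal,
        yielding a 'virtual' position the controller servos toward but
        can never physically reach."""
        n = np.asarray(contact_normal, dtype=float)
        n /= np.linalg.norm(n)              # unit outward normal at the contact
        return np.asarray(contact_pos, dtype=float) - depth * n
    ```

    Commanding the fingertip toward such a target keeps the finger pressed against the surface, so the contact force is maintained without any object model.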

    Manipulation primitives: A paradigm for abstraction and execution of grasping and manipulation tasks

    Sensor-based reactive and hybrid approaches have proven to be a promising line of research for dealing with imperfect knowledge in grasping and manipulation. However, reactive approaches are usually tightly coupled to a particular embodiment, making knowledge transfer difficult. This paper proposes a paradigm for modeling and executing reactive manipulation actions that makes knowledge transfer to different embodiments possible while retaining the reactive capabilities of each embodiment. The proposed approach extends the idea of control primitives coordinated by a state machine by introducing an embodiment-independent layer of abstraction. Abstract manipulation primitives constitute a vocabulary of atomic, embodiment-independent actions, which can be coordinated using state machines to describe complex actions. To obtain embodiment-specific models, the abstract state machines are automatically translated so that the full capabilities of each platform can be utilized. The strength of the manipulation-primitives paradigm is demonstrated by developing a set of embodiment-specific primitives for object transport, including a complex reactive grasping primitive. The robustness of the approach is studied experimentally by emptying a box filled with several unknown objects, and the embodiment independence is studied by performing a manipulation task on two different platforms using the same abstract description.
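    A minimal sketch of the idea of an abstract state machine bound to embodiment-specific primitives; the class and the transition/binding structure are illustrative assumptions, not the paper's implementation:

    ```python
    class PrimitiveStateMachine:
        """Abstract manipulation task as a state machine. Each state names an
        abstract primitive; 'translation' to an embodiment binds each primitive
        name to that platform's concrete callable, so the same abstract
        description runs on different hands/arms."""

        def __init__(self, transitions, bindings):
            self.transitions = transitions  # state -> (primitive, {outcome: next_state})
            self.bindings = bindings        # primitive name -> embodiment-specific callable

        def run(self, start, goal):
            state, trace = start, []
            while state != goal:
                primitive, outcomes = self.transitions[state]
                trace.append(primitive)
                outcome = self.bindings[primitive]()   # execute on this embodiment
                state = outcomes[outcome]              # branch on the reported outcome
            return trace
    ```

    Swapping only the `bindings` dictionary retargets the same abstract task description to another platform, which is the transfer property the paper emphasizes.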

    Manipulation of unknown objects to improve the grasp quality using tactile information

    This work presents a novel and simple approach to the manipulation of unknown objects that considers both the geometric and mechanical constraints of the robotic hand. Starting from an initial blind grasp, the method improves grasp quality through manipulation while pursuing the three common goals of the manipulation process: improving the hand configuration, the grasp quality, and the object positioning, while at the same time preventing the object from falling. Tactile feedback is used to obtain local information about the contacts between the fingertips and the object; no additional exteroceptive feedback sources are used. The main novelty of this work lies in the fact that the grasp optimization is performed on-line, as a reactive procedure, using the tactile and kinematic information obtained during the manipulation. Experimental results are shown to illustrate the efficiency of the approach.
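    The flavor of such an on-line reactive update can be illustrated with a simple force-regulation step; this is a generic sketch under the assumption of per-finger normal-force readings, not the paper's actual optimization:

    ```python
    def regulate_grip_forces(measured, target, command, gain=0.1):
        """One reactive update: nudge each finger's force command toward its
        target using only the current tactile reading, with no object model.
        Repeated every control cycle, this keeps contact forces near the
        desired values while the grasp is being reconfigured."""
        return [c + gain * (t - m) for c, t, m in zip(command, target, measured)]
    ```

    In a real system this update would run inside the manipulation loop, alongside the kinematic reasoning that repositions the fingertips.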

    Improved GelSight Tactile Sensor for Measuring Geometry and Slip

    A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures 3D geometry and contact force information with high spatial resolution, and has supported many challenging robot tasks. A previous sensor, based on a semi-specular membrane, produced high resolution but limited geometric accuracy. In this paper, we describe a new GelSight design for a robot gripper, using a Lambertian membrane and a new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for slip detection, combining information about relative motion on the membrane surface with the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases and can be used to improve grasp stability. Presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
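    The normal estimation step rests on standard Lambertian photometric stereo: intensity under each light is the dot product of the light direction with the albedo-scaled normal. A minimal sketch (hypothetical helper names; the sensor's actual calibration pipeline is more involved):

    ```python
    import numpy as np

    def lambertian_normal(light_dirs, intensities):
        """Photometric stereo for one pixel under a Lambertian model:
        I = L @ (albedo * n), with L's rows the known light directions.
        Solve for g = albedo * n in least squares, then split the result
        into direction (unit normal) and magnitude (albedo)."""
        L = np.asarray(light_dirs, dtype=float)
        I = np.asarray(intensities, dtype=float)
        g, *_ = np.linalg.lstsq(L, I, rcond=None)
        albedo = np.linalg.norm(g)
        return g / albedo, albedo
    ```

    Integrating the per-pixel normals then yields the height map the abstract mentions. The Lambertian membrane is what makes this linear model a good fit, compared with the earlier semi-specular design.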

    A new view on grasping

    Reaching out for an object is often described as consisting of two components that are based on different visual information. Information about the object’s position and orientation guides the hand to the object, while information about the object’s shape and size determines how the fingers move relative to the thumb to grasp it. We propose an alternative description: first determine suitable positions on the object (on the basis of its shape, surface roughness, and so on) and then move the thumb and fingers more or less independently to these positions. We modelled this description using a minimum-jerk approach, whereby the finger and thumb approach their respective target positions approximately orthogonally to the surface. Our model predicts how experimental variables such as object size, movement speed, fragility, and required accuracy influence the timing and size of the maximum aperture of the hand. An extensive review of experimental studies on grasping showed that the predicted influences correspond to human behaviour.
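    The minimum-jerk model the abstract refers to has a well-known closed form for a point-to-point move: position blends from start to goal with the quintic profile 10s³ − 15s⁴ + 6s⁵, giving zero velocity and acceleration at both ends. A one-dimensional sketch:

    ```python
    def minimum_jerk(p0, p1, t, duration):
        """Minimum-jerk position at time t for a move from p0 to p1 over
        `duration`: the classic quintic blend, smooth with zero velocity
        and acceleration at both endpoints."""
        s = t / duration
        return p0 + (p1 - p0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    ```

    In the proposed description, digit and thumb would each follow such a profile toward their own target position on the object, rather than the aperture being controlled as a single variable.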

    Dual-Modal and Dual-Sensing-Mechanism (DMDSM) Acoustic Sensors for Robotic Ranging and Material Differentiation

    One of the grand challenges in robotics is the robust grasping of unknown objects. This is particularly important as robots expand their territory from industrial floors to domestic service applications, where prior knowledge of the objects is often unavailable. As a result, sensor-based grasping is more desirable. Ideally, with the assistance of object sensing, robotic fingers can respond to subtle changes in object pose right before grasping and adjust their operation dynamically. Moreover, information about the object material and structure can help planners better estimate the force distribution, impact characteristics, and friction coefficients for a more robust grasp. However, current sensors have difficulty satisfying these requirements. Tactile/force sensors may change the object pose or even damage the object, leading to slow or failed grasps. Non-contact long-distance sensors such as cameras, LIDAR, radar, and sonar suffer from occlusion or blind zones. Therefore, non-contact near-distance sensing is the optimal solution. Unfortunately, existing near-distance sensors based on optical, electric-field, and acoustic signals still cannot satisfy these grasping requirements. Electric-field sensors have difficulty with targets of low dielectric contrast to air. Optical sensors lack lateral resolution and are not effective for optically transparent or highly reflective targets. Acoustic sensors can perform distance ranging and material/structure sensing, but fail on thin-film, porous, or sound-absorbing targets. To address these issues, a new finger-mounted non-contact dual-modal and dual-sensing-mechanism (DMDSM) sensor for near-distance ranging and material/structure differentiation is studied and developed, based on two modalities and sensing mechanisms: pulse-echo ultrasound (US) and optoacoustics (OA).
    In both modalities, the object distance is estimated from the Time-of-Flight (ToF) of the US/OA signal, whose frequency spectra are used to extract distinctive features of the material/structure. The development of the DMDSM sensor proceeds as follows. First, a prototype of the DMDSM sensor is designed, fabricated, and characterized; testing is conducted on conventional objects and on optically and/or acoustically challenging targets (OACTs). Second, to simplify the sensor's design and operation, a single wideband ultrasound transmitter and receiver is investigated, with which both US and OA collection can be initiated by a single laser pulse. Third, to expand to areal mapping or imaging, a new self-focused US/OA transceiver and a flat scanning mirror are studied to steer the laser and ultrasound beams over the target in customized patterns. Finally, optically-transparent focused (OTF) ultrasound transducers are explored, which help miniaturize the DMDSM sensors while enhancing their performance.
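    The ToF-based range estimate is simple to state explicitly: in pulse-echo operation the wave travels to the target and back, so the one-way distance is half the round-trip time multiplied by the propagation speed. A minimal sketch, assuming sound in air at roughly 343 m/s (the actual sensor calibrates its own propagation speed):

    ```python
    def pulse_echo_range(tof_seconds, speed=343.0):
        """Pulse-echo ranging: the signal covers the target distance twice
        (out and back), so range = speed * time-of-flight / 2."""
        return speed * tof_seconds / 2.0
    ```

    The same relation applies to the optoacoustic return, except that the outgoing leg is a laser pulse and only the acoustic return contributes appreciable travel time.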