
    Localization and Manipulation of Small Parts Using GelSight Tactile Sensing

    Robust manipulation and insertion of small parts can be challenging because of the small tolerances typically involved. The key to robust control of these manipulation interactions is accurate tracking and control of the parts involved. Typically, this is accomplished using visual servoing or force-based control; however, these approaches have drawbacks. Instead, we propose a new approach that uses tactile sensing to accurately localize the pose of a part grasped in the robot hand. Using a feature-based matching technique in conjunction with a newly developed tactile sensing technology known as GelSight, which has much higher resolution than competing methods, we synthesize high-resolution height maps of object surfaces. These high-resolution tactile maps allow us to localize small parts held in a robot hand very accurately. We quantify localization accuracy in benchtop experiments and experimentally demonstrate the practicality of the approach in the context of a small parts insertion problem.
    Funding: National Science Foundation (U.S.) (NSF Grant No. 1017862); United States. National Aeronautics and Space Administration (NASA Grant No. NNX13AQ85G); United States. Office of Naval Research (ONR Grant No. N000141410047)
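The abstract describes localizing a grasped part by feature-based matching of high-resolution tactile height maps. The paper does not specify the matching pipeline, so the following is only a minimal sketch of the general idea using standard OpenCV ORB features and a RANSAC rigid fit; all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: estimate the in-plane pose offset of a grasped part by
# matching features between a reference tactile height map and a new imprint.
import cv2
import numpy as np

def heightmap_to_u8(h):
    """Normalize a float height map to an 8-bit image for feature extraction."""
    h = (h - h.min()) / (np.ptp(h) + 1e-9)
    return (255 * h).astype(np.uint8)

def estimate_planar_pose(ref_height, new_height):
    """Estimate the 2D rigid transform mapping the reference imprint onto the new one."""
    ref_img, new_img = heightmap_to_u8(ref_height), heightmap_to_u8(new_height)
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(new_img, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Partial affine = rotation + translation (+ uniform scale); RANSAC rejects outliers.
    M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M  # 2x3 matrix describing the part's pose offset in sensor pixels
```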

    Tactile Mapping and Localization from High-Resolution Tactile Imprints

    This work studies the problem of shape reconstruction and object localization using a vision-based tactile sensor, GelSlim. The main contributions are the recovery of local shapes from contact, an approach to reconstruct the tactile shape of objects from tactile imprints, and an accurate method for localization of previously reconstructed objects. The algorithms can be applied to a large variety of 3D objects and provide accurate tactile feedback for in-hand manipulation. Results show that by exploiting the dense tactile information we can reconstruct the shape of objects with high accuracy and perform online object identification and localization, opening the door to reactive manipulation guided by tactile sensing. We provide videos and supplemental information on the project website: http://web.mit.edu/mcube/research/tactile_localization.html
    Comment: ICRA 2019, 7 pages, 7 figures. Website: http://web.mit.edu/mcube/research/tactile_localization.html Video: https://youtu.be/uMkspjmDbq
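The localization step described here registers a point cloud recovered from a tactile imprint against the previously reconstructed object. The paper's exact registration method is not given in the abstract, so below is only a generic point-to-point ICP sketch in plain NumPy/SciPy under that assumption; the data shapes and iteration count are placeholders.

```python
# Rough sketch: localize a tactile imprint (N x 3 contact points) within a
# previously reconstructed object model (M x 3 points) via iterative closest point.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(imprint_cloud, model_cloud, iters=30):
    """Return the pose (R, t) of the tactile imprint in the model frame."""
    tree = cKDTree(model_cloud)
    R_total, t_total = np.eye(3), np.zeros(3)
    cloud = imprint_cloud.copy()
    for _ in range(iters):
        _, idx = tree.query(cloud)                     # closest model point per contact point
        R, t = best_rigid_transform(cloud, model_cloud[idx])
        cloud = cloud @ R.T + t                        # apply the incremental alignment
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```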

    FingerSLAM: Closed-loop Unknown Object Localization and Reconstruction from Visuo-tactile Feedback

    In this paper, we address the problem of using visuo-tactile feedback for 6-DoF localization and 3D reconstruction of unknown in-hand objects. We propose FingerSLAM, a closed-loop factor graph-based pose estimator that combines local tactile sensing at the fingertip with global vision sensing from a wrist-mounted camera. FingerSLAM is built from two constituent pose estimators: a multi-pass refined tactile-based pose estimator that captures movements from detailed local textures, and a single-pass vision-based pose estimator that predicts from a global view of the object. We also design a loop closure mechanism that actively matches current vision and tactile images to previously stored keyframes to reduce accumulated error. FingerSLAM combines the two sensing modalities and the loop closure mechanism in a factor graph-based optimization framework, which produces pose estimates that are more accurate than those of the standalone estimators. The estimated poses are then used to reconstruct the shape of the unknown object incrementally by stitching the local point clouds recovered from tactile images. We train our system on real-world data collected with 20 objects and demonstrate reliable visuo-tactile pose estimation and shape reconstruction through quantitative and qualitative real-world evaluations on 6 objects that are unseen during training.
    Comment: Accepted to the 2023 IEEE International Conference on Robotics and Automation (ICRA 2023).
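To make the factor-graph idea concrete, here is a deliberately simplified toy: relative "tactile odometry" constraints, absolute "vision" constraints, and one loop-closure constraint are jointly optimized over planar poses with a nonlinear least-squares solver. This is not the FingerSLAM implementation (which operates on 6-DoF poses with learned estimators); all measurements below are synthetic, and rotation composition is ignored for brevity.

```python
# Minimal factor-graph-style fusion over 2D poses (x, y, theta) using scipy.
import numpy as np
from scipy.optimize import least_squares

odom = [(1.0, 0.0, 0.0)] * 4                        # tactile odometry: ~1 unit forward per step
vision = {0: (0.0, 0.0, 0.0), 4: (3.9, 0.1, 0.0)}   # wrist camera: coarse absolute poses
loop = (4, 0, (-4.0, 0.0, 0.0))                     # loop closure: pose 4 re-observes pose 0

def unpack(x):
    return x.reshape(-1, 3)

def residuals(x):
    poses = unpack(x)
    res = []
    for i, z in enumerate(odom):                    # between factors (tactile odometry)
        res.extend((poses[i + 1] - poses[i]) - np.array(z))
    for i, z in vision.items():                     # unary factors (vision, down-weighted)
        res.extend(0.5 * (poses[i] - np.array(z)))
    i, j, z = loop                                  # loop-closure factor
    res.extend((poses[j] - poses[i]) - np.array(z))
    return np.array(res)

x0 = np.zeros(5 * 3)                                # initial guess: all poses at the origin
sol = least_squares(residuals, x0)
print(unpack(sol.x))                                # jointly optimized trajectory
```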

    3D Shape Perception from Monocular Vision, Touch, and Shape Priors

    Perceiving accurate 3D object shape is important for robots to interact with the physical world. Current research in this direction has relied primarily on visual observations. Vision, however useful, has inherent limitations due to occlusions and 2D-to-3D ambiguities, especially for perception with a monocular camera. In contrast, touch provides precise local shape information, though its efficiency for reconstructing the entire shape can be low. In this paper, we propose a novel paradigm that efficiently perceives accurate 3D object shape by incorporating visual and tactile observations, as well as prior knowledge of common object shapes learned from large-scale shape repositories. We use vision first, applying neural networks with learned shape priors to predict an object's 3D shape from a single-view color image. We then use tactile sensing to refine the shape: the robot actively touches the object regions where the visual prediction has high uncertainty. Our method efficiently builds the 3D shape of common objects from a color image and a small number of tactile explorations (around 10). Our setup is easy to apply and has the potential to help robots better perform grasping and manipulation tasks on real-world objects.
    Comment: IROS 2018. The first two authors contributed equally to this work.
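The touch-selection loop described above (probe where the visual prediction is least certain, then treat the touch as a precise local observation) can be sketched in a few lines. The candidate points, uncertainty values, and update rule below are placeholders standing in for the paper's learned shape prior and sensor model.

```python
# Toy sketch of uncertainty-driven active touching (all quantities synthetic).
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(500, 3))        # candidate surface points from the vision stage
variance = rng.uniform(0.0, 1.0, size=500)        # per-point predictive uncertainty

for _ in range(10):                               # roughly 10 tactile explorations, as in the abstract
    i = int(np.argmax(variance))                  # pick the most uncertain region
    measured_z = points[i, 2] + rng.normal(0, 0.001)   # simulated precise tactile reading
    points[i, 2] = measured_z                     # refine the local shape estimate
    variance[i] = 1e-4                            # touch sensing is near-exact at the contact
    # in a full system, uncertainty at neighbouring points would shrink as well
```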

    A Framework for Tumor Localization in Robot-Assisted Minimally Invasive Surgery

    Manual palpation of tissue is frequently used in open surgery, e.g., for localization of tumors and buried vessels and for tissue characterization. The overall objective of this work is to explore how tissue palpation can be performed in Robot-Assisted Minimally Invasive Surgery (RAMIS) using laparoscopic instruments conventionally used in RAMIS. This thesis presents a framework in which a surgical tool is moved teleoperatively in a manner analogous to the repetitive pressing motion of a finger during manual palpation. We interpret the changes in parameters due to this motion, such as the applied force and the resulting indentation depth, to accurately determine the variation in tissue stiffness. This approach requires sensorizing the laparoscopic tool for force sensing. In our work, we have used a da Vinci needle driver that was sensorized in our lab at CSTAR for force sensing using Fiber Bragg Gratings (FBG). A computer vision algorithm has been developed for 3D surgical tool-tip tracking using the da Vinci's stereo endoscope, which enables us to measure changes in surface indentation resulting from pressing the needle driver on the tissue. The proposed palpation framework is based on the hypothesis that the indentation depth is inversely proportional to the tissue stiffness when a constant pressing force is applied. This was validated in a telemanipulated setup using the da Vinci surgical system with a phantom in which artificial tumors were embedded to represent areas of different stiffness. The high-stiffness regions (representing tumors) and the low-stiffness regions (representing healthy tissue) showed average indentation depth changes of 5.19 mm and 10.09 mm, respectively, while a maximum force of 8 N was maintained during robot-assisted palpation. These indentation depth variations were then distinguished using the k-means clustering algorithm to classify regions of low and high stiffness, and the results were presented in a colour-coded map. The unique feature of this framework is its use of a conventional laparoscopic tool and minimal redesign of the existing da Vinci surgical setup. Additional work includes a vision-based algorithm for tracking the motion of a tissue surface, such as that of the lung, resulting from respiratory and cardiac motion. The extracted motion information was analyzed to characterize the lung tissue stiffness based on the lateral strain variations as the surface inflates and deflates.
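The final classification step (k-means over indentation depths measured at roughly constant force, with smaller depth indicating stiffer, tumor-like tissue) can be illustrated directly. The depth values below are fabricated placeholders, not the thesis data.

```python
# Illustrative sketch: cluster per-location indentation depths into stiff vs. soft regions.
import numpy as np
from sklearn.cluster import KMeans

# One indentation depth (mm) per palpated grid location at ~constant applied force.
depths = np.array([5.3, 5.0, 5.4, 9.8, 10.2, 10.1, 5.1, 9.9, 10.3, 5.2]).reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(depths)
stiff_cluster = int(np.argmin(km.cluster_centers_))   # smaller indentation => stiffer tissue
is_tumor_like = km.labels_ == stiff_cluster           # True where a tumor-like region is likely
print(is_tumor_like)                                  # would feed a colour-coded stiffness map
```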

    Tactile-Filter: Interactive Tactile Perception for Part Mating

    Humans rely on touch and tactile sensing for many dexterous manipulation tasks. Our tactile sensing provides a wealth of information about contact formations as well as the geometry of objects during any interaction. With this motivation, vision-based tactile sensors are being widely used for various robotic perception and control tasks. In this paper, we present a method for interactive perception using vision-based tactile sensors for a part mating task, in which a robot uses tactile sensors and a particle-filter-based feedback mechanism to incrementally improve its estimate of which objects (pegs and holes) fit together. To do this, we first train a deep neural network that uses tactile images to predict the probabilistic correspondence between arbitrarily shaped objects that fit together. The trained model is used to design a particle filter that serves two purposes. First, given one partial (or non-unique) observation of the hole, it incrementally improves the estimate of the correct peg by sampling more tactile observations. Second, it selects the next action for the robot, i.e., the next touch (and thus tactile image) that yields the maximum reduction in uncertainty, so as to minimize the number of interactions during the perception task. We evaluate our method on several part-mating tasks with novel objects using a robot equipped with a vision-based tactile sensor, and we show the efficiency of the proposed action selection method against a naive method. See the supplementary video at https://www.youtube.com/watch?v=jMVBg_e3gLw
    Comment: Accepted at RSS202
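The particle-filter update described above can be sketched in miniature: particles are candidate peg hypotheses, and each new tactile observation of the hole reweights them via the learned correspondence model. The likelihood function below is a stub standing in for the trained network, and every number is a placeholder; the paper's action-selection criterion is only hinted at by the entropy check.

```python
# Toy particle filter over which peg matches the observed hole.
import numpy as np

rng = np.random.default_rng(0)
num_pegs, num_particles = 5, 200
particles = rng.integers(0, num_pegs, size=num_particles)    # hypothesis: which peg fits
weights = np.full(num_particles, 1.0 / num_particles)

def correspondence_likelihood(peg_id, tactile_image):
    """Stand-in for the trained network p(match | peg, tactile image)."""
    true_peg = 2
    return 0.8 if peg_id == true_peg else 0.2 / (num_pegs - 1)

def peg_entropy(parts):
    p = np.bincount(parts, minlength=num_pegs) / len(parts)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

for step in range(10):                                       # one tactile observation per step
    tactile_image = None                                     # placeholder for a sensor image
    weights = weights * np.array([correspondence_likelihood(p, tactile_image) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(num_particles, size=num_particles, p=weights)   # resample
    particles = particles[idx]
    weights = np.full(num_particles, 1.0 / num_particles)
    if peg_entropy(particles) < 0.1:                         # confident enough to stop touching
        break

print(np.bincount(particles, minlength=num_pegs) / num_particles)   # belief over pegs
```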

    Tactile localization: dealing with uncertainty from the first touch

    In this thesis we present an approach to tactile object localization for robotic manipulation that explicitly deals with the uncertainty inherent to the sense of touch, in order to overcome the locality of tactile sensing. To that end, we estimate full probability distributions over object pose. Moreover, given a 3D model of the object in question, our framework localizes from the first touch, meaning that no physical exploration of the object is needed beforehand. Given a signal from the tactile sensor, we divide the estimation of a probability distribution over object pose into two main steps. First, before touching the object, we sample a dense set of poses of the object with respect to the sensor, simulate the signal the sensor would receive when touching the object at each of these poses, and train a similarity function between these signals. Second, while manipulating the object, we compare the signal coming from the sensor to the set of previously simulated signals, and the resulting similarities give a discretized probability distribution over the possible poses of the object with respect to the sensor. We extend this work by analyzing the scenario in which multiple tactile sensors touch the object at the same time, fusing the probability distributions coming from the individual sensors to obtain a better distribution. We present quantitative results for four objects. We also present the application of this approach in a larger system and an ongoing research direction towards tactile active perception.
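The two-step procedure in this abstract maps naturally to a small sketch: offline, store one simulated signal per candidate pose; online, score the live signal against all of them and normalize the scores into a discrete distribution, then fuse the per-sensor distributions by multiplication. The signals and the cosine-similarity-plus-softmax scoring below are synthetic placeholders for the thesis's simulated signals and learned similarity function.

```python
# Minimal sketch of first-touch tactile localization over a discrete pose set.
import numpy as np

rng = np.random.default_rng(0)
num_poses, signal_dim = 1000, 64
simulated = rng.normal(size=(num_poses, signal_dim))       # offline: one signal per candidate pose

def pose_distribution(observed, temperature=1.0):
    """Score the observed signal against every simulated one -> discrete distribution."""
    sims = simulated @ observed / (np.linalg.norm(simulated, axis=1)
                                   * np.linalg.norm(observed) + 1e-9)
    logits = sims / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

def fuse(distributions):
    """Combine per-sensor pose distributions (assuming conditional independence)."""
    p = np.prod(np.stack(distributions), axis=0)
    return p / p.sum()

obs_a = simulated[42] + 0.1 * rng.normal(size=signal_dim)   # two sensors touching the object
obs_b = simulated[42] + 0.1 * rng.normal(size=signal_dim)
posterior = fuse([pose_distribution(obs_a), pose_distribution(obs_b)])
print(int(posterior.argmax()))                              # most likely pose index (recovers 42 here)
```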