
    Proximity and Visuotactile Point Cloud Fusion for Contact Patches in Extreme Deformation

    Equipping robots with the sense of touch is critical to emulating the capabilities of humans in real-world manipulation tasks. Visuotactile sensors are a popular tactile sensing strategy due to data output compatible with computer vision algorithms and accurate, high-resolution estimates of local object geometry. However, these sensors struggle to accommodate high deformations of the sensing surface during object interactions, hindering more informative contact with cm-scale objects frequently encountered in the real world. The soft interfaces of visuotactile sensors are often made of hyperelastic elastomers, which are difficult to simulate quickly and accurately when extremely deformed for tactile information. Additionally, many visuotactile sensors that rely on strict internal light conditions or pattern tracking will fail if the surface is highly deformed. In this work, we propose an algorithm that fuses proximity and visuotactile point clouds for contact patch segmentation that is entirely independent of membrane mechanics. This algorithm exploits the synchronous, high-resolution proximity and visuotactile modalities enabled by an extremely deformable, selectively transmissive soft membrane, which uses visible light for visuotactile sensing and infrared light for proximity depth. We present the hardware design, membrane fabrication, and evaluation of our contact patch algorithm in low (10%), medium (60%), and high (100%+) membrane strain states. We compare our algorithm against three baselines: proximity-only, tactile-only, and a membrane mechanics model. Our proposed algorithm outperforms all baselines with an average RMSE under 2.8 mm of the contact patch geometry across all strain ranges. We demonstrate our contact patch algorithm in four applications: varied stiffness membranes, torque- and shear-induced wrinkling, closed-loop control for whole-body manipulation, and pose estimation.
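    The fusion idea described above can be illustrated with a minimal sketch: mark tactile (visible-light) surface points whose depth agrees with the nearest infrared proximity depth estimate as part of the contact patch, since membrane and object coincide there. The function name, point-cloud layout (x, y, z rows), and agreement threshold are all illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def segment_contact_patch(proximity_pts, tactile_pts, threshold=0.002):
        """Label tactile points as in-contact where the tactile depth and the
        nearest proximity (IR) depth agree within `threshold` meters.
        Illustrative sketch only; the paper's algorithm is not reproduced here."""
        # Brute-force nearest neighbour in the xy-plane (fine for small clouds).
        d = np.linalg.norm(
            tactile_pts[:, None, :2] - proximity_pts[None, :, :2], axis=-1)
        nearest = np.argmin(d, axis=1)
        depth_gap = np.abs(tactile_pts[:, 2] - proximity_pts[nearest, 2])
        mask = depth_gap < threshold  # agreement => membrane touches the object
        return tactile_pts[mask], mask
    ```

    A k-d tree would replace the brute-force neighbour search for realistic cloud sizes; the quadratic version keeps the sketch dependency-free.
    
    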

    Perceiving Extrinsic Contacts from Touch Improves Learning Insertion Policies

    Robotic manipulation tasks such as object insertion typically involve interactions between object and environment, namely extrinsic contacts. Prior work on Neural Contact Fields (NCF) uses intrinsic tactile sensing between gripper and object to estimate extrinsic contacts in simulation. However, its effectiveness and utility in real-world tasks remain unknown. In this work, we improve NCF to enable sim-to-real transfer and use it to train policies for mug-in-cupholder and bowl-in-dishrack insertion tasks. We find that our model, NCF-v2, is capable of estimating extrinsic contacts in the real world. Furthermore, our insertion policy with NCF-v2 outperforms policies without it, achieving 33% higher success and 1.36x faster execution on mug-in-cupholder, and 13% higher success and 1.27x faster execution on bowl-in-dishrack.

    Object manipulation with high-resolution and highly deformable tactile sensors

    In this project, we explore the application of tactile sensing as an enabling technology for dexterous robotic manipulation. We use the Soft Bubbles sensors, which consist of a compliant end-effector providing feedback on contact patches. We propose a novel sense-reason-act paradigm that produces desired object trajectories by addressing the challenges of representation and prediction that arise from using distributed tactile sensors. This paradigm is built upon an observation model, which infers the pose of a grasped object; a dynamics model of the membrane, capable of predicting complex deformations in the bubbles; and a model-predictive controller responsible for optimizing action sequences to achieve the goal behavior during manipulation. Our approach is applied to the robotic task of in-hand pivoting and benchmarked against other state-of-the-art methodologies in object manipulation.
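    The sense-reason-act loop with a model-predictive controller can be sketched generically: sample candidate action sequences, roll them through a dynamics model, and execute the first action of the cheapest rollout. The toy one-dimensional pivot dynamics below stand in for the learned membrane model; every name and constant here is an illustrative assumption, not the thesis's implementation.

    ```python
    import numpy as np

    def mpc_step(state, goal, dynamics, horizon=10, n_samples=256, rng=None):
        """Random-shooting MPC: sample action sequences, simulate them with
        the dynamics model, return the first action of the lowest-cost rollout."""
        rng = rng or np.random.default_rng(0)
        actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
        costs = np.zeros(n_samples)
        for k in range(n_samples):
            s = state
            for a in actions[k]:
                s = dynamics(s, a)
                costs[k] += (s - goal) ** 2   # penalize deviation from the goal
        return actions[np.argmin(costs), 0]

    def toy_pivot_dynamics(angle, torque):
        # Placeholder for the learned deformation/dynamics model.
        return angle + 0.1 * torque

    # Closed loop: sense (read angle), reason (MPC), act (apply torque).
    angle, goal = 0.0, 0.5
    for _ in range(30):
        u = mpc_step(angle, goal, toy_pivot_dynamics)
        angle = toy_pivot_dynamics(angle, u)
    ```

    In practice the sampler would be replaced by a smarter optimizer (e.g. CEM) and the scalar state by the full sensor representation; the structure of the loop is the same.
    
    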

    Deformable Objects for Virtual Environments


    Object Recognition and Localization : the Role of Tactile Sensors

    Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This thesis presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach, called Batch Ransac and Iterative Closest Point augmented Sequential Filter (BRICPSF), is based on an innovative combination of a sequential filter, the Iterative Closest Point algorithm, and a feature-based Random Sample Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in simulation and using actual hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge-following exploration strategy is developed that receives feedback from the current state of recognition. A recognition-by-parts approach is developed which uses BRICPSF for object part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments.
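    The Iterative Closest Point stage used by pipelines such as BRICPSF can be sketched in two dimensions: alternate nearest-neighbour correspondence with a closed-form (SVD-based) rigid fit until the transform converges. This is a generic textbook ICP, not the thesis's BRICPSF filter; the function name and point sets are illustrative.

    ```python
    import numpy as np

    def icp_2d(src, dst, iters=20):
        """Minimal 2-D Iterative Closest Point. Returns rotation R and
        translation t such that src @ R.T + t approximates dst."""
        R, t = np.eye(2), np.zeros(2)
        cur = src.copy()
        for _ in range(iters):
            # Correspond each source point with its nearest destination point.
            idx = np.argmin(
                np.linalg.norm(cur[:, None] - dst[None, :], axis=-1), axis=1)
            pairs = dst[idx]
            # Closed-form rigid fit on the matched pairs (Kabsch/SVD).
            mu_s, mu_d = cur.mean(0), pairs.mean(0)
            H = (cur - mu_s).T @ (pairs - mu_d)
            U, _, Vt = np.linalg.svd(H)
            Rk = Vt.T @ U.T
            if np.linalg.det(Rk) < 0:      # guard against reflections
                Vt[-1] *= -1
                Rk = Vt.T @ U.T
            tk = mu_d - Rk @ mu_s
            cur = cur @ Rk.T + tk
            R, t = Rk @ R, Rk @ t + tk     # accumulate the total transform
        return R, t
    ```

    Plain ICP only converges from a good initial guess, which is exactly why BRICPSF pairs it with a feature-based RANSAC stage for the coarse alignment.
    
    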