6 research outputs found

    Grip Stabilization through Independent Finger Tactile Feedback Control

    Grip force control during robotic in-hand manipulation is usually modeled as a monolithic task, in which complex controllers consider the placement of all fingers and the contact state between each finger and the gripped object in order to compute the force each finger must apply. Such approaches typically rely on object and contact models and do not generalize well to novel manipulation tasks. Here, we propose a modular grip stabilization method based on a proposition that explains how humans achieve grasp stability. In this biomimetic approach, independent tactile grip stabilization controllers ensure that slip does not occur locally at the engaged robot fingers. Local slip is predicted from the tactile signals of each fingertip sensor (i.e., BioTac and BioTac SP by Syntouch). We show that stable grasps emerge without any form of central communication when such independent controllers are engaged in the control of multi-digit robotic hands. The resulting grasps are resistant to external perturbations while ensuring stable grips on a wide variety of objects.
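The decentralized scheme this abstract describes can be illustrated in a few lines. The following is a minimal sketch only: the controller structure, gains, and slip-prediction interface are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of independent per-finger grip stabilization: each finger runs
# its own controller and never communicates with the others. The slip signal,
# force units, and gains are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class FingerController:
    """One controller per finger; no central coordination."""
    force: float = 1.0   # current normal-force command (arbitrary units)
    gain: float = 0.5    # increment applied when local slip is predicted

    def update(self, slip_predicted: bool) -> float:
        # Increase grip force locally when the tactile signal predicts slip;
        # otherwise relax slowly toward a minimal baseline grip force.
        if slip_predicted:
            self.force += self.gain
        else:
            self.force = max(1.0, self.force - 0.1 * self.gain)
        return self.force

# A multi-digit grasp is just a collection of such independent controllers;
# here only finger 0 predicts slip on this control tick.
fingers = [FingerController() for _ in range(3)]
commands = [f.update(slip_predicted=(i == 0)) for i, f in enumerate(fingers)]
# commands -> [1.5, 1.0, 1.0]
```

The point of the sketch is structural: stability comes from each local loop reacting to its own tactile signal, with no shared object or contact model.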

    Hierarchical tactile sensation integration from prosthetic fingertips enables multi-texture surface recognition†

    Multifunctional flexible tactile sensors could be useful to improve the control of prosthetic hands. To that end, highly stretchable liquid metal tactile sensors (LMS) were designed, manufactured via photolithography, and incorporated into the fingertips of a prosthetic hand. Three novel contributions were made with the LMS. First, individual fingertips were used to distinguish between different speeds of sliding contact with different surfaces. Second, differences in surface textures were reliably detected during sliding contact. Third, the capacity for hierarchical tactile sensor integration was demonstrated by using four LMS signals simultaneously to distinguish between ten complex multi-textured surfaces. Four machine learning algorithms were compared for classification performance: K-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and neural network (NN). Time-frequency features of the LMS signals were extracted to train and test the machine learning algorithms. The NN generally performed best at speed and texture detection with a single finger and achieved 99.2 ± 0.8% accuracy in distinguishing ten multi-textured surfaces using four LMSs from four fingers simultaneously. This capability for hierarchical multi-finger tactile sensation integration could be useful to provide a higher level of intelligence for artificial hands.
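The classifier comparison described above can be sketched end to end. This is an illustrative sketch under stated assumptions: the synthetic sinusoidal "textures", the FFT-bin features, and the default scikit-learn models stand in for the paper's real sensor data and exact pipeline.

```python
# Hedged sketch: time-frequency features from four sensor channels feed four
# classifiers (KNN, SVM, RF, NN), mirroring the comparison in the abstract.
# The data below is synthetic; each "texture" class is a sinusoid of a
# different frequency on each channel, plus noise.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(signal: np.ndarray) -> np.ndarray:
    # Simple time-frequency features: magnitudes of the first FFT bins.
    return np.abs(np.fft.rfft(signal))[:8]

X, y = [], []
for label, freq in enumerate([2, 4, 6]):      # three synthetic texture classes
    for _ in range(30):
        t = np.linspace(0, 1, 64)
        chans = [np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(64)
                 for _ in range(4)]           # four fingertip channels
        X.append(np.concatenate([features(c) for c in chans]))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "NN": MLPClassifier(max_iter=2000, random_state=0),
}
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
```

Concatenating the per-finger feature vectors before classification is the "hierarchical integration" step: one model sees all four fingertips at once.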

    Supervised Learning and Reinforcement Learning of Feedback Models for Reactive Behaviors: Tactile Feedback Testbed

    Robots need to be able to adapt to unexpected changes in the environment so that they can autonomously succeed at their tasks. However, hand-designing feedback models for adaptation is tedious, if possible at all, making data-driven methods a promising alternative. In this paper, we introduce a full framework for learning feedback models for reactive motion planning. Our pipeline starts by segmenting demonstrations of a complete task into motion primitives via a semi-automated segmentation algorithm. Then, given additional demonstrations of successful adaptation behaviors, we learn initial feedback models through learning from demonstration. In the final phase, a sample-efficient reinforcement learning algorithm fine-tunes these feedback models for novel task settings through only a few real system interactions. We evaluate our approach on a real anthropomorphic robot learning a tactile feedback task. Comment: Submitted to the International Journal of Robotics Research. Paper length is 21 pages (including references) with 12 figures. A video overview of the reinforcement learning experiment on the real robot can be seen at https://www.youtube.com/watch?v=WDq1rcupVM0. arXiv admin note: text overlap with arXiv:1710.0855
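The three-phase pipeline in this abstract (segmentation, learning from demonstration, RL fine-tuning) can be sketched as a toy skeleton. All function names, signatures, and bodies here are illustrative stand-ins, not the authors' actual algorithms: segmentation is a crude threshold cut, the feedback model is a single scalar gain, and "RL" is a hill climb on that gain.

```python
# Hedged skeleton of a segment -> learn-from-demonstration -> fine-tune
# pipeline. Everything below is a 1-D toy; the real work uses motion
# primitives and a sample-efficient RL algorithm on a robot.
from typing import Callable, List, Sequence

def segment_demonstrations(demo: Sequence[float],
                           threshold: float = 1.0) -> List[List[float]]:
    """Stand-in for semi-automated segmentation: split a trajectory
    wherever the step between samples exceeds a threshold."""
    segments, current = [], [demo[0]]
    for prev, nxt in zip(demo, demo[1:]):
        if abs(nxt - prev) > threshold:
            segments.append(current)
            current = []
        current.append(nxt)
    segments.append(current)
    return segments

def learn_feedback_model(corrections: Sequence[float]) -> Callable[[float], float]:
    """Learning-from-demonstration stand-in: fit a constant feedback gain
    as the mean of demonstrated corrections for a unit error."""
    gain = sum(corrections) / len(corrections)
    return lambda error: gain * error

def rl_fine_tune(policy: Callable[[float], float],
                 reward: Callable[[float], float],
                 steps: int = 50, lr: float = 0.1) -> Callable[[float], float]:
    """Fine-tuning stand-in: hill-climb the scalar gain of a linear policy."""
    gain = policy(1.0)  # recover the gain from the linear policy
    for _ in range(steps):
        for candidate in (gain + lr, gain - lr):
            if reward(candidate) > reward(gain):
                gain = candidate
    return lambda error: gain * error

# Phase 1: segment a demonstration into primitives.
segments = segment_demonstrations([0.0, 0.1, 0.2, 2.0, 2.1], threshold=1.0)
# Phase 2: learn an initial feedback model (gain = 1.5 from the demos).
initial = learn_feedback_model([1.4, 1.6])
# Phase 3: fine-tune toward a task whose optimal gain is 2.0.
tuned = rl_fine_tune(initial, reward=lambda g: -(g - 2.0) ** 2)
```

The structure, not the toy math, is the point: each phase consumes the previous phase's output, and only the last phase needs interaction with the (here simulated) system.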
