
    Learning Compliant Manipulation through Kinesthetic and Tactile Human-Robot Interaction

    Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily life. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations. Most previous works use learning to model the kinematics of the task; for autonomous execution, the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling position alone is insufficient to achieve the task's goals. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust compliance to suit the needs of the task has not yet been fully treated in RLfD. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
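    As a rough illustration of the control idea such work builds on (not the authors' actual interface or controller), the sketch below implements a generic Cartesian impedance law whose stiffness varies along the trajectory; the gains, dimensions, and the linearly decaying stiffness profile are illustrative assumptions.

```python
import numpy as np

def impedance_force(x, xd, x_des, xd_des, K):
    """Cartesian impedance law: F = K (x_des - x) + D (xd_des - xd).
    Damping is set elementwise to 2*sqrt(K) for a critically damped
    response, which is valid here because K is diagonal."""
    D = 2.0 * np.sqrt(K)
    return K @ (x_des - x) + D @ (xd_des - xd)

# Hypothetical compliance schedule: the teacher indicated that the robot
# should become softer toward the end of the motion (e.g., near contact).
T = 100
stiffness_profile = np.linspace(800.0, 150.0, T)      # N/m per time step
x, xd = np.zeros(3), np.zeros(3)                      # current state (stub)
x_des, xd_des = np.array([0.1, 0.0, 0.3]), np.zeros(3)
for t in range(T):
    K = np.eye(3) * stiffness_profile[t]
    F = impedance_force(x, xd, x_des, xd_des, K)      # force command
```

    Whatever the teaching modality, a compliance-variation interface ultimately has to produce a schedule like `stiffness_profile` for the executing controller.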

    Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning

    There are physical Human–Robot Interaction (pHRI) applications where the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial for safe manipulation of the human. Computer vision methods provide pre-grasp information, but with strong constraints imposed by field environments. Force-based compliant control, after grasping, limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper, which can be used to better characterize the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper with two underactuated fingers and a fixed finger carrying a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application of the dataset is included: a fusion approach is used to estimate the grasped forearm section from both kinesthetic and tactile information with a deep regression neural network. First, tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) neural networks, since the data are sequential. Then, the outputs are fed to a fusion neural network to enhance the estimation. The experiments conducted show good results when training both sources separately, with superior performance when the fusion approach is used. This research was funded by the University of Málaga; the Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, grant number RTI2018-093421-B-I00; and the European Commission, grant number BES-2016-078237. Partial funding for the open access charge: Universidad de Málaga.
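    The described two-branch design can be sketched as follows; the PyTorch implementation, layer sizes, and input dimensions (128 taxels for the tactile sensor, 9 kinesthetic signals) are assumptions for illustration, not the authors' exact network.

```python
import torch
import torch.nn as nn

class BranchLSTM(nn.Module):
    """Encodes one sequential modality (tactile or kinesthetic) and makes
    a per-branch section estimate, so each branch can be trained alone."""
    def __init__(self, input_size, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)      # forearm-section regression

    def forward(self, seq):                        # seq: (batch, time, features)
        _, (h_n, _) = self.lstm(seq)
        feat = h_n[-1]                             # final hidden state
        return feat, self.head(feat)

class FusionNet(nn.Module):
    """Concatenates the two branch features and refines the estimate."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.tactile = BranchLSTM(input_size=128, hidden_size=hidden_size)
        self.kinesthetic = BranchLSTM(input_size=9, hidden_size=hidden_size)
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden_size, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, tactile_seq, kinesthetic_seq):
        f_t, _ = self.tactile(tactile_seq)         # branches trained separately
        f_k, _ = self.kinesthetic(kinesthetic_seq) # before fusion fine-tuning
        return self.fusion(torch.cat([f_t, f_k], dim=-1))
```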

    Haptics in Robot-Assisted Surgery: Challenges and Benefits

    Robotic surgery is transforming current surgical practice, not only by improving conventional surgical methods but also by introducing innovative robot-enhanced approaches that broaden the capabilities of clinicians. Being mainly man-machine collaborative systems, surgical robots are seen as media that transfer pre- and intra-operative information to the operator and reproduce his or her motion, with appropriate filtering, scaling, or limitation, to physically interact with the patient. The field, however, is far from maturity and, more critically, is still a subject of controversy in medical communities. Limited or absent haptic feedback is reputed to be among the reasons that impede further spread of surgical robots. In this paper, the objectives and challenges of deploying haptic technologies in surgical robotics are discussed, and a systematic review is performed of works that have studied the effects of providing haptic information to users in the major branches of robotic surgery. We have tried to encompass both classical works and state-of-the-art approaches, aiming to deliver a comprehensive and balanced survey both for researchers starting their work in this field and for experts.

    Tactile Sensing for Robotic Applications

    This chapter provides an overview of tactile sensing in robotics. It is an attempt to answer three basic questions: • What is meant by tactile sensing? • Why is tactile sensing important? • How is tactile sensing achieved? The chapter is organized to answer these questions in sequence. Tactile sensing has often been considered as force sensing, which is not wholly true. In order to clarify such misconceptions, tactile sensing is defined in section 2. Why tactile sensing is important for robotics, and what parameters need to be measured by tactile sensors to successfully perform various tasks, are discussed in section 3. An overview of how tactile sensing has been achieved is given in section 4, where a number of technologies and transduction methods that have been used to improve the tactile sensing capability of robotic devices are discussed. The lack of any tactile analog to Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) optical arrays has often been cited as one of the reasons for the slow development of tactile sensing vis-à-vis other sense modalities like vision. Our own contribution – the development of tactile sensing arrays using piezoelectric polymers and silicon micromachining – is an attempt in the direction of achieving a tactile analog of CMOS optical arrays. The first-phase implementation of these tactile sensing arrays is discussed in section 5. Section 6 concludes the chapter with a brief discussion of the present status of tactile sensing and the challenges that remain.

    Model-free Probabilistic Movement Primitives for physical interaction

    Physical interaction in robotics is a complex problem that requires accurate reproduction not only of the kinematic trajectories but also of the forces and torques exhibited during the movement. We base our approach on Movement Primitives (MPs), as MPs provide a framework for modelling complex movements and introduce useful operations on them, such as generalization to novel situations and time scaling. Usually, MPs are trained with imitation learning, where an expert demonstrates the trajectories. However, MPs used in physical interaction either require additional learning approaches, e.g., reinforcement learning, or are based on handcrafted solutions. Our goal is to learn and generate movements for physical interaction with imitation learning, from a small set of demonstrated trajectories. The Probabilistic Movement Primitives (ProMPs) framework is a recent MP approach that introduces beneficial properties, such as combination and blending of MPs, and represents the correlations present in the movement. ProMPs provide a variable-stiffness controller that reproduces the movement, but it requires a dynamics model of the system. Learning such a model is not a trivial task; we therefore introduce model-free ProMPs, which jointly learn the movement and the necessary actions from a few demonstrations. We derive a variable-stiffness controller analytically. We further extend the ProMPs to include force and torque signals, necessary for physical interaction. We evaluate our approach in simulated and real robot tasks.
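    For context, the minimal sketch below shows the standard weight-space step that any ProMP builds on: fitting a Gaussian over basis-function weights from a few demonstrations. It covers only the vanilla representation for one degree of freedom, not the paper's model-free controller or its force/torque extension.

```python
import numpy as np

def rbf_features(t, n_basis=20, width=0.02):
    """Normalized RBF basis evaluated at phase values t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=20, reg=1e-6):
    """Fit N(mu_w, Sigma_w) over basis weights from demonstrated
    trajectories (a list of equal-length 1-D arrays, one DoF)."""
    T = len(demos[0])
    Phi = rbf_features(np.linspace(0, 1, T), n_basis)          # (T, n_basis)
    W = np.stack([np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_basis),
                                  Phi.T @ d) for d in demos])  # ridge fit
    return W.mean(axis=0), np.cov(W.T), Phi

# Three noisy demonstrations of the same (synthetic) motion.
t = np.linspace(0, 1, 200)
demos = [np.sin(np.pi * t) + 0.05 * np.random.randn(200) for _ in range(3)]
mu_w, Sigma_w, Phi = fit_promp(demos)
mean_traj = Phi @ mu_w                                   # mean movement
var_traj = np.einsum('ti,ij,tj->t', Phi, Sigma_w, Phi)   # pointwise variance
```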

    ILoSA: Interactive Learning of Stiffness and Attractors

    Teaching robots how to apply forces according to our preferences is still an open challenge that has to be tackled from multiple engineering perspectives. This paper studies how to learn variable impedance policies where both the Cartesian stiffness and the attractor can be learned from human demonstrations and corrections with a user-friendly interface. The presented framework, named ILoSA, uses Gaussian Processes for policy learning, identifying regions of uncertainty and allowing interactive corrections, stiffness modulation, and active disturbance rejection. The experimental evaluation of the framework is carried out on a Franka Emika Panda in three separate cases with unique force-interaction properties: 1) pulling a plug, wherein a sudden force discontinuity occurs upon successful removal of the plug; 2) pushing a box, where a sustained force is required to keep the robot in motion; and 3) wiping a whiteboard, in which the force is applied perpendicular to the direction of movement.
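    One plausible reading of the uncertainty-aware policy, sketched below under assumed data, kernel, and decay scale (not ILoSA's implementation), is to regress attractor corrections with a Gaussian Process and lower the stiffness where predictive uncertainty is high, so the robot stays compliant where it has not been taught.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: end-effector positions mapped to the
# demonstrated attractor offset along one Cartesian axis.
X = np.random.uniform(0, 1, (30, 3))
y = 0.1 * np.sin(2 * np.pi * X[:, 0])

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4))
gp.fit(X, y)

def policy(x, k_max=600.0, decay=0.05):
    """Attractor offset plus a stiffness that shrinks with GP uncertainty."""
    mean, std = gp.predict(x.reshape(1, -1), return_std=True)
    stiffness = k_max * np.exp(-std[0] / decay)   # assumed modulation rule
    return mean[0], stiffness

offset, k = policy(np.array([0.5, 0.2, 0.1]))     # query at a new pose
```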