5 research outputs found

    Encoding bi-manual coordination patterns from human demonstrations

    ABSTRACT Humans perform tasks such as bowl mixing bi-manually, but programming them on a robot can be challenging, especially in tasks that require force control or on-line stiffness modulation. In this paper we first propose a user-friendly setup for demonstrating bi-manual tasks while collecting complementary information on the motion and forces sensed on a robotic arm, as well as the human hand configuration and grasp information. Second, for learning the task we propose a method for extracting task constraints for each arm and coordination patterns between the arms. We use a statistical encoding of the data based on the extracted constraints and reproduce the task using a Cartesian impedance controller.
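    The Cartesian impedance controller the abstract mentions can be illustrated with the standard spring-damper law; this is a generic sketch, not the paper's implementation, and the gain values and poses below are hypothetical.

    ```python
    import numpy as np

    def impedance_wrench(x_d, x, xdot_d, xdot, K, D):
        """Generic Cartesian impedance law: wrench = K (x_d - x) + D (xdot_d - xdot)."""
        return K @ (x_d - x) + D @ (xdot_d - xdot)

    # Hypothetical 3-DoF translational example with diagonal gains.
    K = np.diag([300.0, 300.0, 100.0])   # stiffness (N/m); softer along z for compliant contact
    D = np.diag([30.0, 30.0, 10.0])      # damping (N*s/m)
    x_d = np.array([0.5, 0.0, 0.30])     # desired end-effector position (m)
    x   = np.array([0.5, 0.0, 0.32])     # measured position: 2 cm too high along z
    f = impedance_wrench(x_d, x, np.zeros(3), np.zeros(3), K, D)
    # f pushes downward along z only: [0, 0, -2] N
    ```

    On-line stiffness modulation, as described in the abstract, amounts to varying K over the task phases rather than keeping it constant.
    
    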

    Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps

    In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints.
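    The Mahalanobis-distance cost the abstract describes can be sketched as follows; the feature vectors and covariance here are hypothetical stand-ins for the learned demonstration distribution, not the paper's actual features.

    ```python
    import numpy as np

    def mahalanobis_cost(phi, mu, Sigma):
        """Squared Mahalanobis distance of feature vector phi from the
        demonstration distribution N(mu, Sigma)."""
        d = phi - mu
        return float(d @ np.linalg.solve(Sigma, d))

    mu = np.array([0.0, 0.0])
    Sigma = np.array([[1.0, 0.0],
                      [0.0, 4.0]])  # demos vary more in the 2nd feature

    # A unit deviation in the low-variance feature costs more than the same
    # deviation in the high-variance feature:
    c1 = mahalanobis_cost(np.array([1.0, 0.0]), mu, Sigma)  # 1.0
    c2 = mahalanobis_cost(np.array([0.0, 1.0]), mu, Sigma)  # 0.25
    ```

    This captures the key property the abstract relies on: motion features that are consistent across demonstrations (low variance) are treated as likely task constraints and penalized heavily when violated, while features that vary freely across demonstrations cost little.
    
    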

    Smart Navigation in Surgical Robotics

    Minimally invasive surgery, and laparoscopic surgery in particular, has profoundly changed how abdominal surgical interventions are performed. Laparoscopic surgery has since evolved toward even less invasive techniques, such as Single Port Access Surgery. This technique consists of making a single incision through which the instruments and the laparoscopic camera are introduced via a single multi-port trocar. Its main advantages are a shorter hospital stay for the patient and better cosmetic results, since the trocar is usually introduced through the navel, leaving the scar hidden there. However, because the instruments are introduced through the same trocar, the intervention is more complicated for the surgeon, who needs specific skills for this type of procedure. This thesis addresses the problem of navigating surgical instruments with teleoperated robotic platforms in single-port surgery. Specifically, it proposes a navigation method with a virtual remote center of rotation that coincides with the insertion point of the instruments (the fulcrum point). To estimate this point, the forces exerted by the abdomen on the surgical instruments are used, measured by force sensors placed at the base of the instruments. Because these instruments also interact with soft tissue inside the abdomen, which would distort the estimate of the insertion point, a method to detect this circumstance is needed. To this end, a tissue-interaction detector based on hidden Markov models was used, trained to detect four generic gestures.
    This thesis also proposes the use of haptic guidance to improve the surgeon's experience when using teleoperated robotic platforms. Specifically, Learning from Demonstration is used to generate forces that can guide the surgeon while solving specific tasks. The proposed navigation method was implemented on the CISOBOT surgical platform, developed by the Universidad de Málaga. The experimental results validate both the proposed navigation method and the soft-tissue interaction detector. In addition, a preliminary study of the haptic guidance system was carried out: a generic task, peg-in-hole insertion, was used in experiments showing that the proposed method can solve this task and similar ones.
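    Estimating a fulcrum point from wrenches measured at the instrument base, as the abstract describes, can be posed as a least-squares problem; this is a generic sketch under the assumption that each sample gives a pure contact force F with moment tau = p x F about the sensor frame, which is a simplification of the thesis's actual method.

    ```python
    import numpy as np

    def skew(v):
        """Skew-symmetric matrix so that skew(v) @ w == np.cross(v, w)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def estimate_fulcrum(forces, torques):
        """Least-squares point p fitting tau_i = p x F_i for all samples.
        Since p x F = -skew(F) @ p, each sample contributes skew(F_i) p = -tau_i."""
        A = np.vstack([skew(F) for F in forces])
        b = np.concatenate([-tau for tau in torques])
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return p

    # Synthetic check: forces in three directions acting at a known point.
    p_true = np.array([0.1, 0.2, 0.3])
    forces = [np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]),
              np.array([0.0, 0.0, 1.0])]
    torques = [np.cross(p_true, F) for F in forces]
    p_hat = estimate_fulcrum(forces, torques)
    ```

    A single sample leaves the component of p along F unobservable, which is why several force directions (accumulated as the abdominal wall pushes on the moving instrument) are needed; it also shows why unmodeled soft-tissue contact forces corrupt the estimate, motivating the HMM-based interaction detector.
    
    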

    Learning of Generalized Manipulation Strategies in Service Robotics

    This thesis makes a contribution to autonomous robotic manipulation. The core is a novel constraint-based representation of manipulation tasks suitable for flexible online motion planning. Interactive learning from natural human demonstrations is combined with parallelized optimization to enable efficient learning of complex manipulation tasks with limited training data. Prior planning results are encoded automatically into the model to reduce planning time and solve the correspondence problem

    Robots that Learn and Plan — Unifying Robot Learning and Motion Planning for Generalized Task Execution

    Robots have the potential to assist people with a variety of everyday tasks, but to achieve that potential robots require software capable of planning and executing motions in cluttered environments. To address this, over the past few decades, roboticists have developed numerous methods for planning motions to avoid obstacles with increasingly stronger guarantees, from probabilistic completeness to asymptotic optimality. Some of these methods have even considered the types of constraints that must be satisfied to perform useful tasks, but these constraints must generally be manually specified. In recent years, there has been a resurgence of methods for automatic learning of tasks from human-provided demonstrations. Unfortunately, these two fields, task learning and motion planning, have evolved largely separately from one another, and the learned models are often not usable by motion planners. In this thesis, we aim to bridge the gap between robot task learning and motion planning by employing a learned task model that can subsequently be leveraged by an asymptotically-optimal motion planner to autonomously execute the task. First, we show that application of a motion planner enables task performance while avoiding novel obstacles and extend this to dynamic environments by replanning at reactive rates. Second, we generalize the method to accommodate time-invariant model parameters, allowing more information to be gleaned from the demonstrations. Third, we describe a more principled approach to temporal registration for such learning methods that mirrors the ultimate integration with a motion planner and often reduces the number of demonstrations required. Finally, we extend this framework to the domain of mobile manipulation. 
    We empirically evaluate each of these contributions on multiple household tasks using the Aldebaran Nao, Rethink Robotics Baxter, and Fetch mobile manipulator robots to show that these approaches improve task execution success rates and reduce the amount of human-provided information required.