110 research outputs found

    Stabilize to Act: Learning to Coordinate for Bimanual Manipulation

    Key to rich, dexterous manipulation in the real world is the ability to coordinate control across two hands. However, while the promise afforded by bimanual robotic systems is immense, constructing control policies for dual-arm autonomous systems brings inherent difficulties. One such difficulty is the high dimensionality of the bimanual action space, which adds complexity to both model-based and data-driven methods. We counteract this challenge by drawing inspiration from humans to propose a novel role-assignment framework: a stabilizing arm holds an object in place to simplify the environment while an acting arm executes the task. We instantiate this framework with BimanUal Dexterity from Stabilization (BUDS), which uses a learned restabilizing classifier to alternate between updating a learned stabilization position to keep the environment unchanged, and accomplishing the task with an acting policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of varying complexity on real-world robots, such as zipping jackets and cutting vegetables. Given only 20 demonstrations, BUDS achieves 76.9% task success across our task suite, and generalizes to out-of-distribution objects within a class with a 52.7% success rate. Due to the precision these complex tasks demand, BUDS is 56.0% more successful than an unstructured baseline that instead learns a behavior-cloning (BC) stabilizing policy. Supplementary material and videos can be found at https://sites.google.com/view/stabilizetoact. Comment: Conference on Robot Learning, 2023
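
    The alternation BUDS describes can be pictured as a simple control loop; the sketch below is a rough illustration under assumed interfaces (env, restabilize_classifier, stabilization_policy, acting_policy), not the authors' released implementation.

```python
# Rough sketch of the stabilize-then-act loop described in the abstract.
# env, restabilize_classifier, stabilization_policy and acting_policy are
# hypothetical interfaces, not the BUDS code release.

def buds_episode(env, restabilize_classifier, stabilization_policy, acting_policy,
                 max_steps=200):
    """Alternate between re-stabilizing with one arm and acting with the other."""
    obs = env.reset()
    stab_pose = stabilization_policy(obs)        # where the stabilizing arm should hold
    env.move_stabilizing_arm(stab_pose)
    for _ in range(max_steps):
        if restabilize_classifier(obs):          # environment drifted: update the hold
            stab_pose = stabilization_policy(obs)
            env.move_stabilizing_arm(stab_pose)
        action = acting_policy(obs)              # acting arm follows the demo-learned policy
        obs, done = env.step_acting_arm(action)
        if done:
            return True
    return False
```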

    Assistance strategies for robotized laparoscopy

    Robotizing laparoscopic surgery allows better accuracy not only when a scale factor is applied between master and slave or through the use of tools with 3 DoF, which cannot be used in conventional manual surgery, but also through additional computer support. Relying on computer assistance, different strategies that facilitate the surgeon's task can be incorporated, either in the form of autonomous navigation or cooperative guidance, providing sensory or visual feedback, or introducing certain limitations of movement. This paper describes different forms of assistance aimed at improving the working capacity of the surgeon and achieving greater safety for the patient, together with the results obtained with the prototype developed at UPC.
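
    As a concrete illustration of the "limitation of movements" style of assistance mentioned above, a forbidden-region virtual fixture can shorten any commanded tool motion that would enter a no-go zone. The sketch below is generic and assumes a spherical forbidden region; it is not the UPC prototype's implementation.

```python
# Generic forbidden-region virtual fixture: shorten a commanded step so the tool
# tip never enters a spherical no-go zone. Illustrative only, not the UPC prototype.
import numpy as np

def clamp_to_safe_region(tool_pos, commanded_step, zone_center, zone_radius):
    target = tool_pos + commanded_step
    if np.linalg.norm(target - zone_center) >= zone_radius:
        return commanded_step                    # the full step stays outside the zone
    step_len = np.linalg.norm(commanded_step)
    direction = commanded_step / (step_len + 1e-9)
    lo, hi = 0.0, step_len                       # bisect for the largest safe step length
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(tool_pos + mid * direction - zone_center) >= zone_radius:
            lo = mid
        else:
            hi = mid
    return lo * direction

safe_step = clamp_to_safe_region(np.zeros(3), np.array([0.0, 0.0, 0.02]),
                                 zone_center=np.array([0.0, 0.0, 0.05]), zone_radius=0.04)
```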

    TOWARDS THE GROUNDING OF ABSTRACT CATEGORIES IN COGNITIVE ROBOTS

    The grounding of language in humanoid robots is a fundamental problem, especially in social scenarios that involve the interaction of robots with human beings. Indeed, natural language represents the most natural interface for humans to interact and exchange information about concrete entities like KNIFE and HAMMER and abstract concepts such as MAKE and USE. This research domain is very important not only for the advances it can produce in the design of human-robot communication systems, but also for the implications it can have for cognitive science. Abstract words are used in daily conversations among people to describe events and situations that occur in the environment. Many scholars have suggested that the distinction between concrete and abstract words lies on a continuum along which all entities vary in their level of abstractness. The work presented herein aimed to ground abstract concepts, like concrete ones, in perception and action systems. This made it possible to investigate how different behavioural and cognitive capabilities can be integrated in a humanoid robot in order to bootstrap the development of higher-order skills such as the acquisition of abstract words. To this end, three neuro-robotics models were implemented. The first neuro-robotics experiment consisted in training a humanoid robot to perform a set of motor primitives (e.g. PUSH, PULL) that, hierarchically combined, led to the acquisition of higher-order words (e.g. ACCEPT, REJECT). The implementation of this model, based on feed-forward artificial neural networks, permitted the assessment of the training methodology adopted for the grounding of language in humanoid robots. In the second experiment, the architecture used in the first study was reimplemented with recurrent artificial neural networks, which enabled the temporal specification of the action primitives to be executed by the robot. This increased the number of action combinations that can be taught to the robot for generating more complex movements. For the third experiment, a model based on recurrent neural networks that integrated multi-modal inputs (i.e. language, vision and proprioception) was implemented for the grounding of abstract action words (e.g. USE, MAKE). The abstract representations of actions ("one-hot" encoding) used in the other two experiments were replaced with the joint values recorded from the iCub robot's sensors. Experimental results showed that motor primitives have different activation patterns according to the action sequence in which they are embedded. Furthermore, the simulations suggested that the acquisition of concepts related to abstract action words requires the reactivation of similar internal representations activated during the acquisition of the basic concepts, directly grounded in perceptual and sensorimotor knowledge, contained in the hierarchical structure of the words used to ground the abstract action words. This study was financed by the EU project RobotDoC (235065) from the Seventh Framework Programme (FP7), Marie Curie Actions Initial Training Network.
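
    A minimal sketch of the third setup's idea (a recurrent network fusing language, vision and proprioception into one hidden state) is given below; the layer sizes, the Elman-style update, and all names are illustrative assumptions rather than the thesis architecture.

```python
# Tiny Elman-style recurrent net that fuses a word vector, visual features and joint
# angles; sizes and the linear readout are arbitrary choices, not the thesis model.
import numpy as np

class TinyMultimodalRNN:
    def __init__(self, n_word=10, n_vision=16, n_proprio=7, n_hidden=32, n_out=7, seed=0):
        rng = np.random.default_rng(seed)
        n_in = n_word + n_vision + n_proprio
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, word_vec, vision_vec, joint_angles):
        x = np.concatenate([word_vec, vision_vec, joint_angles])
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)   # fused hidden state
        return self.W_out @ self.h                               # next joint command

rnn = TinyMultimodalRNN()
command = rnn.step(np.eye(10)[3], np.zeros(16), np.zeros(7))     # word index 3 as one-hot
```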

    Robust Control of Nonlinear Systems with applications to Aerial Manipulation and Self Driving Cars

    This work considers the problem of planning and control of robots in an environment with obstacles and external disturbances. The safety of robots is harder to achieve when planning in such uncertain environments. We describe a robust control scheme that combines three key components: system identification, uncertainty propagation, and trajectory optimization. Using this control scheme we tackle three problems. First, we develop a Nonlinear Model Predictive Controller (NMPC) for articulated rigid bodies and apply it to an aerial manipulation system to grasp objects mid-air. Next, we tackle the problem of obstacle avoidance under unknown external disturbances. We propose two approaches: the first uses adaptive NMPC with open-loop uncertainty propagation, and the second uses Tube NMPC. After that, we introduce dynamic models based on Artificial Neural Networks (ANN) and combine them with NMPC to control a ground vehicle and an aerial manipulation system. Finally, we introduce a software framework for integrating the above algorithms to perform complex tasks. The software framework lets users design systems that are robust to control and hardware failures, where preventive action is taken beforehand. The framework also allows safe testing of control and task logic in simulation before evaluation on the real robot. The software framework is applied to an aerial manipulation system to perform a package-sorting task, and extensive experiments demonstrate the ability of the system to recover from failures. In addition to robust control, we present two related control problems. The first pertains to designing an obstacle avoidance controller for an underactuated system that is Lyapunov stable; we extend a standard gyroscopic obstacle avoidance controller to apply to an underactuated system. The second addresses the navigation of an Unmanned Ground Vehicle (UGV) on unstructured terrain. We propose using NMPC combined with a high-fidelity physics engine to generate a reference trajectory that is dynamically feasible and accounts for unsafe areas in the terrain.
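
    The flavor of the NMPC component can be conveyed with a bare-bones receding-horizon loop for a unicycle model that tracks a goal while penalizing obstacle proximity; the dynamics, cost weights, and use of SciPy's generic optimizer are assumptions for illustration, not the solvers or models used in this work.

```python
# Bare-bones receding-horizon NMPC for a unicycle model with a soft obstacle penalty.
# The dynamics, cost weights and SciPy optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 10

def rollout(x0, controls):
    """Propagate x = (px, py, heading) under a sequence of (v, omega) controls."""
    x, traj = np.array(x0, dtype=float), []
    for v, w in controls.reshape(HORIZON, 2):
        x = x + DT * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        traj.append(x.copy())
    return np.array(traj)

def nmpc_step(x0, goal, obstacle, safe_dist=0.5):
    """Solve one horizon and return only the first control (then re-plan next step)."""
    def cost(u):
        traj = rollout(x0, u)
        tracking = np.sum((traj[:, :2] - goal) ** 2)
        dists = np.linalg.norm(traj[:, :2] - obstacle, axis=1)
        avoidance = 100.0 * np.sum(np.maximum(0.0, safe_dist - dists) ** 2)
        effort = 0.01 * np.sum(u ** 2)
        return tracking + avoidance + effort
    sol = minimize(cost, np.zeros(2 * HORIZON), method="L-BFGS-B")
    return sol.x[:2]

first_control = nmpc_step([0.0, 0.0, 0.0], np.array([2.0, 1.0]), np.array([1.0, 0.5]))
```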

    Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey

    Autonomous systems possess the features of inferring their own state, understanding their surroundings, and performing autonomous navigation. With the application of learning systems, such as deep learning and reinforcement learning, the vision-based self-state estimation, environment perception, and navigation capabilities of autonomous systems have been efficiently addressed, and many new learning-based algorithms have surfaced for autonomous visual perception and navigation. In this review, we focus on the applications of learning-based monocular approaches to ego-motion perception, environment perception, and navigation in autonomous systems, which distinguishes it from previous reviews that discussed traditional methods. First, we delineate the shortcomings of existing classical visual simultaneous localization and mapping (vSLAM) solutions, which demonstrate the necessity of integrating deep learning techniques. Second, we review visual environmental perception and understanding methods based on deep learning, including deep-learning-based monocular depth estimation, monocular ego-motion prediction, image enhancement, object detection, semantic segmentation, and their combinations with traditional vSLAM frameworks. Then, we focus on visual navigation based on learning systems, mainly reinforcement learning and deep reinforcement learning. Finally, we examine several challenges and promising directions discussed in related research on learning systems in the era of computer science and robotics. Comment: This paper has been accepted by IEEE TNNLS
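
    One common pattern in the surveyed combinations is to back-project pixels with a learned monocular depth map and feed the resulting 3D points to a classical geometric pipeline; in the sketch below the depth "network" is a constant-output stand-in and the camera intrinsics are made up for illustration.

```python
# Back-project a (learned) monocular depth map into a 3D point cloud that a classical
# vSLAM/geometry pipeline could consume. The depth "network" and intrinsics are stand-ins.
import numpy as np

def backproject(depth, K):
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def fake_depth_net(image):
    return np.full(image.shape[:2], 2.0)             # constant 2 m "prediction"

K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
points = backproject(fake_depth_net(np.zeros((480, 640, 3))), K)  # could seed a map
```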

    Context-aware learning for robot-assisted endovascular catheterization

    Endovascular intervention has become a mainstream treatment for cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave arrangement also removes radiation exposure for the operator. However, the integration of robotic systems into the current surgical workflow is still debated, since repetitive, easy tasks bring little value when executed by robotic teleoperation. Current systems offer very low autonomy; potential autonomous features could bring benefits such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities. This research proposes frameworks for automated catheterisation based on different machine learning algorithms, including Learning-from-Demonstration, Reinforcement Learning, and Imitation Learning. These frameworks focus on integrating task context into the skill-learning process, thereby achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation in the form of autonomous task planning and self-optimization with clinically relevant factors, and motivate the design of intelligent, intuitive, and collaborative robots for use under non-ionizing imaging modalities.
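
    Context integration can be pictured as conditioning the learned policy on task or anatomy features alongside the catheter state; the linear map and the state/context dimensions below are purely hypothetical stand-ins, not the Learning-from-Demonstration, Reinforcement Learning, or Imitation Learning models developed in this research.

```python
# Hypothetical context-conditioned catheter policy: the same tip state maps to different
# insertion/rotation commands depending on a context vector (e.g. vessel branch). This
# linear map is a stand-in, not the LfD/RL/IL models proposed in the research.
import numpy as np

def context_conditioned_action(W, catheter_state, context):
    x = np.concatenate([catheter_state, context])
    return W @ x                                   # [insertion velocity, rotation velocity]

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (2, 6 + 4))               # 6 assumed state dims + 4 context dims
state = np.zeros(6)                                # e.g. tip pose/curvature features
context = np.array([1.0, 0.0, 0.0, 0.0])           # e.g. one-hot vessel-branch label
action = context_conditioned_action(W, state, context)
```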

    Simulation and Planning of a 3D Spray Painting Robotic System

    In this dissertation, a 3D spray painting robotic system is proposed. The system includes a realistic spray simulation with sufficient accuracy to mimic real spray painting, as well as an optimized path-generation algorithm capable of painting non-trivial 3D designs. The simulation takes 3D CAD or 3D-scanned pieces as input and produces a realistic visual effect that allows qualitative analysis of the painted product. An evaluation metric is also presented that scores the painting trajectory based on thickness, uniformity, time, and paint waste.
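
    A trajectory score of the kind described could combine the four terms roughly as below; the weights, units, and simulated thickness statistics are assumptions for illustration, not the dissertation's actual metric.

```python
# Assumed trajectory score combining thickness error, uniformity, cycle time and paint
# waste; weights and units are illustrative, not the dissertation's metric.
import numpy as np

def score_trajectory(thickness_map, target_thickness, cycle_time_s, paint_used_l,
                     weights=(1.0, 1.0, 0.1, 0.1)):
    """Lower is better."""
    thickness_err = np.mean(np.abs(thickness_map - target_thickness))  # mean coverage error
    uniformity = np.std(thickness_map)                                 # spread over the surface
    w_err, w_uni, w_time, w_paint = weights
    return (w_err * thickness_err + w_uni * uniformity
            + w_time * cycle_time_s + w_paint * paint_used_l)

simulated = np.random.default_rng(1).normal(120.0, 15.0, (64, 64))     # microns, simulated
print(score_trajectory(simulated, target_thickness=120.0, cycle_time_s=45.0, paint_used_l=0.8))
```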

    Spiking Neural Network that Maps from Generalized Coordinates to Cartesian Coordinates

    In this thesis, I look to understand how insects compute task-level quantities by integrating range-fractionated sensory signals to create a sparse spatial coding of Cartesian positions. I created biologically plausible 2-D and 3-D models of the leg of one stick insect species (Carausius morosus) and encoded the foot position through a spiking neural network. The model uses spiking afferents from three angles of the insect leg, which are integrated by one non-spiking interneuron. This model contains many dendritic compartments and one somatic compartment that encode the foot's position relative to the body. The Functional Subnetwork Approach (FSA) was used to tune the conductances between the compartments (Szczecinski et al., 2017), and the Product of Exponentials (POE) formulation was used to calculate the spatial kinematic chain of the stick insect leg (Murray et al., 1994). The system accurately encodes the foot position, and its accuracy depends on the width of the sensory encoding curves, or “bell curves”. A discussion of limitations, studies related to this work, and motivation for future work is included.
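
    The range-fractionated ("bell curve") encoding of a joint angle, and the forward map from joint angles to foot position, can be sketched as follows; the planar three-link geometry and curve widths are placeholders rather than the Carausius morosus leg model or its POE parameters.

```python
# Bell-curve (range-fractionated) encoding of joint angles plus a simple planar
# three-link forward map to the foot position; geometry and widths are placeholders.
import numpy as np

def bell_curve_population(angle, centers, width):
    """Each afferent fires most strongly near its preferred angle."""
    return np.exp(-0.5 * ((angle - centers) / width) ** 2)

def foot_position(thetas, link_lengths=(1.0, 1.2, 1.5)):
    """Planar 3-joint forward kinematics as a stand-in for the POE leg model."""
    x = y = 0.0
    cumulative = 0.0
    for theta, length in zip(thetas, link_lengths):
        cumulative += theta
        x += length * np.cos(cumulative)
        y += length * np.sin(cumulative)
    return np.array([x, y])

centers = np.linspace(-np.pi / 2, np.pi / 2, 8)        # 8 afferents per joint (assumed)
rates = [bell_curve_population(t, centers, width=0.3) for t in (0.2, -0.4, 0.6)]
print(foot_position((0.2, -0.4, 0.6)), np.round(rates[0], 2))
```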

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area. Many chapters concern advanced research in this growing area. The book provides critical analysis of clinical trials and assesses the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it works as a valuable source for researchers interested in the involved subjects, whether they are currently “medical roboticists” or not.