
    Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

    Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment comprising more than 2 km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings to achieve a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer, allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multiple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine-tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots are made publicly available at rl-navigation.github.io/deployable
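The embedding-precomputation idea above can be sketched in a few lines; the encoder, array shapes, and helper names below are hypothetical stand-ins for illustration, not the authors' released code:

```python
import numpy as np

def precompute_embeddings(images, encoder):
    """Run the (frozen) visual encoder once over every recorded frame."""
    return np.stack([encoder(img) for img in images])

def sample_batch(embeddings, actions, goals, rng, batch_size=256):
    """Sample (embedding, action, goal) transitions without touching pixels."""
    idx = rng.integers(0, len(actions), size=batch_size)
    return embeddings[idx], actions[idx], goals[idx]

# Toy usage: a linear "encoder" standing in for a pretrained CNN.
rng = np.random.default_rng(0)
images = rng.random((1000, 8, 8))        # 1000 recorded frames
encoder = lambda img: img.mean(axis=0)   # placeholder embedding function
embeddings = precompute_embeddings(images, encoder)
actions = rng.integers(0, 4, size=1000)  # discrete action labels
goals = rng.integers(0, 1000, size=1000) # goal frame indices

obs, act, goal = sample_batch(embeddings, actions, goals, rng)
```

Because the encoder runs only once per frame, the training loop touches small embedding vectors instead of raw images, which is what makes throughputs of tens of thousands of transitions per second plausible on a desktop machine.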

    An Innovative Mission Management System for Fixed-Wing UAVs

    This paper presents two innovative units linked together to build the main frame of a UAV Mission Management System. The first unit is a Path Planner for small UAVs able to generate optimal paths in a tridimensional environment, generating flyable and safe paths with the lowest computational effort. The second unit is the Flight Management System based on Nonlinear Model Predictive Control, which tracks the reference path and exploits a spherical camera model to avoid unpredicted obstacles along the path. The control system solves on-line (i.e. at each sampling time) a finite-horizon (state-horizon) open-loop optimal control problem with a Genetic Algorithm. This algorithm finds the command sequence that minimizes the tracking error with respect to the reference path, driving the aircraft away from sensed obstacles and towards the desired trajectory.
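A minimal sketch of the genetic-algorithm step, assuming a toy 1-D kinematic model with invented bounds and gains (the paper's aircraft model, horizon length, and GA operators are not specified here):

```python
import numpy as np

def rollout(x0, commands, dt=0.1):
    """Integrate a toy 1-D kinematic model under a command sequence."""
    xs, x = [], x0
    for u in commands:
        x = x + u * dt
        xs.append(x)
    return np.array(xs)

def fitness(commands, x0, reference, obstacle, safe_dist=0.5):
    """Tracking error to the reference plus a penalty near the obstacle."""
    xs = rollout(x0, commands)
    tracking = np.sum((xs - reference) ** 2)
    penalty = np.sum(np.maximum(0.0, safe_dist - np.abs(xs - obstacle)) ** 2)
    return tracking + 100.0 * penalty

def ga_solve(x0, reference, obstacle, horizon=10, pop=40, gens=60, seed=0):
    """Search for the command sequence minimizing the fitness above."""
    rng = np.random.default_rng(seed)
    population = rng.uniform(-1, 1, size=(pop, horizon))
    for _ in range(gens):
        scores = np.array([fitness(c, x0, reference, obstacle) for c in population])
        elite = population[np.argsort(scores)[: pop // 2]]        # selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        cut = rng.integers(1, horizon, size=pop)
        mask = np.arange(horizon)[None, :] < cut[:, None]         # one-point crossover
        population = np.where(mask, parents[:, 0], parents[:, 1])
        population += rng.normal(0, 0.05, population.shape)       # mutation
        population[0] = elite[0]                                  # elitism
    scores = np.array([fitness(c, x0, reference, obstacle) for c in population])
    return population[np.argmin(scores)]

reference = np.linspace(0.1, 1.0, 10)  # desired positions over the horizon
best = ga_solve(x0=0.0, reference=reference, obstacle=5.0)
```

At each sampling time the controller would re-run this open-loop optimization from the current state and apply only the first command of the winning sequence, in the usual receding-horizon fashion.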

    Conferring robustness to path-planning for image-based control

    Path-planning has been proposed in visual servoing for reaching the desired location while fulfilling various constraints. Unfortunately, the real trajectory can be significantly different from the reference trajectory due to the presence of uncertainties in the model used, with the consequence that some constraints may not be fulfilled, leading to a failure of the visual servoing task. This paper proposes a new strategy for addressing this problem, where the idea consists of conferring robustness to the path-planning scheme by considering families of admissible models. In order to obtain these families, uncertainty in the form of random variables is introduced on the available image points and intrinsic parameters. Two families are considered: one obtained by generating a given number of admissible models corresponding to extreme values of the uncertainty, and one obtained by estimating the extreme values of the components of the admissible models. Each model of these families identifies a reference trajectory, which is parametrized by design variables that are common to all the models. The design variables are hence determined by imposing that all the reference trajectories fulfill the required constraints. Discussions on the convergence and robustness of the proposed strategy are provided, in particular showing that the satisfaction of the visibility and workspace constraints for the second family ensures the satisfaction of these constraints for all models bounded by this family. The proposed strategy is illustrated through simulations and experiments. © 2011 IEEE.
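The first family (admissible models at extreme values of bounded uncertainty) can be illustrated with a toy enumeration; the point count, error bounds, and function name here are hypothetical:

```python
import itertools
import numpy as np

def extreme_models(points, focal, point_err, focal_err):
    """Return a model for every corner of the uncertainty box: each image
    coordinate and the focal length takes its lower or upper extreme."""
    models = []
    n = points.size
    for signs in itertools.product((-1.0, 1.0), repeat=n + 1):
        p = points + np.array(signs[:n]).reshape(points.shape) * point_err
        f = focal + signs[n] * focal_err
        models.append((p, f))
    return models

points = np.array([[100.0, 120.0], [140.0, 80.0]])  # two image points (px)
models = extreme_models(points, focal=500.0, point_err=1.0, focal_err=5.0)
```

Enumerating every corner grows as 2^(n+1) in the number of uncertain components, which is consistent with the paper generating only "a given number" of admissible models rather than all extremes for realistic feature counts.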

    Compositional Servoing by Recombining Demonstrations

    Learning-based manipulation policies trained from image inputs often show weak task-transfer capabilities. In contrast, visual servoing methods allow efficient task transfer in high-precision scenarios while requiring only a few demonstrations. In this work, we present a framework that formulates the visual servoing task as graph traversal. Our method not only extends the robustness of visual servoing, but also enables multitask capability based on a few task-specific demonstrations. We construct demonstration graphs by splitting existing demonstrations and recombining them. To traverse the demonstration graph at inference time, we utilize a similarity function that helps select the best demonstration for a specific task. This enables us to compute the shortest path through the graph. Ultimately, we show that recombining demonstrations leads to higher task-respective success. We present extensive simulation and real-world experimental results that demonstrate the efficacy of our approach. Comment: http://compservo.cs.uni-freiburg.d
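The graph-traversal formulation can be sketched as follows, with a hypothetical scalar "state" and similarity threshold standing in for image observations and the learned similarity function:

```python
import heapq

def build_graph(demos, similar):
    """Nodes are (demo index, step index); edges follow each demonstration,
    plus cross-demo edges wherever two states are deemed interchangeable."""
    graph = {}
    for d, demo in enumerate(demos):
        for i in range(len(demo)):
            node = (d, i)
            graph.setdefault(node, [])
            if i + 1 < len(demo):
                graph[node].append(((d, i + 1), 1.0))  # follow the demo
    nodes = list(graph)
    for a in nodes:                                     # cross-demo links
        for b in nodes:
            if a[0] != b[0] and similar(demos[a[0]][a[1]], demos[b[0]][b[1]]):
                graph[a].append((b, 1.0))
    return graph

def shortest_path(graph, start, goal):
    """Plain Dijkstra over the demonstration graph."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Two toy demos over 1-D "states"; states within 0.1 are interchangeable,
# so the path can hop from the end of demo 0 into the start of demo 1.
demos = [[0.0, 1.0, 2.0], [2.05, 3.0, 4.0]]
similar = lambda s, t: abs(s - t) < 0.1
graph = build_graph(demos, similar)
path = shortest_path(graph, (0, 0), (1, 2))
```

The cross-demo edges are what "recombining" buys: a task that no single demonstration covers end-to-end can still be solved by stitching segments from several of them.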

    Image space trajectory tracking of 6-DOF robot manipulator in assisting visual servoing

    As vision is a versatile sensor, vision-based control of robots is becoming more important in industrial applications. The control signal generated using traditional control algorithms leads to undesirable movement of the end-effector during the positioning task. This movement may sometimes cause task failure due to visibility loss. In this paper, a sliding mode controller (SMC) is designed to track 2D image features in an image-based visual servoing task. The feature-trajectory tracking helps to keep the image features always in the camera field of view and thereby ensures the shortest trajectory of the end-effector. SMC is a suitable choice to handle the depth uncertainties associated with translational motion. Stability of the closed-loop system with the proposed controller is proved by the Lyapunov method. Three feature trajectories are generated to test the efficacy of the proposed method. Simulation tests are conducted, and the superiority of the proposed method over a Proportional Derivative – Sliding Mode Controller (PD-SMC) in terms of settling time and distance travelled by the end-effector is established in the presence and absence of depth uncertainties. The proposed controller is also tested in real time by integrating the visual servoing system with a 6-DOF industrial robot manipulator, an ABB IRB 1200.
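A hedged toy illustration of sliding-mode tracking of an image-feature trajectory under depth uncertainty: a single scalar feature with invented gains, not the paper's 6-DOF controller. The true depth Z differs from the controller's estimate, and the switching term absorbs the mismatch (a tanh boundary layer limits chattering):

```python
import numpy as np

def smc_step(s, s_ref, s_ref_dot, z_hat, lam=4.0, k=2.0, phi=0.05):
    """Velocity command for one scalar feature with estimated depth z_hat:
    feedforward on the reference plus a smoothed switching term."""
    e = s - s_ref                     # first-order sliding variable
    return (s_ref_dot - lam * e - k * np.tanh(e / phi)) * z_hat

def simulate(z_true=1.5, z_hat=1.0, dt=0.01, steps=400):
    """Track a sinusoidal feature reference despite the wrong depth."""
    s, errors = 0.5, []
    for i in range(steps):
        t = i * dt
        s_ref, s_ref_dot = 0.2 * np.sin(t), 0.2 * np.cos(t)
        v = smc_step(s, s_ref, s_ref_dot, z_hat)
        s = s + (v / z_true) * dt     # true image dynamics: s_dot = v / Z
        errors.append(abs(s - s_ref))
    return errors

errors = simulate()
```

Despite the controller believing the depth is 1.0 while the true depth is 1.5, the tracking error collapses into a small boundary layer around zero, which is the robustness property the abstract attributes to SMC.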

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of issues are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Robust fulfillment of constraints in robot visual servoing

    In this work, an approach based on sliding mode ideas is proposed to satisfy constraints in robot visual servoing. In particular, different types of constraints are defined in order to fulfill the visibility constraints (camera field-of-view and occlusions) for the image features of the detected object, to avoid exceeding the joint range limits and maximum joint speeds, and to avoid forbidden areas in the robot workspace. Moreover, another low-priority task is considered to track the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the allowed space for the constraints. The applicability and effectiveness of the proposed approach are demonstrated by simulation results for a simple 2D case and a complex 3D case study. Furthermore, the feasibility and robustness of the proposed approach are substantiated by experimental results using a conventional 6R industrial manipulator. This work was supported in part by the Spanish Government under grants BES-2010-038486 and Project DPI2013-42302-R, and the Generalitat Valenciana under grants VALi+d APOSTD/2016/044 and BEST/2017/029. Muñoz-Benavent, P.; Gracia Calandin, L. I.; Solanes Galbis, J. E.; Esparza Peidro, A.; Tornero Montserrat, J. (2018). Robust fulfillment of constraints in robot visual servoing. Control Engineering Practice, 71(1), 79-95. https://doi.org/10.1016/j.conengprac.2017.10.017
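The constraint-fulfillment idea (a switching action that activates only near a constraint boundary, leaving the rest of the allowed range free for the low-priority tracking task) can be caricatured in one dimension; the limits, margin, and gain below are invented for illustration:

```python
import numpy as np

def constrained_velocity(q, q_dot_task, q_min, q_max, margin=0.1, k=1.0):
    """Override the task velocity with a sliding-mode-style repulsive
    action whenever a joint enters the margin around its range limit."""
    q_dot = np.array(q_dot_task, dtype=float)
    for i in range(len(q_dot)):
        if q[i] > q_max[i] - margin:      # upper-limit constraint active
            q_dot[i] = min(q_dot[i], -k)  # switch to repulsive velocity
        elif q[i] < q_min[i] + margin:    # lower-limit constraint active
            q_dot[i] = max(q_dot[i], k)
    return q_dot

def simulate(steps=200, dt=0.01):
    """A task velocity that pushes relentlessly toward the upper limit."""
    q = np.array([0.0])
    q_min, q_max = np.array([-1.0]), np.array([1.0])
    task = np.array([2.0])
    for _ in range(steps):
        q = q + constrained_velocity(q, task, q_min, q_max) * dt
    return q

q_final = simulate()
```

The joint chatters against the margin boundary instead of crossing the hard limit, so the constraint is never violated while almost the entire allowed range remains usable by the tracking task.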