1,021 research outputs found

    Asymptotically optimal inspection planning via efficient near-optimal search on sampled roadmaps

    Inspection planning, the task of planning motions for a robot that enable it to inspect a set of points of interest, has applications in domains such as industrial, field, and medical robotics. Inspection planning can be computationally challenging, as the search space over motion plans grows exponentially with the number of points of interest to inspect. We propose a novel method, Incremental Random Inspection-roadmap Search (IRIS), that computes inspection plans whose length and set of successfully inspected points asymptotically converge to those of an optimal inspection plan. IRIS incrementally densifies a motion-planning roadmap using a sampling-based algorithm and performs efficient near-optimal graph search over the resulting roadmap as it is generated. We prove that the resulting algorithm is asymptotically optimal under very general assumptions about the robot and the environment. We demonstrate IRIS’s efficacy on a simulated inspection task with a planar five-DOF manipulator, on a simulated bridge inspection task with an Unmanned Aerial Vehicle (UAV), and on a medical endoscopic inspection task for a continuum parallel surgical robot in cluttered human anatomy. In all these systems, IRIS computes higher-quality inspection plans orders of magnitude faster than a prior state-of-the-art method.
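    To make the densify-then-search pattern concrete, the following Python sketch alternates roadmap sampling with a re-search for the best inspection plan found so far; the configuration sampler, visibility model, and greedy search stand-in are illustrative assumptions, not the IRIS implementation.

```python
import random

# Hypothetical sketch of the incremental densify-then-search pattern: grow a
# sampled roadmap in batches, then re-search it for a plan that trades off
# path length against the number of points of interest (POIs) inspected.

def sample_configuration():
    """Draw a random configuration (placeholder: a 2-D point, assumed collision-free)."""
    return (random.random(), random.random())

def visible_pois(q, pois, sensor_range=0.25):
    """Indices of POIs a configuration can inspect (simple range model, an assumption)."""
    return {i for i, p in enumerate(pois)
            if (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 <= sensor_range ** 2}

def plan_length(plan):
    """Euclidean length of a sequence of configurations."""
    return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in zip(plan, plan[1:]))

def search_roadmap(nodes, pois):
    """Stand-in for the near-optimal graph search: greedily chain nodes by coverage gain."""
    plan, covered = [], set()
    for q in sorted(nodes, key=lambda n: -len(visible_pois(n, pois))):
        gain = visible_pois(q, pois) - covered
        if gain:
            plan.append(q)
            covered |= gain
    return plan, covered

def incremental_inspection_planning(pois, batches=10, batch_size=50):
    """Alternate roadmap densification with re-search, keeping the best plan so far."""
    nodes, best = [], ([], set())
    for _ in range(batches):
        nodes.extend(sample_configuration() for _ in range(batch_size))  # densify
        plan, covered = search_roadmap(nodes, pois)                      # re-search
        if (len(covered), -plan_length(plan)) > (len(best[1]), -plan_length(best[0])):
            best = (plan, covered)
    return best

if __name__ == "__main__":
    pois = [(random.random(), random.random()) for _ in range(20)]
    plan, covered = incremental_inspection_planning(pois)
    print(f"inspected {len(covered)}/{len(pois)} POIs with {len(plan)} viewpoints")
```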

    Belief-space Planning for Active Visual SLAM in Underwater Environments.

    Autonomous mobile robots operating in a priori unknown environments must be able to integrate path planning with simultaneous localization and mapping (SLAM) in order to perform tasks like exploration, search and rescue, inspection, reconnaissance, target-tracking, and others. This level of autonomy is especially difficult in underwater environments, where GPS is unavailable, communication is limited, and environment features may be sparsely distributed. In these situations, the path taken by the robot can drastically affect the performance of SLAM, so the robot must plan and act intelligently and efficiently to ensure successful task completion. This document proposes novel research in belief-space planning for active visual SLAM in underwater environments. Our motivating application is ship hull inspection with an autonomous underwater robot. We design a Gaussian belief-space planning formulation that accounts for the randomness of the loop-closure measurements in visual SLAM and serves as the mathematical foundation for the research in this thesis. Combining this planning formulation with sampling-based techniques, we efficiently search for loop-closure actions throughout the environment and present a two-step approach for selecting revisit actions that results in an opportunistic active SLAM framework. The proposed active SLAM method is tested in hybrid simulations and real-world field trials of an underwater robot performing inspections of a physical modeling basin and a U.S. Coast Guard cutter. To reduce computational load, we present research into efficient planning by compressing the representation and examining the structure of the underlying SLAM system. We propose the use of graph sparsification methods online to reduce complexity by planning with an approximate distribution that represents the original, full pose graph. We also propose the use of the Bayes tree data structure—first introduced for fast inference in SLAM—to perform efficient incremental updates when evaluating candidate plans that are similar. As a final contribution, we design risk-averse objective functions that account for the randomness within our planning formulation. We show that this aversion to uncertainty in the posterior belief leads to desirable and intuitive behavior within active SLAM. PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133303/1/schaves_1.pd
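    As a hedged illustration of the risk-averse selection described above, the sketch below scores a candidate revisit action by the mean plus a weighted standard deviation of a simulated posterior uncertainty over sampled loop-closure outcomes; the belief model, success probabilities, and mean-plus-deviation form are assumptions for illustration, not the thesis's formulation.

```python
import math
import random

def posterior_uncertainty(action, loop_closed):
    """Placeholder belief update: scalar posterior uncertainty after `action`.

    A real system would propagate a Gaussian belief over the pose graph; here
    uncertainty simply shrinks when the hoped-for loop closure succeeds.
    """
    base = 0.1 * action["path_length"]
    return base * (0.3 if loop_closed else 1.0)

def risk_averse_score(action, n_samples=500, risk_weight=2.0):
    """Mean-plus-standard-deviation objective over sampled closure outcomes
    (an assumed risk measure, not the thesis's exact objective)."""
    samples = [posterior_uncertainty(action, random.random() < action["p_closure"])
               for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean + risk_weight * math.sqrt(var)

if __name__ == "__main__":
    random.seed(1)
    candidates = [
        {"name": "short revisit, unreliable closure", "path_length": 5.0, "p_closure": 0.4},
        {"name": "long revisit, reliable closure", "path_length": 12.0, "p_closure": 0.95},
    ]
    best = min(candidates, key=risk_averse_score)
    print("selected revisit action:", best["name"])
```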

    Advanced perception, navigation and planning for autonomous in-water ship hull inspection

    Inspection of ship hulls and marine structures using autonomous underwater vehicles has emerged as a unique and challenging application of robotics. The problem poses rich questions in physical design and operation, perception and navigation, and planning, driven by difficulties arising from the acoustic environment, poor water quality and the highly complex structures to be inspected. In this paper, we develop and apply algorithms for the central navigation and planning problems on ship hulls. These divide into two classes, suitable for the open, forward parts of a typical monohull, and for the complex areas around the shafting, propellers and rudders. On the open hull, we have integrated acoustic and visual mapping processes to achieve closed-loop control relative to features such as weld-lines and biofouling. In the complex area, we implemented new large-scale planning routines so as to achieve full imaging coverage of all the structures, at a high resolution. We demonstrate our approaches in recent operations on naval ships. United States. Office of Naval Research (Grant N00014-06-10043); United States. Office of Naval Research (Grant N00014-07-1-0791).
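    The full-coverage planning for the complex stern area is described only at a high level; one common formulation (assumed here, not taken from the paper) casts it as greedy set cover over candidate viewpoints, each imaging a subset of surface patches.

```python
def greedy_coverage(viewpoints, patches):
    """Greedy set cover: repeatedly pick the viewpoint that images the most
    still-uncovered patches. `viewpoints` maps a viewpoint id to the set of
    patch ids it can image (a simplifying visibility assumption)."""
    uncovered = set(patches)
    plan = []
    while uncovered:
        vp, seen = max(viewpoints.items(), key=lambda kv: len(kv[1] & uncovered))
        if not seen & uncovered:
            break  # remaining patches are not visible from any candidate viewpoint
        plan.append(vp)
        uncovered -= seen
    return plan, uncovered

if __name__ == "__main__":
    # Hypothetical visibility sets for viewpoints around shafting, propellers and rudders.
    viewpoints = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5, 6}, "v4": {6, 7}}
    plan, missed = greedy_coverage(viewpoints, patches=range(1, 8))
    print("viewpoint sequence:", plan, "uncovered patches:", sorted(missed))
```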

    Autonomous Navigation for Unmanned Aerial Systems - Visual Perception and Motion Planning

    The abstract is in the attachment.

    Advancing Robot Autonomy for Long-Horizon Tasks

    Autonomous robots have real-world applications in diverse fields, such as mobile manipulation and environmental exploration, and many such tasks benefit from a hands-off approach in terms of human user involvement over a long task horizon. However, the level of autonomy achievable by a deployment is limited in part by the problem definition or task specification required by the system. Task specifications often require technical, low-level information that is unintuitive to describe and may result in generic solutions, burdening the user technically both before and after task completion. In this thesis, we aim to advance task specification abstraction toward the goal of increasing robot autonomy in real-world scenarios. We do so by tackling problems that address several different angles of this goal. First, we develop a way for the automatic discovery of optimal transition points between subtasks in the context of constrained mobile manipulation, removing the need for the human to hand-specify these in the task specification. We further propose a way to automatically describe constraints on robot motion by using demonstrated data as opposed to manually-defined constraints. Then, within the context of environmental exploration, we propose a flexible task specification framework, requiring just a set of quantiles of interest from the user, that allows the robot to directly suggest locations in the environment for the user to study. We next systematically study the effect of including a robot team in the task specification and show that multirobot teams have the ability to improve performance under certain specification conditions, including enabling inter-robot communication. Finally, we propose methods for a communication protocol that autonomously selects useful but limited information to share with the other robots. Comment: PhD dissertation, 160 pages.
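    As a rough sketch of a quantile-based specification (the empirical estimator and nearest-value suggestion rule below are assumptions, not the dissertation's method), the robot could map each user-requested quantile of a measured field to the location whose observation is closest to that quantile.

```python
import random

def suggest_locations(measurements, quantiles_of_interest):
    """Given (location, value) measurements and user-specified quantiles of
    interest, return one suggested location per quantile: the location whose
    measured value is closest to that empirical quantile of the observations."""
    values = sorted(v for _, v in measurements)
    suggestions = {}
    for q in quantiles_of_interest:
        # Empirical quantile by index into the sorted values.
        target = values[min(int(q * (len(values) - 1)), len(values) - 1)]
        loc, _ = min(measurements, key=lambda m: abs(m[1] - target))
        suggestions[q] = loc
    return suggestions

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical scalar field (e.g., a concentration map) sampled on a grid.
    measurements = [((x, y), random.gauss(10.0, 3.0)) for x in range(10) for y in range(10)]
    print(suggest_locations(measurements, quantiles_of_interest=[0.5, 0.9]))
```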