    High-Dimensional Motion Planning and Learning Under Uncertain Conditions

    Many existing path planning methods do not adequately account for uncertainty. In the absence of uncertainty these techniques work well, but in real-world environments they struggle due to inaccurate sensor models, arbitrarily moving obstacles, and uncertain action consequences. For example, picking up and storing children's toys is a simple task for humans, yet for a household robot it can be daunting. The room must be modeled with sensors, which may or may not detect all the strewn toys. The robot must detect and avoid the child, who may be moving the very toys the robot is tasked with cleaning. Finally, if the robot missteps and places a foot on a toy, it must compensate for the unexpected consequences of its actions. This example demonstrates that even simple human tasks are fraught with uncertainties that must be accounted for in robotic path planning algorithms. This work presents the first steps towards migrating sampling-based path planning methods to real-world environments by addressing three types of uncertainty: (1) model uncertainty, (2) spatio-temporal obstacle uncertainty (moving obstacles), and (3) action consequence uncertainty. Uncertainty is encoded directly into the path planning data structure in order to identify safe robot paths efficiently in noisy sensed environments (a simplified sketch of such clearance-aware planning follows this abstract). This encoding produces paths with clearance comparable to that of other planning methods known for high clearance, but at an order of magnitude less computational cost. The work also shows that formal control theory methods combined with path planning provide a technique that achieves a 95% collision-free navigation rate with 300 moving obstacles. Finally, it demonstrates that reinforcement learning can be combined with planning data structures to autonomously learn motion controls for a seven degree-of-freedom robot at low computational cost despite the number of dimensions.
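    The abstract does not give implementation details, but the idea of encoding clearance (and thus sensing uncertainty) directly into a sampling-based planner's data structure can be illustrated with a minimal sketch. The code below assumes a 2D workspace with circular obstacles and a PRM-style roadmap whose edge costs penalize low clearance; the obstacle layout, parameters, and function names are illustrative assumptions, not the method described in the dissertation.

```python
# Minimal sketch (not the dissertation's method): a PRM-style roadmap in a 2D
# workspace where edge costs blend path length with a clearance penalty, so
# the graph search prefers paths that stay away from obstacle boundaries.
# Obstacle layout, parameters, and function names are illustrative assumptions.
import heapq
import math
import random

OBSTACLES = [((3.0, 3.0), 1.0), ((6.0, 5.0), 1.5)]  # (center, radius) pairs, assumed


def clearance(p):
    """Distance from point p to the nearest obstacle boundary (negative if inside)."""
    return min(math.dist(p, c) - r for c, r in OBSTACLES)


def edge_cost(a, b, weight=2.0):
    """Edge length plus a penalty that grows as the edge midpoint loses clearance."""
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return math.dist(a, b) * (1.0 + weight / max(clearance(mid), 1e-3))


def build_roadmap(n_samples=200, connect_radius=2.0, bounds=(0.0, 10.0)):
    """Sample collision-free points and connect nearby pairs whose midpoint is free."""
    nodes = []
    while len(nodes) < n_samples:
        p = (random.uniform(*bounds), random.uniform(*bounds))
        if clearance(p) > 0.0:
            nodes.append(p)
    edges = {i: [] for i in range(n_samples)}
    for i in range(n_samples):
        for j in range(i + 1, n_samples):
            a, b = nodes[i], nodes[j]
            mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
            if math.dist(a, b) < connect_radius and clearance(mid) > 0.0:  # coarse validity check
                w = edge_cost(a, b)
                edges[i].append((j, w))
                edges[j].append((i, w))
    return nodes, edges


def shortest_path(edges, start, goal):
    """Dijkstra over the weighted roadmap; returns a list of node indices or None."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, math.inf):  # stale queue entry
            continue
        if u == goal:  # reconstruct the path back to the start
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(queue, (d + w, v))
    return None
```

    In a full planner the start and goal configurations would be connected to the roadmap and edge validity would be checked at several interpolated points rather than only at the midpoint; the point of the sketch is simply that weighting edges by clearance biases the graph search toward safer paths.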

    Path Planning and Control of an Autonomous Quadrotor Testbed in a Cluttered Environment

    A classical problem in robotic navigation is how to efficiently navigate from one point to another and what to do if obstacles are encountered along the way. Many map-based path planning algorithms attempt to solve this problem, with varying levels of optimality and complexity. This work presents a review of selected algorithms, two of which are chosen for simulation and testing on a quadrotor unmanned aerial vehicle (UAV) in a dynamic indoor environment that requires replanning capabilities. The Dynamic A* algorithm (D*) and the Probabilistic Roadmap method (PRM) are used in a scenario designed to test their respective functionality and usefulness, with the goal of determining the better algorithm for flight testing in a partially known or changing environment.

    The development of the quadrotor platform hardware is discussed, as well as the associated software and capabilities. Both algorithms are adapted to this specific application and display their respective planned and replanned paths in an intuitive and comparable manner. Simulation is performed and an obstacle is added to the map during the quadrotor's motion, requiring a replanned path, as in the sketch below. Results are compared for both computed path length and computational intensity. Flight testing is performed in an indoor environment; during the flight an obstacle is inserted into the flight path, requiring detection and replanning. Results are compared for computed path length and analyzed to compare optimality and complexity.
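    The replanning scenario above can be illustrated with a small sketch. The code below uses a plain grid A* that is simply re-run from the vehicle's current cell when a new obstacle appears; the thesis itself compares D*, which repairs its previous search incrementally, with PRM, so this is only an assumed stand-in for the experimental setup, and the grid size and obstacle placement are made up for illustration.

```python
# Minimal sketch of grid planning with replanning when an obstacle appears
# mid-flight. Plain A* is re-run from scratch here; the thesis uses D* (which
# repairs its previous search incrementally) and PRM. Grid size and obstacle
# placement are illustrative assumptions, not the actual testbed map.
import heapq


def astar(grid, start, goal):
    """4-connected grid A* with a Manhattan-distance heuristic; 1-cells are blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    parent, closed = {}, set()
    while open_set:
        _, g, cell, par = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        parent[cell] = par
        if cell == goal:  # walk parents back to the start
            path = [cell]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in closed:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), cell))
    return None


# Plan on an empty 10x10 grid, then block the next cell on the planned path
# partway through the "flight" and replan from the vehicle's current cell.
grid = [[0] * 10 for _ in range(10)]
path = astar(grid, (0, 0), (9, 9))
current = path[len(path) // 2]       # pretend the vehicle has reached this cell
blocked = path[len(path) // 2 + 1]   # a new obstacle appears on the planned path
grid[blocked[0]][blocked[1]] = 1
replanned = astar(grid, current, (9, 9))
```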