
    Search-based 3D Planning and Trajectory Optimization for Safe Micro Aerial Vehicle Flight Under Sensor Visibility Constraints

    Safe navigation of Micro Aerial Vehicles (MAVs) requires not only obstacle-free flight paths according to a static environment map, but also the perception of and reaction to previously unknown and dynamic objects. This implies that the onboard sensors must cover the current flight direction. Due to the limited payload of MAVs, full sensor coverage of the environment has to be traded off against flight time, so often only part of the environment is covered. We present a combined allocentric complete planning and trajectory optimization approach that takes these sensor visibility constraints into account. The optimized trajectories yield flight paths within the apex angle of a Velodyne Puck Lite 3D laser scanner, enabling low-level collision avoidance to perceive obstacles in the flight direction. Furthermore, the optimized trajectories take the flight dynamics into account and contain the velocities and accelerations along the path. We evaluate our approach with a DJI Matrice 600 MAV and in simulation employing hardware-in-the-loop. (Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019.)
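    To make the visibility constraint concrete, the sketch below shows how a trajectory optimizer could softly penalize motion whose direction leaves the vertical field of view of a spinning 3D scanner. It is a minimal illustration, not the authors' implementation: the 30° apex angle, the quadratic penalty form, and the weight are assumptions.

```python
import numpy as np

def visibility_penalty(velocity, apex_angle_deg=30.0, weight=10.0):
    """Soft cost that grows when the flight direction leaves the scanner's
    vertical field of view (assumed apex angle, centered on the sensor plane)."""
    speed = np.linalg.norm(velocity)
    if speed < 1e-6:
        return 0.0  # hovering: no preferred direction, nothing to penalize
    # elevation of the velocity vector above/below the horizontal sensor plane
    elevation = np.arcsin(np.clip(velocity[2] / speed, -1.0, 1.0))
    half_apex = np.radians(apex_angle_deg) / 2.0
    excess = max(0.0, abs(elevation) - half_apex)
    return weight * excess ** 2  # quadratic penalty outside the covered cone
```

    A planner would add such a term to its trajectory cost for every velocity sample along a candidate path, biasing the optimizer toward directions the sensor actually observes.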

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lags, and (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works in a two-level manner. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses an RL method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies which generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minimum limitations faced by RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs and replan subgoal sequences with SSG, since LSPI can deal with such uncertainties by exploiting its generalization ability. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps, with relatively low action-switching frequencies and shorter path lengths, and can handle small changes in the environment. We further demonstrate that the design of reward functions and the types of training environments are important factors for learning feasible policies. (Comment: 20 pages.)
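    The two-level structure described above can be summarized in a few lines. The sketch below is only an illustration of the hierarchy under assumed interfaces; `subgoal_search` stands in for SSG and `local_policy` for the learned LSPI policy, and neither corresponds to the authors' actual code.

```python
from typing import Callable, List, Tuple

State = Tuple[float, float]  # hypothetical minimal state: (x, y)

def sg_rl_plan(start: State,
               goal: State,
               subgoal_search: Callable[[State, State], List[State]],
               local_policy: Callable[[State, State], List[State]]) -> List[State]:
    """Level 1: search an abstract subgoal sequence on the subgoal graph.
    Level 2: let a learned local policy produce a feasible trajectory between
    consecutive subgoals and stitch the segments together."""
    subgoals = subgoal_search(start, goal)
    trajectory: List[State] = [start]
    for next_subgoal in subgoals[1:]:
        segment = local_policy(trajectory[-1], next_subgoal)
        trajectory.extend(segment)
    return trajectory
```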

    A path planning and path-following control framework for a general 2-trailer with a car-like tractor

    Maneuvering a general 2-trailer with a car-like tractor in backward motion is a task that requires significant skill to master and is unarguably one of the most complicated tasks a truck driver has to perform. This paper presents a path planning and path-following control solution that can be used to automatically plan and execute difficult parking and obstacle avoidance maneuvers by combining backward and forward motion. A lattice-based path planning framework is developed to generate kinematically feasible and collision-free paths, and a path-following controller is designed to stabilize the lateral and angular path-following error states during path execution. To estimate the vehicle state needed for control, a nonlinear observer is developed which only utilizes information from sensors mounted on the car-like tractor, making the system independent of additional trailer sensors. The proposed path planning and path-following control framework is implemented on a full-scale test vehicle, and results from simulations and real-world experiments are presented. (Comment: Preprint.)
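    As a rough illustration of the control side, the sketch below shows a generic curvature-command feedback law that stabilizes lateral and angular path-following errors around a nominal path. The linear state-feedback form and the gains are assumptions for illustration; the paper's controller is designed for the full tractor-trailer error dynamics and is not reproduced here.

```python
def path_following_control(kappa_ref: float,
                           lateral_error: float,
                           heading_error: float,
                           k_lat: float = 0.5,
                           k_head: float = 1.2) -> float:
    """Return a curvature command: track the nominal path curvature and add
    feedback that drives the lateral and heading (angular) errors to zero.
    Gains are illustrative, not tuned for any particular vehicle."""
    return kappa_ref - k_lat * lateral_error - k_head * heading_error
```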

    TiEV: The Tongji Intelligent Electric Vehicle in the Intelligent Vehicle Future Challenge of China

    TiEV is an autonomous driving platform developed by Tongji University of China. The vehicle is drive-by-wire and is fully powered by electricity. We devised the software system of TiEV from scratch; it is capable of driving the vehicle autonomously on urban roads as well as on expressways. We describe the whole system, in particular the novel modules for probabilistic perception fusion, incremental mapping, the first- and second-stage planning, and the overall safety design. TiEV finished the 2016 and 2017 Intelligent Vehicle Future Challenge of China, held at Changshu. We share our experience in developing autonomous vehicles and discuss future trends.
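    To show how such modules typically fit together in one control cycle, here is a schematic pipeline. The class and method names are hypothetical placeholders chosen for illustration; they do not describe TiEV's actual interfaces.

```python
class DrivingPipeline:
    """One cycle of a schematic autonomous-driving stack: fuse sensor data,
    update the map incrementally, plan globally, refine a local trajectory,
    and run a safety check before commanding the drive-by-wire layer."""

    def __init__(self, fusion, mapper, global_planner, local_planner, safety):
        self.fusion = fusion
        self.mapper = mapper
        self.global_planner = global_planner
        self.local_planner = local_planner
        self.safety = safety

    def step(self, sensor_data, pose, goal):
        obstacles = self.fusion.fuse(sensor_data)            # perception fusion
        grid = self.mapper.update(pose, obstacles)           # incremental mapping
        route = self.global_planner.plan(grid, pose, goal)   # first-stage planning
        trajectory = self.local_planner.refine(grid, route)  # second-stage planning
        return self.safety.check(trajectory)                 # safety gate
```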

    Applying MAPP Algorithm for Cooperative Path Finding in Urban Environments

    The paper considers the problem of planning a set of non-conflicting trajectories for a coalition of intelligent agents (mobile robots). Two divergent approaches, namely centralized and decentralized, are surveyed and analyzed. The decentralized planner MAPP is described and applied to the task of finding trajectories for dozens of UAVs performing nap-of-the-earth flight in urban environments. Results of the experimental studies support the claim that MAPP is a highly efficient planner for the considered types of tasks.
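    For readers unfamiliar with decentralized multi-agent path finding, the sketch below illustrates the general prioritized-planning idea: agents plan one after another, treating the time-stamped cells already reserved by earlier agents as moving obstacles. It is a generic illustration, not the MAPP algorithm itself; the `plan_single` search routine is a hypothetical placeholder (for example, a time-expanded A*).

```python
from typing import Callable, Dict, List, Sequence, Set, Tuple

Cell = Tuple[int, int]  # grid cell (row, col)

def prioritized_paths(starts: Sequence[Cell],
                      goals: Sequence[Cell],
                      plan_single: Callable[[Cell, Cell, Dict[int, Set[Cell]]], List[Cell]]
                      ) -> List[List[Cell]]:
    """Plan agents in priority order; each planner call must avoid cells
    already reserved at each timestep by higher-priority agents."""
    reserved: Dict[int, Set[Cell]] = {}   # timestep -> occupied cells
    paths: List[List[Cell]] = []
    for start, goal in zip(starts, goals):
        path = plan_single(start, goal, reserved)
        for t, cell in enumerate(path):
            reserved.setdefault(t, set()).add(cell)
        paths.append(path)
    return paths
```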