
    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: a novel coupling of perception and control via multiple relevant and diverse interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline via real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to combine information from other modalities, such as stereo and lidar, when available.
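    As a rough illustration of the receding-horizon idea over a predicted depth image, the sketch below scores a small library of candidate trajectories for clearance and goal progress, commits to the best one for a single step, and replans on the next frame. The cost model, trajectory library, and all names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal receding-horizon sketch over a predicted monocular depth map.
# The cost model and trajectory library are illustrative assumptions.
import numpy as np

def trajectory_cost(depth_map, traj_px, goal_px):
    """Penalize endpoint distance to goal, reward clearance (the minimum
    predicted depth along the trajectory's image-space samples)."""
    clearance = min(depth_map[v, u] for u, v in traj_px)
    progress = np.hypot(traj_px[-1][0] - goal_px[0],
                        traj_px[-1][1] - goal_px[1])
    return progress - 50.0 * clearance            # lower is better

def receding_horizon_step(depth_map, candidates, goal_px):
    """Choose the best candidate for this horizon; the controller executes
    only its first segment, then we replan on the next depth prediction."""
    costs = [trajectory_cost(depth_map, t, goal_px) for t in candidates]
    return candidates[int(np.argmin(costs))]

# Toy usage: a 480x640 depth image with a near obstacle straight ahead.
depth = np.full((480, 640), 10.0)
depth[:, 300:340] = 1.0                           # obstacle 1 m away
candidates = [
    [(320, 400 - 20 * i) for i in range(8)],                 # straight
    [(320 - 25 * (i + 1), 400 - 20 * i) for i in range(8)],  # veer left
    [(320 + 25 * (i + 1), 400 - 20 * i) for i in range(8)],  # veer right
]
best = receding_horizon_step(depth, candidates, goal_px=(320, 0))
# best is one of the veering arcs: the straight path has poor clearance.
```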

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that are of very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
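    To make the stereo variant concrete, the sketch below back-projects edge pixels with valid disparity into 3D using the standard rectified-stereo relation z = f*B/d. The crude gradient edge detector and all names are assumptions, not the paper's edge-based visual odometry.

```python
# Minimal sketch: 3D reconstruction of edge pixels from a rectified stereo
# pair via z = fx * baseline / disparity. Not the paper's pipeline; the edge
# detector is a crude stand-in and fy is assumed equal to fx.
import numpy as np

def edge_mask(img, thresh=30.0):
    """Gradient-magnitude edge detector (stand-in for e.g. Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def reconstruct_edges(mask_left, disparity, fx, baseline, cx, cy):
    """Back-project left-image edge pixels that have valid disparity.
    Returns an (N, 3) array of points in the left camera frame."""
    vs, us = np.nonzero(mask_left & (disparity > 0))
    z = fx * baseline / disparity[vs, us]
    return np.column_stack([(us - cx) * z / fx, (vs - cy) * z / fx, z])

# Toy usage: a 1-pixel-wide vertical "wire" seen with 8 px of disparity.
left = np.zeros((240, 320))
left[:, 160] = 255.0
disp = np.zeros_like(left)
disp[:, 159:162] = 8.0
pts = reconstruct_edges(edge_mask(left), disp, fx=300.0, baseline=0.10,
                        cx=160.0, cy=120.0)   # all points at z = 3.75 m
```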

    Stereo vision-based obstacle avoidance for micro air vehicles using an egocylindrical image space representation

    Micro air vehicles which operate autonomously at low altitude in cluttered environments require a method for onboard obstacle avoidance for safe operation. Previous methods deploy either purely reactive approaches, mapping low-level visual features directly to actuator inputs to maneuver the vehicle around obstacles, or deliberative methods that use onboard 3D sensors to create a voxel-based 3D world model, which is then used to generate collision-free 3D trajectories. In this paper, we use forward-looking stereo vision with a large horizontal and vertical field of view and project range from stereo into a novel robot-centered, cylindrical, inverse-range map we call an egocylinder. With this implementation we reduce the complexity of our world representation from a 3D map to a 2.5D image-space representation, which supports very efficient motion planning and collision checking, and allows us to implement configuration-space expansion as an image-processing function directly on the egocylinder. Deploying a fast reactive motion planner directly on the configuration-space-expanded egocylinder image, we demonstrate the effectiveness of this new approach experimentally in an indoor environment.
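    A toy version of the egocylinder idea can be written down compactly: bin 3D stereo points by azimuth and elevation, keep the nearest return per pixel as inverse range, and grow obstacles with a grey-scale dilation. The resolutions, the fixed structuring element, and all names below are assumptions; a faithful configuration-space expansion would likely scale the window with range.

```python
# Minimal egocylinder-style sketch: an (elevation x azimuth) inverse-range
# image built from 3D points, with C-space expansion approximated by a
# grey-scale dilation. All resolutions and names are illustrative.
import numpy as np
from scipy.ndimage import grey_dilation

H, W = 64, 256                                    # elevation x azimuth bins

def build_egocylinder(points):
    """Keep the nearest return per pixel, stored as inverse range so that
    0 means 'no return / far' and larger values mean closer obstacles."""
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])   # [-pi, pi)
    el = np.arcsin(points[:, 2] / rng)            # [-pi/2, pi/2]
    u = np.clip(((az + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    v = np.clip(((el + np.pi / 2) / np.pi * H).astype(int), 0, H - 1)
    img = np.zeros((H, W))
    np.maximum.at(img, (v, u), 1.0 / rng)         # nearest return wins
    return img

def expand_cspace(inv_range, radius_px=3):
    """C-space expansion as pure image processing: dilating the inverse-range
    image grows each obstacle by the vehicle radius (fixed window here)."""
    return grey_dilation(inv_range, size=(2 * radius_px + 1,) * 2)

# Toy usage: one point 2 m ahead becomes a dilated blob of value 0.5 (1/m).
ego = build_egocylinder(np.array([[2.0, 0.0, 0.0]]))
cspace = expand_cspace(ego)
```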

    Real-Time Planning with Multi-Fidelity Models for Agile Flights in Unknown Environments

    Autonomous navigation through unknown environments is a challenging task that entails real-time localization, perception, planning, and control. UAVs with this capability have begun to emerge in the literature with advances in lightweight sensing and computing. Although the planning methodologies vary from platform to platform, many algorithms adopt a hierarchical planning architecture where a slow, low-fidelity global planner guides a fast, high-fidelity local planner. However, in unknown environments, this approach can lead to erratic or unstable behavior due to the interaction between the constantly changing global plan and the local planner, a consequence of not capturing higher-order dynamics in the global plan. This work proposes a planning framework in which multi-fidelity models are used to reduce the discrepancy between the local and global planners. Our approach uses high-, medium-, and low-fidelity models to compose a path that captures higher-order dynamics while remaining computationally tractable. In addition, we address the interaction between a fast planner and a slower mapper by including sensor data not yet fused into the map in the collision check. This novel mapping and planning framework for agile flights is validated in simulation and hardware experiments, showing replanning times of 5-40 ms in cluttered environments. Comment: ICRA 2019.
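    One of the ideas here, checking candidate paths against sensor data a slower mapper has not yet integrated, can be sketched with two nearest-neighbor structures. The class below is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of collision checking against BOTH the fused map and the
# raw sensor points not yet integrated by a slower mapper. Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

class DualSourceCollisionChecker:
    def __init__(self, occupied_voxels, unfused_points, robot_radius):
        self.sources = [cKDTree(occupied_voxels),  # mapper's fused output
                        cKDTree(unfused_points)]   # latest raw scan(s)
        self.clearance = robot_radius

    def path_is_free(self, path_xyz):
        """Accept a path only if every sample clears every obstacle source."""
        for kd in self.sources:
            dists, _ = kd.query(path_xyz)          # nearest obstacle per sample
            if np.any(dists < self.clearance):
                return False
        return True

# Toy usage: the fused map is empty where a fresh scan already sees a wall.
checker = DualSourceCollisionChecker(
    occupied_voxels=np.array([[5.0, 0.0, 0.0]]),
    unfused_points=np.array([[2.0, 0.0, 0.0]]),    # not yet in the map
    robot_radius=0.3)
path = np.array([[x, 0.0, 0.0] for x in np.linspace(0.0, 3.0, 16)])
print(checker.path_is_free(path))                  # False: raw scan blocks it
```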