
    Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone

    Miniature Micro Aerial Vehicles (MAVs) are very suitable for flying in indoor environments, but autonomous navigation is challenging due to their strict hardware limitations. This paper presents a highly efficient computer vision algorithm called Edge-FS for the determination of velocity and depth. It runs at 20 Hz on a 4 g stereo camera with an embedded STM32F4 microprocessor (168 MHz, 192 kB) and uses feature histograms to calculate optical flow and stereo disparity. The stereo-based distance estimates are used to scale the optical flow in order to retrieve the drone's velocity. The velocity and depth measurements are used for fully autonomous flight of a 40 g pocket drone relying only on on-board sensors. The method allows the MAV to control its velocity and avoid obstacles.
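
    A minimal sketch of the flow-scaling idea described above: stereo depth turns pixel-rate optical flow into a metric velocity under a simplified pinhole model. This is not the Edge-FS histogram algorithm itself, and the focal length in the example is an illustrative assumption.

```python
import numpy as np

FOCAL_LENGTH_PX = 168.0   # camera focal length in pixels (assumed, for illustration)

def velocity_from_flow_and_depth(flow_px_per_s, depth_m, focal_px=FOCAL_LENGTH_PX):
    """Scale translational optical flow by stereo depth to obtain metric velocity.

    Simplified pinhole model: pixel flow divided by the focal length gives
    angular flow in rad/s, and for motion parallel to the image plane the
    lateral velocity is roughly depth * angular flow.
    """
    angular_flow = np.asarray(flow_px_per_s, dtype=float) / focal_px  # rad/s
    return depth_m * angular_flow                                     # m/s

# Example: 40 px/s of horizontal flow against a surface 1.5 m away.
print(velocity_from_flow_and_depth(40.0, 1.5))   # ~0.36 m/s
```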

    Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators

    Robust velocity and position estimation is crucial for autonomous robot navigation. Optical flow based methods for autonomous navigation have received increasing attention in tandem with the development of micro unmanned aerial vehicles. This paper proposes a kernel cross-correlator (KCC) based algorithm to determine optical flow using a monocular camera, named correlation flow (CF). Correlation flow provides reliable and accurate velocity estimation and is robust to motion blur. In addition, it can also estimate the altitude velocity and yaw rate, which are not available from traditional methods. Autonomous flight tests on a quadcopter show that correlation flow can provide robust trajectory estimation with very low processing power. The source code is released based on the ROS framework. Comment: 2018 International Conference on Robotics and Automation (ICRA 2018).
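
    The kernel cross-correlator itself is not reproduced here, but the sketch below shows the underlying idea of correlation-based flow: recover the dominant image shift between consecutive frames from the peak of a plain FFT-based cross-correlation (phase correlation) surface. This is an illustrative approximation, not the paper's CF implementation.

```python
import numpy as np

def correlation_shift(prev_gray, curr_gray):
    """Estimate the dominant pixel shift between two grayscale frames from the
    peak of an FFT-based cross-correlation surface (phase correlation).

    Not the paper's kernel cross-correlator, but the same principle: the
    correlation peak gives the image translation, which can then be scaled by
    height and frame rate to obtain a velocity estimate.
    """
    F1 = np.fft.fft2(prev_gray.astype(float))
    F2 = np.fft.fft2(curr_gray.astype(float))
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-9     # normalise to keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices beyond half the image size correspond to negative shifts.
    dy, dx = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return dx, dy   # pixel shift; sign convention depends on frame order
```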

    A novel distributed architecture for UAV indoor navigation

    In the last decade, different indoor flight navigation systems for small Unmanned Aerial Vehicles (UAVs) have been investigated, with a special focus on different configurations and sensor technologies. The main idea of this paper is to propose a distributed Guidance, Navigation and Control (GNC) system architecture, based on the Robot Operating System (ROS), for lightweight UAV autonomous indoor flight. The proposed framework is shown to be more robust and flexible than common configurations. A flight controller and a companion computer running ROS for control and navigation are also part of the proposed setup. Both hardware and software diagrams are given to show the complete architecture. Future work will address the experimental validation of the proposed configuration through indoor flight tests.
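
    As a rough illustration of the companion-computer side of such a distributed GNC stack, the sketch below shows a minimal ROS guidance node streaming velocity setpoints to the flight controller. The MAVROS-style topic name, message type, and 20 Hz rate are assumptions for illustration, not the configuration used in the paper.

```python
#!/usr/bin/env python
# Minimal companion-computer guidance node sketch (assumed topic names).
import rospy
from geometry_msgs.msg import TwistStamped

def main():
    rospy.init_node("indoor_guidance")
    pub = rospy.Publisher("/mavros/setpoint_velocity/cmd_vel",
                          TwistStamped, queue_size=1)
    rate = rospy.Rate(20)                 # 20 Hz setpoint stream (assumed)
    while not rospy.is_shutdown():
        cmd = TwistStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.twist.linear.x = 0.3          # slow forward motion indoors (m/s)
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```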

    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve absolute visual localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies. Comment: 32 pages, 15 figures.
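
    To make the RVL side concrete, the sketch below shows one step of a generic feature-based VO front end using sparse Lucas-Kanade optical flow and essential-matrix pose recovery with OpenCV. It is a textbook-style illustration of the class of methods the review discusses, not any particular surveyed system.

```python
import cv2
import numpy as np

def relative_pose_from_flow(prev_gray, curr_gray, K):
    """One visual-odometry step: track sparse features with pyramidal
    Lucas-Kanade optical flow, then recover the relative camera rotation and
    (scale-free) translation from the essential matrix."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.ravel() == 1
    pts0, pts1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t   # translation is up to scale (monocular ambiguity)
```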

    An Open Source and Open Hardware Deep Learning-Powered Visual Navigation Engine for Autonomous Nano-UAVs

    Nano-size unmanned aerial vehicles (UAVs), with a diameter of a few centimeters and a sub-10 W total power budget, have so far been considered incapable of running sophisticated visual-based autonomous navigation software without external aid from base stations, ad-hoc local positioning infrastructure, and powerful external computation servers. In this work, we present what is, to the best of our knowledge, the first 27 g nano-UAV system able to run on board an end-to-end, closed-loop visual pipeline for autonomous navigation based on a state-of-the-art deep-learning algorithm, built upon the open-source CrazyFlie 2.0 nano-quadrotor. Our visual navigation engine is enabled by the combination of an ultra-low-power computing device (the GAP8 system-on-chip) with a novel methodology for the deployment of deep convolutional neural networks (CNNs). We enable on-board real-time execution of a state-of-the-art deep CNN at up to 18 Hz. Field experiments demonstrate that the system's high responsiveness prevents collisions with unexpected dynamic obstacles up to a flight speed of 1.5 m/s. In addition, we demonstrate the capability of our visual navigation engine to perform fully autonomous indoor navigation on a 113 m previously unseen path. To share our key findings with the embedded and robotics communities and foster further developments in autonomous nano-UAVs, we publicly release all our code, datasets, and trained networks.
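
    The closed control loop around such a network can be sketched as follows, assuming a DroNet-style CNN that outputs a steering angle and a collision probability. The filter coefficient and the mapping are illustrative assumptions; only the 1.5 m/s speed cap comes from the abstract, and this is not the authors' exact controller.

```python
ALPHA = 0.7    # low-pass filter coefficient (assumed)
V_MAX = 1.5    # maximum forward speed from the abstract (m/s)

class VisualNavController:
    """Map CNN outputs (steering angle, collision probability) to setpoints."""

    def __init__(self):
        self.v = 0.0
        self.yaw_rate = 0.0

    def step(self, steering_angle, collision_prob):
        # Slow down as the predicted collision probability grows.
        v_target = V_MAX * (1.0 - collision_prob)
        # Smooth both commands so single noisy inferences do not jerk the drone.
        self.v = ALPHA * self.v + (1.0 - ALPHA) * v_target
        self.yaw_rate = ALPHA * self.yaw_rate + (1.0 - ALPHA) * steering_angle
        return self.v, self.yaw_rate

ctrl = VisualNavController()
v_cmd, yaw_cmd = ctrl.step(steering_angle=0.2, collision_prob=0.1)
```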

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI) based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence capabilities. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
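
    A quick back-of-the-envelope check of the figures quoted above: the per-frame time budget implied by the 6 fps real-time constraint and the energy spent per frame at the reported 64 mW average power. This is simple arithmetic on the stated numbers, not a measurement.

```python
fps_constraint = 6        # frames per second (real-time constraint)
avg_power_w = 0.064       # 64 mW average processing power

frame_budget_ms = 1000.0 / fps_constraint                              # ~166.7 ms
energy_per_frame_mj = avg_power_w * (1.0 / fps_constraint) * 1000.0   # ~10.7 mJ

print(f"per-frame budget: {frame_budget_ms:.1f} ms")
print(f"energy per frame: {energy_per_frame_mj:.1f} mJ")
```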

    Autonomous aerial robot for high-speed search and intercept applications

    In recent years, high-speed navigation and environment interaction in the context of aerial robotics have become a field of interest for several academic and industrial research studies. In particular, Search and Intercept (SaI) applications for aerial robots pose a compelling research area due to their potential usability in several environments. Nevertheless, SaI tasks involve challenging development regarding sensory weight, onboard computation resources, actuation design, and algorithms for perception and control, among others. In this work, a fully autonomous aerial robot for high-speed object grasping is proposed. As an additional subtask, our system is able to autonomously pierce balloons located on poles close to the surface. Our first contribution is the design of the aerial robot at the actuation and sensory level, consisting of a novel gripper design with additional sensors that enable the robot to grasp objects at high speed. The second contribution is a complete software framework consisting of perception, state estimation, motion planning, motion control, and mission control in order to rapidly and robustly perform the autonomous grasping mission. Our approach has been validated in a challenging international competition with outstanding results, autonomously searching, following, and grasping a moving object at 6 m/s in an outdoor environment.

    Autonomous Visual Navigation of a Quadrotor VTOL in complex and dense environments

    This thesis presents the system design of a micro aerial vehicle platform, specifically a quadrotor, aimed at autonomous vision-based reactive obstacle avoidance in dense and complex environments. Most modern aerial systems are incapable of autonomously navigating in environments with a high density of trees and bushes. The presented quadrotor design uses leading-edge technologies and inexpensive off-the-shelf components to build a system that is a step forward in overcoming the problems posed by dense and complex environments. Several major system requirements were met to make the design effective and safe. The vehicle had to be completely autonomous in standard operation while retaining a manual override function. All computation, including vision processing, had to be carried out on-board: state estimation and visual guidance run on the vehicle itself, removing the need for a remote connection, which can easily fail in forest-like environments. The quadrotor had to be built mostly from off-the-shelf components to reduce cost and make it replicable, and it had to remain under 2 kg to meet Australian commercial aerial vehicle regulations regarding licensing.

    To meet these requirements, many design decisions were developed and revised as needed. The main body of the quadrotor platform is based on off-the-shelf hobby assemblies. A Pixhawk 2.1 was chosen as the flight controller because of its open-source code and design: it includes all sensors needed for state estimation, provides a manual override, and drives the motors. An NVIDIA Tegra TX2 was used for vision processing; its embedded GPU is compact, consumes little power, and can estimate dense optical flow at 120 Hz from a camera producing grey-scale images at a resolution of 376x240. The vision processor provides directional guidance to the on-board flight controller, and a 3-axis gimbal was added during the project to stabilise the camera. The quadrotor was shown to hover and move locally, both indoors and outdoors, using the optical flow measurements. Optical flow gives a sense of velocity, which can be integrated into a position estimate, although this estimate is susceptible to drift; the drift was compensated using a combination of recognisable targets and positioning systems such as GPS.

    The experimental data obtained during the project showed that the algorithms presented in this thesis are capable of reactive obstacle avoidance. The experiments were performed both in simulation and in real-world environments, including dense forest-like environments. By fusing vehicle speed estimates with optical flow measurements, visible points in 3D space can have their distance estimated relative to the quadrotor. By projecting a 3D cylinder in the direction of travel onto the camera plane, the system performs reactive obstacle avoidance by steering the cylinder (the direction of travel) towards a point with minimal interference. This system is intended to augment a point-to-point navigation system so that the quadrotor responds to fine obstacles that might otherwise go undetected.
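
    The depth-from-flow step mentioned above can be sketched as follows for purely translational camera motion with known velocity taken from the vehicle speed estimate. This is a simplified pinhole-model illustration under those assumptions, not the thesis' implementation.

```python
import numpy as np

def depth_from_flow(x, y, u, v, T, f):
    """Estimate the depth of an image point from its optical flow, assuming
    purely translational camera motion with known velocity T = (Tx, Ty, Tz)
    (m/s) and focal length f in pixels. Under the pinhole flow model with no
    rotation, where (x, y) are pixel coordinates relative to the principal
    point and (u, v) is the flow in px/s:
        u = (x*Tz - f*Tx) / Z,   v = (y*Tz - f*Ty) / Z
    so Z follows from a least-squares fit over both components.
    """
    a = np.array([x * T[2] - f * T[0], y * T[2] - f * T[1]])
    b = np.array([u, v])
    # Z minimises ||a/Z - b||^2  ->  Z = (a.a) / (a.b)
    return float(np.dot(a, a) / (np.dot(a, b) + 1e-9))
```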