
    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI) based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal, we developed a complete methodology for the parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner, it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft.
    Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
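    As an illustration of the kind of control loop the abstract describes, the sketch below runs a DNN on each camera frame and paces itself to the paper's 6 fps budget. It is a minimal Python sketch, not the on-device GAP8 code: the `camera`, `dnn`, and `autopilot` objects are hypothetical stand-ins, and the two-output convention (steering angle plus collision probability) is an assumption borrowed from DroNet-style networks.

```python
import time

FPS = 6.0            # the paper's real-time constraint
PERIOD = 1.0 / FPS   # ~167 ms of compute budget per control step

def navigation_loop(camera, dnn, autopilot):
    """Closed-loop visual navigation at a fixed frame rate (illustrative only)."""
    while True:
        t0 = time.monotonic()
        frame = camera.grab()                  # hypothetical camera interface
        steering, collision_prob = dnn(frame)  # assumed DroNet-style outputs
        autopilot.send_setpoint(steering, collision_prob)
        # sleep off whatever remains of the per-frame budget
        elapsed = time.monotonic() - t0
        if elapsed < PERIOD:
            time.sleep(PERIOD - elapsed)
```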

    Low computational SLAM for an autonomous indoor aerial inspection vehicle

    The past decade has seen an increase in the capability of small-scale Unmanned Aerial Vehicle (UAV) systems, made possible through technological advancements in battery, computing and sensor miniaturisation technology. This has opened a new and rapidly growing branch of robotic research and has sparked the imagination of industry, leading to new UAV-based services, from the inspection of power lines to remote police surveillance. Miniaturisation of UAVs has also made them small enough to be practically flown indoors, for example for the inspection of elevated areas in hazardous or damaged structures where the use of conventional ground-based robots is unsuitable. Sellafield Ltd, a nuclear reprocessing facility in the U.K., has many buildings that require frequent safety inspections. UAV inspections eliminate the current risk to personnel of radiation exposure and other hazards in tall structures where scaffolding or hoists would otherwise be required. This project focused on the development of a UAV for the novel application of semi-autonomously navigating and inspecting these structures without the need for personnel to enter the building. Development exposed a significant gap in knowledge concerning indoor localisation, specifically Simultaneous Localisation and Mapping (SLAM) for use on board UAVs. To lower the on-board processing requirements of SLAM, other UAV research groups have employed techniques such as off-board processing, reduced dimensionality or prior knowledge of the structure; these techniques are unsuitable for this application given the unknown nature of the structures and the risk of radio shadows. In this thesis, a novel localisation algorithm is proposed that enables real-time, three-dimensional SLAM running solely on board a computationally constrained UAV in heavily cluttered and unknown environments. The algorithm, based on the Iterative Closest Point (ICP) method and utilising approximate nearest-neighbour searches and point-cloud decimation to reduce processing requirements, has been successfully tested in environments similar to those specified by Sellafield Ltd.
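    The two cost-saving ingredients named in the abstract, approximate nearest-neighbour matching and point-cloud decimation, are easy to see in a generic textbook ICP loop. The sketch below is exactly that, a generic illustration rather than the thesis's algorithm; SciPy's `cKDTree` supplies the approximate search through its `eps` parameter.

```python
import numpy as np
from scipy.spatial import cKDTree

def decimate(cloud, keep_every=4):
    """Point-cloud decimation: keep every k-th point to cut the workload."""
    return cloud[::keep_every]

def best_fit_transform(source, matched):
    """Closed-form rigid alignment (Kabsch) of matched point pairs."""
    mu_s, mu_m = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_m - R @ mu_s

def icp(source, target, iters=20, keep_every=4):
    """Align `source` to `target`; returns accumulated rotation and translation."""
    src, tgt = decimate(source, keep_every), decimate(target, keep_every)
    tree = cKDTree(tgt)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src, k=1, eps=0.25)   # approximate nearest neighbours
        R, t = best_fit_transform(src, tgt[idx])
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```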

    Deep Drone Racing: From Simulation to Reality with Domain Randomization

    Dynamically changing environments, unreliable state estimation, and operation under severe resource constraints are fundamental challenges that limit the deployment of small autonomous drones. We address these challenges in the context of autonomous, vision-based drone racing in dynamic environments. A racing drone must traverse a track with possibly moving gates at high speed. We enable this functionality by combining the performance of a state-of-the-art planning and control system with the perceptual awareness of a convolutional neural network (CNN). The resulting modular system is both platform- and domain-independent: it is trained in simulation and deployed on a physical quadrotor without any fine-tuning. The abundance of simulated data, generated via domain randomization, makes our system robust to changes of illumination and gate appearance. To the best of our knowledge, our approach is the first to demonstrate zero-shot sim-to-real transfer on the task of agile drone flight. We extensively test the precision and robustness of our system, both in simulation and on a physical platform, and show significant improvements over the state of the art.
    Comment: Accepted as a Regular Paper to the IEEE Transactions on Robotics Journal. arXiv admin note: substantial text overlap with arXiv:1806.0854
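    A minimal sketch of the domain-randomization idea: perturb illumination and gate appearance at the start of every simulated episode, so that the real world later looks like just another random draw. The `sim` interface and all parameter ranges below are invented for the example and are not taken from the authors' simulator.

```python
import random

def randomize_scene(sim):
    """Re-draw nuisance factors (lighting, textures, backgrounds) per episode."""
    sim.set_light(intensity=random.uniform(0.3, 1.5),     # arbitrary range
                  azimuth_deg=random.uniform(0.0, 360.0))
    sim.set_background(random.choice(sim.background_pool))
    for gate in sim.gates:
        gate.set_texture(random.choice(sim.texture_pool))
        gate.set_scale(random.uniform(0.9, 1.1))          # mild shape jitter
```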

    An Incrementally Deployed Swarm of MAVs for Localization Using Ultra-Wideband

    Knowing the position of a moving target can be crucial, for example when localizing a first responder in an emergency scenario. In recent years, ultra-wideband (UWB) has gained a lot of attention due to its localization accuracy. Unfortunately, UWB solutions often demand a manual setup in advance. This is tedious at best and not possible at all in environments with access restrictions (e.g., collapsed buildings). Thus, we propose a solution combining UWB with micro air vehicles (MAVs) to allow for UWB localization in a priori inaccessible environments. More precisely, MAVs equipped with UWB sensors are deployed incrementally into the environment. They localize themselves based on previously deployed MAVs and on-board odometry, before they land and enhance the UWB mesh network themselves. We tested this solution in a lab environment using a motion capture system for ground truth. Four MAVs were deployed as anchors, and a fifth MAV was localized for over 80 seconds with a root-mean-square (RMS) error of 0.206 m averaged over five experiments. For comparison, a setup with ideal knowledge of the anchor positions yielded a 20% lower RMS error, and a setup purely based on odometry an 81% higher RMS error. The absolute scale of the error with the proposed approach is expected to be low enough for the applications envisioned within the scope of this paper (e.g., the localization of a first responder) and is thus considered a step towards flexible and accurate localization in a priori inaccessible, GNSS-denied environments.
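    The core estimation problem each newly deployed MAV must solve, recovering its own position from UWB ranges to the anchors already in place, can be sketched as a nonlinear least-squares fit. The snippet below is a generic multilateration example with made-up numbers, not the paper's odometry-fused estimator.

```python
import numpy as np
from scipy.optimize import least_squares

def localize(anchors, ranges):
    """Estimate a 3-D position from distances to known anchor positions."""
    anchors = np.asarray(anchors, dtype=float)   # shape (n, 3)
    ranges = np.asarray(ranges, dtype=float)     # shape (n,)
    residual = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
    x0 = anchors.mean(axis=0)                    # start from the anchor centroid
    return least_squares(residual, x0).x

# Four landed MAVs acting as anchors (positions and ranges are invented).
anchors = [(0, 0, 0), (5, 0, 0.1), (0, 5, 0.1), (5, 5, 0)]
ranges = [4.1, 3.4, 3.6, 2.9]
print(localize(anchors, ranges))
```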

    Robust sound event detection in bioacoustic sensor networks

    Bioacoustic sensors, sometimes known as autonomous recording units (ARUs), can record sounds of wildlife over long periods of time in scalable and minimally invasive ways. Deriving per-species abundance estimates from these sensors requires detection, classification, and quantification of animal vocalizations as individual acoustic events. Yet, variability in ambient noise, both over time and across sensors, hinders the reliability of current automated systems for sound event detection (SED), such as convolutional neural networks (CNNs) in the time-frequency domain. In this article, we develop, benchmark, and combine several machine listening techniques to improve the generalizability of SED models across heterogeneous acoustic environments. As a case study, we consider the problem of detecting avian flight calls from a ten-hour recording of nocturnal bird migration, recorded by a network of six ARUs in the presence of heterogeneous background noise. Starting from a CNN yielding state-of-the-art accuracy on this task, we introduce two noise adaptation techniques, respectively integrating short-term (60 milliseconds) and long-term (30 minutes) context. First, we apply per-channel energy normalization (PCEN) in the time-frequency domain, which applies short-term automatic gain control to every subband in the mel-frequency spectrogram. Second, we replace the last dense layer in the network with a context-adaptive neural network (CA-NN) layer. Combining them yields state-of-the-art results that are unmatched by artificial data augmentation alone. We release a pre-trained version of our best-performing system under the name of BirdVoxDetect, a ready-to-use detector of avian flight calls in field recordings.
    Comment: 32 pages, in English. Submitted to PLOS ONE in February 2019; revised August 2019; published October 2019.
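    PCEN has a simple published closed form (librosa ships a production version as `librosa.pcen`), so the short-term automatic gain control the abstract mentions can be written out directly. The NumPy rendering below uses commonly cited default parameters, not the values tuned in the paper.

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a mel spectrogram E (freq x time).

    M is a first-order IIR low-pass of E over time; dividing by M**alpha is
    per-subband automatic gain control, and (x + delta)**r - delta**r is a
    dynamic-range compression.
    """
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r
```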

    Autonomous aerial robot for high-speed search and intercept applications

    In recent years, high-speed navigation and environment interaction in the context of aerial robotics have become fields of interest for several academic and industrial research studies. In particular, Search and Intercept (SaI) applications for aerial robots pose a compelling research area due to their potential usability in several environments. Nevertheless, SaI tasks involve challenging development in terms of sensor weight, on-board computational resources, actuation design, and algorithms for perception and control, among others. In this work, a fully autonomous aerial robot for high-speed object grasping is proposed. As an additional subtask, our system is able to autonomously pierce balloons located on poles close to the surface. Our first contribution is the design of the aerial robot at the actuation and sensory level, consisting of a novel gripper design with additional sensors that enable the robot to grasp objects at high speed. The second contribution is a complete software framework consisting of perception, state estimation, motion planning, motion control, and mission control in order to perform the autonomous grasping mission rapidly and robustly. Our approach has been validated in a challenging international competition and has shown outstanding results, being able to autonomously search, follow, and grasp a moving object at 6 m/s in an outdoor environment.
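    As a toy illustration of the intercept geometry only (the authors' motion planner is far more elaborate), one can choose an aim point by assuming a constant-velocity target and a fixed robot speed, then taking the earliest future target position the robot can reach in time.

```python
import numpy as np

def intercept_point(p_target, v_target, p_robot, speed, horizon=2.0, steps=50):
    """Earliest reachable point on a constant-velocity target track (toy model)."""
    for t in np.linspace(0.0, horizon, steps):
        p_future = p_target + v_target * t      # predicted target position
        if np.linalg.norm(p_future - p_robot) <= speed * t:
            return p_future                     # the robot can be there in time
    return p_target + v_target * horizon        # fall back to end of horizon
```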