
    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully-autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on-board resource-constrained milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
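
    The closed loop this abstract describes pairs the CNN of [1] (which predicts a steering angle and a collision probability) with a simple velocity controller. Below is a minimal Python sketch of that loop; the class name, the smoothing factor, and the speed mapping are illustrative assumptions, and the real system runs as quantized C code on the GAP8 rather than Python.

```python
class VisualNavLoop:
    """DroNet-style policy sketch: forward speed is throttled by the
    predicted collision probability, yaw rate follows the predicted
    steering angle, and both are low-pass filtered for smooth flight."""

    def __init__(self, cnn, v_max=1.5, alpha=0.7):
        self.cnn = cnn        # frame -> (steering in [-1, 1], p_collision in [0, 1])
        self.v_max = v_max    # m/s, forward speed at zero collision risk (assumed)
        self.alpha = alpha    # low-pass smoothing factor (assumed value)
        self.v = 0.0
        self.yaw_rate = 0.0

    def step(self, frame):
        steer, p_coll = self.cnn(frame)
        self.v = self.alpha * self.v + (1 - self.alpha) * self.v_max * (1 - p_coll)
        self.yaw_rate = self.alpha * self.yaw_rate + (1 - self.alpha) * steer
        return self.v, self.yaw_rate
```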

    An Open Source and Open Hardware Deep Learning-Powered Visual Navigation Engine for Autonomous Nano-UAVs

    Nano-sized unmanned aerial vehicles (UAVs), with a diameter of a few centimeters and a sub-10 W total power budget, have so far been considered incapable of running sophisticated vision-based autonomous navigation software without external aid from base stations, ad-hoc local positioning infrastructure, and powerful external computation servers. In this work, we present what is, to the best of our knowledge, the first 27 g nano-UAV system able to run on board an end-to-end, closed-loop visual pipeline for autonomous navigation based on a state-of-the-art deep-learning algorithm, built upon the open-source CrazyFlie 2.0 nano-quadrotor. Our visual navigation engine is enabled by the combination of an ultra-low-power computing device (the GAP8 system-on-chip) with a novel methodology for the deployment of deep convolutional neural networks (CNNs). We enable onboard real-time execution of a state-of-the-art deep CNN at up to 18 Hz. Field experiments demonstrate that the system's high responsiveness prevents collisions with unexpected dynamic obstacles up to a flight speed of 1.5 m/s. In addition, we demonstrate the capability of our visual navigation engine to perform fully autonomous indoor navigation on a 113 m previously unseen path. To share our key findings with the embedded and robotics communities and foster further developments in autonomous nano-UAVs, we publicly release all our code, datasets, and trained networks.
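
    As a quick plausibility check of the responsiveness claim (simple arithmetic, not code from the released repository): at the reported inference rate and maximum speed, the drone travels only a few centimetres between consecutive CNN decisions.

```python
fps = 18.0    # on-board CNN inference rate (Hz)
speed = 1.5   # m/s, highest flight speed with demonstrated obstacle avoidance
print(f"{speed / fps * 100:.1f} cm travelled per inference")  # ~8.3 cm
```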

    An Incrementally Deployed Swarm of MAVs for Localization Using Ultra-Wideband

    Knowing the position of a moving target can be crucial, for example when localizing a first responder in an emergency scenario. In recent years, ultra-wideband (UWB) has gained a lot of attention due to its localization accuracy. Unfortunately, UWB solutions often demand a manual setup in advance. This is tedious at best and not possible at all in environments with access restrictions (e.g., collapsed buildings). Thus, we propose a solution combining UWB with micro air vehicles (MAVs) to allow for UWB localization in a priori inaccessible environments. More precisely, MAVs equipped with UWB sensors are deployed incrementally into the environment. They localize themselves based on previously deployed MAVs and on-board odometry, before they land and enhance the UWB mesh network themselves. We tested this solution in a lab environment using a motion capture system for ground truth. Four MAVs were deployed as anchors, and a fifth MAV was localized for over 80 seconds with a root-mean-square (RMS) error of 0.206 m, averaged over five experiments. For comparison, a setup with ideal knowledge of the anchor positions yielded a 20% lower RMS error, and a setup based purely on odometry an 81% higher one. The absolute scale of the error with the proposed approach is expected to be low enough for the applications envisioned within the scope of this paper (e.g., the localization of a first responder) and is thus considered a step towards flexible and accurate localization in a priori inaccessible, GNSS-denied environments.
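
    The core estimation step, in which a newly deployed MAV localizes itself from UWB ranges to the already-placed anchors, can be sketched as a nonlinear least-squares problem. The snippet below is a minimal Gauss-Newton multilateration example under assumed anchor positions and noise; the paper's actual estimator additionally fuses on-board odometry, which is omitted here for brevity.

```python
import numpy as np

def multilaterate(anchors, ranges, iters=10):
    """anchors: (N, 3) known positions; ranges: (N,) measured UWB distances.
    Gauss-Newton refinement starting from the anchor centroid."""
    x = np.mean(anchors, axis=0)
    for _ in range(iters):
        diff = x - anchors                    # (N, 3)
        dist = np.linalg.norm(diff, axis=1)   # predicted ranges
        J = diff / dist[:, None]              # Jacobian of dist w.r.t. x
        r = dist - ranges                     # range residuals
        x -= np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Four anchors (as in the experiments) and noisy ranges to a hidden tag:
anchors = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [4, 4, 2]], float)
true_pos = np.array([1.0, 2.0, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.05, 4)
print(multilaterate(anchors, ranges))         # ~[1, 2, 1] up to measurement noise
```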

    Artificial Intelligence Applications for Drones Navigation in GPS-denied or degraded Environments

    The abstract is provided in the attachment.

    Enabling technologies for precise aerial manufacturing with unmanned aerial vehicles

    The construction industry is currently experiencing a revolution with automation techniques such as additive manufacturing and robot-enabled construction. Additive Manufacturing (AM) is a key technology that can offer productivity improvements in the construction industry by means of off-site prefabrication and on-site construction with automated systems. The key benefit is that building elements can be fabricated with less material and higher design freedom compared to traditional manual methods. Off-site prefabrication with AM has been investigated for some time already, but it has limitations in terms of the logistics of component transportation and its lack of design flexibility on-site. On-site construction with automated systems, such as static gantry systems and mobile ground robots performing AM tasks, can offer additional benefits over off-site prefabrication, but it needs further research before it will become practical and economical. Ground-based automated construction systems also have the limitation that they cannot extend the construction envelope beyond their physical size. The solution of using aerial robots to liberate the process from the constrained construction envelope has been suggested, albeit with technological challenges including precision of operation, uncertainty in environmental interaction, and energy efficiency. This thesis investigates methods of precise manufacturing with aerial robots. In particular, this work focuses on stabilisation mechanisms and origami-based structural elements that allow aerial robots to operate in challenging environments. An integrated aerial self-aligning delta manipulator has been utilised to increase the positioning accuracy of the aerial robots, and a Material Extrusion (ME) process has been developed for Aerial Additive Manufacturing (AAM). A 28-layer tower has been additively manufactured by aerial robots to demonstrate the feasibility of AAM. Rotorigami and a bioinspired landing mechanism demonstrate their ability to overcome uncertainty in environmental interaction, with impact protection capabilities and improved robustness for UAVs. Design principles using tensile anchoring methods have been explored, enabling low-power operation and opening up the possibility of low-power aerial stabilisation. The results demonstrate that precise aerial manufacturing needs to consider not just the robotic aspects, such as flight control algorithms and mechatronics, but also material behaviour and environmental interaction as factors for its success.

    FFAU—Framework for Fully Autonomous UAVs

    Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries, being widely used not only among enthusiastic consumers but also in highly demanding professional situations, and will have a massive societal impact over the coming years. However, the operation of UAVs is fraught with serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time, sometimes being computationally impossible to solve with existing State of the Art (SoA) algorithms, making the use of UAVs an operational hazard and therefore significantly reducing their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, with a focus on the architectural requirements of the collision avoidance subsystem to achieve acceptable levels of safety and reliability. The SoA principles for collision avoidance against stationary objects are reviewed, and a novel approach is described that uses deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects. The proposed framework includes a web interface allowing the full control of UAVs as remote clients with a supervisor cloud-based platform. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV developed from scratch using the proposed framework. Test flight results are presented for an autonomous UAV monitored from multiple countries across the world.
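
    For a sense of what the collision-avoidance subsystem must decide in real time against a dynamic obstacle, a closed-form closest-approach check under a constant-relative-velocity assumption is sketched below. This is illustrative geometry only, with hypothetical names; the framework itself relies on deep learning for the perception side, and this kind of test would sit downstream of it.

```python
import numpy as np

def closest_approach(rel_pos, rel_vel):
    """Time and distance of closest approach for an obstacle moving with
    constant velocity relative to the UAV (both in the UAV's frame)."""
    rel_pos = np.asarray(rel_pos, float)
    rel_vel = np.asarray(rel_vel, float)
    denom = rel_vel @ rel_vel
    # If there is (almost) no relative motion, the current distance is the miss distance.
    t_star = 0.0 if denom < 1e-9 else max(0.0, -(rel_pos @ rel_vel) / denom)
    miss = np.linalg.norm(rel_pos + t_star * rel_vel)
    return t_star, miss

t, d = closest_approach(rel_pos=[5.0, 0.0, 0.0], rel_vel=[-2.0, 0.5, 0.0])
print(f"closest approach in {t:.2f} s at {d:.2f} m")  # evade if d < safety radius
```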

    Fully Onboard AI-Powered Human-Drone Pose Estimation on Ultralow-Power Autonomous Flying Nano-UAVs

    Many emerging applications of nano-sized unmanned aerial vehicles (UAVs), with a few cm² form-factor, revolve around safely interacting with humans in complex scenarios, for example, monitoring their activities or looking after people needing care. Such sophisticated autonomous functionality must be achieved while dealing with severe constraints in payload, battery, and power budget (~100 mW). In this work, we attack a complex task going from perception to control: to estimate and maintain the nano-UAV's relative 3-D pose with respect to a person while they freely move in the environment, a task that, to the best of our knowledge, has never previously been targeted with fully onboard computation on a nano-sized UAV. Our approach is centered around a novel vision-based deep neural network (DNN), called PULP-Frontnet, designed for deployment on a parallel ultra-low-power (PULP) processor aboard a nano-UAV. We present a vertically integrated approach starting from the DNN model design, training, and dataset augmentation down to 8-bit quantization and in-field deployment. PULP-Frontnet can operate in real time (up to 135 frame/s), consuming less than 87 mW for processing at peak throughput and down to 0.43 mJ/frame at the most energy-efficient operating point. Field experiments demonstrate a first-rate closed-loop autonomous navigation capability with a tiny 27 g Crazyflie 2.1 nano-UAV. Compared against an ideal sensing setup, onboard pose inference yields excellent drone behavior in terms of median absolute errors, both positional (onboard: 41 cm, ideal: 26 cm) and angular (onboard: 3.7°, ideal: 4.1°). We publicly release videos and the source code of our work.
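
    The 8-bit quantization step mentioned in the abstract can be illustrated with a minimal symmetric post-training scheme. This sketch is not the authors' deployment flow (which targets the GAP8 and performs per-layer calibration); it only shows the basic idea of mapping float weights to int8 with a single scale.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric quantization: one scale per tensor, values in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(3, 3).astype(np.float32)
q, s = quantize_int8(w)
print("max abs rounding error:", np.abs(w - dequantize(q, s)).max())  # <= s / 2
```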
