Planar PØP: feature-less pose estimation with applications in UAV localization
We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinctive features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and estimate the pose directly in the pose space of the camera. Compared with a general PnP method, this method requires neither n point correspondences nor a priori knowledge of the object model (except the scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar PØP. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided.
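The key idea above — scoring candidate poses by the area mismatch between the projected template and the observed contour, rather than by matching point features — can be sketched in a toy 2D setting. This is a minimal illustration only: the square template, the grid-sampled area approximation, and the brute-force search over a small pose grid are assumptions for the sketch, not the paper's actual formulation.

```python
import math

def transform(poly, theta, tx, ty):
    """Apply a 2D rigid transform (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in poly]

def inside(poly, px, py):
    """Ray-casting point-in-polygon test."""
    hit, n = False, len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            xi = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < xi:
                hit = not hit
    return hit

def mismatch_area(a, b, lo=-3.0, hi=3.0, n=40):
    """Approximate the area of the symmetric difference of two polygons
    by sampling a regular grid (a coarse stand-in for the contour-area cost)."""
    step = (hi - lo) / n
    cells = 0
    for i in range(n):
        for j in range(n):
            px = lo + (i + 0.5) * step
            py = lo + (j + 0.5) * step
            if inside(a, px, py) != inside(b, px, py):
                cells += 1
    return cells * step * step

# Template polygon (known up to scale) and an observed contour at an
# unknown pose; recover the pose by minimizing the area mismatch over
# a small grid of candidate poses.
template = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
observed = transform(template, 0.3, 0.5, -0.2)   # "ground-truth" pose

best = min(
    (mismatch_area(transform(template, th, tx, ty), observed), th, tx, ty)
    for th in (i * 0.1 for i in range(7))
    for tx in (i * 0.1 for i in range(8))
    for ty in (i * 0.1 for i in range(-4, 1))
)
```

Here `best` recovers the ground-truth pose because the mismatch area vanishes only when the projected template aligns with the observed contour; a real implementation would use a continuous optimizer over the camera pose rather than a grid search.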
Multi-agent pathfinding for unmanned aerial vehicles
Unmanned aerial vehicles (UAVs), commonly known as drones, have become more and
more prevalent in recent years. In particular, governmental organizations and companies
around the world are starting to research how UAVs can be used to perform tasks such
as package delivery, disaster investigation and surveillance of key assets such as pipelines,
railroads and bridges. NASA is currently in the early stages of developing an air traffic
control system specifically designed to manage UAV operations in low-altitude airspace.
Companies such as Amazon and Rakuten are testing large-scale drone delivery services in
the USA and Japan.
To perform these tasks, safe and conflict-free routes for concurrently operating UAVs must
be found. This can be done using multi-agent pathfinding (mapf) algorithms, although
the correct choice of algorithm is not clear. This is because many state-of-the-art mapf
algorithms have only been tested in 2D space in maps with many obstacles, while UAVs
operate in 3D space in open maps with few obstacles. In addition, when an unexpected
event occurs in the airspace and UAVs are forced to deviate from their original routes
while in flight, new conflict-free routes must be found. Planning for these unexpected
events is commonly known as contingency planning. With manned aircraft, contingency
plans can be created in advance or on a case-by-case basis while in flight. The scale at
which UAVs operate, combined with the fact that unexpected events may occur anywhere
at any time, makes both advance planning and case-by-case planning impossible.
Thus, a new approach is needed. Online multi-agent pathfinding (online mapf) looks to
be a promising solution. Online mapf utilizes traditional mapf algorithms to perform path
planning in real time; that is, new routes for UAVs are found while in flight.
The primary contribution of this thesis is to present one possible approach to UAV
contingency planning using online multi-agent pathfinding algorithms, which can be used
as a baseline for future research and development. It also provides an in-depth overview
and analysis of offline mapf algorithms with the goal of determining which ones are likely
to perform best when applied to UAVs. Finally, to further this same goal, a few different
mapf algorithms are experimentally tested and analyzed.
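One simple member of the mapf family discussed above is prioritized (cooperative) planning: agents are routed one at a time through space-time, each reserving its path so that later agents plan around it. The sketch below is a hypothetical illustration on a small 3D grid, not an algorithm from the thesis; for brevity it handles only vertex conflicts (two UAVs in the same cell at the same time) and ignores edge (swap) conflicts, which complete mapf algorithms such as CBS resolve properly.

```python
from collections import deque

def plan(start, goal, reserved, dims, max_t=30):
    """Space-time BFS: find a shortest route to goal on a 3D grid that
    avoids every (cell, time) pair already reserved by other agents."""
    moves = [(0, 0, 0), (1, 0, 0), (-1, 0, 0),
             (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        cell, t, path = queue.popleft()
        if cell == goal:
            return path
        if t == max_t:
            continue
        for dx, dy, dz in moves:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if not all(0 <= c < d for c, d in zip(nxt, dims)):
                continue
            if (nxt, t + 1) in reserved or (nxt, t + 1) in seen:
                continue
            seen.add((nxt, t + 1))
            queue.append((nxt, t + 1, path + [nxt]))
    return None  # no conflict-free route within the horizon

def cooperative_plan(agents, dims, max_t=30):
    """Prioritized planning: route agents one at a time, reserving each
    found path (and the goal cell after arrival) in a shared table."""
    reserved, paths = set(), []
    for start, goal in agents:
        path = plan(start, goal, reserved, dims, max_t)
        paths.append(path)
        if path is None:
            continue
        for t, cell in enumerate(path):
            reserved.add((cell, t))
        for t in range(len(path), max_t + 1):
            reserved.add((path[-1], t))  # agent parks at its goal
    return paths
```

For two UAVs that must swap ends of a corridor, the second agent automatically detours or waits around the first agent's reserved cells. The same machinery supports the online setting: when an unexpected event invalidates a route mid-flight, the affected agent simply replans from its current space-time position against the existing reservation table.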
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Fully autonomous miniaturized robots (e.g., drones) with artificial
intelligence (AI) based visual navigation capabilities are extremely
challenging drivers of Internet-of-Things edge intelligence.
Visual navigation based on AI approaches, such as deep neural networks (DNNs)
are becoming pervasive for standard-size drones, but are considered out of
reach for nano-drones, whose size is only a few centimeters. In this work, we
present the first (to the best of our knowledge) demonstration of a navigation
engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based
visual navigation. To achieve this goal we developed a complete methodology for
parallel execution of complex DNNs directly on board resource-constrained
milliwatt-scale nodes. Our system is based on GAP8, a novel parallel
ultra-low-power computing platform, and a 27 g commercial, open-source
CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the
software mapping techniques that enable the state-of-the-art deep convolutional
neural network presented in [1] to be fully executed on-board within a strict 6
fps real-time constraint with no compromise in terms of flight results, while
all processing is done with only 64 mW on average. Our navigation engine is
flexible and can be used to span a wide performance range: at its peak
performance corner it achieves 18 fps while still consuming on average just
3.5% of the power envelope of the deployed nano-aircraft.
15 pages, 13 figures, 5 tables, 2 listings. Accepted for publication in the IEEE Internet of Things Journal (IEEE IoT-J).
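A central constraint when mapping a DNN onto a milliwatt-scale node is that each layer's working set must fit in a small on-chip buffer, which forces tiled execution. The back-of-the-envelope calculation below illustrates that constraint for one convolution layer; the layer shape, 8-bit data width, and 64 KB budget are illustrative assumptions, not figures from the paper or GAP8's actual mapping toolflow.

```python
def conv_tile_rows(width, ch_in, ch_out, k, budget_bytes, bpp=1):
    """Largest number of output rows per tile such that the k x k weights,
    the input tile (rows + k - 1 input rows, same width), and the output
    tile all fit together in an on-chip buffer of budget_bytes."""
    weights = k * k * ch_in * ch_out * bpp
    for rows in range(width, 0, -1):
        in_tile = (rows + k - 1) * width * ch_in * bpp
        out_tile = rows * width * ch_out * bpp
        if weights + in_tile + out_tile <= budget_bytes:
            return rows
    return 0  # layer cannot be tiled row-wise within this budget
```

For a hypothetical 160-pixel-wide layer with 16 input channels, 32 output channels, 3x3 kernels, and a 64 KB buffer, this yields 7 output rows per tile; the layer is then executed tile by tile, typically double-buffering data transfers against compute.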
Smart environment monitoring through micro unmanned aerial vehicles
In recent years, improvements in small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have promoted the development of a wide range of practical applications. In aerial video surveillance, monitoring broad areas still presents many challenges, since several tasks, including mosaicking, change detection, and object detection, must be accomplished in real time. In this thesis work, a small-scale UAV-based vision system to maintain regular surveillance over target areas is proposed. The system works in two modes. The first mode allows monitoring an area of interest over several flights. During the first flight, the system creates an incremental geo-referenced mosaic of the area of interest and classifies all the known elements (e.g., persons) found on the ground using a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches for any changes (e.g., the disappearance of persons) that may have occurred in the mosaic using an algorithm based on histogram equalization and RGB Local Binary Patterns (RGB-LBP); if changes are found, the mosaic is updated. The second mode allows performing real-time classification, again using the improved Faster R-CNN model, which is useful for time-critical operations. Thanks to several design choices, the system works in real time and performs the mosaicking and change detection tasks at low altitude, thus allowing the classification even of small objects. The proposed system was tested on the full set of challenging video sequences contained in the UAV Mosaicking and Change Detection (UMCD) dataset and on other public datasets. The evaluation of the system with well-known performance metrics has shown remarkable results in terms of mosaic creation and updating, as well as change detection and object detection.
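The change-detection step above compares texture statistics between flights. A minimal single-channel Local Binary Pattern sketch of that idea is shown below; the thesis uses an RGB variant combined with histogram equalization, so the single channel, the L1 histogram distance, and the 0.5 threshold here are illustrative assumptions only.

```python
def lbp_codes(img):
    """8-neighbour local binary pattern codes for one channel
    (img is a 2D list of intensities; border pixels are skipped)."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes."""
    hist = [0.0] * 256
    codes = lbp_codes(img)
    for code in codes:
        hist[code] += 1.0 / len(codes)
    return hist

def changed(region_a, region_b, threshold=0.5):
    """Flag a change when the L1 distance between the two regions'
    LBP histograms exceeds the (illustrative) threshold."""
    ha, hb = lbp_histogram(region_a), lbp_histogram(region_b)
    return sum(abs(a - b) for a, b in zip(ha, hb)) > threshold
```

Because LBP codes depend only on local intensity ordering, the comparison is fairly robust to global illumination changes between flights, which is the usual motivation for pairing LBP features with histogram equalization.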