454 research outputs found
UAV Autonomous Localization using Macro-Features Matching with a CAD Model
Research in the field of autonomous Unmanned Aerial Vehicles (UAVs) has
significantly advanced in recent years, mainly due to their relevance in a
large variety of commercial, industrial, and military applications. However,
UAV navigation in GPS-denied environments continues to be a challenging problem
that has been tackled in recent research through sensor-based approaches. This
paper presents a novel offline, portable, real-time indoor UAV localization
technique that relies on macro-feature detection and matching. The proposed
system leverages the support of machine learning, traditional computer vision
techniques, and pre-existing knowledge of the environment. The main
contribution of this work is the real-time construction of a macro-feature
description vector from images captured by the UAV, which is matched on the fly
against a pre-existing vector derived offline from a Computer-Aided Design (CAD)
model. This yields fast UAV localization within the CAD model. The
effectiveness and accuracy of the proposed system were evaluated through
simulations and experimental prototype implementation. Final results reveal the
algorithm's low computational burden as well as its ease of deployment in
GPS-denied environments.
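The descriptor-matching step described above can be sketched as a nearest-neighbour search over CAD-derived vectors. The function and the toy 4-D descriptors below are hypothetical illustrations, not the paper's actual feature encoding:

```python
import numpy as np

def match_macro_features(query_vec, cad_vectors, max_dist=0.5):
    """Nearest-neighbour match of an online macro-feature descriptor
    against descriptors precomputed offline from the CAD model.
    Returns (index, distance), or (None, None) if nothing is close enough."""
    dists = np.linalg.norm(cad_vectors - query_vec, axis=1)
    best = int(np.argmin(dists))
    if dists[best] > max_dist:
        return None, None          # reject ambiguous localizations
    return best, float(dists[best])

# Hypothetical CAD descriptors for three rooms (4-D vectors for illustration).
cad = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
idx, dist = match_macro_features(np.array([0.9, 0.1, 0.0, 0.0]), cad)
```

A real system would use higher-dimensional descriptors and an index structure (e.g., a k-d tree) rather than a linear scan.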
Autonomous Localization of a UAV in a 3D CAD Model
This thesis presents a novel method for indoor localization and autonomous navigation of Unmanned Aerial Vehicles (UAVs) within a building, given a prebuilt Computer-Aided Design (CAD) model of the building. The proposed system is novel in that it leverages machine learning and traditional computer vision techniques, together with pre-existing knowledge of the environment, to provide a robust method for localizing and navigating a drone autonomously in indoor, GPS-denied environments. The goal of this work is to devise a method that enables a UAV to deduce its current pose within a CAD model quickly and accurately while making efficient use of resources. A 3-dimensional CAD model of the building to be navigated is provided as input to the system, along with the required goal position. Initially, the UAV has no estimate of its location within the building. The system, equipped with a stereo camera and an Inertial Measurement Unit (IMU) as its sensors, generates a globally consistent map of its surroundings using a Simultaneous Localization and Mapping (SLAM) algorithm. In addition to the map, it also stores spatially correlated 3D features. These 3D features are used to generate correspondences between the SLAM map and the 3D CAD model, and the correspondences in turn yield a transformation between the two, thus effectively localizing the UAV in the 3D CAD model. Our method has been tested to successfully localize the UAV in the test building in an average of 15 seconds across the scenarios tested, contingent upon the abundance of target features in the observed data. In the absence of a motion capture system, the results were verified by placing tags at strategic known locations on the ground in the building and measuring the error between the projection of the current UAV location onto the ground and the corresponding tag.
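The correspondence-to-transformation step described above amounts to a least-squares rigid alignment between matched 3D points. A minimal sketch, assuming correspondences are already established, using the standard Kabsch/SVD solution (the thesis's actual estimator may differ):

```python
import numpy as np

def rigid_transform(slam_pts, cad_pts):
    """Least-squares rigid transform (R, t) mapping SLAM-frame points to
    CAD-frame points via the Kabsch algorithm."""
    cs = slam_pts.mean(axis=0)
    cc = cad_pts.mean(axis=0)
    H = (slam_pts - cs).T @ (cad_pts - cc)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cs
    return R, t

# Toy correspondences: CAD frame is the SLAM frame rotated 90 deg about z
# and shifted by (1, 2, 0).
slam = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
cad = slam @ Rz.T + np.array([1., 2., 0.])
R, t = rigid_transform(slam, cad)
```

With noisy real correspondences this least-squares fit would typically be wrapped in RANSAC to reject outlier matches.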
Low computational SLAM for an autonomous indoor aerial inspection vehicle
The past decade has seen an increase in the capability of small scale Unmanned
Aerial Vehicle (UAV) systems, made possible through technological advancements
in battery, computing and sensor miniaturisation technology. This has opened a new
and rapidly growing branch of robotic research and has sparked the imagination of
industry leading to new UAV based services, from the inspection of power-lines to
remote police surveillance.
Miniaturisation of UAVs has also made them small enough to be practically flown
indoors, for example for the inspection of elevated areas in hazardous or damaged
structures where the use of conventional ground-based robots is unsuitable. Sellafield
Ltd, a nuclear reprocessing facility in the U.K., has many buildings that require
frequent safety inspections. UAV inspections eliminate the current risk to personnel
of radiation exposure and other hazards in tall structures where scaffolding or hoists
are required.
This project focused on the development of a UAV for the novel application of
semi-autonomously navigating and inspecting these structures without the need for
personnel to enter the building. Development exposed a significant gap in knowledge
concerning indoor localisation, specifically Simultaneous Localisation and Mapping
(SLAM) for use on-board UAVs. To lower the on-board processing requirements
of SLAM, other UAV research groups have employed techniques such as off-board
processing, reduced dimensionality or prior knowledge of the structure, techniques
not suitable to this application given the unknown nature of the structures and the
risk of radio-shadows.
In this thesis, a novel localisation algorithm is proposed that enables real-time,
three-dimensional SLAM running solely on-board a computationally constrained UAV
in heavily cluttered and unknown environments. The algorithm, based on the
Iterative Closest Point (ICP) method and utilising approximate nearest-neighbour
searches and point-cloud decimation to reduce processing requirements, has been
successfully tested in environments similar to those specified by Sellafield Ltd.
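A minimal sketch of the ingredients named above — point-to-point ICP with point-cloud decimation and eps-approximate nearest-neighbour queries — using SciPy's k-d tree; parameter values are illustrative, not the thesis's:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2pt(src, dst, iters=20, decimate=2, eps=0.1):
    """Point-to-point ICP with decimation and approximate NN search."""
    src = src[::decimate]                    # decimation cuts per-iteration cost
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved, eps=eps)  # eps-approximate nearest neighbour
        matched = dst[idx]
        # Closed-form rigid update (Kabsch) for the current correspondences.
        cs, cm = moved.mean(0), matched.mean(0)
        H = (moved - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = cm - dR @ cs
        R, t = dR @ R, dR @ t + dt           # compose incremental update
    return R, t

# Toy check: a random cloud rotated ~3 degrees about z and shifted slightly.
rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, (100, 3))
a = 0.05
R_true = np.array([[np.cos(a), -np.sin(a), 0.],
                   [np.sin(a),  np.cos(a), 0.],
                   [0.,         0.,        1.]])
t_true = np.array([0.02, -0.01, 0.03])
R_est, t_est = icp_2pt(src, src @ R_true.T + t_true)
```

ICP of this form only converges from a reasonable initial guess; the thesis's full pipeline necessarily adds odometry priors and map management around it.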
Improved deep depth estimation for environments with sparse visual cues
Most deep learning-based depth estimation models that learn scene structure self-supervised from monocular video base their estimation on visual cues such as vanishing points. In established depth estimation benchmarks depicting, for example, street navigation or indoor offices, these cues are found consistently, which enables neural networks to predict depth maps from single images. In this work, we address the challenge of depth estimation from a real-world bird’s-eye perspective in an industrial environment which, conditioned by its special geometry, contains a minimal amount of visual cues and hence requires incorporation of the temporal domain for structure-from-motion estimation. To enable the system to infer structure from motion from pixel translation when facing context-sparse, i.e., visual-cue-sparse, scenery, we propose a novel architecture built upon the structure-from-motion learner, which uses temporal pairs of jointly unrotated and stacked images for depth prediction. In order to increase the overall performance and to avoid blurred depth edges that lie in between the edges of the two input images, we integrate a geometric consistency loss into our pipeline. We assess the model’s ability to learn structure from motion by introducing a novel industry dataset whose perspective, orthogonal to the floor, contains only minimal visual cues. Through evaluation against ground-truth depth, we show that our proposed method outperforms the state of the art in difficult context-sparse environments.
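For a downward-facing camera translating parallel to the floor, structure from motion from pixel translation reduces in the ideal case to the stereo relation Z = f·b/d, with the baseline given by the platform motion between the two stacked frames. This back-of-the-envelope helper (hypothetical names; pinhole camera and pure horizontal translation assumed) illustrates the geometry the learned model must capture:

```python
def depth_from_translation(pixel_shift_px, cam_speed_mps, dt_s, focal_px):
    """Ideal-case depth for a floor-facing camera moving horizontally:
    Z = f * b / d, with baseline b = speed * dt and disparity d = pixel shift."""
    baseline_m = cam_speed_mps * dt_s
    return focal_px * baseline_m / pixel_shift_px

# A 20 px shift at 1 m/s over 0.1 s with a 600 px focal length -> 3 m depth.
z = depth_from_translation(20.0, 1.0, 0.1, 600.0)
```

The learned architecture is needed precisely because real pixel shifts are noisy, spatially varying, and entangled with residual rotation, which this closed-form relation ignores.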
A Multilevel Architecture for Autonomous UAVs
In this paper, a multilevel architecture able to interface an on-board computer with a generic UAV flight controller and its radio receiver is proposed. The computer board exploits the same standard communication protocol as UAV flight controllers and can easily access additional data, such as: (i) inertial sensor measurements coming from a multi-sensor board; (ii) global navigation satellite system (GNSS) coordinates; (iii) streaming video from one or more cameras; and (iv) operator commands from the remote control. In specific operating scenarios, the proposed platform is able to act as a “cyber pilot” that replaces the role of a human UAV operator, thus simplifying the development of complex tasks such as those based on computer vision and artificial intelligence (AI) algorithms, which are typically employed in autonomous flight operations.
A Low Cost UWB Based Solution for Direct Georeferencing UAV Photogrammetry
Thanks to their flexibility and availability at reduced costs, Unmanned Aerial Vehicles (UAVs) have recently been used in a wide range of applications and conditions. Among these, they can play an important role in monitoring critical events (e.g., disaster monitoring), when the presence of humans close to the scene must be avoided for safety reasons, as well as in precision farming and surveying. Despite the very large number of possible applications, their usage is mainly limited by the availability of the Global Navigation Satellite System (GNSS) in the considered environment: indeed, GNSS is of fundamental importance in order to reduce the positioning error caused by the drift of (low-cost) Micro-Electro-Mechanical Systems (MEMS) internal sensors. In order to make the usage of UAVs possible even in critical environments (when GNSS is not available or not reliable, e.g., close to mountains or high buildings, or in city centers), this paper considers the use of a low-cost Ultra Wide-Band (UWB) system as the positioning method. Furthermore, assuming the use of a calibrated camera, UWB positioning is exploited to achieve metric reconstruction in a local coordinate system. Once the georeferenced positions of at least three points (e.g., the positions of three UWB devices) are known, georeferencing can be obtained as well. The proposed approach is validated on a specific case study, the reconstruction of the façade of a university building. The average error on 90 check points distributed over the building façade, obtained by georeferencing by means of the georeferenced positions of four UWB devices at fixed positions, is 0.29 m. For comparison, the average error obtained by using four ground control points is 0.18 m.
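The UWB positioning step rests on range-based trilateration: given anchor positions and measured distances, the tag position follows from linearized range equations (subtracting the first anchor's equation removes the quadratic term). A minimal sketch with an illustrative anchor layout, not the paper's setup:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares position fix from UWB anchor positions and
    measured ranges. Subtracting anchor 0's range equation from the others
    turns |x - a_i|^2 = r_i^2 into a linear system 2(a_i - a_0) . x = b_i."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four illustrative anchors and noise-free ranges to a tag at (3, 4, 1).
anchors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 5.]])
true_pos = np.array([3., 4., 1.])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
pos = trilaterate(anchors, ranges)
```

With real, noisy UWB ranges the least-squares fit no longer recovers the position exactly, and more anchors (or a Kalman filter over time) tighten the estimate.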
The MRS UAV System: Pushing the Frontiers of Reproducible Research, Real-world Deployment, and Education with Autonomous Unmanned Aerial Vehicles
We present a multirotor Unmanned Aerial Vehicle (UAV) control and estimation
system for supporting replicable research through realistic simulations and
real-world experiments. We propose a unique multi-frame localization paradigm
for estimating the states of a UAV in various frames of reference using
multiple sensors simultaneously. The system enables complex missions in GNSS
and GNSS-denied environments, including outdoor-indoor transitions and the
execution of redundant estimators for backing up unreliable localization
sources. Two feedback control designs are presented: one for precise and
aggressive maneuvers, and the other for stable and smooth flight with a noisy
state estimate. The proposed control and estimation pipeline are constructed
without using the Euler/Tait-Bryan angle representation of orientation in 3D.
Instead, we rely on rotation matrices and a novel heading-based convention to
represent the one free rotational degree-of-freedom in 3D of a standard
multirotor helicopter. We provide an actively maintained and well-documented
open-source implementation, including realistic simulation of UAV, sensors, and
localization systems. The proposed system is the product of years of applied
research on multi-robot systems, aerial swarms, aerial manipulation, motion
planning, and remote sensing. All our results have been supported by real-world
system deployment that shaped the system into the form presented here. In
addition, the system was utilized during the participation of our team from the
CTU in Prague in the prestigious MBZIRC 2017 and 2020 robotics competitions,
and also in the DARPA SubT challenge. Each time, our team was able to secure
top places among the best competitors from all over the world. On each
occasion, the challenges have motivated the team to improve the system and to
gain a great amount of high-quality experience within tight deadlines.
Comment: 28 pages, 20 figures, submitted to Journal of Intelligent & Robotic
Systems (JINT); for the provided open-source software see
http://github.com/ctu-mr
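The heading-based convention mentioned above can be illustrated as follows: if heading is taken as the azimuth of the body x-axis projected onto the world x-y plane (our reading of the convention; the paper's precise definition may differ in detail), it is read directly off the rotation matrix, with no Euler/Tait-Bryan decomposition involved:

```python
import numpy as np

def heading(R):
    """Azimuth of the body x-axis (first column of R) projected onto the
    world x-y plane -- the single free rotational DOF of a multirotor."""
    return np.arctan2(R[1, 0], R[0, 0])

def rotz(a):
    return np.array([[np.cos(a), -np.sin(a), 0.],
                     [np.sin(a),  np.cos(a), 0.],
                     [0.,         0.,        1.]])

def roty(a):
    return np.array([[ np.cos(a), 0., np.sin(a)],
                     [ 0.,        1., 0.       ],
                     [-np.sin(a), 0., np.cos(a)]])

# A tilt applied after the yaw leaves the heading unchanged: 0.5 rad.
h = heading(rotz(0.5) @ roty(0.2))
```

Unlike a yaw angle from an Euler decomposition, this definition stays well-behaved under the large tilts reached during aggressive maneuvers, degenerating only when the body x-axis points straight up or down.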
Smooth Coverage Path Planning for UAVs with Model Predictive Control Trajectory Tracking
Within the Industry 4.0 ecosystem, Inspection Robotics is one fundamental technology for speeding up monitoring processes and obtaining good accuracy and performance in inspections while avoiding possible safety issues for human personnel. This manuscript investigates the robotic inspection of areas and surfaces employing Unmanned Aerial Vehicles (UAVs). The contribution starts by addressing the problem of coverage path planning and proposes a smoothing approach intended to reduce both flight time and the memory consumed in storing the target navigation path. Evaluation tests are conducted on a quadrotor equipped with a Model Predictive Control (MPC) policy and a Simultaneous Localization and Mapping (SLAM) algorithm to localize the UAV in the environment.
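Coverage path planning over an area is commonly seeded with a boustrophedon (back-and-forth) sweep before any smoothing is applied. A minimal lane-generation sketch for a rectangular area (function and parameters are illustrative; the manuscript's smoothing stage is not reproduced here):

```python
def coverage_waypoints(width, height, footprint, overlap=0.2):
    """Boustrophedon waypoints covering a width x height area; lane
    spacing is the camera footprint reduced by the requested image overlap."""
    spacing = footprint * (1.0 - overlap)
    lanes, x = [], footprint / 2.0          # first lane half a footprint in
    while x < width:
        lanes.append(x)
        x += spacing
    wps, forward = [], True
    for lx in lanes:
        y0, y1 = (0.0, height) if forward else (height, 0.0)
        wps += [(lx, y0), (lx, y1)]         # sweep the lane end to end
        forward = not forward               # alternate sweep direction
    return wps

# 10 m x 5 m area, 2 m footprint, 50% overlap -> one lane per metre.
wps = coverage_waypoints(10.0, 5.0, 2.0, overlap=0.5)
```

A smoothing stage such as the one the manuscript proposes would then replace the sharp 180-degree turns between lanes with curves the MPC tracker can follow without stopping.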
Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimate of the aerial vehicles flying in formation.
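The role of the inter-UAV distance measurements can be made concrete with the standard range measurement model and its Jacobian, which supplies the extra row that couples the two vehicles' position states in an observability analysis. A generic sketch, not the paper's exact formulation:

```python
import numpy as np

def range_meas_and_jacobian(p_i, p_j):
    """Inter-UAV range h(x) = ||p_i - p_j|| and its Jacobian with respect
    to the stacked state [p_i, p_j]: dh/dp_i = u, dh/dp_j = -u, where u is
    the unit vector from vehicle j to vehicle i."""
    diff = np.asarray(p_i, float) - np.asarray(p_j, float)
    d = float(np.linalg.norm(diff))
    u = diff / d
    H = np.concatenate([u, -u])     # one measurement row coupling both UAVs
    return d, H

# Two UAVs 5 m apart: the Jacobian row ties both position blocks together.
d, H = range_meas_and_jacobian([3., 4., 0.], [0., 0., 0.])
```

Rows of this form, stacked with the per-vehicle visual SLAM measurements, are what enter the nonlinear observability rank test the paper performs.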