A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Fully autonomous miniaturized robots (e.g., drones) with artificial
intelligence (AI)-based visual navigation capabilities are extremely
challenging drivers of Internet-of-Things edge intelligence.
Visual navigation based on AI approaches, such as deep neural networks (DNNs),
is becoming pervasive for standard-size drones, but is considered out of
reach for nano-drones with a size of a few centimeters. In this work, we
present the first (to the best of our knowledge) demonstration of a navigation
engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based
visual navigation. To achieve this goal, we developed a complete methodology for
the parallel execution of complex DNNs directly on board resource-constrained,
milliwatt-scale nodes. Our system is based on GAP8, a novel parallel
ultra-low-power computing platform, and a 27 g commercial, open-source
CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the
software mapping techniques that enable the state-of-the-art deep convolutional
neural network presented in [1] to be fully executed on-board within a strict 6
fps real-time constraint with no compromise in terms of flight results, while
all processing is done with only 64 mW on average. Our navigation engine is
flexible and can be used to span a wide performance range: at its peak
performance corner it achieves 18 fps while still consuming on average just
3.5% of the power envelope of the deployed nano-aircraft.
Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication
in the IEEE Internet of Things Journal (IEEE IoT-J)
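As a back-of-the-envelope illustration (not from the paper itself), the quoted 6 fps real-time constraint and 64 mW average power translate into per-frame time and energy budgets; the function names below are our own:

```python
# Illustrative arithmetic only: the 64 mW and 6/18 fps figures come from the
# abstract, everything else here is a hypothetical helper for intuition.

def frame_budget_ms(fps):
    """Per-frame time budget implied by a real-time frame-rate constraint."""
    return 1000.0 / fps

def energy_per_frame_mj(avg_power_mw, fps):
    """Average processing energy per frame, in millijoules."""
    return avg_power_mw / fps

print(round(frame_budget_ms(6), 1))          # 166.7 ms to process each frame
print(round(energy_per_frame_mj(64, 6), 1))  # 10.7 mJ per frame at 64 mW
```

At the 18 fps peak corner the per-frame budget shrinks to roughly 55.6 ms, which is what the parallel on-board mapping has to meet.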
Nonlinear Model Predictive Control for Multi-Micro Aerial Vehicle Robust Collision Avoidance
Multiple multirotor Micro Aerial Vehicles sharing the same airspace require a
reliable and robust collision avoidance technique. In this paper we address the
problem of multi-MAV reactive collision avoidance. A model-based controller is
employed to achieve simultaneously reference trajectory tracking and collision
avoidance. Moreover, we also account for the uncertainty of the state estimator
and for the position and velocity uncertainties of the other agents to achieve
a higher degree of robustness. The proposed approach is decentralized, does not
require a collision-free reference trajectory, and accounts for the full MAV dynamics. We
validated our approach in simulation and experimentally.
Comment: Video available at https://www.youtube.com/watch?v=Ot76i9p2ZZo&t=40
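A minimal sketch of the kind of uncertainty-aware pairwise check such a robust avoidance scheme must enforce (this is not the paper's NMPC; the k-sigma inflation rule and all parameter values are illustrative assumptions):

```python
import math

def inflated_min_distance(r_safe, sigma_self, sigma_other, k=3.0):
    """Nominal safety radius inflated by the k-sigma position
    uncertainty of both agents (illustrative inflation rule)."""
    return r_safe + k * (sigma_self + sigma_other)

def in_conflict(p1, p2, r_safe=1.0, sigma_self=0.1, sigma_other=0.15, k=3.0):
    """True if two agents are closer than the inflated safety distance."""
    d = math.dist(p1, p2)  # Euclidean distance between 3-D positions
    return d < inflated_min_distance(r_safe, sigma_self, sigma_other, k)

# Two MAVs 1.5 m apart with a 1.0 m nominal radius: the inflated
# distance is 1.0 + 3*(0.1 + 0.15) = 1.75 m, so this counts as a conflict.
print(in_conflict((0.0, 0.0, 1.0), (1.5, 0.0, 1.0)))  # True
```

In an NMPC setting a constraint of this form would be imposed at every step of the prediction horizon rather than on a single snapshot.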
Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
In order to improve usability and safety, modern unmanned aerial vehicles
(UAVs) are equipped with sensors to monitor the environment, such as
laser-scanners and cameras. One important aspect in this monitoring process is
to detect obstacles in the flight path in order to avoid collisions. Since a
large number of consumer UAVs suffer from tight weight and power constraints,
our work focuses on obstacle avoidance based on a lightweight stereo camera
setup. We use disparity maps, which are computed from the camera images, to
locate obstacles and to automatically steer the UAV around them. For disparity
map computation we optimize the well-known semi-global matching (SGM) approach
for the deployment on an embedded FPGA. The disparity maps are then converted
into simpler representations, the so-called U-/V-maps, which are used for
obstacle detection. Obstacle avoidance is based on a reactive approach that
finds the shortest path around the obstacles as soon as they come within a
critical distance of the UAV. One of the fundamental goals of our work was the reduction
of development costs by closing the gap between application development and
hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for
porting our algorithms, which are written in C/C++, to the embedded FPGA. We
evaluated our implementation of the disparity estimation on the KITTI Stereo
2015 benchmark. The integrity of the overall real-time reactive obstacle
avoidance algorithm has been evaluated by means of hardware-in-the-loop testing
in conjunction with two flight simulators.
Comment: Accepted in the International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences
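The U-/V-map representation mentioned above can be sketched in a few lines: the U-map histograms disparities per image column and the V-map per row, so fronto-parallel obstacles appear as high-count streaks. This is a plain-Python illustration of the general technique, not the FPGA/SGM pipeline of the paper; the toy disparity values are made up:

```python
def u_v_maps(disparity, max_disp):
    """Build a U-map (max_disp x width) and a V-map (height x max_disp)
    by histogramming a dense disparity map per column and per row."""
    h, w = len(disparity), len(disparity[0])
    u_map = [[0] * w for _ in range(max_disp)]
    v_map = [[0] * max_disp for _ in range(h)]
    for r in range(h):
        for c in range(w):
            d = disparity[r][c]
            if 0 <= d < max_disp:
                u_map[d][c] += 1  # column histogram: obstacles -> vertical streaks
                v_map[r][d] += 1  # row histogram: ground plane -> slanted line
    return u_map, v_map

# 4x4 toy disparity map with an "obstacle" at disparity 2 in the middle columns.
disp = [[0, 2, 2, 0],
        [0, 2, 2, 0],
        [0, 2, 2, 0],
        [0, 0, 0, 0]]
u_map, v_map = u_v_maps(disp, max_disp=4)
print(u_map[2])  # [0, 3, 3, 0] -> the obstacle shows up in columns 1-2
```

Obstacle detection then reduces to finding connected high-count regions in these much smaller maps instead of scanning the full disparity image.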
Optimal Multi-UAV Trajectory Planning for Filming Applications
Teams of multiple Unmanned Aerial Vehicles (UAVs) are a promising system for
cinematic video recording, as they can cover large-scale outdoor scenarios and
capture complementary views of several action points. Generating the trajectories
of the UAVs plays a key role, as it must be ensured that they comply with requirements on system
dynamics, smoothness, and safety. The rise of numerical methods for nonlinear
optimization is finding a flourishing field in optimization-based approaches to
multi-UAV trajectory planning. In particular, these methods are rather promising for
video recording applications, as they enable multiple constraints and objectives to
be formulated, such as trajectory smoothness, compliance with UAV and camera
dynamics, avoidance of obstacles and inter-UAV conflicts, and mutual UAV visibility.
The main objective of this thesis is to plan online trajectories for multi-UAV teams in
video applications, formulating novel optimization problems and solving them in real
time.
The thesis begins by presenting a framework for carrying out autonomous cinematography
missions with a team of UAVs. This framework enables media directors
to design missions involving different types of shots with one or multiple cameras,
running sequentially or concurrently. Second, the thesis proposes a novel non-linear
formulation for the challenging problem of computing optimal multi-UAV trajectories
for cinematography, integrating UAV dynamics and collision avoidance constraints,
together with cinematographic aspects such as smoothness, gimbal mechanical limits,
and mutual camera visibility. Lastly, the thesis describes a method for autonomous
aerial recording with distributed lighting by a team of UAVs. The multi-UAV trajectory
optimization problem is decoupled into two steps in order to tackle non-linear cinematographic aspects and obstacle avoidance at separate stages. This allows the
trajectory planner to perform in real time and to react online to changes in dynamic
environments.
It is important to note that all the methods in the thesis have been validated
by means of extensive simulations and field experiments. Moreover, all the software
components have been developed as open source.
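One of the constraints such multi-UAV planners must enforce, pairwise separation along time-sampled trajectories, can be sketched as a simple feasibility check. This is an illustrative fragment only (not the thesis' solver); the sampling scheme and threshold are assumptions:

```python
import math

def min_pairwise_separation(trajectories):
    """Smallest inter-UAV distance over all time samples.
    Each trajectory is an equal-length list of (x, y, z) waypoints
    sampled at the same time instants."""
    n = len(trajectories)
    steps = len(trajectories[0])
    best = float("inf")
    for t in range(steps):
        for i in range(n):
            for j in range(i + 1, n):
                best = min(best, math.dist(trajectories[i][t], trajectories[j][t]))
    return best

# Two planned trajectories sampled at three instants; closest approach
# is 2.0 m at the middle sample, above an assumed 1.5 m safety margin.
traj_a = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (2.0, 0.0, 2.0)]
traj_b = [(0.0, 3.0, 2.0), (1.0, 2.0, 2.0), (2.0, 3.0, 2.0)]
print(min_pairwise_separation([traj_a, traj_b]))  # 2.0
```

In the optimization-based formulations described above, this check becomes a hard constraint (or penalty term) on the decision variables rather than a post-hoc test.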
Autonomous Obstacle Collision Avoidance System for UAVs in rescue operations
Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and
military purposes. The operability of a UAV has proved that some tasks and operations can
be done easily and at a good cost-efficiency ratio.
Nowadays, a UAV can perform autonomous tasks through waypoint navigation using a GPS
sensor. These autonomous tasks are also called missions. They are very useful in certain
UAV applications, such as meteorology, surveillance systems, agriculture, environment
mapping, and search and rescue operations.
One of the biggest problems that a UAV faces is the possibility of collision with other
objects in the flight area. This can cause damage to surrounding structures, humans, or
the UAV itself. To prevent this, an algorithm was developed and implemented to avoid UAV
collisions with other objects.
The “Sense and Avoid” algorithm was developed as a system for UAVs to avoid objects on a
collision course. It uses a laser distance sensor, LiDAR (Light Detection and Ranging),
to detect objects in front of the UAV in mid-flight. The sensor is connected to on-board
hardware, a Pixhawk flight controller, which in turn communicates with a companion
computer: a Raspberry Pi. Communications with the Ground Control Station or the RC
controller are made via Wi-Fi or radio telemetry.
The “Sense and Avoid” algorithm has two different modes, “Brake” and “Avoid and Continue”,
which operate under different control methods. “Brake” mode prevents UAV collisions with
objects when the vehicle is flown by a human operator using an RC controller.
“Avoid and Continue” mode works in the UAV's autonomous modes, avoiding collisions with
objects in sight and proceeding with the ongoing mission.
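The two modes described above can be sketched as a small decision rule: below a critical LiDAR range, “Brake” simply holds position, while “Avoid and Continue” goes around and resumes the mission. This is a schematic illustration, not the dissertation's code; the threshold and command names are assumptions:

```python
# Hypothetical threshold and command vocabulary, for illustration only.
BRAKE_DISTANCE_M = 4.0

def sense_and_avoid(range_m, mode):
    """Return a high-level command given the forward LiDAR range reading."""
    if range_m >= BRAKE_DISTANCE_M:
        return "continue_mission"       # nothing in critical range
    if mode == "brake":
        return "hold_position"          # manual flight: stop before the obstacle
    if mode == "avoid_and_continue":
        return "sidestep_then_resume"   # autonomous flight: detour, then resume
    raise ValueError(f"unknown mode: {mode}")

print(sense_and_avoid(10.0, "brake"))              # continue_mission
print(sense_and_avoid(2.5, "brake"))               # hold_position
print(sense_and_avoid(2.5, "avoid_and_continue"))  # sidestep_then_resume
```

In the real system these commands would be issued to the Pixhawk over a telemetry link by the companion computer rather than printed.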
In this dissertation, tests were carried out to evaluate the overall performance of the
“Sense and Avoid” algorithm. These tests were done in two different environments: a 3D
simulated environment and a real outdoor environment. Both modes worked successfully in the
simulated 3D environment, and “Brake” mode also worked in the real outdoor environment,
proving the concept.