Real-time UAV Complex Missions Leveraging Self-Adaptive Controller with Elastic Structure
Growing expectations for unmanned aerial vehicles (UAVs) push their operating environments into narrow spaces, where the vehicles may fly very close to objects and physically interact with them. This regime introduces variation in UAV dynamics: the thrust and drag coefficients of the propellers can change with proximity to a surface. At the same time, UAVs may need to operate under external disturbances while following time-based trajectories. Under such challenging conditions, a controller with a fixed structure may not handle every mission, and its parameters may need to be retuned for each case. Motivated by this, the practical implementation and evaluation of an autonomous controller applied to a quadrotor UAV are proposed in this work. The self-adaptive controller is based on a composite control scheme that combines sliding mode control (SMC) with evolving neuro-fuzzy control. The parameter vector of the neuro-fuzzy controller is updated adaptively based on the sliding surface of the SMC. The controller possesses a new elastic structure, in which the number of fuzzy rules grows or is pruned based on a bias-variance balance. The interaction capability of the UAV is evaluated experimentally in real time, considering the ground effect, the ceiling effect, and flight through a strong fan-generated wind while following time-based trajectories.
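A minimal sketch of the kind of sliding-surface-driven adaptation described above, assuming a first-order sliding surface and a gradient-style update law; the gains, rule count, and Gaussian membership functions here are illustrative assumptions, not the paper's design:

import numpy as np

# Illustrative sliding-surface-driven update for a neuro-fuzzy controller.
# LAM (surface slope) and GAMMA (adaptation rate) are assumed gains.
LAM, GAMMA = 2.0, 0.05
centers = np.linspace(-1.0, 1.0, 5)   # rule centers (elastic in the paper)
sigma = 0.4                            # membership width
theta = np.zeros_like(centers)         # consequent parameter vector

def firing_strengths(e):
    mu = np.exp(-((e - centers) ** 2) / (2 * sigma ** 2))
    return mu / mu.sum()               # normalized rule activations

def control_step(e, e_dot):
    global theta
    s = e_dot + LAM * e                # sliding surface: s = de/dt + LAM * e
    phi = firing_strengths(e)
    theta -= GAMMA * s * phi           # adapt parameters to drive s toward 0
    return theta @ phi                 # neuro-fuzzy control component

In the paper's elastic structure, rules would additionally be added or pruned at runtime; here the rule count is fixed for brevity.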
Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera
The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis examines LiDAR's advancements in autonomous robotic systems, focusing on its role in simultaneous localization and mapping (SLAM) methodologies and on LiDAR-as-a-camera tracking of Unmanned Aerial Vehicles (UAVs).
Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV Tracking. In the former, we expand our previous multi-modal LiDAR dataset with additional data sequences from various scenarios. Unlike the previous dataset, we employ different approaches for generating ground truth: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps, and we supplement the data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. Additionally, we evaluate the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.
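A minimal sketch of LiDAR-as-a-camera detection in the spirit described above, loading a custom YOLOv5 checkpoint through the standard ultralytics hub entry point; the weight file and input image names are hypothetical placeholders, not the thesis's artifacts:

import torch

# Load a custom YOLOv5 checkpoint (hypothetical weights trained on
# panoramic LiDAR reflectivity images).
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='uav_reflectivity.pt')

# Run detection on one panoramic reflectivity frame (hypothetical file).
results = model('reflectivity_panorama.png')
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(f'UAV box {xyxy}, confidence {conf:.2f}')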
Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
Automatic obstacle avoidance of quadrotor UAV via CNN-based learning
In this paper, a CNN-based learning scheme is proposed to enable a quadrotor unmanned aerial vehicle (UAV) to avoid obstacles automatically in unknown and unstructured environments. To reduce decision delay and improve robustness, a two-stage end-to-end obstacle avoidance architecture is designed using only a forward-facing monocular camera. In the first stage, a convolutional neural network (CNN)-based model serves as the prediction mechanism. Utilizing three effective operations, namely depthwise convolution, group convolution, and channel split, the model predicts the steering angle and the collision probability simultaneously. In the second stage, the control mechanism maps the steering angle to an instruction that changes the yaw angle of the UAV. Consequently, when the UAV encounters an obstacle, it can avoid collision by steering automatically. Meanwhile, the collision probability is mapped to a forward speed that either maintains flight or stops forward motion. The presented automatic obstacle avoidance scheme is verified in several indoor and outdoor tests, which clearly demonstrate its feasibility and efficacy. The novelties of the method lie in its low sensor requirements, lightweight network structure, strong learning ability, and environmental adaptability.
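A minimal sketch of a two-head network combining the three operations named above (depthwise convolution, group convolution, channel split); the layer sizes and branch layout are illustrative assumptions, not the paper's architecture:

import torch
import torch.nn as nn

class AvoidanceNet(nn.Module):
    """Predicts steering angle and collision probability from one RGB frame."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, stride=2, padding=1)
        # Depthwise convolution: one filter per channel (groups == channels).
        self.depthwise = nn.Conv2d(32, 32, 3, stride=2, padding=1, groups=32)
        # Group convolution: cheap cross-channel mixing in 4 groups.
        self.group = nn.Conv2d(32, 64, 1, groups=4)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.steer_head = nn.Linear(96, 1)   # steering angle
        self.coll_head = nn.Linear(96, 1)    # collision probability

    def forward(self, x):
        f = torch.relu(self.stem(x))
        a, b = f.chunk(2, dim=1)             # channel split into two branches
        a = torch.relu(self.group(torch.relu(self.depthwise(a))))
        f = torch.cat([self.pool(a), self.pool(b)], dim=1).flatten(1)
        return torch.tanh(self.steer_head(f)), torch.sigmoid(self.coll_head(f))

In the second stage, the steering output would be scaled into a yaw-rate command and the forward speed modulated by (1 - collision probability); those mapping gains are again assumptions.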
Vision-Based Navigation System for Unmanned Aerial Vehicles
The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms solve the navigation problem in both outdoor and indoor environments, based mainly on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing.
The dissertation covers several research topics based on computer vision techniques. (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. The algorithm combines the SIFT detector with the FREAK descriptor, which maintains the quality of feature-point matching while decreasing the computational time. The pose estimation problem is then solved by decomposing the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles in its path. The detection algorithm mimics human behavior for detecting approaching obstacles: it analyzes the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those points in consecutive frames. By comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combining these with the tracked waypoints, the UAV performs the avoidance maneuver. (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the obstacles in it, and then provides a strategy to follow the path segments efficiently and perform flight maneuvers smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuvers, avoid possible collisions, and track the waypoints.
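A minimal sketch of the SIFT+FREAK pose pipeline from topic (I), assuming OpenCV with the contrib modules (FREAK lives in xfeatures2d); the intrinsic matrix is a placeholder, not calibrated values:

import cv2
import numpy as np

# Hypothetical camera intrinsics; replace with calibrated values.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

detector = cv2.SIFT_create()              # SIFT keypoint detector
freak = cv2.xfeatures2d.FREAK_create()    # FREAK binary descriptor (contrib)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_frame_pose(prev_gray, curr_gray):
    # Detect with SIFT, describe with FREAK (faster binary matching).
    kp1, des1 = freak.compute(prev_gray, detector.detect(prev_gray, None))
    kp2, des2 = freak.compute(curr_gray, detector.detect(curr_gray, None))
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Robust frame-to-frame homography, then decomposition into
    # candidate (R, t, n) solutions; disambiguation is omitted here.
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals

cv2.decomposeHomographyMat returns up to four candidate solutions; the dissertation's world-to-frame homography and the physical constraints it implies would be needed to select the correct one.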
All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking into consideration visual conditions such as illumination and texture. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works from the state of the art, and the results demonstrate improvements in accuracy and robustness.
Finally, this dissertation concludes that visual sensors are lightweight, have low power consumption, and provide reliable information, making them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications.
Visual Guidance for Unmanned Aerial Vehicles with Deep Learning
Unmanned Aerial Vehicles (UAVs) have been widely applied in both military and civilian domains. In recent years, the operation mode of UAVs has been evolving from teleoperation to autonomous flight, and a reliable guidance system is essential to achieve this goal. Since the combination of the Global Positioning System (GPS) and an Inertial Navigation System (INS) cannot sustain autonomous flight in situations where GPS is degraded or unavailable, computer vision has been widely explored as a primary method for UAV guidance. Moreover, GPS gives the robot no information about the presence of obstacles.
Stereo cameras have a complex architecture and need a minimum baseline to generate a disparity map. By contrast, monocular cameras are simple and require fewer hardware resources. Benefiting from state-of-the-art Deep Learning (DL) techniques, especially Convolutional Neural Networks (CNNs), a monocular camera is sufficient to infer mid-level visual representations of the environment such as depth maps and optical flow (OF) maps. The objective of this thesis is therefore to develop a real-time visual guidance method for UAVs in cluttered environments using a monocular camera and DL.
The three major tasks performed in this thesis are surveying the development of DL techniques and monocular depth estimation (MDE), developing real-time CNNs for MDE, and developing visual guidance methods on top of the resulting MDE system. A comprehensive survey is conducted covering Structure from Motion (SfM)-based methods, traditional handcrafted-feature-based methods, and state-of-the-art DL-based methods; importantly, it also investigates the application of MDE in robotics. Based on the survey, two CNNs for MDE are developed. In addition to promising accuracy, the two CNNs run at high frame rates (126 fps and 90 fps, respectively) on a single modest-power Graphics Processing Unit (GPU).
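A minimal sketch of a lightweight encoder-decoder for monocular depth estimation in the spirit described above; the layer sizes are illustrative and deliberately tiny, not the thesis's two networks:

import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Tiny encoder-decoder: RGB frame in, dense depth map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Single-channel depth map at input resolution.
        return self.decoder(self.encoder(x))

depth = TinyDepthNet()(torch.randn(1, 3, 192, 256))  # -> (1, 1, 192, 256)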
As regards the third task, visual guidance for UAVs is first developed on top of the designed MDE networks. To improve robustness, OF maps are integrated into the visual guidance method: a cross-attention module fuses the features learned from the depth maps and the OF maps, and the fused features are passed through a deep reinforcement learning (DRL) network to generate the policy that guides the flight of the UAV. Additionally, a simulation framework is developed that integrates AirSim, Unreal Engine, and PyTorch. The effectiveness of the developed visual guidance method is validated through extensive experiments in this simulation framework.
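A minimal sketch of the depth/optical-flow fusion via cross-attention, assuming the two feature maps are flattened into token sequences; the dimensions and pooling are assumptions, not the thesis's module:

import torch
import torch.nn as nn

# Cross-attention: depth-feature tokens attend to optical-flow tokens.
dim, heads = 128, 4
attn = nn.MultiheadAttention(dim, heads, batch_first=True)

depth_tokens = torch.randn(1, 64, dim)   # e.g. an 8x8 depth feature map, flattened
flow_tokens = torch.randn(1, 64, dim)    # matching optical-flow feature tokens

fused, _ = attn(query=depth_tokens, key=flow_tokens, value=flow_tokens)
policy_input = fused.mean(dim=1)         # pooled features fed to the DRL policy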
Modelling multi-rotor UAVs swarm deployment using virtual pheromones
In this work, a swarm behaviour for deploying multi-rotor Unmanned Aerial Vehicles (UAVs) is presented. Its main contribution is the use of a virtual device for quantitative sematectonic stigmergy, providing more adaptable behaviours in complex environments. The behaviour is fault-tolerant and highly robust: it requires no prior information about the area to be covered and does not assume the existence of any external information signals (GPS, mobile communication networks …), while taking into account the specific features of UAVs. The behaviour is oriented towards emergency tasks; its main goal is to cover an area of the environment and later create an ad-hoc communication network that can be used to establish communications inside this zone. Although there are several papers on robotic deployment, applications with UAV systems are harder to find, mainly because of various problems that must be overcome, including limited sensory and on-board processing capabilities and low flight endurance. In addition, behaviours designed for UAVs often have significant limitations on their use in real tasks, because they assume specific features that are not easily applicable in a general way. Firstly, this article presents the characteristics of the simulation environment. Secondly, a microscopic model for deployment and creation of ad-hoc networks, which implicitly includes stigmergy features, is shown. The overall swarm behaviour is then modelled, providing a macroscopic model that can accurately predict the number of agents needed to cover an area as well as the time required for the deployment process. An experimental analysis through simulation is carried out to verify the models; it discusses the influence of both the complexity of the environment and the stigmergy system, given the data obtained in the simulation. In addition, the macroscopic and microscopic models are compared, verifying the predicted number of individuals in each state against the simulation.
This work was supported by the Ministerio de Economía y Competitividad (Spain), http://www.mineco.gob.es/portal/site/mineco/, project TIN2013-40982-R, co-financed with FEDER funds.
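A minimal sketch of how a quantitative virtual-pheromone grid of this kind might work, with deposit and evaporation dynamics steering UAVs toward unvisited cells; the grid size, deposit amount, and evaporation rate are illustrative assumptions, not the paper's parameters:

import numpy as np

class PheromoneGrid:
    """Virtual-pheromone field for stigmergic UAV deployment (illustrative)."""
    def __init__(self, shape=(100, 100), evaporation=0.01, deposit=1.0):
        self.grid = np.zeros(shape)
        self.evaporation = evaporation
        self.deposit = deposit

    def mark(self, cell):
        # A UAV deposits pheromone on the cell it currently covers.
        self.grid[cell] += self.deposit

    def step(self):
        # Pheromone decays each tick, so stale markings fade.
        self.grid *= (1.0 - self.evaporation)

    def least_marked_neighbor(self, cell):
        # UAVs move toward the least-marked neighbouring cell,
        # spreading the swarm over uncovered area.
        r, c = cell
        rows, cols = self.grid.shape
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < rows and 0 <= c + dc < cols]
        return min(neighbors, key=lambda n: self.grid[n])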