A Survey of Air-to-Ground Propagation Channel Modeling for Unmanned Aerial Vehicles
In recent years, there has been a dramatic increase in the use of unmanned
aerial vehicles (UAVs), particularly small UAVs, due to their affordable
prices, ease of availability, and ease of operation. Existing and future
applications of UAVs include remote surveillance and monitoring, relief
operations, package delivery, and communication backhaul infrastructure.
Additionally, UAVs are envisioned as an important component of 5G wireless
technology and beyond. The unique application scenarios for UAVs necessitate
accurate air-to-ground (AG) propagation channel models for designing and
evaluating UAV communication links for control/non-payload as well as payload
data transmissions. These AG propagation models have not been investigated in
detail when compared to terrestrial propagation models. In this paper, a
comprehensive survey is provided on available AG channel measurement campaigns,
large and small scale fading channel models, their limitations, and future
research directions for UAV communication scenarios.
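Large-scale AG fading is commonly summarized with a log-distance path-loss model whose exponent is fitted per measurement campaign. The sketch below illustrates that model only; the frequency, exponent, and distances are assumed values for illustration, not results from the survey:

```python
import math

def ag_path_loss_db(d_m, f_hz, n=2.5, d0_m=1.0, shadowing_db=0.0):
    """Log-distance path loss with a free-space reference at d0.

    n is the path-loss exponent; AG campaigns typically report values
    between roughly 2 (near free space at high elevation angles) and
    4 (heavily obstructed low-altitude links).
    """
    c = 299_792_458.0  # speed of light, m/s
    # Free-space path loss at the reference distance d0
    fspl_d0 = 20 * math.log10(4 * math.pi * d0_m * f_hz / c)
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m) + shadowing_db

# Example: a 2.4 GHz link at 500 m with an assumed exponent of 2.2
print(round(ag_path_loss_db(500, 2.4e9, n=2.2), 1))  # ≈ 99.4 dB
```

A lognormal shadowing term (here a fixed `shadowing_db` offset) is usually added on top of the deterministic distance dependence.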
Vision-Based Navigation System for Unmanned Aerial Vehicles
International Doctorate Mention.
The main objective of this dissertation is to provide Unmanned Aerial Vehicles
(UAVs) with a robust navigation system that allows them to perform complex
tasks autonomously and in real time. The proposed algorithms deal with solving
the navigation problem for outdoor as well as indoor environments, based mainly
on visual information captured by monocular cameras. In addition, this
dissertation presents the advantages of using visual sensors, either as the
main source of data or as a complement to other sensors, in order to improve
the accuracy and robustness of sensing.
The dissertation covers several research topics based on computer vision
techniques: (I) Pose Estimation, which provides a solution for estimating the
6D pose of the UAV. The algorithm combines the SIFT detector with the FREAK
descriptor, which maintains feature-matching performance while reducing
computation time. The pose estimation problem is then solved by decomposing
the world-to-frame and frame-to-frame homographies.
(II) Obstacle Detection and Collision Avoidance, in which the UAV senses and
detects frontal obstacles situated in its path. The detection algorithm mimics
human behavior in detecting approaching obstacles: it analyzes the size changes
of the detected feature points, combined with the expansion ratios of the
convex hull constructed around those points in consecutive frames. By comparing
the area ratio of the obstacle with the position of the UAV, the method then
decides whether the detected obstacle may cause a collision. Finally, the
algorithm extracts the collision-free zones around the obstacle and, combining
them with the tracked waypoints, the UAV performs the avoidance maneuver.
(III) Navigation Guidance, which generates the waypoints that determine the
flight path based on the environment and the detected obstacles, and provides
a strategy for following the path segments efficiently and performing the
flight maneuver smoothly. (IV) Visual Servoing, which offers different control
solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual
information, in order to achieve flight stability, perform the correct
maneuvers, avoid possible collisions, and track the waypoints.
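The homography-decomposition step in topic (I) can be sketched for a calibrated camera viewing a planar scene: the classical relation H ~ K[r1 r2 t] recovers the pose from a world(plane)-to-image homography. This is a textbook sketch of the underlying geometry, not the thesis code; the camera matrix and pose values in the usage example are invented:

```python
import numpy as np

def pose_from_plane_homography(H, K):
    """Recover R, t from a world(plane z=0)-to-image homography.

    For a calibrated camera, H ~ K [r1 r2 t]; normalising the first
    column to unit length fixes the scale, and r3 = r1 x r2.
    """
    A = np.linalg.inv(K) @ H
    A /= np.linalg.norm(A[:, 0])          # remove the unknown scale
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Re-orthonormalise R via SVD to absorb measurement noise
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

In practice H would be estimated from SIFT/FREAK correspondences with a robust fit (e.g. RANSAC) before decomposition.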
All the proposed algorithms have been verified in real flights in both indoor
and outdoor environments, taking into consideration visual conditions such as
illumination and texture. The obtained results have been validated against
other systems, such as the VICON motion capture system and DGPS in the case of
the pose estimation algorithm. In addition, the proposed algorithms have been
compared with several previous works in the state of the art, and the results
prove the improvement in accuracy and robustness of the proposed algorithms.
Finally, this dissertation concludes that visual sensors are lightweight, have
low power consumption, and provide reliable information, which makes them a
powerful tool in navigation systems for increasing the autonomy of UAVs in
real-world applications.
Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática.
Committee: President: Carlo Regazzoni; Secretary: Fernando García Fernández;
Member: Pascual Campoy Cerver
A Review of Radio Frequency Based Localization for Aerial and Ground Robots with 5G Future Perspectives
Efficient localization plays a vital role in many modern applications of
Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs),
contributing to improved control, safety, and power economy. The ubiquitous 5G
NR (New Radio) cellular network will provide new opportunities for enhancing
localization of UAVs and UGVs. In this paper, we review the radio frequency
(RF) based approaches for localization. We review the RF features that can be
utilized for localization and investigate the current methods suitable for
unmanned vehicles under two general categories: range-based and fingerprinting.
The existing state-of-the-art literature on RF-based localization for both UAVs
and UGVs is examined, and the envisioned role of 5G NR in localization
enhancement and future research directions are explored.
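As a minimal sketch of the range-based category, a position fix can be obtained by linearising the range equations around one anchor and solving in the least-squares sense. The anchor layout and noiseless ranges in the usage example are invented for illustration:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position fix from anchor positions and measured ranges.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system A x = b in the position x.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         + d0 ** 2 - ranges[1:] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical 2D example: four anchors at the corners of a 10 m square
anchors = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - a) for a in np.asarray(anchors)]
print(trilaterate(anchors, ranges))  # recovers [3. 4.]
```

The same machinery applies to ranges derived from RSSI via a path-loss model or from 5G NR timing measurements; fingerprinting methods instead match measured RF features against a pre-surveyed database.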
Towards Adaptive, Self-Configuring Networked Unmanned Aerial Vehicles
Networked drones have the potential to transform various application domains; yet their adoption, particularly in indoor and forest environments, has been stymied by the lack of accurate maps and autonomous navigation abilities in the absence of GPS, the lack of highly reliable, energy-efficient wireless communications, and the challenges of visually inferring and understanding an environment with resource-limited individual drones. We advocate a novel vision for the research community: the development of distributed, localized algorithms that enable networked drones to dynamically coordinate to perform adaptive beamforming for high-capacity directional aerial communications, and collaborative machine learning to simultaneously localize, map, and visually infer the challenging environment, even when individual drones are resource-limited in terms of computation and communication due to payload restrictions.
Deep learning assisted time-frequency processing for speech enhancement on drones
This article fills the gap between the growing interest in signal processing based on Deep Neural Networks (DNN) and the new application of enhancing speech captured by microphones on a drone. In this context, the quality of the target sound is degraded significantly by the strong ego-noise from the rotating motors and propellers. We present the first work that integrates single-channel and multi-channel DNN-based approaches for speech enhancement on drones. We employ a DNN to estimate the ideal ratio masks at individual time-frequency bins, which are subsequently used to design three potential speech enhancement systems, namely single-channel ego-noise reduction (DNN-S), multi-channel beamforming (DNN-BF), and multi-channel time-frequency spatial filtering (DNN-TF). The main novelty lies in the proposed DNN-TF algorithm, which infers the noise-dominance probabilities at individual time-frequency bins from the DNN-estimated soft masks, and then incorporates them into a time-frequency spatial filtering framework for ego-noise reduction. By jointly exploiting the direction of arrival of the target sound, the time-frequency sparsity of the acoustic signals (speech and ego-noise) and the time-frequency noise-dominance probability, DNN-TF can suppress the ego-noise effectively in scenarios with very low signal-to-noise ratios (e.g. SNR lower than -15 dB), especially when the direction of the target sound is close to that of a source of the ego-noise. Experiments with real and simulated data show the advantage of DNN-TF over competing methods, including DNN-S, DNN-BF and the state-of-the-art time-frequency spatial filtering
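One ingredient of the DNN-TF idea, weighting a spatial noise-covariance estimate by per-bin noise-dominance probabilities derived from the DNN soft masks, can be sketched as follows. The array shapes and the use of `1 - mask` as the noise-dominance probability are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def masked_noise_covariance(stft, noise_prob):
    """Per-frequency ego-noise spatial covariance, weighting each
    multichannel T-F observation by its noise-dominance probability.

    stft:       (channels, freqs, frames) complex STFT
    noise_prob: (freqs, frames) probabilities, e.g. 1 - DNN ratio mask
    """
    C, F, T = stft.shape
    R = np.zeros((F, C, C), dtype=complex)
    for f in range(F):
        X = stft[:, f, :]                    # (C, T) observations at bin f
        w = noise_prob[f]                    # (T,) per-frame weights
        R[f] = (w * X) @ X.conj().T / max(w.sum(), 1e-8)
    return R
```

Bins the DNN judges noise-dominated then shape the spatial filter that suppresses ego-noise while passing the target direction.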
Ego-Noise Reduction for Robots
In robotics, it is desirable to equip robots with a sense of hearing so that they can interact better with users and the environment. However, the noise caused by the robots' actuators, called ego-noise, considerably degrades the quality of the captured audio. Consequently, the performance of speech recognition and sound event detection techniques is limited by the amount of noise the robot produces during its movements. The noise generated by robots varies considerably with the environment, the motors, the materials used, and even the condition of the various mechanical components. The objective of this project is to design a robust multi-microphone ego-noise reduction model that can be calibrated quickly on a mobile robot.
This thesis presents an ego-noise reduction method that combines template learning of noise covariance matrices with a minimum-variance distortionless-response beamforming algorithm. The approach used for learning the covariance matrices captures the spatial characteristics of the ego-noise in less than two minutes for each new environment. The beamforming algorithm, in turn, reduces the ego-noise in the noisy signal without adding nonlinear distortion to the output. The method is implemented under Robot Operating System for quick and simple use on different robots.
The evaluation of this new method was carried out on a real robot in three different environments: a small room, a large room, and an office corridor. The increase in signal-to-noise ratio is about 10 dB and is consistent across the three rooms. The reduction in the word error rate of speech recognition is between 30% and 55%. The model was also tested for sound event detection: an increase of 7% to 20% in average precision was measured for music detection, but no significant increase for speech, shouting, closing doors, or alarms. The proposed method makes speech recognition more practical on noisy robots.
In addition, an analysis of the main parameters validated their impact on system performance. Performance improves when the system is calibrated with more robot noise and when longer segments are used. The size of the Short-Time Fourier Transform can be reduced to lower the processing time; however, it also affects the spectral resolution of the resulting signal. A trade-off must therefore be made between low processing time and output signal quality.
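The combination described above, a noise covariance template learned during calibration feeding a minimum-variance distortionless-response beamformer, reduces per frequency bin to the classic closed form w = R⁻¹d / (dᴴR⁻¹d). A minimal sketch (the diagonal-loading constant is an assumption, not a parameter from the thesis):

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """MVDR beamformer weights w = R^{-1} d / (d^H R^{-1} d) for one bin.

    noise_cov: (M, M) ego-noise covariance learned during calibration
    steering:  (M,) steering vector toward the target speech direction
    """
    M = noise_cov.shape[0]
    # Diagonal loading keeps the inversion stable for short calibrations
    R = noise_cov + 1e-6 * np.trace(noise_cov).real / M * np.eye(M)
    Rinv_d = np.linalg.solve(R, steering)
    return Rinv_d / (steering.conj() @ Rinv_d)
```

By construction the weights satisfy the distortionless constraint wᴴd = 1, so the target direction passes unchanged while the learned ego-noise directions are attenuated, which is why no nonlinear distortion is added to the output.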
Dynamics of Outgassing and Plume Transport Revealed by Proximal Unmanned Aerial System (UAS) Measurements at Volcán Villarrica, Chile
Volcanic gas emissions are intimately linked to the dynamics of magma ascent and outgassing, and, on geological timescales, constitute an important source of volatiles to the Earth's atmosphere. Measurements of gas composition and flux are therefore critical both to volcano monitoring and to determining the contribution of volcanoes to global geochemical cycles. However, significant gaps remain in our global inventories of volcanic emissions (particularly for CO2, which requires proximal sampling of a concentrated plume), especially for volcanoes where the near-vent region is hazardous or inaccessible. Unmanned Aerial Systems (UAS) provide a robust and effective solution for proximal sampling of dense volcanic plumes in extreme volcanic environments. Here, we present gas compositional data acquired using a gas sensor payload aboard a UAS flown at Volcán Villarrica, Chile. We compare UAS-derived gas time series to simultaneous crater-rim Multi-GAS data and UV camera imagery to investigate early plume evolution. SO2 concentrations measured in the young proximal plume exhibit periodic variations that are well correlated with the concentrations of other species. By combining molar gas ratios (CO2/SO2 = 1.48–1.68, H2O/SO2 = 67–75 and H2O/CO2 = 45–51) with the SO2 flux (142 ± 17 t/day) from UV camera images, we derive CO2 and H2O fluxes of ~150 t/day and ~2850 t/day, respectively. We observe good agreement between time-averaged molar gas ratios obtained from simultaneous UAS- and ground-based Multi-GAS acquisitions. However, the UAS measurements made in the young, less diluted plume reveal additional short-term periodic structure that reflects active degassing through discrete, audible gas exhalations.
Funding: Alfred P. Sloan Foundation; Leverhulme Trus
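The flux derivation above is a unit conversion: the UV-camera SO2 mass flux is scaled by the molar ratio and by the molar-mass ratio of the two species. A quick check with mid-range ratios reproduces the quoted numbers:

```python
# Molar masses in g/mol
MW = {"SO2": 64.07, "CO2": 44.01, "H2O": 18.02}

def flux_from_so2(so2_flux_t_per_day, molar_ratio, species):
    """Convert an X/SO2 molar ratio into a mass flux of species X."""
    return so2_flux_t_per_day * molar_ratio * MW[species] / MW["SO2"]

co2_flux = flux_from_so2(142, (1.48 + 1.68) / 2, "CO2")  # ~154 t/day (~150 quoted)
h2o_flux = flux_from_so2(142, (67 + 75) / 2, "H2O")      # ~2840 t/day (~2850 quoted)
```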