234 research outputs found
Comparative Study of Indoor Navigation Systems for Autonomous Flight
Recently, Unmanned Aerial Vehicles (UAVs) have attracted considerable attention from society and researchers due to their ability to perform in economic, scientific and emergency scenarios, and they are being employed in a large number of applications, especially in hostile environments. They can operate autonomously in both indoor and outdoor applications, including search and rescue, manufacturing, forest fire tracking and remote sensing. In both environments, precise localization plays a critical role in achieving high-performance flight and interacting with surrounding objects. However, in indoor areas with degraded or denied Global Navigation Satellite System (GNSS) coverage, it becomes challenging to control a UAV autonomously, especially where obstacles are unidentified. A large number of techniques based on various technologies have been proposed to overcome these limitations. This paper compares the existing solutions and technologies available for this purpose, highlighting their strengths and limitations. Further, a summary of the current research status, with unresolved issues and opportunities, is provided to offer research directions to researchers with similar interests.
Optical Navigation Sensor for Runway Relative Positioning of Aircraft during Final Approach
Precise navigation is often performed by fusing data from different sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway-relative navigation during final approach is a special case that requires robust and continuous detection of the runway. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared. The image processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
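The core idea behind stripe-style threshold marker detection can be illustrated with a simplified, hypothetical 1D sketch (the paper's actual method, parameters, and thresholds are not given here): runway threshold markings appear as evenly spaced bright bars against dark tarmac, so thresholding an image row and locating the centers of the bright runs recovers candidate marker positions.

```python
import numpy as np

def find_stripe_centers(row, intensity_threshold=200):
    """Locate the horizontal centers of bright stripes in one image row.

    A hypothetical 1D simplification of threshold-marker detection:
    threshold the row, then return the center of each bright run.
    """
    bright = row > intensity_threshold
    # Rising (+1) and falling (-1) edges of the bright runs.
    edges = np.diff(bright.astype(np.int8))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if bright[0]:
        starts = np.insert(starts, 0, 0)
    if bright[-1]:
        ends = np.append(ends, len(row))
    return [(s + e - 1) / 2.0 for s, e in zip(starts, ends)]

# Synthetic row: dark tarmac with three bright painted stripes.
row = np.full(100, 50, dtype=np.uint8)
for start in (10, 40, 70):
    row[start:start + 10] = 255
print(find_stripe_centers(row))  # [14.5, 44.5, 74.5]
```

A real detector would additionally check that the detected runs have the spacing and count expected of a threshold marking before accepting them.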
Definition of the 2005 flight deck environment
A detailed description is provided of the functional requirements necessary to complete any normal commercial flight or to handle any plausible abnormal situation. This analysis is enhanced with an examination of possible future developments and constraints in the areas of air traffic organization and flight deck technologies (including new devices and procedures) which may influence the design of 2005 flight decks. The study includes a discussion of the importance of a systematic approach to identifying and solving flight deck information management issues, and a description of how the present work can be utilized as part of this approach. While the intent of this study was to investigate issues surrounding information management in 2005-era supersonic commercial transports, this document may be applicable to any research endeavor related to future flight deck system design in either supersonic or subsonic airplane development.
Drone heading calculation indoors
Abstract. The aim of this master's thesis was to study drone flight indoors and to propose a drone-mounted system that enables calculation of the drone's heading. Outdoors, the heading is calculated effectively with the drone's sensors, but their use indoors is limited. Indoor positioning currently offers no solution for drone heading calculation that is both low-cost and reliable. The thesis first describes the differences between the principles of indoor and outdoor drone flight. It then discusses different ways to determine the drone's heading indoors and how they compare with one another. Finally, two heading calculation methods are implemented and tested: one based on using multiple location measurements on the drone, the other on machine vision combined with machine learning. Both methods are affordable and are evaluated to see whether they could enable indoor drone flight. The first method gives promising test results but needs further development before it can always provide a reliable heading. The second method shows poor results in verification.
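The first method, heading from multiple location measurements, can be sketched as a bearing computation between two position fixes (either two tags mounted on the drone, or successive fixes while moving). The function below is an illustrative assumption, not the thesis implementation:

```python
import math

def heading_from_fixes(p_prev, p_curr):
    """Estimate heading (degrees, 0 = +y 'north', increasing clockwise)
    from two indoor-positioning fixes given as (x, y) in meters.

    Sketch of the 'multiple location measurements' idea: the heading is
    taken as the bearing of the displacement vector between the fixes.
    """
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    # atan2(dx, dy) yields a compass-style bearing: 0 deg along +y,
    # 90 deg along +x; the modulo maps it into [0, 360).
    return math.degrees(math.atan2(dx, dy)) % 360.0

print(heading_from_fixes((0.0, 0.0), (1.0, 1.0)))  # 45.0
```

In practice the positioning noise is large relative to the baseline between fixes, which is exactly why the thesis finds the approach needs further development before it can always deliver a reliable heading.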
Accurate navigation applied to landing maneuvers on mobile platforms for unmanned aerial vehicles
Drones are developing quickly worldwide, and in Europe in particular. They represent the future of a high percentage of operations that are currently carried out by manned aviation or satellites. Compared to fixed-wing UAVs, rotary-wing UAVs offer hovering, agile maneuvering and vertical take-off and landing capabilities, which is why they are currently the most widely used aerial robotic platforms.
In operations from ships and boats, the final approach and the landing maneuver are the phases that involve the highest risk, require the highest precision in position and velocity estimation, and demand a high level of robustness. In the framework of the EC-SAFEMOBIL and REAL projects, this thesis is devoted to the development of a guidance and navigation system that allows a rotary-wing UAV (RUAV) to complete an autonomous mission from take-off to landing. More specifically, the thesis focuses on new strategies and algorithms that provide sufficiently accurate motion estimation during autonomous landing on mobile platforms without using the GNSS constellations.
On the one hand, for the flight phases that do not require centimeter-level accuracy, a new navigation approach is proposed that extends current estimation techniques by using the EGNOS integrity information in the sensor fusion filter. This approach improves both the accuracy of the estimation solution and the safety of the overall system, and also gives the remote pilot a more complete awareness of the operation status while flying the UAV.
On the other hand, for those flight phases where accuracy is a critical safety factor, this thesis presents a precise navigation system that allows rotary-wing UAVs to approach and land safely on moving platforms, without using GNSS at any stage of the landing maneuver, with centimeter-level accuracy and a high level of robustness. This system implements a novel concept in which the relative position and velocity between the aerial vehicle and the landing platform are calculated either from a radio-beacon system installed on both the UAV and the landing platform, or from the angles of a cable that physically connects the two. The cable brings several extra benefits: it increases the precision of the UAV altitude control, helps center the UAV right on top of the expected landing position, and increases the stability of the UAV just after it contacts the landing platform.
The proposed guidance and navigation systems have been implemented in an unmanned rotorcraft, and a large number of tests have been carried out under different conditions to measure the accuracy and robustness of the proposed solution. Results showed that the developed system allows landing with centimeter accuracy using only local sensors, and that the UAV is able to follow a mobile landing platform along multiple trajectories at different velocities.
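The cable-angle idea can be sketched geometrically: if the cable is taut, its measured length, azimuth, and elevation at the platform-side attachment point fix the UAV's relative position. This is a hedged illustration under a straight-cable assumption (cable sag, which a real system must model or bound, is ignored):

```python
import math

def relative_position_from_cable(length_m, azimuth_deg, elevation_deg):
    """Recover the UAV position relative to the landing platform from
    the taut cable's length and its azimuth/elevation angles measured
    at the platform-side attachment point.

    Illustrative sketch only; assumes a straight (taut) cable.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = length_m * math.cos(el)  # ground-plane projection
    x = horizontal * math.cos(az)         # platform-frame x offset
    y = horizontal * math.sin(az)         # platform-frame y offset
    z = length_m * math.sin(el)           # height above the platform
    return x, y, z

# UAV on a 10 m cable pointing straight up: offset is (0, 0, 10).
x, y, z = relative_position_from_cable(10.0, 0.0, 90.0)
print((round(x, 6), round(y, 6), round(z, 6)))  # (0.0, 0.0, 10.0)
```

Because the angles are measured locally at both ends of the cable, the estimate is independent of GNSS, which is the property the thesis exploits.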
A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques
Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and developing a perception of the environment through image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions for aerial surveillance or points of interest. Computer vision systems generally operate in three steps: data acquisition in numerical form, data processing, and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After acquisition, the embedded computer processes the data by executing algorithms for measurement (variables, indices and coefficients), detection (patterns, objects or areas) or monitoring (people, vehicles or animals). The processed data are analyzed and then converted into decision commands that serve as control inputs for the autonomous robotic system. To integrate computer vision systems with the different UAV platforms, this work proposes a framework for mission and guidance control of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting commands exchanged between flight controllers and computer vision algorithms. As a case study, two algorithms intended for precision agriculture were developed. The first algorithm calculates a reflectance coefficient used to perform punctual, self-regulated and efficient application of agrochemicals. The second identifies crop lines in order to guide the UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results in the implementation on embedded hardware.
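The reflectance-driven spraying idea can be sketched as a per-pixel index threshold. The thesis's specific coefficient is not reproduced here; as a stand-in, this hypothetical sketch uses the well-known NDVI, (NIR - RED) / (NIR + RED), and sprays only where the index indicates stressed vegetation:

```python
import numpy as np

def spray_mask(nir, red, threshold=0.4):
    """Illustrative sketch of index-driven agrochemical application.

    NDVI is used as a stand-in for the thesis's (unspecified)
    reflectance coefficient; `threshold` is a made-up vigor cutoff.
    Returns True where a pixel should receive agrochemicals.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # Guard against division by zero on dark pixels.
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)
    return ndvi < threshold  # low index -> stressed crop -> spray

nir = np.array([[0.8, 0.5], [0.3, 0.9]])
red = np.array([[0.1, 0.4], [0.3, 0.1]])
print(spray_mask(nir, red))  # sprays only the two low-vigor pixels
```

Computing the mask per image region rather than per field is what makes the application "punctual and self-regulated": spray volume follows the measured crop condition.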
Artificial Intelligence Applications for Drones Navigation in GPS-denied or degraded Environments
The abstract is provided in the attachment.
Autonomous High-Precision Landing on an Unmanned Surface Vehicle
The main goal of this thesis is the development of an autonomous high-precision system for landing a UAV on an autonomous boat. In this dissertation, a collaborative method for the autonomous landing of Multi Rotor Vertical Takeoff and Landing (MR-VTOL) Unmanned Aerial Vehicles (UAVs) is presented. The majority of common UAV autonomous landing systems adopt an approach in which the UAV scans the landing zone for a predetermined pattern, establishes relative positions, and uses those positions to execute the landing. These techniques have shortcomings, such as the extensive processing carried out by the UAV itself, which requires a lot of computational power. A further issue is that most of these techniques only work while the UAV is already flying at a low altitude, since the pattern's elements must be plainly visible to the UAV's camera. An RGB camera positioned in the landing zone and pointed up at the sky is the foundation of the methodology described throughout this dissertation. Because the sky is a very static and homogeneous environment, Convolutional Neural Network and Inverse Kinematics approaches can be used to isolate and analyse the distinctive motion patterns the UAV presents. Following real-time visual analysis, a terrestrial or maritime robotic system can transmit orders to the UAV.
The ultimate result is a model-free technique, i.e. one that is not based on established patterns, that can help the UAV perform its landing manoeuvre. The method is trustworthy enough to be used independently, or in conjunction with more established techniques to create a more robust system. According to experimental simulation findings derived from a dataset comprising three different films, the object detection neural network approach was able to detect the UAV in 91.57% of the assessed frames with a tracking error under 8%. A high-level relative position control system was also created that makes use of the idea of an approach zone to the helipad. Every potential three-dimensional point within the zone corresponds to a UAV velocity command with a certain orientation and magnitude. During testing in a simulated setting, the control system worked flawlessly, conducting the UAV's landing to within 6 cm of the target.
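The approach-zone mapping described in the abstract, where every 3D point corresponds to a velocity command with a certain orientation and magnitude, can be sketched as a saturated proportional law. The gain and speed limit below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

def approach_velocity(uav_pos, helipad_pos, gain=0.5, v_max=1.5):
    """Map a 3-D UAV position inside the approach zone to a velocity
    command oriented toward the helipad.

    Hedged sketch: magnitude grows with distance (proportional term)
    but is saturated at v_max; gain and v_max are made-up values.
    """
    error = np.asarray(helipad_pos, float) - np.asarray(uav_pos, float)
    v = gain * error
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed  # saturate the magnitude, keep direction
    return v

# UAV 4 m east and 2 m above the helipad: command points down-and-west,
# clipped to the 1.5 m/s speed limit.
print(approach_velocity([4.0, 0.0, 2.0], [0.0, 0.0, 0.0]))
```

Saturating the magnitude while preserving direction keeps the approach smooth far from the helipad and naturally slows the UAV as the error shrinks near touchdown.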
- …