1,291 research outputs found

    TOWARDS AUTONOMOUS VERTICAL LANDING ON SHIP-DECKS USING COMPUTER VISION

    The objective of this dissertation is to develop and demonstrate autonomous ship-board landing with computer vision. The problem is hard primarily due to the unpredictable, stochastic nature of deck motion. The work involves a fundamental understanding of how vision works, what is needed to implement it, how it interacts with aircraft controls, the necessary and sufficient hardware and software, how it differs from human vision, its limits, and finally the avenues of growth in the context of aircraft landing. The ship-deck motion dataset is provided by the U.S. Navy. This data is analyzed to gain fundamental understanding and is then used to replicate stochastic deck motion in a laboratory setting on a six-degrees-of-freedom motion platform, also called a Stewart platform. The method uses a shaping filter derived from the dataset to excite the platform. An autonomous quadrotor UAV is designed and fabricated for experimental testing of vision-based landing methods. The entire structure, avionics architecture, and flight controls for the aircraft are developed in-house, providing the flexibility and fundamental understanding needed for this research. A fiducial-based vision system is first designed for detection and tracking of the ship deck. It is then used to design a tracking controller with the best possible bandwidth to track the deck with minimum error. Systematic experiments are conducted with static, sinusoidal, and stochastic motions to quantify the tracking performance. A feature-based vision system is designed next. Simple experiments are used to evaluate, quantitatively and qualitatively, the superior robustness of feature-based vision under various degraded visual conditions: (1) partial occlusion, (2) illumination variation, (3) glare, and (4) water distortion. The weight and power penalties for using feature-based vision are also determined.
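As a minimal sketch of the shaping-filter idea mentioned above (all coefficients here are illustrative assumptions, not values identified from the Navy dataset): white noise driven through a lightly damped second-order filter yields a signal whose spectrum peaks near a chosen dominant deck-motion frequency, which can then command the Stewart platform.

```python
import numpy as np

# Sketch of a shaping filter: pass white noise through a discretized
# second-order resonator so the output spectrum peaks near an assumed
# dominant deck-heave frequency. Parameters below are illustrative only.

rng = np.random.default_rng(4)
dt = 0.02                 # 50 Hz command rate (assumed)
f0, zeta = 0.1, 0.2       # dominant heave frequency (Hz) and damping (assumed)

w0 = 2.0 * np.pi * f0
# Finite-difference discretization of x'' + 2*zeta*w0*x' + w0^2*x = w
a1 = 2.0 - 2.0 * zeta * w0 * dt - (w0 * dt) ** 2
a2 = -(1.0 - 2.0 * zeta * w0 * dt)

x = np.zeros(10000)                       # 200 s of synthetic 'heave'
w = rng.normal(0.0, 1.0, x.size)          # white-noise excitation
for k in range(2, x.size):
    x[k] = a1 * x[k - 1] + a2 * x[k - 2] + (dt ** 2) * w[k]

print(x.std())   # stochastic heave command for the motion platform
```

In practice the filter order and coefficients would be fitted to the measured deck-motion spectrum rather than assumed as here.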
The results show that it is possible to land autonomously on a ship deck using computer vision alone. An autonomous aircraft can be constructed with only an IMU and visual odometry software running on a stereo camera. The aircraft then needs only a monocular, global-shutter, high-frame-rate camera as an extra sensor to detect the ship deck and estimate its relative position. The relative velocity, however, needs to be derived by running a Kalman filter on the position signal. For the filter, knowledge of the disturbance/motion spectrum is not needed; a white-noise disturbance model is sufficient. For control, a minimum bandwidth of 0.15 Hz is required. For vision, a fiducial is not needed; a feature-rich landing area is all that is required. The limits of the algorithm are set by occlusion (80% tolerable), illumination (20,000 lux to 0.01 lux), angle of landing (up to 45 degrees), the 2D nature of features, and motion blur. Future research should extend the capability to 3D features and the use of event-based cameras. Feature-based vision is more versatile and human-like than fiducial-based vision, but at the cost of 20 times higher computing power, which is increasingly available with modern processors. The goal is not to imitate nature but to derive inspiration from it and overcome its limitations. Feature-based landing opens a window towards emulating the best of human training and cognition, without the burden of latency, fatigue, and divided attention.
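The velocity-from-position idea above can be sketched with a constant-velocity Kalman filter driven by a white-noise acceleration model; the sampling rate and noise levels below are assumptions for illustration, not the dissertation's tuned values.

```python
import numpy as np

# Minimal sketch (not the dissertation's implementation): a constant-velocity
# Kalman filter recovers relative deck velocity from noisy position
# measurements alone, using a white-noise disturbance (acceleration) model.

dt = 0.02                      # 50 Hz vision update rate (assumed)
F = np.array([[1.0, dt],       # state: [position, velocity]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # only position is measured
q = 0.01                       # white-noise acceleration intensity (assumed)
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])
R = np.array([[0.01]])         # position measurement noise variance (assumed)

x = np.zeros((2, 1))           # initial state estimate
P = np.eye(2)

def kf_step(x, P, z):
    """One predict/update cycle; z is a scalar position measurement."""
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)   # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed a deck drifting at a constant 0.5 m/s; the velocity estimate converges
# even though velocity is never measured directly.
rng = np.random.default_rng(0)
for k in range(500):
    z = 0.5 * k * dt + rng.normal(0.0, 0.1)
    x, P = kf_step(x, P, z)
print(float(x[1, 0]))   # estimated relative velocity, close to 0.5 m/s
```

Note that the disturbance spectrum never enters the filter: the white-noise model alone is enough for the velocity estimate to track.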

    Intelligent Vision-based Autonomous Ship Landing of VTOL UAVs

    The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without utilizing GPS signals. The central idea is to automate the Navy helicopter ship-landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and demonstrates robust tracking performance even in the presence of uncertainties. The developed autonomous ship-landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship-deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy.
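The classical-CV stage described above can be illustrated with a pinhole-camera sketch: given the detected pixel endpoints of a horizon bar of known physical length, similar triangles give range and offsets. The focal length, principal point, and bar length below are assumed values, not the paper's calibration.

```python
# Minimal pinhole-camera sketch (assumed values, not the paper's pipeline):
# from the detected pixel endpoints of a horizon bar of known length, recover
# the camera's range and lateral/vertical offset relative to the bar.

FX = 800.0                # focal length in pixels (assumed calibration)
CX, CY = 640.0, 360.0     # principal point for a 1280x720 image (assumed)
BAR_LENGTH_M = 1.2        # physical length of the horizon bar (assumed)

def relative_position(u1, v1, u2, v2):
    """Bar endpoints (u1,v1),(u2,v2) in pixels -> (range, x, y) in metres.
    Assumes the bar is roughly fronto-parallel to the image plane."""
    pixel_len = ((u2 - u1) ** 2 + (v2 - v1) ** 2) ** 0.5
    z = FX * BAR_LENGTH_M / pixel_len           # similar triangles: range
    uc, vc = (u1 + u2) / 2.0, (v1 + v2) / 2.0   # bar midpoint in the image
    x = (uc - CX) * z / FX                      # lateral offset
    y = (vc - CY) * z / FX                      # vertical offset
    return z, x, y

z, x, y = relative_position(540.0, 360.0, 740.0, 360.0)
print(z, x, y)   # bar spanning 200 px at image centre -> 4.8 m dead ahead
```

A real pipeline would add lens-distortion correction and use the bar's orientation in the image to recover relative attitude as well.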

    Robust Reinforcement Learning Algorithm for Vision-based Ship Landing of UAVs

    This paper addresses the problem of developing an algorithm for autonomous ship landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs), using only a monocular camera on the UAV for tracking and localization. Ship landing is a challenging task due to the small landing space, six-degrees-of-freedom ship-deck motion, limited visual references for localization, and adversarial environmental conditions such as wind gusts. We first develop a computer vision algorithm that estimates the relative position of the UAV with respect to a horizon reference bar on the landing platform, using the image stream from a monocular camera on the UAV. Our approach is motivated by the actual ship-landing procedure followed by Navy helicopter pilots, who track the horizon reference bar as a visual cue. We then develop a robust reinforcement learning (RL) algorithm for controlling the UAV towards the landing platform even in the presence of adversarial environmental conditions such as wind gusts. We demonstrate the superior performance of our algorithm compared to a benchmark nonlinear PID control approach, both in simulation experiments using the Gazebo environment and in a real-world setting using a Parrot ANAFI quad-rotor and a sub-scale ship platform undergoing six-degrees-of-freedom (DOF) deck motion.
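The paper's robust RL controller is far richer than any toy, but the generic value update such methods build on can be shown on a tiny 1-D deck-tracking grid. Everything below (states, actions, rewards, gains) is a hypothetical illustration of tabular Q-learning, not the paper's algorithm.

```python
import numpy as np

# Toy illustration of the Q-learning update underlying RL-based landing
# controllers (a generic sketch, not the paper's robust RL algorithm).
# State: discretized lateral offset from deck centre, bins 0..4 (centre = 2).
# Actions: 0 = move left, 1 = hover, 2 = move right.

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 5, 3
Q = np.zeros((N_STATES, N_ACTIONS))
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(s, a):
    """Deterministic toy environment: action moves the offset by -1/0/+1."""
    s2 = int(np.clip(s + (a - 1), 0, N_STATES - 1))
    reward = -abs(s2 - 2)           # best reward (0) at the centre bin
    return s2, reward

for episode in range(500):
    s = int(rng.integers(N_STATES))
    for t in range(20):
        # epsilon-greedy action selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning temporal-difference update
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy)   # move right left of centre, hover at centre, move left right of it
```

The paper's contribution is making such a learned policy robust to disturbances like wind gusts, which this deterministic toy does not model.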

    Vision-Based Autonomous Landing of a Quadrotor on the Perturbed Deck of an Unmanned Surface Vehicle

    Autonomous landing on the deck of an unmanned surface vehicle (USV) is still a major challenge for unmanned aerial vehicles (UAVs). In this paper, a fiducial marker is placed on the platform to facilitate the task, since its six-degrees-of-freedom relative pose can then be retrieved easily. To compensate for interruptions in the marker's observations, an extended Kalman filter (EKF) estimates the current USV position with reference to the last known position. Validation experiments have been performed in a simulated environment under various marine conditions. The results confirm that the EKF provides estimates accurate enough to direct the UAV into the proximity of the autonomous vessel, such that the marker becomes visible again. Because only odometry and inertial measurements are used for the estimation, this method is applicable even under adverse weather conditions and in the absence of a global positioning system.
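The occlusion-bridging idea above amounts to running the filter's predict step alone while no marker measurements arrive. A minimal linear example (the paper's EKF state layout and noise values are not reproduced; everything here is an assumed toy):

```python
import numpy as np

# Sketch of predict-only dead reckoning while the fiducial marker is out of
# view: the state is propagated with a motion model and the covariance grows,
# but the estimate stays useful for steering back to where the marker is
# visible. A minimal planar constant-velocity model (assumed, illustrative).

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state: [x, y, vx, vy]
Q = 0.01 * np.eye(4)                        # process noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])          # last marker-based fix: pos + vel
P = 0.1 * np.eye(4)

# Marker occluded: predict-only for 2 s (20 steps, no measurement updates).
for _ in range(20):
    x = F @ x                 # propagate the state estimate
    P = F @ P @ F.T + Q       # uncertainty grows without corrections

print(x[:2], np.trace(P))     # predicted USV position after 2 s of no sightings
```

Once the marker is reacquired, normal EKF update steps shrink the covariance back down.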

    System-Level Analysis of Autonomous UAV Landing Sensitivities in GPS-Denied Environments

    This paper presents an analysis of the navigation accuracy of a fixed-wing Unmanned Aerial Vehicle (UAV) landing on an aircraft carrier. The UAV is equipped with typical sensors used in landing scenarios. Data from the Office of Naval Research is used to accurately capture the behavior of the aircraft carrier. Through simulation, the position and orientation of both the UAV and the carrier are estimated. The quality of the UAV's sensors is varied to determine the sensitivity of these estimates to sensor accuracy. The system's sensitivity to GPS signals and to visual markers on the carrier is also analyzed. These results allow designers to choose the most economical sensors for landing systems that still provide a safe and accurate landing.
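The sensitivity sweep described above can be sketched as a Monte Carlo loop over sensor grades. The one-line error model below (a fixed control residual plus sensing noise) is a stand-in assumption, not the paper's simulation environment.

```python
import numpy as np

# Minimal Monte Carlo sketch of a sensor sensitivity study: sweep the
# position-sensor noise level and observe the touchdown-error statistics.
# The error model is an illustrative assumption, not the paper's simulator.

rng = np.random.default_rng(2)
CONTROL_RESIDUAL_STD = 0.2   # landing error with a perfect sensor (assumed, m)

def touchdown_error_std(sensor_sigma, n_trials=5000):
    """Std of the touchdown error when guidance acts on noisy position fixes."""
    sensing = rng.normal(0.0, sensor_sigma, n_trials)   # sensor error samples
    control = rng.normal(0.0, CONTROL_RESIDUAL_STD, n_trials)
    return (sensing + control).std()

# Sweep three candidate sensor grades (noise stds assumed, in metres).
results = {s: touchdown_error_std(s) for s in (0.1, 0.5, 1.0)}
for sigma, err in results.items():
    print(f"sensor sigma {sigma:.1f} m -> touchdown error std {err:.2f} m")
```

Plotting such curves against sensor cost is what lets a designer pick the cheapest sensor that still meets the landing-accuracy requirement.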

    Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning

    This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, commonly known as a drone) into a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS consists in exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as a landing platform to alleviate the need for the UAV to be airborne for long periods of time, while the latter can increase overall environmental awareness thanks to its ability to cover large portions of the surrounding environment with one or more onboard cameras. There are numerous potential applications for such a system, including search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few. The theory developed is of a general nature. The landing manoeuvre is accomplished mainly by identifying, through computer vision techniques, a fiducial marker placed on a flat surface serving as the landing platform. The raison d'être of the thesis is to propose a new solution for autonomous landing that relies solely on onboard sensors and requires minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the drone in flight. When tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classical control theory used in this approach suggested the need for a new solution that exploits the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks.
The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control, for example in playing the Atari video-game suite, represented a fascinating yet challenging new way to frame and address the landing problem. Novel architectures were therefore designed to approximate the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high level of accuracy and robustness. Both approaches were implemented on a simulated test-bed based on the Gazebo simulator and the model of the Parrot AR-Drone. The DRL-based solution was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these methods are feasible and practicable, not only in outdoor marine scenarios but also in indoor ones.
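The observation-to-action mapping described above can be sketched in miniature: a Q-function scores a small set of high-level navigation actions given a low-resolution grey-scale frame, and an epsilon-greedy rule picks one. The linear stand-in below is untrained and illustrative only; the thesis uses trained deep networks.

```python
import numpy as np

# Sketch of mapping a low-resolution grey-scale frame to high-level navigation
# actions via an action-value function. The linear, randomly initialised
# "network" is a stand-in assumption, not the thesis's trained DQN.

rng = np.random.default_rng(3)
ACTIONS = ["forward", "backward", "left", "right", "descend", "hover"]

H, W = 16, 16                                        # observation size (assumed)
W_q = rng.normal(0.0, 0.01, (len(ACTIONS), H * W))   # stand-in Q "network"

def select_action(frame, eps=0.1):
    """Epsilon-greedy over Q(s, a) = W_q @ flatten(frame)."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))       # explore
    q_values = W_q @ (frame.ravel() / 255.0)         # score each action
    return int(np.argmax(q_values))                  # exploit the best one

frame = rng.integers(0, 256, (H, W))                 # fake camera frame
a = select_action(frame)
print(ACTIONS[a])
```

During training, the weights would be updated from the temporal-difference error so that the greedy action steers the drone toward the landing platform.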

    Autonomous High-Precision Landing on an Unmanned Surface Vehicle

    The main goal of this thesis is the development of an autonomous high-precision system for landing a UAV on an autonomous boat. In this dissertation, a collaborative method for the autonomous landing of Multi-Rotor Vertical Takeoff and Landing (MR-VTOL) Unmanned Aerial Vehicles (UAVs) is presented. The majority of common UAV autonomous landing systems adopt an approach in which the UAV scans the landing zone for a predetermined pattern, establishes relative positions, and uses those positions to execute the landing. These techniques have shortcomings, such as the extensive processing carried out by the UAV itself, which requires considerable computational power. A further issue is that most of these techniques only work while the UAV is already flying at low altitude, since the pattern's elements must be plainly visible to the UAV's camera. An RGB camera positioned in the landing zone and pointed up at the sky is the foundation of the methodology described in this dissertation. Because the sky is a very static and homogeneous background, Convolutional Neural Networks and Inverse Kinematics approaches can be used to isolate and analyse the distinctive motion patterns the UAV presents. Following real-time visual analysis, a terrestrial or maritime robotic system can transmit commands to the UAV. The result is a model-free technique, i.e., one not based on predetermined patterns, that can help the UAV perform its landing manoeuvre. The method is reliable enough to be used independently or in conjunction with more established techniques to create a more robust system. According to experimental simulation findings derived from a dataset comprising three different videos, the object-detection neural network was able to detect the UAV in 91.57% of the assessed frames with a tracking error under 8%.
A high-level relative-position control system was also created, making use of the idea of an approach zone to the helipad. Every potential three-dimensional point within the zone corresponds to a UAV velocity command with a certain orientation and magnitude. The control system worked flawlessly, conducting the UAV's landing to within 6 cm of the target during testing in a simulated setting.
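The approach-zone mapping above (each 3-D point yields a velocity command with a given direction and magnitude) can be sketched as a saturated proportional law toward the helipad; the gain and speed limit below are illustrative assumptions, not the thesis's controller.

```python
import numpy as np

# Sketch of the approach-zone idea: every 3-D point in the zone above the
# helipad maps to a velocity command aimed at the pad, with magnitude capped
# far away and shrinking as the UAV closes in. Gain and limit are assumed.

HELIPAD = np.array([0.0, 0.0, 0.0])   # pad position in the local frame
K = 0.5                               # proportional gain (assumed)
V_MAX = 2.0                           # speed limit in m/s (assumed)

def velocity_command(uav_pos):
    """Map a UAV position in the approach zone to a velocity command."""
    error = HELIPAD - np.asarray(uav_pos, dtype=float)
    v = K * error                     # point the command toward the pad
    speed = np.linalg.norm(v)
    if speed > V_MAX:                 # saturate when far from the pad
        v *= V_MAX / speed
    return v

v = velocity_command([3.0, -4.0, 10.0])
print(v, np.linalg.norm(v))   # saturated at 2 m/s, aimed at the pad
```

Close to the pad the command magnitude falls below the cap, giving the smooth deceleration needed for a centimetre-level touchdown.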