
    Autonomous High-Precision Landing on an Unmanned Surface Vehicle

    The main goal of this thesis is the development of an autonomous high-precision system for landing a UAV on an autonomous boat. In this dissertation, a collaborative method for the autonomous landing of Multi-Rotor Vertical Takeoff and Landing (MR-VTOL) Unmanned Aerial Vehicles (UAVs) is presented. Most common UAV autonomous landing systems adopt an approach in which the UAV scans the landing zone for a predetermined pattern, establishes relative positions, and uses those positions to execute the landing. These techniques have shortcomings, such as the extensive processing carried out by the UAV itself, which demands considerable computational power. A further issue is that most of these techniques only work once the UAV is already flying at low altitude, since the pattern's elements must be plainly visible to the UAV's camera. The methodology described throughout this dissertation is instead built around an RGB camera positioned in the landing zone and pointed up at the sky. Because the sky is a largely static and homogeneous background, Convolutional Neural Networks and Inverse Kinematics approaches can be used to isolate and analyse the distinctive motion patterns the UAV presents. Following real-time visual analysis, a terrestrial or maritime robotic system can transmit commands to the UAV. The result is a model-free technique, i.e. one not based on predefined patterns, that can guide the UAV through its landing manoeuvre. The method is reliable enough to be used on its own or in conjunction with more established techniques to build a more robust system. According to experimental simulation results derived from a dataset comprising three different videos, the object-detection neural network was able to detect the UAV in 91.57% of the assessed frames with a tracking error under 8%.
    A high-level relative position control system was also created, built around the idea of an approach zone to the helipad. Every three-dimensional point within the zone corresponds to a UAV velocity command with a particular orientation and magnitude. In tests in a simulated setting, the control system reliably guided the UAV's landing to within 6 cm of the target.
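The approach-zone idea above, where each 3D point maps to a velocity command whose direction and magnitude depend on the UAV's position relative to the helipad, can be sketched as follows. This is a minimal illustration with made-up parameter names (`max_speed`, `zone_radius`), not the controller from the dissertation:

```python
import numpy as np

def approach_zone_velocity(uav_pos, helipad_pos, max_speed=1.0, zone_radius=5.0):
    """Map a UAV position in the approach zone to a velocity command.

    The command points from the UAV toward the helipad, and its
    magnitude tapers with distance so the UAV slows near the target.
    """
    offset = np.asarray(helipad_pos, dtype=float) - np.asarray(uav_pos, dtype=float)
    dist = np.linalg.norm(offset)
    if dist < 1e-6:
        return np.zeros(3)  # already on target: command zero velocity
    speed = max_speed * min(dist / zone_radius, 1.0)
    return speed * offset / dist
```

A real implementation would add a yaw/orientation component and saturation limits tuned to the vehicle.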

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that advanced in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes descriptions of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
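The vehicle-in-the-loop pattern described above, real dynamics measured in motio driving synthetic sensors rendered in silico, can be sketched as a simple loop. All names here (`MotionCapture`, `render_camera`) are hypothetical stand-ins, not the FlightGoggles API:

```python
class MotionCapture:
    """Hypothetical stand-in for a motion-capture feed of the real vehicle."""
    def __init__(self, poses):
        self._poses = iter(poses)

    def latest_pose(self):
        return next(self._poses)

def render_camera(pose):
    """Stand-in for the photorealistic renderer: tags the pose it was given."""
    return {"image_at": pose}

def vehicle_in_the_loop(mocap, steps):
    """Each tick: read the real vehicle's pose, render synthetic sensors there."""
    frames = []
    for _ in range(steps):
        pose = mocap.latest_pose()          # measured on real hardware
        frames.append(render_camera(pose))  # generated in silico
    return frames
```

The real vehicle supplies the hard-to-model dynamics; the simulator only has to produce images at the measured poses.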

    The Phoenix Drone: An Open-Source Dual-Rotor Tail-Sitter Platform for Research and Education

    In this paper, we introduce the Phoenix drone: the first completely open-source tail-sitter micro aerial vehicle (MAV) platform. The vehicle has a highly versatile, dual-rotor design and is engineered to be low-cost and easily extensible/modifiable. Our open-source release includes all of the design documents, software resources, and simulation tools needed to build and fly a high-performance tail-sitter for research and educational purposes. The drone has been developed for precision flight with a high degree of control authority. Our design methodology included extensive testing and characterization of the aerodynamic properties of the vehicle. The platform incorporates many off-the-shelf components and 3D-printed parts in order to keep the cost down. Nonetheless, the paper includes results from flight trials which demonstrate that the vehicle is capable of very stable hovering and accurate trajectory tracking. Our hope is that the open-source Phoenix reference design will be useful to both researchers and educators. In particular, the details in this paper and the available open-source materials should enable learners to gain an understanding of aerodynamics, flight control, state estimation, software design, and simulation, while experimenting with a unique aerial robot.
    Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'19), Montreal, Canada, May 20-24, 2019.

    Flightmare: A Flexible Quadrotor Simulator

    Currently available quadrotor simulators have a rigid and highly specialized structure: they are either really fast, physically accurate, or photo-realistic. In this work, we propose a paradigm shift in the development of simulators: moving the trade-off between accuracy and speed from the developers to the end users. We use this design idea to develop a novel modular quadrotor simulator: Flightmare. Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. These two components are totally decoupled and can run independently of each other. This makes our simulator extremely fast: rendering achieves speeds of up to 230 Hz, while physics simulation runs at up to 200,000 Hz. In addition, Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. We demonstrate the flexibility of Flightmare by using it for two completely different robotic tasks: learning a sensorimotor control policy for a quadrotor and path planning in a complex 3D environment.
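The decoupling of physics and rendering described above can be illustrated with a toy loop in which physics steps at a fixed high rate while frames are emitted at a lower, independent rate. The dynamics and function names here are placeholders, not Flightmare code:

```python
import math

def simulate(physics_hz=200_000, render_hz=230, duration=0.01):
    """Step toy dynamics at physics_hz; emit render frames at render_hz."""
    dt = 1.0 / physics_hz
    t, state, frames = 0.0, 0.0, 0
    next_render = 0.0
    while t < duration:
        state += dt * math.sin(t)  # placeholder dynamics update
        t += dt
        if t >= next_render:       # renderer samples the state; it never blocks physics
            frames += 1
            next_render += 1.0 / render_hz
    return frames, state
```

Because the two loops share only a state snapshot, either rate can be changed without touching the other component, which is the trade-off the authors hand to the end user.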

    Unmanned aerial vehicle abstraction layer: An abstraction layer to operate unmanned aerial vehicles

    This article presents a software layer to abstract users of unmanned aerial vehicles from the specific hardware of the platform and the autopilot interfaces. The main objective of our unmanned aerial vehicle abstraction layer (UAL) is to simplify the development and testing of higher-level algorithms in aerial robotics by standardizing and simplifying the interfaces with the unmanned aerial vehicles. UAL supports operation with PX4 and DJI autopilots (among others), which are current leading manufacturers. Moreover, UAL can work seamlessly with simulated or real platforms, and it provides calls to issue standard commands such as taking off, landing, or pose and velocity control. Even though UAL is under continuous development, a stable version is available for public use. We showcase the use of UAL with a set of applications coming from several European research projects, where different academic and industrial entities have adopted UAL as a common development framework.
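The abstraction idea, user code issuing standard commands while a backend handles autopilot specifics, can be sketched like this. The real UAL is a C++/ROS library; these class names and methods are illustrative only:

```python
from abc import ABC, abstractmethod

class AutopilotBackend(ABC):
    """Hypothetical backend interface; the real UAL wraps PX4, DJI, and others."""
    @abstractmethod
    def take_off(self, height): ...
    @abstractmethod
    def land(self): ...
    @abstractmethod
    def set_velocity(self, vx, vy, vz): ...

class SimulatedBackend(AutopilotBackend):
    """Backend that only logs commands, standing in for a simulator or real UAV."""
    def __init__(self):
        self.log = []
    def take_off(self, height):
        self.log.append(("take_off", height))
    def land(self):
        self.log.append(("land",))
    def set_velocity(self, vx, vy, vz):
        self.log.append(("set_velocity", vx, vy, vz))

class UAL:
    """User-facing layer: higher-level code never sees the concrete autopilot."""
    def __init__(self, backend):
        self._backend = backend
    def take_off(self, height=2.0):
        self._backend.take_off(height)
    def land(self):
        self._backend.land()
    def set_velocity(self, vx, vy, vz):
        self._backend.set_velocity(vx, vy, vz)
```

Swapping `SimulatedBackend` for a PX4- or DJI-specific backend would leave the calling code unchanged, which is the point of the layer.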

    UAS Simulator for Modeling, Analysis and Control in Free Flight and Physical Interaction

    This paper presents the ARCAD simulator for the rapid development of Unmanned Aerial Systems (UAS), including underactuated and fully-actuated multirotors, fixed-wing aircraft, and Vertical Take-Off and Landing (VTOL) hybrid vehicles. The simulator is designed to accelerate the modeling and control design of these aircraft. It provides various analyses of the design and operation, such as wrench-set computation, controller response, and flight optimization. In addition to simulating free flight, it can simulate the physical interaction of the aircraft with its environment. The simulator is written in MATLAB to allow rapid prototyping, and it can generate graphical visualizations of the aircraft and the environment in addition to the desired plots. It has been used to develop several real-world multirotor and VTOL applications. The source code is available at https://github.com/keipour/aircraft-simulator-matlab.
    Comment: In proceedings of the 2023 AIAA SciTech Forum, Session: Air and Space Vehicle Dynamics, Systems, and Environments II.

    HILS based Waypoint Simulation for Fixed Wing Unmanned Aerial Vehicle (UAV)

    A hardware-in-the-loop simulation (HILS)-based waypoint simulation for fixed-wing unmanned aerial vehicles is proposed in this paper. It uses the open-source arducopter as the flight controller, together with a mission planner and the X-Plane simulator. Waypoint simulation is carried out in the flight controller and executed in the X-Plane simulator through the mission planner. A fixed-wing unmanned aerial vehicle with an inverted-T tail configuration has been chosen to study and validate waypoint flight control algorithms. Data transmission between the mission planner and the flight controller uses a serial protocol, whereas data exchange between X-Plane and the mission planner uses the User Datagram Protocol (UDP). The APM mission planner serves as the interface for exchanging data between the flight controller and the user. User inputs and flight gain parameters, for both the inner and outer loops, can be modified with the help of the mission planner. In addition, the mission planner provides a visual representation of flight data and the navigation algorithm.
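The UDP exchange between the simulator and the mission planner can be illustrated with a minimal loopback example; the three-float payload below is a toy layout, not X-Plane's actual datagram format:

```python
import socket
import struct

def send_state(sock, addr, roll, pitch, yaw):
    """Pack a toy attitude message (three little-endian floats) and send it."""
    sock.sendto(struct.pack("<3f", roll, pitch, yaw), addr)

def recv_state(sock):
    """Receive one datagram and unpack it into (roll, pitch, yaw)."""
    data, _ = sock.recvfrom(1024)
    return struct.unpack("<3f", data)
```

In a real setup, the receiver binds to the port the simulator is configured to send to, and the payload follows the simulator's documented packet layout.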