6 research outputs found

    System-Level Analysis of Autonomous UAV Landing Sensitivities in GPS-Denied Environments

    This paper presents an analysis of the navigation accuracy of a fixed-wing Unmanned Aerial Vehicle (UAV) landing on an aircraft carrier. The UAV is equipped with sensors typically used in landing scenarios. Data from the Office of Naval Research is used to accurately capture the behavior of the aircraft carrier. Through simulation, the position and orientation of both the UAV and the carrier are estimated. The quality of the UAV’s sensors is varied to determine the sensitivity of these estimates to sensor accuracy. The system’s sensitivity to GPS signals and to visual markers on the carrier is also analyzed. These results allow designers to choose the most economical sensors for landing systems that provide a safe and accurate landing.
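
    As an illustration of the kind of sensitivity study this abstract describes, the sketch below sweeps assumed GPS and visual-marker noise levels through a toy Monte Carlo landing model. The function names, noise grades, and error model are placeholders for illustration, not the paper's simulation.

        import numpy as np

        def simulate_landing_error(gps_sigma_m, marker_sigma_px, n_trials=500, seed=0):
            # Crude stand-in for the landing simulation: returns the RMS touchdown
            # position error for one combination of sensor noise levels.
            rng = np.random.default_rng(seed)
            gps_err = rng.normal(0.0, gps_sigma_m, n_trials)
            marker_err = rng.normal(0.0, 0.02 * marker_sigma_px, n_trials)  # assumed px-to-metre scale
            return float(np.sqrt(np.mean((gps_err + marker_err) ** 2)))

        for gps_sigma in [0.5, 1.0, 2.0, 5.0]:        # metres, assumed GPS grades
            for marker_sigma in [1.0, 3.0, 10.0]:     # pixels, assumed marker-detection grades
                rms = simulate_landing_error(gps_sigma, marker_sigma)
                print(f"GPS sigma {gps_sigma} m, marker sigma {marker_sigma} px -> RMS error {rms:.2f} m")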

    Visual Servoing Approach for Autonomous UAV Landing on a Moving Vehicle

    We present a method to autonomously land an Unmanned Aerial Vehicle on a moving vehicle with a circular (or elliptical) pattern on the top. A visual servoing controller approaches the ground vehicle using velocity commands calculated directly in image space. The control laws generate velocity commands in all three dimensions, eliminating the need for a separate height controller. The method has shown the ability to approach and land on the moving deck in simulation and in indoor and outdoor environments, and compared to the other available methods it has provided the fastest landing approach. It does not rely on additional external setup, such as RTK, a motion capture system, a ground station, offboard processing, or communication with the vehicle, and it requires only a minimal set of hardware and localization sensors. The videos and source code can be accessed from http://theairlab.org/landing-on-vehicle.
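
    The sketch below shows one generic way such an image-space servoing law can be written: pixel and apparent-size errors of the detected pattern map directly to three velocity commands. The gains, image size, target area, and sign conventions are assumptions for illustration, not the authors' controller.

        import numpy as np

        def velocity_command(pattern_center_px, pattern_area_px2,
                             image_size=(640, 480), target_area_px2=30000.0,
                             k_lateral=0.002, k_descent=1e-5):
            # Pixel error of the detected pattern relative to the image centre.
            ex = pattern_center_px[0] - image_size[0] / 2.0
            ey = pattern_center_px[1] - image_size[1] / 2.0
            # Apparent-size error: a small pattern means the UAV is still far away.
            ea = target_area_px2 - pattern_area_px2
            vx = -k_lateral * ey      # forward/backward to centre the pattern vertically
            vy = -k_lateral * ex      # left/right to centre the pattern horizontally
            vz = -k_descent * ea      # descend while the pattern is smaller than the target size
            return np.array([vx, vy, vz])

        print(velocity_command(pattern_center_px=(400, 300), pattern_area_px2=12000.0))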

    Path Planning and Control of UAV using Machine Learning and Deep Reinforcement Learning Techniques

    Uncrewed Aerial Vehicles (UAVs) are playing an increasingly significant role in modern life. In the past decades, many commercial and scientific communities all over the world have been developing autonomous techniques for UAVs for a broad range of applications, such as forest fire monitoring, parcel delivery, disaster rescue, natural resource exploration, and surveillance. This brings a large number of opportunities and challenges for UAVs to improve their abilities in the directions of path planning, motion control, and fault-tolerant control (FTC). Meanwhile, owing to the powerful decision-making, adaptive learning, and pattern recognition capabilities of machine learning (ML) and deep reinforcement learning (DRL), the use of ML and DRL has been developing rapidly and has achieved major results in a variety of applications. However, there is not much research on ML and DRL in the field of motion control and real-time path planning of UAVs. This thesis focuses on the development of ML and DRL for the path planning, motion control, and FTC of UAVs. A number of contributions pertaining to state space definition, reward function design, and training method improvement are made in this thesis, which improve the effectiveness and efficiency of applying DRL to UAV motion control problems. In addition to the control problems, this thesis also presents real-time path planning contributions, including a relative state space definition and a human-pedestrian-inspired reward function, which provide a reliable and effective solution for real-time path planning in a complex environment.
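
    As a rough illustration of the relative state space and shaped-reward ideas mentioned above, the sketch below defines a goal- and obstacle-relative state vector and a progress-based reward with terminal bonuses and penalties. The weights, thresholds, and state layout are assumptions, not the thesis's actual design.

        import numpy as np

        def relative_state(uav_pos, uav_vel, goal_pos, nearest_obstacle_pos):
            # State expressed relative to the goal and the nearest obstacle instead of
            # absolute world coordinates, so the policy generalises across start points.
            return np.concatenate([goal_pos - uav_pos, nearest_obstacle_pos - uav_pos, uav_vel])

        def shaped_reward(prev_dist_to_goal, dist_to_goal, dist_to_obstacle,
                          reached=False, collided=False):
            if collided:
                return -100.0                                     # terminal penalty
            if reached:
                return 100.0                                      # terminal bonus
            progress = prev_dist_to_goal - dist_to_goal           # reward moving toward the goal
            proximity = max(0.0, 1.0 - dist_to_obstacle)          # penalise closing on obstacles
            return 10.0 * progress - 5.0 * proximity - 0.1        # small per-step time cost

        s = relative_state(np.array([0.0, 0.0, 5.0]), np.zeros(3),
                           np.array([20.0, 10.0, 5.0]), np.array([5.0, 2.0, 5.0]))
        print(s, shaped_reward(22.4, 22.0, 4.5))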

    Infrastructure-free Shipdeck Tracking for Autonomous Landing

    Shipdeck landing is one of the most challenging tasks for a rotorcraft. Current autonomous rotorcraft use shipdeck-mounted transponders to measure the relative pose of the vehicle to the landing pad. This tracking system is not only expensive but renders an unequipped ship unlandable. We address the challenge of tracking a shipdeck without additional infrastructure on the deck. We present two methods, based on video and lidar, that are able to track the shipdeck starting at a considerable distance from the ship. This redundant sensor design enables us to have two independent tracking systems. We show the results of the tracking algorithms in three different environments: 1. field testing on actual helicopter flights, 2. simulation with a moving shipdeck for lidar-based tracking, and 3. laboratory testing using an occluded, moving scaled model of a landing deck for camera-based tracking. The complementary modalities allow shipdeck tracking under varying conditions.
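
    A minimal sketch of the redundancy idea, two independent pose sources with a simple selection rule, is given below; the DeckPose type, its confidence field, and the selection rule are illustrative assumptions rather than the paper's implementation.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class DeckPose:
            x: float
            y: float
            z: float
            confidence: float  # detector-reported confidence in [0, 1]

        def select_deck_pose(camera_est: Optional[DeckPose],
                             lidar_est: Optional[DeckPose]) -> Optional[DeckPose]:
            # Prefer whichever independent tracker currently has an estimate; when both
            # do, take the more confident one, so the loss of either sensor is tolerated.
            if camera_est is None:
                return lidar_est
            if lidar_est is None:
                return camera_est
            return camera_est if camera_est.confidence >= lidar_est.confidence else lidar_est

        print(select_deck_pose(DeckPose(12.0, -3.0, 1.5, 0.7), DeckPose(11.8, -2.9, 1.4, 0.9)))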
