Dynamic Landing of an Autonomous Quadrotor on a Moving Platform in Turbulent Wind Conditions
Autonomous landing on a moving platform presents unique challenges for
multirotor vehicles, including the need to accurately localize the platform,
fast trajectory planning, and precise, robust control. Previous works have
studied this problem, but most lack explicit consideration of wind
disturbances, which typically leads to slow descents onto the platform. This
work presents a
fully autonomous vision-based system that addresses these limitations by
tightly coupling the localization, planning, and control, thereby enabling fast
and accurate landing on a moving platform. The platform's position,
orientation, and velocity are estimated by an extended Kalman filter using
simulated GPS measurements when the quadrotor-platform distance is large, and
by a visual fiducial system when the platform is nearby. The landing trajectory
is computed online using receding horizon control and is followed by a boundary
layer sliding controller that provides tracking performance guarantees in the
presence of unknown, but bounded, disturbances. To improve the performance, the
characteristics of the turbulent conditions are accounted for in the
controller. The landing trajectory is fast, direct, and does not require
hovering over the platform, as is typical of most state-of-the-art approaches.
Simulations and hardware experiments are presented to validate the robustness
of the approach.Comment: 7 pages, 8 figures, ICRA2020 accepted pape
Detection and estimation of moving obstacles for a UAV
In recent years, research interest in Unmanned Aerial Vehicles (UAVs) has grown rapidly because of their potential use in a wide range of applications. In this paper, we propose vision-based detection and position/velocity estimation of moving obstacles for a UAV. Knowledge of a moving obstacle's state, i.e., its position and velocity, is essential for better performance of an intelligent UAV system, especially in autonomous navigation and landing tasks. The novelties are: (1) the design and implementation of a localization method using a sensor fusion methodology that fuses Inertial Measurement Unit (IMU) signals and Pozyx signals; (2) the development of a method for detecting and estimating moving obstacles based on an on-board vision system. Experimental results validate the effectiveness of the proposed approach. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
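The IMU/Pozyx-style fusion described above can be sketched as a minimal 1-D Kalman filter: the IMU acceleration drives the motion-model prediction, and an absolute (UWB-style) position fix supplies the update. The state layout, noise levels, and `kf_step` interface are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def kf_step(x, P, z_pos, a_imu, dt, q=0.1, r=0.05):
    """One predict/update cycle of a 1-D Kalman filter fusing an IMU
    acceleration (as the control input of the motion model) with a
    Pozyx-style absolute position fix. State x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])

    # Predict with the IMU acceleration.
    x = F @ x + B * a_imu
    P = F @ P @ F.T + Q

    # Update with the absolute position measurement.
    y = z_pos - H @ x
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding the filter noisy position fixes of an object moving at constant velocity quickly yields a smooth estimate of both its position and its (unmeasured) velocity.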
Visual Servoing Approach for Autonomous UAV Landing on a Moving Vehicle
We present a method to autonomously land an Unmanned Aerial Vehicle on a
moving vehicle with a circular (or elliptical) pattern on the top. A visual
servoing controller approaches the ground vehicle using velocity commands
calculated directly in image space. The control laws generate velocity commands
in all three dimensions, eliminating the need for a separate height controller.
The method has demonstrated the ability to approach and land on the moving deck
in simulation and in indoor and outdoor environments, and compared to other
available methods it provides the fastest landing approach. It does not
rely on additional external setup, such as RTK, motion capture system, ground
station, offboard processing, or communication with the vehicle, and it
requires only a minimal set of hardware and localization sensors. The videos
and source code can be accessed at http://theairlab.org/landing-on-vehicle.

Comment: 24 pages
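The core idea of this abstract, velocity commands computed directly in image space, can be sketched for a downward-facing camera observing the circular pattern. The gains, image size, goal radius, and axis conventions below are illustrative assumptions, not the paper's control law:

```python
def ibvs_velocity(center_px, radius_px, img_size=(640, 480),
                  radius_goal_px=200.0, k_xy=0.002, k_z=0.005):
    """Map an ellipse detection directly to body-frame velocity commands.

    Lateral velocity is proportional to the pattern's pixel offset from
    the image centre; descent speed grows as the apparent radius falls
    short of the goal, so the vehicle centres itself and descends in one
    continuous motion with no separate height controller.
    Convention (assumed): z-up body frame, so vz < 0 commands descent.
    """
    cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
    # Drive the pattern centre to the image centre
    # (downward camera: image x ~ body right, image y ~ body back).
    vx = -k_xy * (center_px[1] - cy)
    vy = k_xy * (center_px[0] - cx)
    # Keep descending while the pattern looks smaller than the goal size.
    vz = -k_z * max(radius_goal_px - radius_px, 0.0)
    return vx, vy, vz
```

A centred detection at the goal radius yields zero commands; a pattern seen to the right and still far below produces a rightward velocity combined with descent, which is what makes the approach direct rather than hover-then-drop.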
Mixed marker-based/marker-less visual odometry system for mobile robots
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operations difficult to accomplish. This paper proposes a solution that alleviates the impact of these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which keeps the drift bounded while also providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor via extensive experimental tests.
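The relative/absolute combination in this abstract can be sketched in 2-D: frame-to-frame odometry increments are dead-reckoned (and drift), and a framed marker at a known world pose supplies an absolute correction. The class name, the plain overwrite on a fix, and the (x, y, heading) state are simplifying assumptions; the paper's actual system operates in full pose and blends the two estimates:

```python
import math

class DriftBoundedOdometry:
    """Fuse frame-to-frame visual odometry (relative, drifting) with
    occasional absolute fixes from fiducial markers at known poses.
    Hypothetical minimal 2-D (x, y, heading) version of the scheme."""

    def __init__(self):
        self.x = self.y = self.th = 0.0

    def integrate(self, dx, dy, dth):
        # Dead-reckon the frame-to-frame motion, expressed in the
        # body frame, into the world-frame pose estimate.
        c, s = math.cos(self.th), math.sin(self.th)
        self.x += c * dx - s * dy
        self.y += s * dx + c * dy
        self.th += dth

    def marker_fix(self, marker_world, pose_in_marker):
        # A framed marker at a known world pose gives an absolute pose:
        # compose it with the robot's pose in the marker frame and
        # overwrite the drifted estimate (a real system would blend,
        # e.g. via a filter, rather than overwrite).
        mx, my, mth = marker_world
        rx, ry, rth = pose_in_marker
        c, s = math.cos(mth), math.sin(mth)
        self.x = mx + c * rx - s * ry
        self.y = my + s * rx + c * ry
        self.th = mth + rth
```

Between marker sightings the error grows with distance travelled; each sighting resets it, which is what keeps the drift bounded over arbitrarily long flights.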