Autonomous Navigation System for a Delivery Drone
Demand for delivery services is growing worldwide, a trend further accelerated by the COVID-19 pandemic. In this context, drone delivery systems are of great interest, as they may allow faster and cheaper deliveries. This paper presents a navigation system that enables parcel delivery with autonomous drones. The system generates a path between a start point and an end point and controls the drone to follow this path based on its localization, obtained from GPS, a 9-DoF IMU, and a barometer. In the landing phase, pose estimates from an ArUco marker detection technique using a camera, from ultra-wideband (UWB) devices, and from the drone's onboard software estimator are fused with an Extended Kalman Filter to improve landing precision. A vector-field-based method controls the drone to follow the desired path smoothly, reducing vibrations and harsh movements that could harm the transported parcel. Real-world experiments validate the delivery strategy and allow us to evaluate the performance of the adopted techniques. Preliminary results indicate the viability of our proposal for autonomous drone delivery.

Comment: 12 pages, 15 figures; extended version of a paper published at the XXIII Brazilian Congress of Automatica, entitled "Desenvolvimento de um drone autônomo para tarefas de entrega de carga" (Development of an autonomous drone for cargo delivery tasks).
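The landing-phase fusion described above can be sketched minimally: a Kalman filter (linear here for brevity; the paper uses an Extended Kalman Filter) fuses position measurements from two sources, such as an ArUco camera pose and a UWB-derived position. The motion model, noise levels, and sensor parameters below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate and covariance one step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the estimate with measurement z of covariance R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1D constant-velocity model: state = [position, velocity]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-3 * np.eye(2)
H = np.array([[1.0, 0.0]])            # both sensors observe position
R_cam = np.array([[0.01]])            # camera pose: accurate (assumed)
R_uwb = np.array([[0.09]])            # UWB position: noisier (assumed)

x, P = np.array([0.0, 0.0]), np.eye(2)
true_pos = 2.0                        # pad position relative to the drone
rng = np.random.default_rng(0)
for _ in range(50):
    x, P = kf_predict(x, P, F, Q)
    # Sequential update: fuse both sensors within the same cycle
    z_cam = np.array([true_pos + 0.1 * rng.standard_normal()])
    x, P = kf_update(x, P, z_cam, H, R_cam)
    z_uwb = np.array([true_pos + 0.3 * rng.standard_normal()])
    x, P = kf_update(x, P, z_uwb, H, R_uwb)
# x[0] now holds the fused position estimate, close to true_pos
```

Sequential updates within one cycle are a standard way to fuse sensors with different noise characteristics; the filter naturally weights the accurate camera pose more heavily than the noisier UWB measurement.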
Safe motion planning and learning for unmanned aerial systems
To control unmanned aerial systems, we rarely have a perfect system model, and safe yet aggressive planning is challenging for nonlinear, under-actuated systems. Expert pilots, however, demonstrate maneuvers at the edge of the flight envelope. Inspired by biological systems, in this paper we introduce a framework that leverages methods from control theory and reinforcement learning to generate feasible, possibly aggressive, trajectories. For the control policies, Dynamic Movement Primitives (DMPs) imitate pilot-induced primitives, and DMPs are combined in parallel to generate trajectories toward the original or different goal points. The stability properties of the DMPs and the overall system are analyzed using contraction theory. For reinforcement learning, Policy Improvement with Path Integrals (PI2) is used to refine the maneuvers. Our results show that PI2-updated policies remain feasible and that parallel combinations of different updated primitives transfer the learning within the contraction regions. The proposed methodology can be used to imitate, reshape, and improve feasible, possibly aggressive, maneuvers. In addition, trajectories generated by optimization methods such as Model Predictive Control (MPC) can be exploited, and a library of maneuvers can be generated instantly. For application, 3-DOF (degrees-of-freedom) helicopter and 2D UAV (unmanned aerial vehicle) models are used to demonstrate the main results.
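The imitation step behind a DMP can be sketched minimally: learn a forcing term from a demonstrated trajectory, then replay it toward the original or a new goal. The gains, canonical-system decay, and demonstration below are illustrative assumptions, and simple interpolation stands in for the locally weighted regression usually used to fit the forcing term.

```python
import numpy as np

# Discrete DMP: tau*v' = K*(g - x) - D*v + (g - x0)*f(s),  s' = -alpha_s*s
alpha_s = 4.0            # canonical-system decay (assumed)
K, D = 100.0, 20.0       # critically damped spring-damper gains (assumed)
dt, T = 0.01, 1.0
n = int(T / dt) + 1
t = np.linspace(0.0, T, n)

# Demonstration: a smooth minimum-jerk-style reach from 0 to 1
demo = 10 * t**3 - 15 * t**4 + 6 * t**5
demo_v = np.gradient(demo, dt)
demo_a = np.gradient(demo_v, dt)
x0, g = demo[0], demo[-1]

# Canonical phase s(t) and the forcing term that reproduces the demo
s = np.exp(-alpha_s * t)
f_target = (demo_a - K * (g - demo) + D * demo_v) / (g - x0)

def rollout(goal):
    """Integrate the DMP toward `goal`, reusing the learned forcing term."""
    x, v, phase = x0, 0.0, 1.0
    out = []
    for _ in range(n):
        # Look up f by phase (np.interp needs an increasing abscissa)
        f = np.interp(phase, s[::-1], f_target[::-1])
        a = K * (goal - x) - D * v + (goal - x0) * f
        v += a * dt
        x += v * dt
        phase -= alpha_s * phase * dt
        out.append(x)
    return out

traj = rollout(g)        # reproduces the demonstration, ends near 1.0
traj_new = rollout(1.5)  # same primitive rescaled to a new goal, ends near 1.5
```

Because the forcing term is scaled by `(goal - x0)`, changing the goal stretches the whole motion rather than just shifting its endpoint, which is what lets a library of learned primitives be reused for different targets.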