3 research outputs found

    A Survey of path following control strategies for UAVs focused on quadrotors

    Get PDF
    The trajectory control problem, defined as making a vehicle follow a pre-established path in space, can be solved by means of trajectory tracking or path following. In the trajectory tracking problem, a timed reference position is tracked. The path following approach removes any time dependence from the problem, resulting in many advantages for control performance and design. An exhaustive review of path following algorithms applied to quadrotor vehicles has been carried out; the most relevant are studied in this paper. Then, four of these algorithms have been implemented and compared in a quadrotor simulation platform: the Backstepping and Feedback Linearisation control-oriented algorithms, and the NLGL and Carrot-Chasing geometric algorithms. Peer Reviewed. Postprint (author's final draft).
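    The abstract names Carrot-Chasing as one of the geometric path following algorithms. A minimal sketch of its core idea, for a straight path segment (not the paper's implementation; the function and parameter names here are illustrative): project the vehicle position onto the path, place a virtual target ("carrot") a lookahead distance further along, and steer toward it.

    ```python
    import numpy as np

    def carrot_chasing_heading(pos, w1, w2, delta):
        """Return the desired heading (rad) toward a carrot point placed
        a distance `delta` ahead of the vehicle's projection onto the
        path segment from waypoint w1 to waypoint w2."""
        d_hat = (w2 - w1) / np.linalg.norm(w2 - w1)   # unit path direction
        along = np.dot(pos - w1, d_hat)                # projection onto the path
        carrot = w1 + (along + delta) * d_hat          # virtual target ahead
        err = carrot - pos                             # vector toward the carrot
        return np.arctan2(err[1], err[0])
    ```

    A vehicle offset laterally from the path is steered back toward it, with the lookahead `delta` trading off convergence speed against oscillation.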

    Sample efficient learning of path following and obstacle avoidance behavior for Quadrotors

    No full text
    In this letter, we propose an algorithm for the training of neural network control policies for quadrotors. The learned control policy computes control commands directly from sensor inputs and is, hence, computationally efficient. An imitation learning algorithm produces a policy that reproduces the behavior of a supervisor. The supervisor provides demonstrations of path following and collision avoidance maneuvers. Due to the generalization ability of neural networks, the resulting policy performs local collision avoidance while following a global reference path. The algorithm uses a time-free model-predictive path-following controller as a supervisor. The controller generates demonstrations by following a few example paths. This enables an easy-to-implement learning algorithm that is robust to errors of the model used in the model-predictive controller. The policy is trained on the real quadrotor, which requires collision-free exploration around the example path. An adapted version of the supervisor is used to enable exploration. Thus, the policy can be trained from a relatively small number of examples on the real quadrotor, making the training sample efficient.
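    The imitation learning scheme described above, where a supervisor labels the states the learned policy visits and the policy is refit on the aggregated data, can be sketched in a DAgger-style loop. This is not the paper's algorithm; the `supervisor` and `rollout` helpers are hypothetical, and a linear least-squares map stands in for the neural network policy.

    ```python
    import numpy as np

    def train_imitation(supervisor, rollout, n_iters=5, n_steps=50):
        """DAgger-style loop (illustrative): roll out the current policy,
        have the supervisor label every visited state with a control
        command, aggregate, and refit. The 'policy' here is a linear map
        W fit by least squares; `rollout(policy, n)` yields n states."""
        states, commands = [], []
        W = None
        for _ in range(n_iters):
            # On the first round there is no policy yet, so the
            # supervisor itself generates the demonstration states.
            policy = (lambda s: s @ W) if W is not None else supervisor
            for s in rollout(policy, n_steps):
                states.append(s)
                commands.append(supervisor(s))   # supervisor labels the state
            X, Y = np.asarray(states), np.asarray(commands)
            W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return W
    ```

    Labeling states from the learner's own rollouts, rather than only the supervisor's, is what lets the policy recover from its own mistakes with relatively few demonstrations.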

    Sample Efficient Learning of Path Following and Obstacle Avoidance Behavior for Quadrotors

    No full text