20 research outputs found

    Neuromorphic Control using Input-Weighted Threshold Adaptation

    Neuromorphic processing promises high energy efficiency and rapid response rates, making it an ideal candidate for the autonomous flight of resource-constrained robots. It will be especially beneficial for complex neural networks such as those involved in high-level visual perception. However, fully neuromorphic solutions will also need to tackle low-level control tasks. Remarkably, it is still challenging to replicate even basic low-level controllers such as proportional-integral-derivative (PID) controllers; in particular, it is difficult to incorporate the integral and derivative parts. To address this problem, we propose a neuromorphic controller that incorporates proportional, integral, and derivative pathways during learning. Our approach includes a novel input threshold adaptation mechanism for the integral pathway. This Input-Weighted Threshold Adaptation (IWTA) introduces an additional weight per synaptic connection, which is used to adapt the threshold of the post-synaptic neuron. We tackle the derivative term by employing neurons with different time constants. We first analyze the performance and limits of the proposed mechanisms, and then put our controller to the test by implementing it on a microcontroller connected to the open-source tiny Crazyflie quadrotor, replacing the innermost rate controller. We demonstrate the stability of our bio-inspired algorithm with flights in the presence of disturbances. This work represents a substantial step towards controlling highly dynamic systems with neuromorphic algorithms, advancing both neuromorphic processing and robotics. Moreover, since integration is an important part of any temporal task, the proposed IWTA mechanism may have implications well beyond control tasks.
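    The abstract does not spell out the IWTA update equations, so the following is only a minimal sketch of one plausible reading: a leaky integrate-and-fire neuron whose firing threshold is shifted by the same input spikes through a second, per-synapse weight vector, with a derivative-like signal obtained from two neurons with different time constants. Every name and update rule below (IWTANeuron, w_thr, the leak factors) is an illustrative assumption, not the authors' implementation.

        import numpy as np

        # Sketch of a leaky integrate-and-fire (LIF) neuron with an
        # input-weighted adaptive threshold, loosely following the IWTA idea.
        class IWTANeuron:
            def __init__(self, n_inputs, tau_v=0.9, tau_thr=0.99, v_thr0=1.0):
                rng = np.random.default_rng(0)
                self.w = rng.normal(0.0, 0.5, n_inputs)      # synaptic weights (membrane)
                self.w_thr = rng.normal(0.0, 0.1, n_inputs)  # extra per-synapse weights (threshold)
                self.tau_v, self.tau_thr = tau_v, tau_thr    # per-step leak factors
                self.v, self.thr, self.v_thr0 = 0.0, v_thr0, v_thr0

            def step(self, spikes_in):
                # Leaky integration of the weighted input spikes.
                self.v = self.tau_v * self.v + self.w @ spikes_in
                # Input-weighted threshold adaptation: the same spikes, weighted by
                # w_thr, slowly shift the firing threshold (an integral-like memory).
                self.thr = self.tau_thr * self.thr + (1.0 - self.tau_thr) * (
                    self.v_thr0 + self.w_thr @ spikes_in)
                fired = self.v >= self.thr
                if fired:
                    self.v = 0.0  # reset membrane potential after a spike
                return float(fired)

        # A derivative-like pathway: two neurons share inputs but leak at
        # different rates; the fast response minus the slow one approximates
        # a rate of change.
        fast, slow = IWTANeuron(4, tau_v=0.5), IWTANeuron(4, tau_v=0.95)
        x = np.array([1.0, 0.0, 1.0, 0.0])
        d_like = fast.step(x) - slow.step(x)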

    An Adaptive Control Strategy for Neural Network based Optimal Quadcopter Controllers

    Developing optimal controllers for aggressive high-speed quadcopter flight is a major challenge in the field of robotics. Recent work has shown that neural networks trained with supervised learning can achieve real-time optimal control in some specific scenarios. In these methods, the networks (termed G&CNets) are trained to learn the optimal state feedback from a dataset of optimal trajectories. An important problem with these methods is the reality gap encountered in the sim-to-real transfer. In this work, we trained G&CNets for energy-optimal end-to-end control on the Bebop drone and identified the unmodeled pitch moment as the main contributor to the reality gap. To mitigate this, we propose an adaptive control strategy that works by learning from optimal trajectories of a system affected by constant external pitch, roll, and yaw moments. In real test flights, this model mismatch is estimated onboard and fed to the network to obtain the optimal rpm command. We demonstrate the effectiveness of our method by performing energy-optimal hover-to-hover flights with and without moment feedback. Finally, we compare the adaptive controller to a state-of-the-art differential-flatness-based controller in a consecutive waypoint flight and demonstrate the advantages of our method in terms of energy optimality and robustness.

    Comment: 7 pages, 11 figures
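    As a rough, hypothetical illustration of the interface described above (not the authors' code), the sketch below appends onboard-estimated constant disturbance moments to the network input and maps the result to rpm commands; the layer sizes, the first-order moment estimator, and all names are assumptions.

        import numpy as np

        def gcnet_forward(params, state, est_moments):
            """Map [state; estimated pitch/roll/yaw moments] -> 4 rpm commands."""
            x = np.concatenate([state, est_moments])
            for W, b in params[:-1]:
                x = np.tanh(W @ x + b)   # hidden layers
            W, b = params[-1]
            return W @ x + b             # linear output layer

        def estimate_moments(meas_ang_acc, pred_ang_acc, prev_est, gain=0.05):
            # Simple first-order estimator: attribute the gap between measured
            # and model-predicted angular acceleration to a slowly varying
            # external moment (a stand-in for whatever runs onboard).
            return prev_est + gain * (meas_ang_acc - pred_ang_acc)

        rng = np.random.default_rng(1)
        sizes = [13 + 3, 64, 64, 4]      # state (13) + moments (3) -> rpm (4)
        params = [(rng.normal(0.0, 0.1, (o, i)), np.zeros(o))
                  for i, o in zip(sizes[:-1], sizes[1:])]
        m_hat = estimate_moments(np.zeros(3), np.zeros(3), np.zeros(3))
        rpm = gcnet_forward(params, np.zeros(13), m_hat)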

    End-to-end Reinforcement Learning for Time-Optimal Quadcopter Flight

    Aggressive time-optimal control of quadcopters poses a significant challenge in the field of robotics. The state-of-the-art approach leverages reinforcement learning (RL) to train optimal neural policies. However, a critical hurdle is the sim-to-real gap, often addressed by employing a robust inner loop controller, an abstraction that, in theory, constrains the optimality of the trained controller and necessitates margins to counter potential disturbances. In contrast, our novel approach introduces high-speed quadcopter control using end-to-end RL (E2E), which outputs motor commands directly. To bridge the reality gap, we incorporate a learned residual model and an adaptive method that can compensate for modeling errors in thrust and moments. We compare our E2E approach against a state-of-the-art network that commands thrust and body rates to an INDI inner loop controller, both in simulated and real-world flight. E2E shows a significant 1.39-second advantage in simulation and a 0.17-second edge in real-world testing, highlighting the potential of end-to-end reinforcement learning. The performance drop observed from simulation to reality shows potential for further improvement, including refining strategies to address the reality gap or exploring offline reinforcement learning with real flight data.

    Comment: 6 pages, 6 figures, 1 table
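    To make the residual-model idea concrete, here is a minimal sketch under assumed shapes: a placeholder nominal model predicts thrust and body moments from motor commands, and a learned residual (a linear stand-in here) corrects it, so a policy trained against the corrected model sees dynamics closer to the real vehicle. None of the constants or names below come from the paper.

        import numpy as np

        def nominal_dynamics(motor_cmds):
            # Placeholder nominal model: predicted [thrust, mx, my, mz] from
            # the four motor commands (a full model would also use the state).
            k_f, arm, k_m = 1e-5, 0.1, 1e-7
            f = k_f * motor_cmds**2
            mx = arm * (f[1] - f[3])
            my = arm * (f[2] - f[0])
            mz = k_m * (motor_cmds[0]**2 - motor_cmds[1]**2
                        + motor_cmds[2]**2 - motor_cmds[3]**2)
            return np.array([f.sum(), mx, my, mz])

        def residual(W, state, motor_cmds):
            # Learned correction to thrust and moments; a linear map stands in
            # for whatever regression model is fitted to real flight data.
            return W @ np.concatenate([state, motor_cmds])

        def forces_and_moments(W, state, motor_cmds):
            return nominal_dynamics(motor_cmds) + residual(W, state, motor_cmds)

        fm = forces_and_moments(np.zeros((4, 13 + 4)), np.zeros(13),
                                np.full(4, 8000.0))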

    Replication Data for: "Enhancing optical flow-based control by learning visual appearance cues for flying robots"

    This repository contains all data and code necessary to reproduce the experiments and figures in the article "Enhancing optical flow-based control by learning visual appearance cues for flying robots". It allows reproduction of both the simulation experiments and the real-world experiments with the Parrot Bebop 2 drone. Please see the README in the repository for a detailed explanation. Please note that the Paparazzi code included in this data set is subject to a GNU copyleft license; see https://github.com/paparazzi/paparazzi/blob/master/LICENSE for more details.

    Vision-Only Aircraft Flight Control

    Presented at the 22nd Digital Avionics Systems Conference, Indianapolis, IN, October 2003.

    Building aircraft with navigation and control systems that can complete flight tasks is complex, and often involves integrating information from multiple sensors to estimate the state of the vehicle. This paper describes a method in which a glider can fly precisely from a starting point to a predetermined end location (target) using vision only. Using vision to control an aircraft represents a unique challenge, partly due to the high image rate required to maintain tracking and keep the glider on target in a moving air mass. In addition, absolute distance and angle measurements to the target are not readily available when the glider has no independent measurements of its own position. The method presented here uses an integral image representation of the video input for the analysis. The integral image, obtained by integrating the pixel intensities across the image, is reduced to a probable target location by a cascade of feature-matching functions. The cascade is designed to eliminate the majority of the potential targets in a first pruning pass using computationally inexpensive processes; more exact and computationally expensive processes are then applied to the few remaining candidates, dramatically decreasing the processing required per image. The navigation algorithms presented in this paper use a Kalman filter to estimate the required attitude and glideslope based on measurements of the target in the image. The effectiveness of the algorithms is demonstrated through simulation of a small glider instrumented with only a simulated camera.
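    The integral-image representation the paper builds on is a standard trick worth a brief sketch: two cumulative sums produce a table from which the sum of pixel intensities over any rectangle costs four lookups, which is what keeps the early, coarse stages of the feature-matching cascade cheap. The window size and thresholds below are placeholders, not the paper's values.

        import numpy as np

        def integral_image(img):
            # Cumulative sum along both axes; entry (r, c) holds the sum of
            # img[:r+1, :c+1].
            return img.cumsum(axis=0).cumsum(axis=1)

        def box_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1, c0:c1] from the integral image, in O(1)."""
            total = ii[r1 - 1, c1 - 1]
            if r0 > 0:
                total -= ii[r0 - 1, c1 - 1]
            if c0 > 0:
                total -= ii[r1 - 1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        img = np.random.default_rng(2).random((120, 160))
        ii = integral_image(img)
        # First, cheap pruning stage: keep only 20x20 windows whose mean
        # intensity is plausible for the target; later, costlier tests would
        # run on this short candidate list only.
        candidates = [(r, c)
                      for r in range(0, 100, 10) for c in range(0, 140, 10)
                      if 0.4 < box_sum(ii, r, c, r + 20, c + 20) / 400 < 0.6]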
