
    Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle

    This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural-network-based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio-controlled miniature aerial vehicle are presented to illustrate the performance of the neural-network-based adaptation.
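
    The core idea shared by both approaches, a dynamic-inversion control law augmented by an online-adapted neural element, can be illustrated on a single attitude channel. The sketch below is a generic, simplified illustration of that idea, not the paper's two-stage or command-augmentation architecture; the plant model f_true, the radial-basis features, and all gains are assumptions chosen for the example.

```python
# Minimal sketch of adaptive dynamic inversion on one attitude channel.
# Not the paper's architecture: f_true, the basis functions, and all gains
# are illustrative assumptions.
import numpy as np

dt = 0.01                                # integration step [s]
g_eff = 2.0                              # assumed (known) control effectiveness
K = 4.0                                  # error-feedback gain
gamma = 10.0                             # adaptation gain
centers = np.linspace(-1.5, 1.5, 7)      # fixed radial-basis centers
W = np.zeros_like(centers)               # weights of the adaptive element

def basis(x):
    return np.exp(-(x - centers) ** 2)   # radial-basis features of the state

def f_true(x):
    return -0.5 * x + 0.8 * np.sin(3.0 * x)   # "unknown" dynamics, used only to simulate

x, x_ref, cmd = 0.0, 0.0, 1.0            # plant state, reference-model state, step command
for _ in range(3000):
    x_ref_dot = 4.0 * (cmd - x_ref)      # first-order reference model
    e = x_ref - x                        # tracking error
    sigma = basis(x)
    nu_ad = W @ sigma                    # adaptive estimate of the unknown dynamics
    nu = x_ref_dot + K * e               # desired pseudo-control
    u = (nu - nu_ad) / g_eff             # dynamic inversion with adaptive cancellation
    W -= dt * gamma * e * sigma          # Lyapunov-motivated gradient weight update
    x += dt * (f_true(x) + g_eff * u)    # propagate the plant
    x_ref += dt * x_ref_dot

print(f"final tracking error: {cmd - x:+.4f}")
```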

    Hidden and self-excited attractors in a heterogeneous Cournot oligopoly model

    This paper shows numerically that a heterogeneous Cournot oligopoly model exhibits both hidden and self-excited attractors. The system has a single isolated equilibrium and a line of equilibria. Bifurcation diagrams show that the system admits several windows of attractor coexistence, within which the hidden attractors can be found. Intensive numerical tests have been carried out.
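
    A rough way to picture the numerical search for coexisting (and possibly hidden) attractors is to sweep initial conditions of a heterogeneous Cournot map and compare the long-run behaviour reached from each. The sketch below does this for a textbook two-firm map with one boundedly rational (gradient) firm and one naive best-responder; the map, the parameter values, and the crude tail-size signature are assumptions for illustration and are not the model or method used in the paper.

```python
# Illustrative sweep over initial conditions of an assumed two-firm heterogeneous
# Cournot map; parameters and the coexistence "signature" are for illustration only.
import numpy as np

a, b, c1, c2, alpha = 10.0, 0.5, 3.0, 5.0, 0.41   # assumed demand, costs, adjustment speed

def step(q1, q2):
    q1n = q1 + alpha * q1 * (a - c1 - 2 * b * q1 - b * q2)   # boundedly rational (gradient) firm
    q2n = max((a - c2 - b * q1) / (2 * b), 0.0)               # naive best responder
    return q1n, q2n

def tail_signature(q1, q2, burn=2000, keep=200):
    for _ in range(burn):
        q1, q2 = step(q1, q2)
        if not np.isfinite(q1) or abs(q1) > 1e6:
            return None                       # trajectory diverged
    pts = set()
    for _ in range(keep):
        q1, q2 = step(q1, q2)
        pts.add((round(q1, 6), round(q2, 6)))
    return len(pts)                           # 1 = fixed point, small n = cycle, large = complex set

found = {}
for q1_0 in np.linspace(0.1, 8.0, 40):
    for q2_0 in np.linspace(0.1, 8.0, 40):
        sig = tail_signature(q1_0, q2_0)
        if sig is not None:
            found[sig] = found.get(sig, 0) + 1

print("long-run signatures (tail size -> basin count):", found)
```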

    Flight controller synthesis via deep reinforcement learning

    Traditional control methods are inadequate in many deployment settings involving autonomous control of Cyber-Physical Systems (CPS). In such settings, CPS controllers must operate and respond to unpredictable interactions, conditions, or failure modes. Dealing with such unpredictability requires the use of executive and cognitive control functions that allow for planning and reasoning. Motivated by the sport of drone racing, this dissertation addresses these concerns for state-of-the-art flight control by investigating the use of deep artificial neural networks to bring essential elements of higher-level cognition to bear on the design, implementation, deployment, and evaluation of low-level (attitude) flight controllers. First, this thesis presents a feasibility analysis and results that confirm that neural networks, trained via reinforcement learning, are more accurate than traditional control methods used by commercial uncrewed aerial vehicles (UAVs) for attitude control. Second, armed with these results, this thesis reports on the development and release of an open-source, full solution stack for building neuro-flight controllers. This stack consists of a tuning framework for implementing training environments (GymFC) and firmware for the world's first neural-network-supported flight controller (Neuroflight). GymFC's novel approach fuses the digital-twinning paradigm with flight control training to provide seamless transfer to hardware. Third, to transfer models synthesized by GymFC to hardware, this thesis reports on the toolchain that has been released for compiling neural networks into Neuroflight, which can be flashed to off-the-shelf microcontrollers. This toolchain includes detailed procedures for constructing a multicopter digital twin to allow the research and development community to synthesize flight controllers unique to their own aircraft. Finally, this thesis examines alternative reward functions as well as changes to the software environment to bridge the gap between simulation and real-world deployment environments. The design, evaluation, and experimental work summarized in this thesis demonstrate that deep reinforcement learning can be leveraged to design and implement neural network controllers capable not only of maintaining stable flight, but also of performing precision aerobatic maneuvers in real-world settings. As such, this work provides a foundation for developing the next generation of flight control systems.
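
    At the heart of this kind of pipeline is an episodic training environment in which an agent maps attitude-rate errors to motor commands and is rewarded for accurate tracking. The sketch below is a deliberately simplified, Gym-style stand-in for such an environment (single-axis rate tracking against a first-order actuator/airframe model) with a trivial proportional policy as a placeholder for a trained network; it is not GymFC's actual interface, dynamics, or reward.

```python
# Simplified stand-in for an attitude-rate tracking environment (single axis).
# Not GymFC: the dynamics, observation, and reward here are assumptions.
import numpy as np

class RateTrackingEnv:
    """One-axis angular-rate tracking with an assumed first-order actuator/airframe model."""

    def __init__(self, dt=0.004, episode_len=500):
        self.dt, self.episode_len = dt, episode_len

    def reset(self, seed=None):
        self.rng = np.random.default_rng(seed)
        self.rate = 0.0                                 # body rate [rad/s]
        self.target = self.rng.uniform(-2.0, 2.0)       # commanded rate [rad/s]
        self.t = 0
        return self._obs()

    def _obs(self):
        return np.array([self.target - self.rate, self.rate])

    def step(self, action):
        torque = float(np.clip(action, -1.0, 1.0))      # normalized motor command
        self.rate += self.dt * (8.0 * torque - 1.5 * self.rate)   # assumed first-order response
        self.t += 1
        reward = -abs(self.target - self.rate)          # penalize rate-tracking error
        done = self.t >= self.episode_len
        return self._obs(), reward, done

# roll out one episode with a trivial proportional "policy" as a placeholder for pi(obs)
env = RateTrackingEnv()
obs, total, done = env.reset(seed=0), 0.0, False
while not done:
    action = 0.5 * obs[0]
    obs, r, done = env.step(action)
    total += r
print(f"episode return: {total:.1f}")
```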

    Reinforcement Learning to Control Lift Coefficient Using Distributed Sensors on a Wind Tunnel Model

    Arrays of sensors distributed on the wing of fixed-wing vehicles can provide information not directly available to conventional sensor suites. These arrays of sensors have the potential to improve flight control and overall flight performance of small fixed-wing uninhabited aerial vehicles (UAVs). This work investigated the feasibility of estimating and controlling aerodynamic coefficients using the experimental readings of distributed pressure and strain sensors across a wing. The study was performed on a one degree-of-freedom (pitch) model of a fixed-wing platform instrumented with the distributed sensing system. A series of reinforcement learning (RL) agents were trained in simulation for lift coefficient control, then validated in wind tunnel experiments. The performance of RL-based controllers with different sets of inputs in the observation space was compared across controllers and against that of a manually tuned PID controller. Results showed that hybrid RL agents that used both distributed sensing data and conventional sensors performed best across the different tests.
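
    One way to picture the controller variants compared here is as the same policy interface fed with different observation vectors. The snippet below sketches how a conventional observation (lift-coefficient error plus pitch states and dynamic pressure) and a hybrid observation augmented with distributed pressure and strain readings might be assembled; the sensor counts, quantities, and normalization constants are hypothetical and do not reflect the experimental configuration.

```python
# Hypothetical observation-vector assembly for RL lift-coefficient controllers.
# Sensor counts, quantities, and normalization constants are assumptions.
import numpy as np

N_PRESSURE, N_STRAIN = 16, 4     # assumed number of distributed sensors

def conventional_obs(cl_target, cl_est, pitch, pitch_rate, q_dyn):
    """Observation built only from conventional quantities."""
    return np.array([cl_target - cl_est, pitch, pitch_rate, q_dyn / 500.0])

def hybrid_obs(cl_target, cl_est, pitch, pitch_rate, q_dyn, pressures, strains):
    """Conventional observation augmented with distributed pressure/strain readings."""
    p = np.asarray(pressures) / 1000.0          # crude normalization of pressures [Pa]
    s = np.asarray(strains) * 1e4               # crude normalization of strains
    return np.concatenate(
        [conventional_obs(cl_target, cl_est, pitch, pitch_rate, q_dyn), p, s])

# example: both controllers see the same flight condition but different inputs
pressures = np.full(N_PRESSURE, -250.0)         # assumed suction-side pressures [Pa]
strains = np.full(N_STRAIN, 3e-4)               # assumed wing strains
print(conventional_obs(0.6, 0.52, 0.05, 0.01, 320.0).shape)                 # (4,)
print(hybrid_obs(0.6, 0.52, 0.05, 0.01, 320.0, pressures, strains).shape)   # (24,)
```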

    CEAS/AIAA/ICASE/NASA Langley International Forum on Aeroelasticity and Structural Dynamics 1999

    These proceedings represent a collection of the latest advances in aeroelasticity and structural dynamics from the world community. Research in the areas of unsteady aerodynamics and aeroelasticity, structural modeling and optimization, active control and adaptive structures, landing dynamics, certification and qualification, and validation testing is highlighted in the collection of papers. The wide range of results will lead to advances in the prediction and control of the structural response of aircraft and spacecraft.

    Nonlinear Adaptive Dynamic Inversion Control for Variable Stability Small Unmanned Aircraft Systems

    In-flight simulation and variable stability aircraft provide useful capabilities for flight controls development, such as testing control laws for new aircraft earlier, identification of adverse conditions such as pilot-induced oscillations, and handling qualities research. While these capabilities are useful, they are not without cost. The expense and support activities needed to safely operate in-flight simulators have limited their availability to military test pilot schools and a few private companies. Modern computing power allows the implementation of advanced flight control systems on size-, weight-, and power-constrained platforms such as the small uninhabited aerial systems used by universities and research organizations. This thesis aims to develop a flight control system that brings in-flight simulation capability to these platforms. Two control systems, based on model-reference and L₁ adaptive augmentation of baseline nonlinear dynamic inversion controllers, are proposed and evaluated against a command augmentation system design and in-flight simulation cases for a variety of linear and nonlinear models. Simulation results demonstrate that both proposed control architectures meet the control objectives for tracking and in-flight simulation, with robust performance and stability in the presence of severe turbulence.
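
    The variable-stability idea, forcing the host aircraft to respond like a different (simulated) aircraft, can be illustrated with a scalar model-following dynamic inversion loop. The sketch below uses assumed host and target roll-rate models and shows only the baseline inversion structure; it omits the model-reference and L₁ adaptive augmentation layers evaluated in the thesis.

```python
# Scalar sketch of model-following dynamic inversion for in-flight simulation.
# The host and target roll-rate models, the gain, and the single-axis setup are assumptions.
dt, K = 0.01, 6.0

def host_dot(x, u):
    """Assumed host-aircraft roll-rate dynamics: x_dot = f(x) + g*u with f = -2x, g = 4."""
    return -2.0 * x + 4.0 * u

def target_model_dot(xm, cmd):
    """Assumed dynamics of the aircraft being 'simulated' in flight (slower than the host)."""
    return -0.8 * xm + 1.6 * cmd

x, xm = 0.0, 0.0                          # host state, target-model state
for k in range(1500):
    cmd = 1.0 if k * dt > 0.5 else 0.0    # delayed step roll-rate command
    xm_dot = target_model_dot(xm, cmd)
    nu = xm_dot + K * (xm - x)            # desired host acceleration: follow the target model
    u = (nu + 2.0 * x) / 4.0              # invert the (known) host model: u = (nu - f(x)) / g
    x += dt * host_dot(x, u)
    xm += dt * xm_dot

print(f"host vs. simulated-model response at end: {x:.3f} vs {xm:.3f}")
```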