12,496 research outputs found

    Development and Certification of a New Stall Warning and Avoidance System

    Get PDF
    Several methods may be employed to improve natural stall characteristics. The method employed on all Learjets to obtain improved stall characteristics is a stall warning and avoidance system that employs angle of attack vanes, an electronic computer, a control column shaker motor, and a torquer which drives the control column in a pusher mode to avoid unwanted further buildup of angle of attack. The new system was developed with changes that improve system response with no performance penalty or increase in turbulence sensitivity. The changes included modified system time constants, a modified dead zone on alpha-dot (the time rate of change of vane angle), and the addition of an alpha signal limiter and an alpha cutout below a specified angle of attack.
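    The abstract describes a warning/pusher scheduler built from an alpha cutout, an alpha signal limiter, an alpha-dot dead zone, and system time constants. The sketch below illustrates how such scheduling logic could be structured; every threshold, time constant, and limit is a hypothetical placeholder, not a value from the certified system.

```python
class StallProtection:
    """Illustrative shaker/pusher scheduling from vane angle of attack.

    All thresholds, time constants, and limits are assumed placeholders,
    not values from the certified Learjet system.
    """

    def __init__(self, alpha_cutout=5.0, alpha_limit=20.0,
                 alpha_dot_deadzone=0.5, tau=0.3,
                 shaker_alpha=14.0, pusher_alpha=17.0):
        self.alpha_cutout = alpha_cutout              # deg: system inert below this alpha
        self.alpha_limit = alpha_limit                # deg: alpha signal limiter ceiling
        self.alpha_dot_deadzone = alpha_dot_deadzone  # deg/s: ignore small vane-rate noise
        self.tau = tau                                # s: first-order filter time constant
        self.shaker_alpha = shaker_alpha              # deg: effective alpha for shaker onset
        self.pusher_alpha = pusher_alpha              # deg: effective alpha for pusher onset
        self.alpha_f = 0.0                            # filtered, rate-augmented alpha

    def update(self, alpha, alpha_dot, dt):
        """Return (shaker_on, pusher_on) for one sample of vane angle and its rate."""
        if alpha < self.alpha_cutout:                 # alpha cutout at low angle of attack
            return False, False
        alpha = min(alpha, self.alpha_limit)          # alpha signal limiter
        if abs(alpha_dot) < self.alpha_dot_deadzone:  # dead zone reduces turbulence sensitivity
            alpha_dot = 0.0
        target = alpha + self.tau * alpha_dot         # rate-augmented (anticipatory) alpha
        self.alpha_f += (dt / self.tau) * (target - self.alpha_f)  # first-order lag
        return self.alpha_f >= self.shaker_alpha, self.alpha_f >= self.pusher_alpha
```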

    Robust Attitude Control of an Agile Aircraft Using Improved Q-Learning

    Get PDF
    Attitude control of a novel regional truss-braced wing (TBW) aircraft with low stability characteristics is addressed in this paper using Reinforcement Learning (RL). In recent years, RL has been increasingly employed in challenging applications, particularly in autonomous flight control. However, a significant predicament confronting discrete RL algorithms is the dimension limitation of the state-action table and the difficulty of defining the elements of the RL environment. To address these issues, a detailed mathematical model of the aircraft is first developed to shape an RL environment. Subsequently, Q-learning, the most prevalent discrete RL algorithm, is implemented in both the Markov Decision Process (MDP) and Partially Observable Markov Decision Process (POMDP) frameworks to control the longitudinal mode of the proposed aircraft. In order to eliminate the residual fluctuations that result from discrete action selection, and simultaneously track variable pitch angles, a Fuzzy Action Assignment (FAA) method is proposed to generate continuous control commands from the trained optimal Q-table. Accordingly, it is shown that by defining a comprehensive reward function based on dynamic behavior considerations, along with observing all crucial states (equivalent to satisfying the Markov property), the air vehicle is capable of tracking the desired attitude in the presence of different uncertain dynamics, including measurement noise, atmospheric disturbances, actuator faults, and model uncertainties, with the performance of the introduced control system surpassing that of a well-tuned Proportional–Integral–Derivative (PID) controller.
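    The key idea is turning a trained discrete Q-table into continuous control commands. The following is a minimal sketch of that step; the softmax-style blending used here is an illustrative stand-in for the paper's Fuzzy Action Assignment method, whose exact membership functions are not reproduced, and all names and values are assumptions.

```python
import numpy as np

def continuous_command(q_table, state_idx, action_values, temperature=1.0):
    """Blend discrete actions into one continuous command for a given state.

    q_table       : (n_states, n_actions) array of learned Q-values
    state_idx     : index of the current (discretized) state
    action_values : physical command associated with each discrete action,
                    e.g. elevator deflections in degrees (assumed mapping)
    """
    q = q_table[state_idx]
    # Normalized weights over Q-values: better-valued actions contribute more
    w = np.exp((q - q.max()) / temperature)
    w /= w.sum()
    return float(np.dot(w, action_values))

# Example: three hypothetical elevator deflections and one state's Q-values
q_table = np.array([[0.2, 1.5, 0.9]])
action_values = np.array([-5.0, 0.0, 5.0])   # deg, illustrative
print(continuous_command(q_table, 0, action_values))
```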

    Social Entrepreneurship Collaboratory: (SE Lab): A University Incubator for a Rising Generation of Leading Social Entrepreneurs

    Get PDF
    How can universities help create, develop and sustain a rising generation of social entrepreneurs and their ideas? What new forms of learning environments successfully integrate theory and practice? What conditions best support university students interested in studying, participating in, creating and developing social change organizations, thinking through their ideas, and connecting with their inspiration? What is the intellectual content, and the rationale, for a curriculum addressing this at a university?

    Volume 14, no. 1 (Spring 2009)

    Get PDF

    Reinforcement Learning to Control Lift Coefficient Using Distributed Sensors on a Wind Tunnel Model

    Get PDF
    Arrays of sensors distributed on the wing of fixed-wing vehicles can provide information not directly available to conventional sensor suites. These arrays of sensors have the potential to improve flight control and overall flight performance of small fixed-wing uninhabited aerial vehicles (UAVs). This work investigated the feasibility of estimating and controlling aerodynamic coefficients using the experimental readings of distributed pressure and strain sensors across a wing. The study was performed on a one-degree-of-freedom pitch model of a fixed-wing platform instrumented with the distributed sensing system. A series of reinforcement learning (RL) agents were trained in simulation for lift coefficient control, then validated in wind tunnel experiments. The performance of RL-based controllers with different sets of inputs in the observation space was compared across controllers and against a manually tuned PID controller. Results showed that hybrid RL agents that used both distributed sensing data and conventional sensors performed best across the different tests.
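    A sketch of how such a "hybrid" observation vector might be assembled for the RL agent, concatenating distributed pressure and strain readings with conventional measurements. The sensor counts, scaling, and signal names are assumptions for illustration, not the experimental configuration reported in the paper.

```python
import numpy as np

N_PRESSURE, N_STRAIN = 16, 8   # hypothetical array sizes

def build_observation(pressure_taps, strain_gauges,
                      pitch_angle, pitch_rate, airspeed, cl_reference):
    """Concatenate distributed and conventional sensing into one policy input."""
    obs = np.concatenate([
        np.asarray(pressure_taps) / 1000.0,   # Pa -> kPa, rough normalization (assumed)
        np.asarray(strain_gauges) * 1e6,      # strain -> microstrain (assumed)
        [pitch_angle, pitch_rate, airspeed, cl_reference],
    ])
    return obs.astype(np.float32)

obs = build_observation(np.zeros(N_PRESSURE), np.zeros(N_STRAIN),
                        pitch_angle=0.05, pitch_rate=0.0, airspeed=18.0,
                        cl_reference=0.6)
print(obs.shape)   # (28,): the observation dimension seen by the RL policy
```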

    Deep Reinforcement Learning Attitude Control of Fixed-Wing UAVs Using Proximal Policy Optimization

    Full text link
    Contemporary autopilot systems for unmanned aerial vehicles (UAVs) are far more limited in their flight envelope than experienced human pilots, thereby restricting the conditions UAVs can operate in and the types of missions they can accomplish autonomously. This paper proposes a deep reinforcement learning (DRL) controller to handle the nonlinear attitude control problem, enabling extended flight envelopes for fixed-wing UAVs. A proof-of-concept controller using the proximal policy optimization (PPO) algorithm is developed, and is shown to be capable of stabilizing a fixed-wing UAV from a large set of initial conditions to reference roll, pitch and airspeed values. The training process is outlined and key factors for its progression rate are considered, with the most important factor found to be limiting the number of variables in the observation vector, and including values for several previous time steps for these variables. The trained reinforcement learning (RL) controller is compared to a proportional-integral-derivative (PID) controller, and is found to converge in more cases than the PID controller, with comparable performance. Furthermore, the RL controller is shown to generalize well to unseen disturbances in the form of wind and turbulence, even in severe disturbance conditions.
    Comment: 11 pages, 3 figures, 2019 International Conference on Unmanned Aircraft Systems (ICUAS)
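    A minimal sketch of this kind of training setup, using the off-the-shelf PPO implementation in stable-baselines3 and a Gymnasium environment. The toy pitch dynamics, reward, limits, and gains below are illustrative assumptions standing in for the paper's full nonlinear fixed-wing model.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class PitchAttitudeEnv(gym.Env):
    """Toy environment: track a commanded pitch angle with an elevator command."""

    def __init__(self):
        # Observation: [pitch error, pitch rate, previous elevator command]
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
        # Action: normalized elevator command in [-1, 1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.dt = 0.02

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.theta = self.np_random.uniform(-0.5, 0.5)      # rad, random initial pitch
        self.q = self.np_random.uniform(-0.2, 0.2)          # rad/s, pitch rate
        self.theta_ref = self.np_random.uniform(-0.3, 0.3)  # rad, commanded pitch
        self.prev_u, self.steps = 0.0, 0
        return self._obs(), {}

    def step(self, action):
        u = float(np.clip(action[0], -1.0, 1.0))
        # Toy second-order pitch dynamics (assumed, not the paper's model)
        q_dot = -2.0 * self.q - 4.0 * self.theta + 6.0 * u
        self.q += q_dot * self.dt
        self.theta += self.q * self.dt
        self.prev_u, self.steps = u, self.steps + 1
        err = self.theta_ref - self.theta
        reward = -abs(err) - 0.01 * abs(u)     # penalize tracking error and control effort
        terminated = abs(self.theta) > 1.5     # diverged
        truncated = self.steps >= 500
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.array([self.theta_ref - self.theta, self.q, self.prev_u], dtype=np.float32)

model = PPO("MlpPolicy", PitchAttitudeEnv(), verbose=0)
model.learn(total_timesteps=50_000)   # short run, for illustration only
```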

    Spartan Daily December 5, 2011

    Get PDF
    Volume 137, Issue 50
    https://scholarworks.sjsu.edu/spartandaily/1104/thumbnail.jp

    Development and Deployment of a Dynamic Soaring Capable UAV using Reinforcement Learning

    Get PDF
    Dynamic soaring (DS) is a bio-inspired flight maneuver in which energy can be gained by flying through regions of vertical wind gradient such as the wind shear layer. With reinforcement learning (RL), a fixed-wing unmanned aerial vehicle (UAV) can be trained to perform DS maneuvers optimally for a variety of wind shear conditions. To accomplish this task, a six-degrees-of-freedom (6DoF) flight simulation environment in MATLAB and Simulink has been developed, based on an off-the-shelf unmanned aerobatic glider. A combination of high-fidelity Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) in ANSYS Fluent and a low-fidelity vortex lattice method (VLM) in Surfaces was employed to build a complete aerodynamic model of the UAV. Deep deterministic policy gradient (DDPG), an actor-critic RL algorithm, was used to train a closed-loop Path Following (PF) agent and an Unguided Energy-Seeking (UES) agent. Several generations of the PF agent were presented, with the final generation capable of controlling the climb and turn rate of the UAV to follow a closed-loop waypoint path with variable altitude; it must be paired with a waypoint-optimizing agent to perform loitering DS. The UES agent was designed to perform traveling DS in a fixed wind shear condition. During training it was shown to extract energy from the wind shear to extend flight time, but it did not achieve sustained dynamic soaring. Further RL training is required for both agents. Recommendations on how to deploy an RL agent on a physical UAV are discussed.
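    For an energy-seeking agent, a natural reward signal is the change in total specific energy (potential plus kinetic) per time step. The sketch below is a common dynamic-soaring heuristic written for illustration; it is an assumption, not the exact reward used in the thesis.

```python
G = 9.81  # m/s^2

def specific_energy(altitude, airspeed):
    """Total specific energy height: h + V^2 / (2 g), in metres."""
    return altitude + airspeed**2 / (2.0 * G)

def energy_reward(state, next_state, dt):
    """Reward the rate at which the glider gains energy from the wind shear."""
    e0 = specific_energy(state["altitude"], state["airspeed"])
    e1 = specific_energy(next_state["altitude"], next_state["airspeed"])
    return (e1 - e0) / dt

# Example: climbing 1 m while holding airspeed over 0.5 s gives a positive reward
s0 = {"altitude": 50.0, "airspeed": 15.0}
s1 = {"altitude": 51.0, "airspeed": 15.0}
print(energy_reward(s0, s1, dt=0.5))   # 2.0 m/s of energy-height gain
```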
    • …