55 research outputs found

    Design of a DDP controller for autonomous autorotative landing of RW UAV following engine failure

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, April 2016.
    A Rotary Wing Unmanned Aerial Vehicle (RW UAV) platform and its payload of sophisticated sensors are costly items; a RW UAV in the 500 kg class designed to fulfil a number of missions represents a considerable capital outlay for any customer. In the event of an engine failure, a means should therefore be provided to return the craft safely to the ground without incurring damage or endangering the surrounding area. The aim of this study was to design a controller for autorotative landing of a RW UAV following engine failure. The controller was built on an acceleration model obtained from a study by Stanford University. The FLTSIM helicopter flight simulation package yielded the RW UAV response data needed for the autorotation regimes, and this data was used to identify the unknown parameters of the acceleration model. A Differential Dynamic Programming (DDP) control algorithm was designed to compute the main and tail rotor collective pitch and the longitudinal and lateral cyclic pitch inputs required to land the craft safely. The results were compared against the FLTSIM flight simulation response data. The mathematical model could not accurately capture the pitch dynamics, but the main rotor dynamics were modelled satisfactorily; these are important in autorotation because, without engine power, the energy stored in the main rotor is critical to a successful autorotative landing. Stanford University designed a controller for an RC helicopter, the XCell Tempest, which was deemed successful.
However, the DDP controller here was designed for autonomous autorotative landing of a 560 kg RW UAV following engine failure. The DDP controller is able to guide the RW UAV through an autorotative landing, but the study should be taken further to improve certain aspects, such as the pitch dynamics, possibly through online parameter estimation.
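
    The backward-pass/forward-pass structure of a DDP controller like the one described above can be illustrated on a toy problem. The sketch below applies the DDP recursions to a discrete double integrator with quadratic cost, where the backward pass reduces to the exact LQR Riccati recursion; the dynamics, weights, and horizon are illustrative assumptions, not the thesis's 560 kg helicopter model.

```python
import numpy as np

# Toy DDP/iLQR sketch on a discrete double integrator (illustrative only).
# Dynamics: x_{k+1} = A x_k + B u_k; cost: sum(x'Qx + u'Ru) + terminal x'Qf x.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])
Qf = np.diag([100.0, 10.0])
N = 50  # horizon length

def backward_pass():
    """Backward pass: propagate the quadratic value function V(x) = x'Px
    from the terminal cost, computing a feedback gain K_k at each step.
    For linear dynamics and quadratic cost this is exact (LQR/Riccati)."""
    P = Qf
    gains = []
    for _ in range(N):
        Qxx = Q + A.T @ P @ A          # Q-function second derivatives
        Quu = R + B.T @ P @ B
        Qux = B.T @ P @ A
        K = np.linalg.solve(Quu, Qux)  # feedback gain for this step
        P = Qxx - Qux.T @ K            # Riccati/value-function update
        gains.append(K)
    return gains[::-1]                 # reorder from initial time to terminal

def forward_pass(x0, gains):
    """Forward pass: roll the feedback policy u = -K x through the dynamics."""
    x, traj = x0, [x0]
    for K in gains:
        u = -K @ x
        x = A @ x + B @ u
        traj.append(x)
    return traj

gains = backward_pass()
traj = forward_pass(np.array([1.0, 0.0]), gains)
```

    In the full nonlinear setting the backward pass instead linearizes the dynamics around the current trajectory and the two passes iterate until convergence; the structure above is the fixed point of that iteration.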

    Handling Qualities Assessment of a Pilot Cueing System for Autorotation Maneuvers

    This paper details the design and limited flight testing of a preliminary system for visual pilot cueing during autorotation maneuvers. The cueing system is based on a fully-autonomous, multi-phase autorotation control law that has been shown to successfully achieve autonomous autorotation landing in unmanned helicopters. To transition this control law to manned systems, it is employed within a cockpit display to drive visual markers which indicate desired collective pitch and longitudinal cyclic positions throughout the entire maneuver, from autorotation entry to touchdown. A series of simulator flight experiments performed at the University of Liverpool's HELIFLIGHT-R simulator are documented, in which pilots attempt autorotation with and without the pilot cueing system in both good and degraded visual environments. Performance of the pilot cueing system is evaluated based on both subjective pilot feedback and objective measurements of landing survivability metrics, demonstrating suitable preliminary performance of the system.

    A Flight Dynamics Model for a Small-Scale Flybarless Helicopter


    Fuzzy and neural control

    Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.
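
    As a concrete illustration of the knowledge-based controllers the chapter describes, the sketch below implements a three-rule, Sugeno-style fuzzy controller with triangular membership functions. The rule base, universe of discourse, and consequent values are illustrative assumptions, not examples taken from the chapter.

```python
# Minimal Sugeno-style fuzzy controller sketch (illustrative assumptions).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    """Three rules: IF error is negative THEN push positive, IF zero THEN
    hold, IF positive THEN push negative. The crisp output is the
    firing-strength-weighted average of the rule consequents."""
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    weights = [mu_neg, mu_zero, mu_pos]
    actions = [1.0, 0.0, -1.0]  # crisp consequent singletons
    total = sum(weights)
    if total == 0.0:
        return -1.0 if error > 0 else 1.0  # saturate outside the universe
    return sum(w * a for w, a in zip(weights, actions)) / total
```

    A hybrid neuro-fuzzy scheme in the chapter's sense would treat the membership parameters and consequents above as tunable weights to be refined by learning.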

    RETROSPECTIVE AND EXPLORATORY ANALYSES FOR ENHANCING THE SAFETY OF ROTORCRAFT OPERATIONS

    From recent safety reports, accident rates associated with helicopter operations have plateaued and even show an increasing trend. More attention needs to be directed to this domain, and it has been suggested to expand the use of on-board flight data recorders for monitoring operations. With the expected growth of flight data records in the coming years, it is essential to conduct analyses and provide the findings to operators for risk mitigation. In this thesis, a retrospective analysis is proposed to detect potential anomalies in flight data from rotorcraft operations. An algorithm is developed to detect the phases of flight, segmenting each flight into homogeneous entities. Anomaly detection is then performed on flight segments within the same flight phase, implemented through a sequential approach. Aside from the retrospective analysis, an exploratory analysis aims to efficiently find the safety envelope and predict the recovery actions for a hazardous event. To facilitate exploration of the corresponding operational space, a framework consisting of surrogate modeling and design of experiments is provided. Autorotation, a maneuver used to land the vehicle after power loss, is treated as a use case to test and validate the proposed framework.
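
    The two-step idea described above (detect flight phases, then look for anomalies only among segments of the same phase) can be sketched as follows. The vertical-speed thresholds and the simple z-score detector are simplifying assumptions for illustration, not the thesis's algorithm.

```python
from collections import defaultdict
from statistics import mean, pstdev

def label_phases(vertical_speeds, climb_thresh=2.0):
    """Label each sample climb/cruise/descent from vertical speed (m/s).
    The threshold is an assumed value for illustration."""
    labels = []
    for vs in vertical_speeds:
        if vs > climb_thresh:
            labels.append("climb")
        elif vs < -climb_thresh:
            labels.append("descent")
        else:
            labels.append("cruise")
    return labels

def segment(labels):
    """Collapse runs of identical labels into (phase, start, end) segments,
    yielding the homogeneous entities on which detection is performed."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))
            start = i
    return segs

def anomalous_segments(values, segs, z=2.0):
    """Flag segments whose mean value is a z-score outlier relative to
    other segments of the same phase (a stand-in detector)."""
    by_phase = defaultdict(list)
    for phase, a, b in segs:
        by_phase[phase].append((phase, a, b, mean(values[a:b])))
    flagged = []
    for phase, items in by_phase.items():
        means = [m for *_, m in items]
        mu, sd = mean(means), pstdev(means)
        if sd == 0:
            continue
        for seg in items:
            if abs(seg[3] - mu) > z * sd:
                flagged.append(seg[:3])
    return flagged
```

    Comparing segments only within their own phase avoids flagging, say, a normal descent as anomalous merely because it differs from cruise.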

    Certification Considerations for Adaptive Systems

    Advanced capabilities planned for the next generation of aircraft, including those that will operate within the Next Generation Air Transportation System (NextGen), will necessarily include complex new algorithms and non-traditional software elements. These aircraft will likely incorporate adaptive control algorithms that will provide enhanced safety, autonomy, and robustness during adverse conditions. Unmanned aircraft will operate alongside manned aircraft in the National Airspace System (NAS), with intelligent software performing the high-level decision-making functions normally performed by human pilots. Even human-piloted aircraft will necessarily include more autonomy. However, there are serious barriers to the deployment of new capabilities, especially those based upon software, including adaptive control (AC) and artificial intelligence (AI) algorithms. Current civil aviation certification processes are based on the idea that the correct behavior of a system must be completely specified and verified prior to operation. This report by Rockwell Collins and SIFT documents our comprehensive study of the state of the art in intelligent and adaptive algorithms for the civil aviation domain, categorizing the approaches used and identifying gaps and challenges associated with certification of each approach.

    Learning and Evolving Flight Controller for Fixed-Wing Unmanned Aerial Systems

    Artificial intelligence has been called the fourth wave of industrialization, following steam power, electricity, and computation. The field of aerospace engineering has been significantly impacted by this revolution, presenting the potential to build neural network-based high-performance autonomous flight systems. This work presents a novel application of machine learning technology to develop evolving neural network controllers for fixed-wing unmanned aerial systems. The hypothesis that an artificial neural network can replace a physics-based autopilot system consisting of guidance, navigation, and control, or a combination of these, is evaluated and proven through empirical experiments. Building upon widely used supervised learning methods and their variants, labeled data is generated by leveraging non-zero set-point linear quadratic regulator (LQR) based autopilot systems to train neural network models, thereby developing a novel imitation learning algorithm. The ultimate goal of this research is to build a robust learning flight controller using a low-cost, engineering-level aircraft dynamic model, with the ability to evolve over time. After discovering the limitations of supervised learning methods, reinforcement learning techniques are employed to learn directly from data, breaking feedback correlations and dynamic-model dependence for the control system. This results in a policy-based neural network controller that is robust to un-modeled dynamics and uncertainty in the aircraft dynamic model. To fundamentally change flight controller tuning practices, a unique evolution methodology is developed that directly uses flight data from a real aircraft, the factual dynamic states and the rewards associated with them, to re-train a neural network controller. This work has the following unique contributions:
    1. Novel imitation learning algorithms that mimic "expert" policy decisions using data aggregation are developed, which allow for unification of guidance and control algorithms into a single loop using artificial neural networks.
    2. A time-based, dynamic-model-dependent moving-window data aggregation algorithm is developed to accurately capture aircraft transient behavior and to mitigate neural network over-fitting, which caused low-amplitude, low-frequency oscillations in control predictions.
    3. Because imitation learning algorithms depend substantially on "expert" policies and physics-based flight controllers, reinforcement learning is used to train neural network controllers directly from data. Although the neural network controller was trained using an engineering-level dynamic model of the aircraft with low fidelity at low Reynolds numbers, it generalizes the control policy in a series of flight tests and remains robust to external disturbances (crosswind, gusts, etc.).
    4. In addition to extensive hardware-in-the-loop simulations, this work was validated by actual flight tests on a foam-based, pusher, twin-boom Skyhunter aircraft.
    5. The reliability and consistency of the longitudinal neural network controller is validated in 15 distinct flight tests, spread over a period of 5 months (November 2019 to March 2020) and consisting of 21 different flight scenarios. Automatic flight missions are deployed to conduct a fair comparison of the linear quadratic regulator and neural network controllers.
    6. An evolution technique is developed to re-train artificial neural network flight controllers directly from flight data, mitigating dependence on aircraft dynamic models, using a modified Deep Deterministic Policy Gradients algorithm implemented in TensorFlow.
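
    The data-aggregation imitation loop in the first two contributions can be sketched in miniature: roll out the current policy, label the visited states with the expert's actions, aggregate, and refit. Here a linear least-squares "policy" imitates an LQR-like expert on a toy double integrator; the thesis uses neural networks and an aircraft model, so every system, gain, and hyperparameter below is an illustrative stand-in.

```python
import numpy as np

# DAgger-style data-aggregation sketch (toy stand-in, not the thesis setup).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K_expert = np.array([[2.0, 2.5]])  # assumed stabilizing "expert" gain

def expert(x):
    """LQR-like expert policy u = -K x."""
    return float(-K_expert @ x)

def rollout(policy, x0, steps=30):
    """Roll a policy through the dynamics, returning the visited states."""
    x, xs = np.array(x0, float), []
    for _ in range(steps):
        xs.append(x.copy())
        x = A @ x + B.flatten() * policy(x)
    return xs

def dagger(iters=5):
    """Aggregate expert-labeled states across rollouts and refit the learner."""
    data_x, data_u = [], []
    w = np.zeros(2)       # learner: linear policy u = w @ x
    policy = expert       # first rollout is driven by the expert
    for _ in range(iters):
        for x in rollout(policy, [1.0, 0.0]):
            data_x.append(x)             # aggregate states visited...
            data_u.append(expert(x))     # ...labeled with expert actions
        w, *_ = np.linalg.lstsq(np.array(data_x), np.array(data_u), rcond=None)
        policy = lambda x, w=w: float(w @ x)  # next rollout uses the learner
    return w

w = dagger()  # for a linear expert, the learner recovers -K_expert
```

    Labeling the learner's own visited states, rather than only the expert's trajectory, is what mitigates the compounding-error problem that plain behavior cloning suffers from.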

    A review of safe online learning for nonlinear control systems

    Learning for autonomous dynamic control systems that can adapt to unforeseen environmental changes is of great interest, but the realisation of a practical and safe online learning algorithm is incredibly challenging. This paper highlights some of the main approaches for safe online learning of stabilisable nonlinear control systems, with a focus on safety certification for stability. We categorise a non-exhaustive list of salient techniques, concentrating on traditional control theory as opposed to reinforcement learning and approximate dynamic programming. The paper also provides a simplified overview of techniques as an introduction to the field. To our knowledge, it is the first to compare the key attributes and advantages of each technique in one place.
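
    A minimal sketch of the safety-certification idea the review surveys: a learned action is accepted only if it certifies a one-step decrease of a known Lyapunov function, and otherwise a verified nominal controller takes over. The scalar system, Lyapunov function, and both controllers below are illustrative assumptions, not any specific method from the paper.

```python
# Lyapunov-based safety filter sketch (all models are illustrative assumptions).
def lyapunov(x):
    """Known Lyapunov function V(x) = x^2 for the assumed scalar system."""
    return x * x

def step(x, u):
    """Assumed scalar dynamics: x_next = 0.9 x + 0.5 u."""
    return 0.9 * x + 0.5 * u

def nominal(x):
    """Verified fallback controller (closed loop 0.7 x, stable)."""
    return -0.4 * x

def safe_filter(x, learned_u):
    """Accept the learned action only if it decreases V one step ahead;
    otherwise fall back to the nominal controller."""
    if lyapunov(step(x, learned_u)) < lyapunov(x):
        return learned_u
    return nominal(x)
```

    With this filter in the loop, an online learner can explore freely: a destabilising suggestion (e.g. `safe_filter(1.0, 2.0)`) is overridden by the fallback, while a stabilising one (e.g. `safe_filter(1.0, -0.5)`) passes through unchanged.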

    Optimization-based Estimation and Control Algorithms for Quadcopter Applications
