    Computer simulation of a pilot in V/STOL aircraft control loops

    The objective was to develop a computerized adaptive pilot model for the computer model of the research aircraft, the Harrier II AV-8B V/STOL, with special emphasis on propulsion control. Two versions of the adaptive pilot are given. The first, called the Adaptive Control Model (ACM) of a pilot, includes a parameter estimation algorithm for the parameters of the aircraft and an adaptation scheme based on the root locus of the poles of the pilot-controlled aircraft. The second, called the Optimal Control Model (OCM) of the pilot, includes an adaptation algorithm and an optimal control algorithm. These computer simulations were developed as part of the ongoing research program in pilot model simulation supported by NASA Lewis from April 1, 1985 to August 30, 1986 under NASA Grant NAG 3-606 and from September 1, 1986 through November 30, 1988 under NASA Grant NAG 3-729. Once installed, these pilot models allowed the simulated pilot to close all of the control loops normally closed by a pilot manipulating the control variables. The current version has permitted a baseline comparison of various qualitative and quantitative performance indices for propulsion control, the control loops, and the workload on the pilot. Actual data for an aircraft flown by a human pilot, furnished by NASA, was compared to the outputs of the computerized pilot, and the agreement was found to be favorable.
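    The adaptive loop described here pairs online parameter estimation with pole-placement-style gain adjustment. The following Python sketch illustrates that pairing on a deliberately simplified first-order aircraft channel; the plant, the recursive-least-squares estimator, and the target pole are illustrative assumptions, not the ACM itself.

```python
# Minimal sketch (not the NASA ACM) of the two ingredients the abstract names:
# online estimation of a simplified aircraft model, plus a gain-adaptation rule
# that keeps the pilot-plus-aircraft pole at a chosen location.
import numpy as np

# Assumed discrete-time aircraft channel: y[k+1] = a*y[k] + b*u[k]
a_true, b_true = 0.95, 0.10          # "unknown" plant parameters
theta = np.array([0.5, 0.5])         # RLS estimate of [a, b]
P = np.eye(2) * 100.0                # RLS covariance
target_pole = 0.6                    # desired closed-loop pole (assumption)

y, y_prev, u_prev = 0.0, 0.0, 0.0
reference = 1.0                      # commanded output
rng = np.random.default_rng(0)

for k in range(200):
    # --- parameter estimation (recursive least squares) ---
    phi = np.array([y_prev, u_prev])             # regressor
    err = y - phi @ theta                        # prediction error
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * err
    P = P - np.outer(gain, phi @ P)
    a_hat, b_hat = theta

    # --- adaptation: place the closed-loop pole a - b*Kp at the target ---
    Kp = (a_hat - target_pole) / b_hat if abs(b_hat) > 1e-3 else 0.0

    # --- "pilot" closes the loop on the tracking error ---
    u = Kp * (reference - y)
    y_prev, u_prev = y, u
    y = a_true * y + b_true * u + 0.01 * rng.standard_normal()

print(f"estimated [a, b] = {theta.round(3)}, pilot gain Kp = {Kp:.3f}")
```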

    A Review of Traffic Signal Control.

    The aim of this paper is to provide a starting point for future research within the SERC-sponsored project "Gating and Traffic Control: The Application of State Space Control Theory". It provides an introduction to State Space Control Theory, an overview of state space applications in transportation in general, an in-depth review of congestion control (specifically traffic signal control in congested situations), a review of theoretical work, and a review of existing systems, and concludes with recommendations for the research to be undertaken within this project.

    Disturbance Observer-based Robust Control and Its Applications: 35th Anniversary Overview

    The Disturbance Observer has been one of the most widely used robust control tools since it was proposed in 1983. This paper introduces the origins of the Disturbance Observer and surveys the major results on Disturbance Observer-based robust control over the last thirty-five years. Furthermore, it explains the analysis and synthesis techniques of Disturbance Observer-based robust control for linear and nonlinear systems within a unified framework. The last section presents concluding remarks on Disturbance Observer-based robust control and its engineering applications.
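    For orientation, the sketch below shows the classical disturbance-observer idea in its simplest form: reconstruct the lumped disturbance by inverting a nominal plant model through a low-pass Q filter and cancel it at the plant input. The first-order plant, the filter bandwidth, and the outer proportional controller are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a disturbance observer (DOB) on a first-order plant.
import numpy as np

dt = 1e-3
a_n, b_n = 2.0, 1.0            # nominal plant: y' = -a_n*y + b_n*(u + d)
tau_q = 0.02                   # Q-filter time constant (DOB bandwidth)
Kp = 5.0                       # simple outer-loop proportional controller

y = 0.0
d_hat = 0.0                    # low-pass filtered disturbance estimate
ref = 1.0

for k in range(5000):
    t = k * dt
    d = 0.5 if t > 2.0 else 0.0          # step disturbance at t = 2 s

    u_c = Kp * (ref - y)                 # outer controller command
    u = u_c - d_hat                      # DOB compensation at plant input

    # plant (Euler integration); real plant matches the nominal model here
    y_next = y + dt * (-a_n * y + b_n * (u + d))

    # disturbance reconstruction: invert the nominal model, then Q-filter
    y_dot_est = (y_next - y) / dt
    u_equivalent = (y_dot_est + a_n * y) / b_n   # input that explains y
    d_raw = u_equivalent - u                     # raw disturbance estimate
    d_hat += (dt / tau_q) * (d_raw - d_hat)      # first-order Q filter

    y = y_next

print(f"steady output ~ {y:.3f}, estimated disturbance ~ {d_hat:.3f}")
```

    With the Q filter fast enough, the estimate converges to the step disturbance and the compensation restores the nominal loop behaviour, which is the robustness property the survey analyses in general settings.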

    Optimal adaptive control of time-delay dynamical systems with known and uncertain dynamics

    Delays are found in many industrial pneumatic and hydraulic systems, and as a result the performance of the overall closed-loop system deteriorates unless they are explicitly accounted for. It is also possible that the dynamics of such systems are uncertain. Optimal control of time-delay systems in the presence of known and uncertain dynamics, using state and output feedback, is therefore of paramount importance. In this research, a suite of novel optimal adaptive control (OAC) techniques is developed for linear and nonlinear continuous time-delay systems in the presence of uncertain system dynamics, using state and/or output feedback. First, the optimal regulation of linear continuous-time systems with state and input delays is addressed by minimizing a quadratic cost function over an infinite horizon, using state and output feedback. Next, the optimal adaptive regulation is extended to uncertain linear continuous-time systems under the mild assumption that bounds on the system matrices are known. Subsequently, the event-triggered optimal adaptive regulation of partially unknown linear continuous-time systems with state delay is addressed by using integral reinforcement learning (IRL). It is demonstrated that the optimal control policy renders the closed-loop system asymptotically stable provided the linear time-delayed system is controllable and observable. The proposed event-triggered approach relaxes the need for continuous availability of the state vector and is proven to be Zeno-free. Finally, OAC of uncertain nonlinear time-delay systems with input and state delays is investigated using IRL-based neural network control. An identifier is proposed for nonlinear time-delay systems to approximate the system dynamics and relax the need for the control coefficient matrix in generating the control policy. Lyapunov analysis is utilized to design the optimal adaptive controller, derive the parameter/weight tuning laws, and verify stability of the closed-loop system. --Abstract, page iv
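    As a rough orientation to the IRL machinery mentioned above, the sketch below shows the model-based policy iteration that integral reinforcement learning approximates from trajectory data, here for a delay-free continuous-time LQR problem. The double-integrator plant, the cost weights, and the omission of delays are simplifying assumptions, not the setup of the thesis.

```python
# Minimal sketch of the policy-iteration backbone behind IRL for
# continuous-time LQR: alternately evaluate the current linear policy
# (a Lyapunov equation) and improve it. IRL performs the evaluation step
# from measured trajectories instead of requiring the drift dynamics.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # assumed plant: double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                         # state cost
R = np.array([[1.0]])                 # control cost

K = np.array([[1.0, 1.0]])            # initial stabilizing policy u = -K x

for i in range(20):
    # policy evaluation: solve (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
    Acl = A - B @ K
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # policy improvement: K <- R^{-1} B^T P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-9:
        break
    K = K_new

# sanity check against the algebraic Riccati solution
P_are = solve_continuous_are(A, B, Q, R)
print("policy-iteration gain:", K.round(4))
print("Riccati gain:         ", np.linalg.solve(R, B.T @ P_are).round(4))
```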