
    Stable Adaptive Control Using New Critic Designs

    Classical adaptive control proves total-system stability for control of linear plants, but only for plants meeting very restrictive assumptions. Approximate Dynamic Programming (ADP) has the potential, in principle, to ensure stability without such tight restrictions. It also offers nonlinear and neural extensions for optimal control, with empirically supported links to what is seen in the brain. However, the relevant ADP methods in use today -- TD, HDP, DHP, GDHP -- and the Galerkin-based versions of these all have serious limitations when used here as parallel distributed real-time learning systems; either they do not possess quadratic unconditional stability (to be defined) or they lead to incorrect results in the stochastic case. (ADAC or Q-learning designs do not help.) After explaining these conclusions, this paper describes new ADP designs which overcome these limitations. It also addresses the Generalized Moving Target problem, a common family of static optimization problems, and describes a way to stabilize large-scale economic equilibrium models, such as the old long-term energy model of the DOE.
    Comment: Includes general reviews of alternative control technologies and reinforcement learning. 4 figures, >70 pages, >200 equations. Implementation details, stability analysis. Included in 9/24/98 patent disclosure. PDF version uploaded in 2012, based on a direct conversion of the original Word/HTML file, because of format compatibility issues.
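
    As background for the critic designs named in the abstract, here is a minimal sketch of an HDP/TD(0)-style critic update for a linear plant. The plant matrices, feedback gain, quadratic utility, feature map, and learning rate below are illustrative assumptions for the sketch, not details taken from the paper, which analyzes when such updates fail to be quadratically unconditionally stable.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear plant x_{t+1} = A x + B u with a fixed feedback gain K (assumed stabilizing).
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    K = np.array([[0.5, 1.0]])

    gamma = 0.95   # discount factor
    alpha = 0.01   # critic learning rate

    def utility(x, u):
        """One-step cost U(x, u) = x'x + 0.1 u'u (quadratic, assumed)."""
        return (x.T @ x + 0.1 * u.T @ u).item()

    def features(x):
        """Quadratic features, so the critic J(x) = w'phi(x) can represent a quadratic cost-to-go."""
        x1, x2 = x.ravel()
        return np.array([x1 * x1, x1 * x2, x2 * x2])

    w = np.zeros(3)  # critic weights

    for episode in range(200):
        x = rng.normal(size=(2, 1))
        for t in range(50):
            u = -K @ x
            x_next = A @ x + B @ u
            # HDP/TD(0) critic target: U(x, u) + gamma * J(x_next)
            target = utility(x, u) + gamma * features(x_next) @ w
            td_error = target - features(x) @ w
            w += alpha * td_error * features(x)  # gradient step on the squared TD error
            x = x_next

    print("learned critic weights:", w)
    ```

    The update above treats the bootstrapped target as fixed when taking the gradient, which is the standard incremental HDP/TD critic step; the paper's concern is precisely the stability properties of this class of update when run as a real-time learning system.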