
Adaptive Control with Approximated Policy Search Approach

Abstract

Most existing adaptive control schemes are designed to minimize the error between the plant state and the goal state, even though executing only those actions predicted to yield smaller errors can lead the system to non-goal states. We develop an adaptive control scheme that adjusts a controller of a general type to improve its performance as measured by an evaluation function. The method is closely related to the theory of Reinforcement Learning (RL) but imposes a practical assumption for faster learning: we assume that the RL value function can be approximated by a function of the Euclidean distance from the goal state and the action executed at that state, and we propose to use this approximation as the evaluation function for gradient search. Simulation results from applying the proposed scheme to a pole-balancing problem with a linear state feedback controller and a fuzzy controller verify the scheme's efficacy.
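The following is a minimal sketch of the general idea described in the abstract: a linear state feedback controller for a cart-pole system whose gains are tuned by gradient search on a distance-based evaluation function. The cart-pole parameters, initial gains, action penalty weight, and the finite-difference update rule are assumptions made for illustration, not the paper's exact formulation.

import numpy as np

# --- Simplified cart-pole dynamics (standard equations; parameters assumed) ---
GRAVITY, CART_M, POLE_M, POLE_L, DT = 9.8, 1.0, 0.1, 0.5, 0.02

def step(state, force):
    x, x_dot, theta, theta_dot = state
    total_m = CART_M + POLE_M
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    temp = (force + POLE_M * POLE_L * theta_dot**2 * sin_t) / total_m
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_L * (4.0 / 3.0 - POLE_M * cos_t**2 / total_m))
    x_acc = temp - POLE_M * POLE_L * theta_acc * cos_t / total_m
    return state + DT * np.array([x_dot, x_acc, theta_dot, theta_acc])

GOAL = np.zeros(4)  # upright pole, centered cart

def evaluation(gains, horizon=200, x0=(0.0, 0.0, 0.1, 0.0)):
    # Approximate evaluation: accumulated Euclidean distance from the goal
    # state plus a small action penalty over a rollout (a stand-in for the
    # paper's distance- and action-based value approximation).
    state, cost = np.array(x0), 0.0
    for _ in range(horizon):
        action = -float(np.dot(gains, state - GOAL))   # linear state feedback
        state = step(state, np.clip(action, -10.0, 10.0))
        cost += np.linalg.norm(state - GOAL) + 1e-3 * action**2
    return cost

def gradient_search(gains, lr=1e-3, eps=1e-2, iters=100):
    # Finite-difference gradient descent on the controller gains
    # (an assumed, simple form of the gradient search step).
    for _ in range(iters):
        grad = np.zeros_like(gains)
        for i in range(len(gains)):
            d = np.zeros_like(gains)
            d[i] = eps
            grad[i] = (evaluation(gains + d) - evaluation(gains - d)) / (2 * eps)
        gains -= lr * grad
    return gains

if __name__ == "__main__":
    k = np.array([1.0, 1.0, 20.0, 2.0])   # initial feedback gains (assumed)
    print("evaluation before:", evaluation(k))
    k = gradient_search(k)
    print("evaluation after :", evaluation(k), "gains:", k)

Running the script tunes the four feedback gains so that the accumulated distance from the goal state over the rollout decreases; the fuzzy-controller variant mentioned in the abstract would use the same evaluation and search loop with a different parameterized controller.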
