4 research outputs found

    Evaluation-Function-based Model-free Adaptive Fuzzy Control

    Designs of adaptive fuzzy controllers (AFC) are commonly based on the Lyapunov approach, which requires a known model of the controlled plant. These designs must also choose a Lyapunov function candidate as the evaluation function to be minimized. This study addresses these drawbacks by designing a model-free adaptive fuzzy controller (MFAFC) that uses an approximate evaluation function defined in terms of the current state, the next state, and the control action. The MFAFC treats this approximate evaluation function as an evaluative measure of control performance, similar to the state-action value function in reinforcement learning. Simulation results from applying the MFAFC to the inverted pendulum benchmark verified the proposed scheme's efficacy.
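    As a rough illustrative sketch (not the paper's actual algorithm), adapting controller parameters by gradient search on an evaluation function built from rollout costs might look like the following. A toy unstable second-order plant stands in for the inverted pendulum, and a linear state-feedback law stands in for the fuzzy controller; all names and constants here are assumptions for illustration:

```python
import numpy as np

def step(s, a, dt=0.05):
    # Toy unstable plant (a crude stand-in for the inverted pendulum):
    # x'' = x + a, discretized with forward Euler.
    x, v = s
    return np.array([x + dt * v, v + dt * (x + a)])

def evaluate(theta, s0, horizon=60, r=0.001, dt=0.05):
    # Approximate evaluation function: accumulated squared state plus a
    # small control-effort penalty along a short rollout under gains theta.
    s, cost = s0.copy(), 0.0
    for _ in range(horizon):
        a = float(-theta @ s)
        s = step(s, a, dt)
        cost += float(s @ s) + r * a * a
    return cost

def adapt(theta, s0, lr=0.01, eps=1e-4):
    # Gradient search on the evaluation function via finite differences,
    # one component of theta at a time.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        grad[i] = (evaluate(tp, s0) - evaluate(tm, s0)) / (2 * eps)
    return theta - lr * grad

theta = np.array([2.0, 2.0])   # initial stabilizing state-feedback gains
s0 = np.array([0.3, 0.0])      # initial state offset from the origin
for _ in range(200):           # adaptation iterations
    theta = adapt(theta, s0)
```

    After adaptation, the rollout cost under the learned gains is lower than under the initial gains, and the closed loop drives the state toward the origin. The finite-difference gradient only needs sampled rollout costs, which is the sense in which such schemes avoid an analytic plant model.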

    Adaptive Control with Approximated Policy Search Approach

    Most existing adaptive control schemes are designed to minimize the error between the plant state and the goal state, even though executing actions predicted only to yield smaller errors can lead the system to non-goal states. We develop an adaptive control scheme that adjusts a controller of a general type to improve its performance as measured by an evaluation function. The method is closely related to the theory of Reinforcement Learning (RL) but imposes a practical assumption for faster learning: that an RL value function can be approximated by a function of the Euclidean distance from a goal state and the action executed at that state. We propose to use this approximation as the evaluation function for a gradient search. Simulation results from applying the proposed scheme to a pole-balancing problem, using a linear state-feedback controller and a fuzzy controller, verify the scheme's efficacy.
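    A minimal sketch of this distance-based idea (an illustration under assumed names and constants, not the paper's implementation): the evaluation function combines the squared Euclidean distance of the next state from the goal with an action-effort term, and a single feedback gain is tuned by gradient search on that function during closed-loop episodes. A scalar integrator plant keeps the example short:

```python
GOAL, DT, R = 2.0, 0.1, 0.001

def plant(x, a):
    # Simple integrator plant: the state moves by DT * action.
    return x + DT * a

def evaluation(x_next, a):
    # Approximate value: squared Euclidean distance of the next state
    # from the goal, plus a small action-effort penalty.
    return (x_next - GOAL) ** 2 + R * a ** 2

def grad_k(k, x, eps=1e-5):
    # Finite-difference gradient of the evaluation w.r.t. the feedback gain.
    def cost(kk):
        a = kk * (GOAL - x)
        return evaluation(plant(x, a), a)
    return (cost(k + eps) - cost(k - eps)) / (2 * eps)

k = 0.5                       # initial feedback gain
for _ in range(50):           # adaptation episodes
    x = 0.0
    for _ in range(40):       # one episode of closed-loop control
        k -= 1.0 * grad_k(k, x)
        x = plant(x, k * (GOAL - x))
```

    Because the evaluation here is (1 - DT*k)^2 * e^2 + R * k^2 * e^2 with e the goal error, the gradient search pushes k toward DT / (DT^2 + R), and the gain updates fade naturally as the error shrinks.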
