In order to realize online learning of a hybrid electric vehicle (HEV) control strategy, a fuzzy Q-learning (FQL) method is proposed in this paper. The FQL control strategy consists of two parts: an estimator network (QEN) for the optimal action-value function Q*(x,u), and fuzzy parameter tuning (FPT). A back-propagation (BP) neural network is applied as the QEN to estimate Q*(x,u). For the fuzzy controller, a Sugeno-type fuzzy inference system (FIS) is chosen, and the parameters of the FIS are tuned online based on Q*(x,u). An action exploration modifier (AEM) is introduced to guarantee that all actions are tried. The main advantage of the FQL control strategy is that it does not rely on prior information about future driving conditions and can self-tune the parameters of the fuzzy controller online. The FQL control strategy has been applied to an HEV, and simulation tests have been performed. Simulation results indicate that the parameters of the fuzzy controller are tuned online and that the FQL control strategy achieves good fuel economy.
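To make the scheme concrete, the following is a minimal sketch of the fuzzy Q-learning idea the abstract describes: a zero-order Sugeno-style FIS whose per-rule consequent weights are tuned online from a temporal-difference error on Q(x,u). This is an illustrative reconstruction, not the paper's implementation: the class and parameter names (`FuzzyQLearner`, `centers`, `alpha`, `gamma`) are assumptions, the BP-network QEN is replaced here by the rule-weight table itself, and a simple epsilon-greedy rule stands in for the paper's AEM.

```python
import numpy as np

class FuzzyQLearner:
    """Toy fuzzy Q-learning agent: Sugeno FIS with TD-tuned consequents.

    Illustrative sketch only; names and structure are assumptions,
    not taken from the paper being summarized.
    """

    def __init__(self, centers, actions, alpha=0.1, gamma=0.9,
                 epsilon=0.1, seed=0):
        self.centers = np.asarray(centers, dtype=float)  # membership centers
        self.actions = list(actions)                     # discrete actions
        # One consequent weight per (rule, action) pair.
        self.q = np.zeros((len(self.centers), len(self.actions)))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng(seed)

    def _firing(self, x):
        # Gaussian membership degrees, normalized (Sugeno weighting).
        w = np.exp(-0.5 * (x - self.centers) ** 2)
        return w / w.sum()

    def q_value(self, x, a_idx):
        # Q(x,u) as the firing-strength-weighted sum of rule consequents.
        return float(self._firing(x) @ self.q[:, a_idx])

    def act(self, x):
        # Epsilon-greedy exploration (stand-in for the AEM).
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.actions)))
        return int(np.argmax([self.q_value(x, j)
                              for j in range(len(self.actions))]))

    def update(self, x, a_idx, reward, x_next):
        # TD error against the greedy value at the next state,
        # then adjust each rule's weight by its firing strength.
        q_next = max(self.q_value(x_next, j)
                     for j in range(len(self.actions)))
        td = reward + self.gamma * q_next - self.q_value(x, a_idx)
        self.q[:, a_idx] += self.alpha * td * self._firing(x)
        return td
```

In an HEV setting, `x` would be a state such as battery state of charge and power demand, and the actions would be candidate engine/motor power splits; here the state is a single scalar to keep the sketch self-contained.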