Optimal Power Management Based on Q-Learning and Neuro-Dynamic Programming for Plug-in Hybrid Electric Vehicles
Energy optimization for plug-in hybrid electric vehicles (PHEVs) is a challenging problem
due to its system complexity and various constraints. In this research, we present
a Q-learning based in-vehicle model-free solution that can robustly converge to the optimal control. The proposed algorithms combine neuro-dynamic programming (NDP) with future trip information to effectively estimate the expected future energy cost (expected cost-to-go) for a given vehicle state and control action. The convergence of these learning algorithms is demonstrated on both fixed and randomly selected drive cycles. Based on the characteristics of these learning algorithms, we propose a two-stage deployment solution for PHEV power management applications. We also introduce a new initialization strategy that combines optimal learning with a properly selected penalty function; this initialization reduces the learning convergence time by 70%, which has a significant impact on in-vehicle implementation. Finally, we develop a neural network (NN) for battery state-of-charge (SoC) prediction, rendering our power management controller completely model-free.
Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. https://deepblue.lib.umich.edu/bitstream/2027.42/140754/1/Chang Liu Final Dissertation.pdf
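The core idea of tabular Q-learning over expected cost-to-go, including a pessimistic (penalty-style) initialization, can be sketched as follows. The SoC discretization, toy dynamics, and all parameter values here are illustrative assumptions, not the dissertation's actual models:

```python
import numpy as np

# Hypothetical discretization: SoC levels as states, power-split levels as
# actions. The environment below is a toy stand-in, not a vehicle model.
n_states, n_actions = 20, 5
alpha, gamma = 0.1, 0.95          # learning rate, discount factor

# Pessimistic initialization: high initial cost estimates (a crude stand-in
# for a penalty-based initialization strategy).
Q = np.full((n_states, n_actions), 10.0)

rng = np.random.default_rng(0)

def step(state, action):
    """Toy environment: returns (next_state, energy_cost)."""
    drift = action - n_actions // 2                 # action shifts SoC up or down
    next_state = int(np.clip(state + drift, 0, n_states - 1))
    cost = abs(drift) * 0.5 + rng.random() * 0.1    # fuel-cost proxy
    return next_state, cost

state = n_states // 2
for _ in range(5000):
    # Epsilon-greedy over the expected cost-to-go (a minimization problem,
    # so we take argmin rather than argmax).
    if rng.random() < 0.1:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmin(Q[state]))
    next_state, cost = step(state, action)
    # Q-learning update toward cost plus discounted minimum cost-to-go.
    Q[state, action] += alpha * (cost + gamma * Q[next_state].min() - Q[state, action])
    state = next_state

print(Q.shape)  # (20, 5)
```

In this minimization setting, the learned table approximates the expected cost-to-go for each (SoC, power-split) pair; the greedy policy then picks the action with the lowest estimated cost.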
A Novel Learning Based Model Predictive Control Strategy for Plug-in Hybrid Electric Vehicle
The multi-source electromechanical coupling renders energy management of plug-in hybrid electric vehicles (PHEVs) highly nonlinear and complex. Moreover, this complicated nonlinear management process depends heavily on knowledge of driving conditions and hinders control strategies from being applied efficiently in real time, posing major challenges for improving the energy savings of PHEVs. To address these issues, a novel learning-based model predictive control (LMPC) strategy is developed for a series-parallel PHEV with enhanced optimal control performance in real-time application. Rather than employing the velocity-prediction-based MPC methods favored in the literature, an original reference-tracking-based MPC solution is proposed with strong real-time applicability. To guarantee the optimal control effect, an online learning process is implemented in the MPC via a Gaussian process (GP) model to address the uncertainties during state estimation. The tracking reference for the LMPC-based control problem in the PHEV is obtained by a microscopic traffic flow analysis (MTFA) method. The simulation results validate that the proposed method can optimally manage energy flow among the vehicle's power sources in real time, highlighting its favorable performance.
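The interplay of a nominal model, an online-learned GP residual, and receding-horizon tracking can be illustrated with a one-dimensional sketch. The SoC dynamics, the tiny RBF-kernel GP, and the one-step horizon are all simplifying assumptions for illustration, not the paper's formulation:

```python
import numpy as np

# Minimal receding-horizon sketch of learning-based MPC: a nominal SoC model
# is corrected by a GP-style residual learned online from observed errors.

def true_soc_dynamics(soc, u):
    return soc - 0.05 * u + 0.01 * np.sin(soc)   # "plant" with an unmodeled term

def nominal_model(soc, u):
    return soc - 0.05 * u                        # controller's nominal model

def gp_predict(x_train, y_train, x_query, length=0.5, noise=1e-4):
    """Tiny GP regression with an RBF kernel and zero-mean prior."""
    if len(x_train) == 0:
        return 0.0
    X = np.asarray(x_train)[:, None]
    K = np.exp(-0.5 * (X - X.T) ** 2 / length**2) + noise * np.eye(len(X))
    k = np.exp(-0.5 * (x_query - X.ravel()) ** 2 / length**2)
    return float(k @ np.linalg.solve(K, np.asarray(y_train)))

x_train, y_train = [], []
soc, soc_ref = 0.8, 0.5
candidates = np.linspace(0.0, 1.0, 11)           # admissible control inputs

for t in range(30):
    # One-step MPC: pick u minimizing tracking error of the corrected prediction.
    preds = [nominal_model(soc, u) + gp_predict(x_train, y_train, soc)
             for u in candidates]
    u = candidates[int(np.argmin([(p - soc_ref) ** 2 for p in preds]))]
    next_soc = true_soc_dynamics(soc, u)
    # Online learning: record the model residual observed at the visited state.
    x_train.append(soc)
    y_train.append(next_soc - nominal_model(soc, u))
    soc = next_soc

print(abs(soc - soc_ref))   # small tracking error after 30 steps
```

The learned residual compensates for the nominal model's unmodeled term, so the reference-tracking error shrinks as data accumulates; the same structure extends to multi-step horizons and richer state spaces.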
Optimal route design of electric transit networks considering travel reliability
Travel reliability is an essential determinant for operating a transit system and improving its service level. In this study, an optimization model for the electric transit route network design problem is proposed, under the precondition that the locations of charging depots are predetermined. The objectives are to maximize travel reliability while keeping the total cost within a certain range. Constraints on bus routes and operations are also considered. A Reinforcement Learning Genetic Algorithm is developed to solve the proposed model. Two case studies, the classic Mandl's road network and a large road network in the context of Zhengzhou city, are conducted to demonstrate the effectiveness of the proposed model and the solution algorithm. The results suggest that the proposed methodology helps improve the travel reliability of the transit network with minimal cost increase.
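The general idea of coupling reinforcement learning with a genetic algorithm can be sketched as a GA whose mutation rate is chosen by a bandit-style learned value estimate. The binary route-inclusion encoding, the synthetic reliability objective, and all parameters below are illustrative assumptions, not the study's model:

```python
import random

random.seed(1)

# Toy sketch: a GA maximizes a synthetic "reliability minus cost" score over
# binary route-segment-inclusion vectors, while a bandit-style rule learns
# which mutation rate yields the largest fitness improvement per generation.
N = 16                                   # candidate route segments
weights = [random.random() for _ in range(N)]

def fitness(chrom):
    reliability = sum(w for w, g in zip(weights, chrom) if g)
    cost_penalty = 0.3 * sum(chrom)      # cost grows with segments used
    return reliability - cost_penalty

rates = [0.01, 0.05, 0.2]                # candidate mutation rates ("actions")
value = [0.0] * len(rates)               # learned value of each rate
counts = [0] * len(rates)

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
for gen in range(100):
    # Epsilon-greedy choice of mutation rate based on learned values.
    if random.random() < 0.1:
        a = random.randrange(len(rates))
    else:
        a = max(range(len(rates)), key=lambda i: value[i])
    best_before = max(map(fitness, pop))
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [p[:] for p in parents[:2]]           # elitism: keep top two
    while len(children) < 30:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, N)
        child = p1[:cut] + p2[cut:]                  # one-point crossover
        child = [1 - g if random.random() < rates[a] else g for g in child]
        children.append(child)
    pop = children
    # Reward the chosen rate by the fitness improvement it produced.
    reward = max(map(fitness, pop)) - best_before
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]      # incremental mean

best = max(pop, key=fitness)
```

Here the RL component only tunes an operator parameter; in a full route-design model, the chromosome would encode candidate routes and the fitness would evaluate travel reliability under the depot-location and operational constraints.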
Progress and summary of reinforcement learning on energy management of MPS-EV
The high emission and low energy efficiency caused by internal combustion
engines (ICE) have become unacceptable under environmental regulations and the
energy crisis. As a promising alternative solution, multi-power source electric
vehicles (MPS-EVs) introduce different clean energy systems to improve
powertrain efficiency. The energy management strategy (EMS) is a critical
technology for MPS-EVs to maximize efficiency, fuel economy, and range.
Reinforcement learning (RL) has become an effective methodology for the
development of EMS. RL has received continuous attention and research, but
there is still a lack of systematic analysis of the design elements of RL-based
EMS. To this end, this paper presents an in-depth analysis of the current
research on RL-based EMS (RL-EMS) and summarizes the design elements of
RL-based EMS. This paper first summarizes the previous applications of RL in
EMS from five aspects: algorithm, perception scheme, decision scheme, reward
function, and innovative training method. The contribution of advanced
algorithms to the training effect is shown, the perception and control schemes
in the literature are analyzed in detail, different reward function settings
are classified, and innovative training methods with their roles are
elaborated. Then, by comparing the development routes of RL and RL-EMS, this
paper identifies the gap between advanced RL solutions and existing RL-EMS.
Finally, this paper suggests potential development directions for implementing
advanced artificial intelligence (AI) solutions in EMS.
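Of the design elements surveyed above, the reward function is the one most easily illustrated in isolation. A common pattern in the RL-EMS literature is a weighted combination of fuel consumption and a charge-sustaining penalty; the specific weights and the quadratic SoC term below are illustrative choices, not a prescription from this survey:

```python
# Illustrative multi-objective reward for an RL-based EMS: the agent
# maximizes reward, so both cost terms enter with a negative sign.
def ems_reward(fuel_rate_g_per_s, soc, soc_target=0.6,
               w_fuel=1.0, w_soc=50.0):
    """Negative total cost: fuel use plus quadratic SoC deviation penalty."""
    fuel_cost = w_fuel * fuel_rate_g_per_s
    soc_penalty = w_soc * (soc - soc_target) ** 2
    return -(fuel_cost + soc_penalty)

print(ems_reward(1.2, 0.6))   # no SoC deviation: reward = -1.2
```

The relative weighting (here the hypothetical w_soc) governs the trade-off between immediate fuel economy and charge sustenance, which is exactly the kind of reward-shaping decision the surveyed works classify and compare.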