6,216 research outputs found

    Smart-Cities urban mobility management architecture for electric vehicles

    Improving efficiency is one of the most important objectives of Smart City standards and of electric vehicles (EVs). Information and communication technologies (ICTs) can help to mitigate one of the main limitations of EVs, their autonomy, by planning efficient driving strategies. This paper evaluates the physical variables that influence EV consumption and presents an electronic architecture to monitor them in an experimental ultralight electric vehicle. The system integrates a set of very low-cost sensors with a data logger and a GPRS transmission system that connects, in real time, to a control center, where a route-finding reinforcement-learning algorithm helps to find the most effective route and reduce the time spent on urban trips.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
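    The route-finding idea described above can be sketched as tabular Q-learning on a small road graph, where edge costs stand in for EV energy consumption. The graph, costs, goal bonus, and hyperparameters below are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical road graph: node -> {neighbor: energy cost}. "D" is the
# destination. All numbers are illustrative assumptions.
GRAPH = {
    "A": {"B": 4.0, "C": 2.0},
    "B": {"D": 5.0},
    "C": {"B": 1.0, "D": 8.0},
    "D": {},  # destination, no outgoing edges
}
GOAL = "D"
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.2

Q = {(s, a): 0.0 for s, nbrs in GRAPH.items() for a in nbrs}

def step(state, action):
    """Negative energy cost as reward, plus a bonus for reaching the goal."""
    reward = -GRAPH[state][action] + (10.0 if action == GOAL else 0.0)
    return action, reward

random.seed(0)
for _ in range(500):
    s = "A"
    while s != GOAL:
        actions = list(GRAPH[s])
        if random.random() < EPS:           # explore
            a = random.choice(actions)
        else:                               # exploit
            a = max(actions, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        future = max((Q[(s2, a2)] for a2 in GRAPH[s2]), default=0.0)
        Q[(s, a)] += ALPHA * (r + GAMMA * future - Q[(s, a)])
        s = s2

# Greedy route after training: follows the learned Q-values.
route, s = ["A"], "A"
while s != GOAL:
    s = max(GRAPH[s], key=lambda a: Q[(s, a)])
    route.append(s)
```

Here the learned route is A-C-B-D (total cost 8), beating the direct A-B-D (cost 9), which is the kind of consumption-aware detour such a controller is meant to discover.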

    Progress and summary of reinforcement learning on energy management of MPS-EV

    The high emissions and low energy efficiency of internal combustion engines (ICEs) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for developing EMSs and has received continuous attention, but a systematic analysis of the design elements of RL-based EMS is still lacking. To this end, this paper presents an in-depth analysis of current research on RL-based EMS (RL-EMS) and summarizes its design elements. The paper first reviews previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training methods. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods and their roles are elaborated. Then, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS. Finally, it suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.
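    A common reward-function shape surveyed in this line of work combines instantaneous fuel cost with a penalty for drifting from a reference battery state of charge (SOC). The weights and the quadratic penalty form below are illustrative assumptions, shown only to make the "reward function settings" design element concrete.

```python
# Hypothetical RL-EMS reward: weighted sum of fuel consumption rate and
# squared SOC deviation. Weights and the SOC reference are assumptions.
def ems_reward(fuel_rate_gps, soc, soc_ref=0.6, w_fuel=1.0, w_soc=50.0):
    """Return a (negative) reward: lower fuel use and smaller SOC
    deviation from the reference are both better."""
    return -(w_fuel * fuel_rate_gps + w_soc * (soc - soc_ref) ** 2)

# A charge-sustaining action near the SOC target scores better than a
# battery-depleting one with the same instantaneous fuel use.
r_good = ems_reward(fuel_rate_gps=1.2, soc=0.61)
r_bad = ems_reward(fuel_rate_gps=1.2, soc=0.45)
```

The relative weighting of the two terms is exactly the kind of design choice that such surveys classify, since it decides whether the agent prioritizes fuel economy or charge sustenance.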

    Deep Reinforcement Learning DDPG Algorithm with AM based Transferable EMS for FCHEVs

    Hydrogen fuel cells power fuel cell hybrid electric vehicles (FCHEVs). FCHEVs are more efficient than vehicles based on conventional internal combustion engines and produce no tailpipe emissions, emitting only water vapor and warm air. FCHEVs demand fast dynamic responses during acceleration and braking; to meet them, hybrid powertrains combine a fuel cell (FC) with an auxiliary battery as an energy storage source. This paper presents the development of an energy management strategy (EMS) for power-split FC-based hybrid electric vehicles using the deep deterministic policy gradient (DDPG) algorithm, which is based on deep reinforcement learning (DRL). DRL-based energy management techniques suffer from limited constraint-handling capacity, slow learning, and unstable convergence. To address these limitations, this paper proposes an action-masking (AM) technique that prevents the DDPG-based approach from producing actions that violate the system's physical limits. In addition, a transfer learning (TL) approach for the DDPG-based strategy is investigated to avoid repetitive neural network training across different driving cycles. The findings demonstrate that the proposed DDPG-based approach, combined with the AM and TL methods, overcomes the limitations of current DRL-based approaches, providing an effective energy management system for power-split FCHEVs with reduced agent training time.
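    The action-masking idea can be sketched as projecting the raw actor output onto the feasible set before it reaches the plant: the commanded fuel-cell power is clipped to the stack's power bounds and its ramp-rate limit. The kW limits and ramp rate below are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Hypothetical stack limits (assumptions, not from the paper).
P_FC_MIN, P_FC_MAX = 0.0, 60.0   # kW, fuel-cell operating range
RAMP_LIMIT = 5.0                 # kW per control step

def mask_action(raw_action, prev_p_fc):
    """Map a raw actor output in [-1, 1] to a feasible FC power command."""
    # Rescale [-1, 1] to the stack's absolute power range.
    p_cmd = (raw_action + 1.0) / 2.0 * (P_FC_MAX - P_FC_MIN) + P_FC_MIN
    # Enforce the ramp-rate constraint relative to the previous command.
    p_cmd = np.clip(p_cmd, prev_p_fc - RAMP_LIMIT, prev_p_fc + RAMP_LIMIT)
    # Enforce absolute stack limits.
    return float(np.clip(p_cmd, P_FC_MIN, P_FC_MAX))

# An aggressive actor output requesting full power (60 kW) from a 20 kW
# operating point is masked down to a feasible 25 kW step.
p_next = mask_action(raw_action=1.0, prev_p_fc=20.0)
```

Because infeasible actions are filtered before execution, the agent never collects experience that violates the physical limits, which is the constraint-handling benefit the abstract describes.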

    Real-Time Energy Management Strategy of a Fuel Cell Electric Vehicle With Global Optimal Learning

    This article proposes a novel energy management strategy (EMS) for a fuel cell electric vehicle (FCEV). The strategy combines offline optimization and online algorithms to guarantee optimal control, real-time performance, and robustness on unknown routes. In particular, dynamic programming (DP) is applied to a database of multiple driving cycles to extract the theoretically optimal power split between the battery and the fuel cell with a priori knowledge of the driving conditions. The analysis of the obtained results is then used to extract rules that are embedded in a real-time capable fuzzy controller. In this sense, at the expense of a certain calibration effort in the offline phase with the DP results, the proposed strategy allows on-board applicability with suboptimal results. The strategy has been tested on several actual driving cycles, and the results show energy savings between 8.48% and 10.71% compared to a rule-based strategy and energy penalties between 1.04% and 3.37% compared with the theoretical optimum obtained by DP. In addition, a sensitivity analysis shows that the proposed strategy can be adapted to different vehicle configurations: as the battery capacity increases, performance can be further improved by 0.15% and 1.66% in conservative and aggressive driving styles, respectively.
    This work was supported in part by the National Natural Science Foundation of China under Grant 62111530196, in part by the Technology Development Program of Jilin Province under Grant 20210201111GX, and in part by the China Automobile Industry Innovation and Development Joint Fund under Grant U1864206.
    Hou, S.; Yin, H.; Pla Moreno, B.; Gao, J.; Chen, H. (2023). Real-Time Energy Management Strategy of a Fuel Cell Electric Vehicle With Global Optimal Learning. IEEE Transactions on Transportation Electrification (Online). 9(4):5085-5097. https://doi.org/10.1109/TTE.2023.3238101
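    The offline optimization step can be illustrated on a toy horizon: choose the fuel-cell power at each step to minimize total fuel cost subject to simple battery SOC dynamics and bounds. On a horizon this small, exhaustive enumeration stands in for DP; the demand profile, power levels, efficiencies, and fuel-cost model are all illustrative assumptions.

```python
import itertools

# Toy problem data (assumptions, not from the paper).
DEMAND = [10.0, 30.0, 20.0]          # kW requested at each step
FC_LEVELS = [0.0, 10.0, 20.0, 30.0]  # admissible fuel-cell powers (kW)
DT = 1.0                             # step length (h)
BATT_KWH = 10.0                      # battery capacity

def fuel_cost(p_fc):
    # Convex toy cost: high stack power is penalized superlinearly.
    return 0.05 * p_fc + 0.002 * p_fc ** 2

def soc_after(soc, p_fc, p_dem):
    p_batt = p_dem - p_fc            # battery covers the remainder
    return round(soc - p_batt * DT / BATT_KWH, 4)

def dp_best(soc0=0.6, soc_min=0.4, soc_max=0.8):
    """Exhaustive search (a stand-in for DP on this tiny horizon):
    returns (total fuel cost, optimal FC power schedule)."""
    best = (float("inf"), None)
    for plan in itertools.product(FC_LEVELS, repeat=len(DEMAND)):
        soc, cost, feasible = soc0, 0.0, True
        for p_fc, p_dem in zip(plan, DEMAND):
            soc = soc_after(soc, p_fc, p_dem)
            if not (soc_min <= soc <= soc_max):
                feasible = False
                break
            cost += fuel_cost(p_fc)
        if feasible and cost < best[0]:
            best = (cost, plan)
    return best

cost, plan = dp_best()
```

In the article's pipeline, the power splits extracted this way over many cycles are then distilled into fuzzy rules so that only the cheap online controller runs on board.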

    Near-optimal energy management for plug-in hybrid fuel cell and battery propulsion using deep reinforcement learning

    Plug-in hybrid fuel cell and battery propulsion systems appear promising for decarbonising transportation applications such as road vehicles and coastal ships. However, it is challenging to develop optimal or near-optimal energy management for these systems without exact knowledge of future load profiles. Although efforts have been made to develop strategies in a stochastic environment with a discrete state space using Q-learning and Double Q-learning, the effectiveness of such tabular reinforcement learning agents is limited by the state-space resolution. This article develops an improved energy management system using deep reinforcement learning that achieves enhanced cost savings by extending the discrete state parameters to continuous ones. The improved energy management system is based on the Double Deep Q-Network. Real-world collected stochastic load profiles are used to train the Double Deep Q-Network for a coastal ferry. The results suggest that the energy management strategy acquired by the Double Deep Q-Network achieves a further 5.5% cost reduction with a 93.8% decrease in training time, compared to that produced by the Double Q-learning agent in discrete state space without function approximation. In addition, this article proposes an adaptive deep reinforcement learning energy management scheme for practical hybrid-electric propulsion systems operating in changing environments.
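    The core of the Double Deep Q-Network update is the decoupled Bellman target: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of plain DQN. The tiny fixed linear "networks" below are illustrative stand-ins for the trained function approximators.

```python
import numpy as np

# Hand-picked linear stand-ins: state (4,) -> Q-values over 3 actions.
# Real DDQN uses trained neural networks; these are assumptions for
# demonstration only.
W_online = np.array([[1.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 3.0],
                     [0.0, 0.0, 0.0]])
W_target = np.array([[0.5, 0.0, 0.0],
                     [0.0, 0.5, 0.0],
                     [0.0, 0.0, 0.1],
                     [0.0, 0.0, 0.0]])
GAMMA = 0.99

def q_values(w, state):
    return state @ w

def ddqn_target(reward, next_state, done):
    """Bellman target with decoupled action selection and evaluation."""
    if done:
        return reward
    a_star = int(np.argmax(q_values(W_online, next_state)))       # select
    return reward + GAMMA * q_values(W_target, next_state)[a_star]  # evaluate

s2 = np.ones(4)
y = ddqn_target(reward=1.0, next_state=s2, done=False)
```

For this state the online network picks action 2, which the target network values at only 0.1, so the target is 1 + 0.99 * 0.1 = 1.099; plain DQN would instead take the target network's own maximum (0.5) and produce the larger, more biased 1.495.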

    Stochastic model predictive control for energy management of power-split plug-in hybrid electric vehicles based on reinforcement learning

    In this paper, a stochastic model predictive control (MPC) method based on reinforcement learning is proposed for energy management of plug-in hybrid electric vehicles (PHEVs). First, the power transfer of each component in a power-split PHEV is described in detail. Then an effective and convergent reinforcement learning controller is trained with the Q-learning algorithm according to the driving power distribution under multiple driving cycles. By constructing a multi-step Markov velocity prediction model, the reinforcement learning controller is embedded into the stochastic MPC controller to determine the optimal battery power over the prediction horizon. Numerical simulation results verify that the proposed method achieves superior fuel economy, close to that of the stochastic dynamic programming method. In addition, effective state-of-charge tracking with respect to different reference trajectories highlights that the proposed method is suitable for online applications requiring fast calculation.
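    A multi-step Markov velocity predictor of the kind used here can be sketched as follows: velocities are discretized into bins, a transition matrix is estimated from a recorded trace by counting, and the k-step-ahead distribution is the current one-hot distribution propagated through the k-th matrix power. The bins and the sample trace are illustrative assumptions.

```python
import numpy as np

BINS = np.array([0.0, 5.0, 10.0, 15.0])  # m/s bin centers (assumption)

def fit_transition(trace):
    """Estimate a row-stochastic transition matrix by counting."""
    n = len(BINS)
    idx = np.abs(trace[:, None] - BINS[None, :]).argmin(axis=1)
    T = np.zeros((n, n))
    for a, b in zip(idx[:-1], idx[1:]):
        T[a, b] += 1.0
    # Row-normalize; unvisited bins keep an all-zero row.
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1.0)
    return T

def predict_speed(T, current_v, k):
    """Expected speed k steps ahead, starting from the current bin."""
    d = np.zeros(len(BINS))
    d[np.abs(BINS - current_v).argmin()] = 1.0
    d = d @ np.linalg.matrix_power(T, k)
    return float(d @ BINS)

# A short recorded velocity trace (illustrative).
trace = np.array([0.0, 5.0, 5.0, 10.0, 10.0, 5.0, 0.0, 0.0, 5.0, 10.0])
T = fit_transition(trace)
v3 = predict_speed(T, current_v=5.0, k=3)  # 3-step-ahead expectation
```

Inside the stochastic MPC loop, the predicted velocity distribution over the horizon is what turns the deterministic optimization into an expectation over likely future power demands.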