
    Integrated Thermal and Energy Management of Connected Hybrid Electric Vehicles Using Deep Reinforcement Learning

    The climate-adaptive energy management system holds promising potential for harnessing the concealed energy-saving capabilities of connected plug-in hybrid electric vehicles. This research explores the synergistic effects of artificial intelligence control and traffic preview to enhance the performance of the energy management system (EMS). A high-fidelity model of a multi-mode connected PHEV is calibrated against experimental data as a foundation. Subsequently, a model-free multistate deep reinforcement learning (DRL) algorithm is proposed to develop the integrated thermal and energy management (ITEM) system, incorporating engine smart warm-up and engine-assisted heating for cold-climate conditions. The optimality and adaptability of the proposed system are evaluated through both offline tests and online hardware-in-the-loop tests, covering a homologation driving cycle and a real-world driving cycle in China with real-time traffic data. The results demonstrate that ITEM reaches 93.7% of the fuel economy achieved by dynamic programming, while reducing fuel consumption by 2.2% to 9.6% as the ambient temperature decreases from 15°C to -15°C, compared with state-of-the-art DRL-based EMS solutions.
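
    As a rough illustration of how such a multistate DRL agent might see the problem, the sketch below builds an observation vector mixing powertrain, thermal, and traffic-preview signals, and a reward that trades fuel use against SOC and cabin-comfort deviations. All variable names, weights, and values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the multi-state observation and reward used by an
# ITEM-style DRL agent; feature layout and weights are illustrative only.
import numpy as np

def build_state(soc, engine_temp, cabin_temp, ambient_temp, veh_speed, traffic_preview):
    """Concatenate powertrain, thermal, and traffic-preview features."""
    return np.concatenate((
        np.array([soc, engine_temp, cabin_temp, ambient_temp, veh_speed]),
        np.asarray(traffic_preview, dtype=float),  # e.g. mean speeds of upcoming segments
    ))

def reward(fuel_rate, soc, soc_ref, cabin_temp, cabin_ref,
           w_fuel=1.0, w_soc=0.5, w_comfort=0.1):
    """Penalize fuel use, SOC deviation, and cabin-temperature discomfort."""
    return -(w_fuel * fuel_rate
             + w_soc * (soc - soc_ref) ** 2
             + w_comfort * (cabin_temp - cabin_ref) ** 2)

s = build_state(soc=0.6, engine_temp=40.0, cabin_temp=15.0, ambient_temp=-10.0,
                veh_speed=12.0, traffic_preview=[10.0, 8.5, 14.0])
print(s.shape, reward(fuel_rate=0.8, soc=0.58, soc_ref=0.6, cabin_temp=15.0, cabin_ref=22.0))
```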

    Data-Driven Transferred Energy Management Strategy for Hybrid Electric Vehicles via Deep Reinforcement Learning

    Real-time application of energy management strategies (EMSs) in hybrid electric vehicles (HEVs) is among the most demanding requirements facing researchers and engineers. Inspired by the problem-solving capabilities of deep reinforcement learning (DRL), this paper proposes a real-time EMS that combines the DRL method with transfer learning (TL). The related EMSs are derived from and evaluated on real-world driving cycles collected from the Transportation Secure Data Center (TSDC). The specific DRL algorithm is proximal policy optimization (PPO), which belongs to the policy gradient (PG) family. Multiple source driving cycles are used to train the parameters of the PPO deep network. The learned parameters are then transferred to the target driving cycles under the TL framework. The EMSs for the target driving cycles are estimated and compared under different training conditions. Simulation results indicate that the presented transfer DRL-based EMS effectively reduces time consumption while guaranteeing control performance. (25 pages, 12 figures)
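
    A minimal sketch of the transfer step is shown below: a policy network pretrained on source driving cycles is reloaded and fine-tuned on a target cycle. The network architecture, file names, and freezing scheme are assumptions for illustration; the PPO update itself is omitted.

```python
# Illustrative transfer-learning step for a PPO policy network (sizes assumed).
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, n_states=4, n_actions=11):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_states, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, n_actions),
        )
    def forward(self, x):
        return torch.softmax(self.body(x), dim=-1)  # discrete engine-power levels

policy = PolicyNet()
# torch.save(policy.state_dict(), "ppo_source_cycles.pt")      # after source-cycle training
# policy.load_state_dict(torch.load("ppo_source_cycles.pt"))   # reload for the target cycle

# One possible TL scheme: freeze the early layers and fine-tune only the rest.
for p in policy.body[:2].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in policy.parameters() if p.requires_grad), lr=1e-4)
```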

    Eco-Driving Optimization Based on Variable Grid Dynamic Programming and Vehicle Connectivity in a Real-World Scenario

    In a context in which the connectivity level of last-generation vehicles is constantly on the rise, the combined use of Vehicle-To-Everything (V2X) connectivity and autonomous driving can provide remarkable benefits through the synergistic optimization of the route and the speed trajectory. In this framework, this paper focuses on vehicle eco-driving optimization in a connected environment: the virtual test rig of a premium-segment passenger car was used to generate the simulation scenarios and to assess the energy and time savings that the introduction of V2X communication, integrated with cloud computing, can provide in a real-world scenario. The Reference Scenario is a predefined Real Driving Emissions (RDE) compliant route, while the simulation scenarios were generated by assuming two different penetration levels of V2X technologies. The associated energy minimization problem was formulated and solved by means of a Variable Grid Dynamic Programming (VGDP), which adapts the state search grid on the basis of the V2X information and thereby reduces the DP computational burden by more than 95%. The simulations show that introducing a smart infrastructure along with optimizing the vehicle speed on a real-world urban route can potentially reduce the required energy by 54% while shortening the travel time by 38%. Finally, a sensitivity analysis was performed on the bi-objective optimization cost function to find a set of Pareto-optimal solutions between energy and travel-time minimization.
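
    The sketch below conveys the variable-grid idea only: at each spatial step the admissible speed grid is pruned using a hypothetical V2X-derived speed window before the backward DP recursion, which is what shrinks the number of evaluated states. The cost model, grid sizes, and speed windows are toy assumptions, not the paper's formulation.

```python
# Toy variable-grid DP over a speed trajectory; all numbers are illustrative.
import numpy as np

positions = np.arange(0, 500, 50)          # 50 m spatial steps
full_grid = np.linspace(2.0, 20.0, 37)     # full speed search grid [m/s]

def v2x_speed_window(pos):
    """Hypothetical feasible speed window derived from signal phase and timing."""
    return (6.0, 14.0) if 150 <= pos <= 300 else (2.0, 20.0)

def step_cost(v_prev, v_next, ds=50.0):
    """Toy energy + time cost of traversing one segment."""
    accel = (v_next ** 2 - v_prev ** 2) / (2 * ds)
    t = 2 * ds / (v_prev + v_next)
    return 0.5 * abs(accel) * ds + 0.1 * t

# Backward recursion over the (pruned, variable) admissible grid at each position.
cost_to_go = {v: 0.0 for v in full_grid}   # terminal stage
for pos in positions[::-1][:-1]:
    lo, hi = v2x_speed_window(pos)
    grid = full_grid[(full_grid >= lo) & (full_grid <= hi)]
    cost_to_go = {
        v: min(step_cost(v, v2) + cost_to_go[v2] for v2 in cost_to_go)
        for v in grid
    }

print(min(cost_to_go, key=cost_to_go.get), round(min(cost_to_go.values()), 2))
```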

    Development of a neural network-based energy management system for a plug-in hybrid electric vehicle

    The high potential of Artificial Intelligence (AI) techniques for effectively solving complex parameterization tasks also makes them extremely attractive for the design of the Energy Management Systems (EMS) of Hybrid Electric Vehicles (HEVs). In this framework, this paper aims to design an EMS through the exploitation of deep learning techniques, which can capture the highly non-linear relationships among the data characterizing the problem. In particular, the deep learning model was designed employing two different Recurrent Neural Networks (RNNs). First, a previously developed digital twin of a state-of-the-art plug-in HEV was used to generate a wide portfolio of Real Driving Emissions (RDE) compliant vehicle missions and traffic scenarios. Then, the AI models were trained off-line to minimize CO2 emissions by reproducing the optimal solutions given by a global optimization control algorithm, namely Dynamic Programming (DP). The proposed methodology was tested on a virtual test rig and proved capable of achieving significant improvements in fuel economy for both charge-sustaining and charge-depleting strategies, with reductions of about 4% and 5%, respectively, compared to the baseline Rule-Based (RB) strategy.
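
    As a rough sketch of the off-line imitation idea, the snippet below trains a small recurrent network to reproduce DP-optimal power-split decisions over driving-mission sequences. The feature layout, network size, and placeholder data are assumptions and do not reflect the authors' two-RNN architecture.

```python
# Illustrative off-line training of an RNN to imitate DP-optimal power splits.
import torch
import torch.nn as nn

class EMSRNN(nn.Module):
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # e.g. battery power share in [0, 1]
    def forward(self, x):
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out))

model = EMSRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch: sequences of (speed, accel, SOC, road grade, power demand).
x = torch.randn(8, 200, 5)                   # 8 missions, 200 time steps
y_dp = torch.rand(8, 200, 1)                 # DP-optimal split used as supervision

for _ in range(5):                           # off-line imitation training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y_dp)
    loss.backward()
    optimizer.step()
```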

    Online Battery Protective Energy Management for Energy-Transportation Nexus


    Deep Reinforcement Learning DDPG Algorithm with AM based Transferable EMS for FCHEVs

    Hydrogen fuel cells power fuel cell hybrid electric vehicles (FCHEVs). These FCHEVs are more efficient than vehicles based on conventional internal combustion engines and emit only water vapor and warm air, with no tailpipe pollutants. FCHEVs must deliver fast dynamic responses during acceleration and braking; to provide this responsiveness, the fuel cell (FC) is paired with an auxiliary energy storage source, typically a battery. This paper develops an energy management strategy (EMS) for power-split FC-based hybrid electric vehicles using the deep deterministic policy gradient (DDPG) algorithm, which is based on deep reinforcement learning (DRL). DRL-based energy management techniques suffer from limited constraint-handling capacity, slow learning, and unstable convergence. To address these limitations, an action masking (AM) technique is proposed that prevents the DDPG-based approach from producing actions that violate the system's physical limits. In addition, a transfer learning (TL) approach for the DDPG-based strategy was investigated to avoid repetitive neural network training across the different driving cycles. The findings demonstrate that the proposed DDPG-based approach, combined with the AM and TL methods, overcomes the limitations of current DRL-based approaches and provides an effective energy management system for power-split FCHEVs with reduced agent training time.
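
    To convey the action-masking idea, the sketch below projects a raw actor output onto a physically admissible fuel-cell power range (absolute stack limits plus a slew-rate constraint) before it reaches the plant model. The limit values and the mapping are illustrative assumptions, not the paper's constraints.

```python
# Minimal sketch of action masking for a DDPG actor output; limits are assumed.
import numpy as np

FC_P_MIN, FC_P_MAX = 0.0, 60.0     # kW, assumed stack power limits
FC_RAMP = 5.0                      # kW per step, assumed slew-rate limit

def mask_action(raw_action, prev_fc_power):
    """Map a raw actor output in [-1, 1] to a physically admissible FC power."""
    p_cmd = 0.5 * (raw_action + 1.0) * (FC_P_MAX - FC_P_MIN) + FC_P_MIN
    # enforce the slew-rate limit relative to the previous command
    p_cmd = np.clip(p_cmd, prev_fc_power - FC_RAMP, prev_fc_power + FC_RAMP)
    # enforce the absolute stack limits
    return float(np.clip(p_cmd, FC_P_MIN, FC_P_MAX))

print(mask_action(raw_action=0.9, prev_fc_power=20.0))   # -> 25.0 (ramp-limited)
```

    Masking the command in this way keeps the agent from ever issuing infeasible set points, which is one plausible way to read the paper's claim of improved constraint capacity and convergence stability.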