1,624 research outputs found

    Energy Management System Design for Fuel Cell Vehicles

    Fuel cell vehicles combine the benefits of fuel cell stacks and energy storage systems to achieve fuel economy and zero emissions. Energy management systems are vital to the fuel economy and system durability of fuel cell vehicles, since they determine the distribution of power between the fuel cell stack and the energy storage system. In this thesis, we propose three novel energy management system designs for fuel cell vehicles to improve vehicle energy system stability, optimality and durability. We first present a non-myopic energy management system for controlling multiple energy flows in fuel cell hybrid vehicles. The control problem is solved by convex programming under a partially observable Markov decision process based framework. We propose an average-reward approximator to estimate a long-term average cost instead of using a model to predict future power demand. Thus, the closed-loop performance of the energy management design is decoupled from the accuracy of any model for predicting future power demand. The energy management scheme consists of a real-time self-learning system, an average-reward filter based on Markov chain Monte Carlo sampling, and an action selector based on the rollout algorithm with a convex-programming-based policy. The performance of the energy management strategy is evaluated via simulation studies using data obtained from real-world driving experiments, and is compared with three benchmark schemes. To increase the applicability of the energy management system to various driving scenarios and multiple drivers, we then propose a second energy management scheme for fuel cell vehicle systems. The energy management problem is cast as a nonlinear infinite-time optimisation problem, and a model-based fuzzy control method is employed to design the control law. 
Using a linear matrix inequality approach, sufficient conditions are derived to design the control strategy such that the energy system is robustly stable with a desired mixed H₂/H∞ performance. The effectiveness and potential of the new design technique are demonstrated on different real-world driving scenarios. Using optimal control principles, we further improve the energy management system in terms of reducing hydrogen consumption while maintaining the battery state of charge under practical operating constraints and uncertain future power demand. The fuzzy modelling approach is employed to describe the nonlinear power plant, and a robust model-predictive control is designed to achieve the desired system performance. Moreover, traffic conditions are incorporated into the energy management controller design to further improve the system performance. The effectiveness and advantages of the proposed control scheme are illustrated by a simulator developed from real-world experimental data. Finally, we investigate the problem of controlling energy flow in fuel cell vehicles by jointly considering system stability, optimality, and durability. The energy management problem is transformed into a nonlinear multi-objective optimisation problem to improve fuel economy, maintain battery state of charge, and reduce the incidence of factors that degrade fuel cell performance. A robust model-predictive fuzzy control method is employed to design the nonlinear control law. The energy management system is capable of coordinating with a fuel cell stack state-of-health estimator and an energy storage system scheduler to achieve the optimisation objectives in the presence of uncertainty in the driver's power demand. The effectiveness of the new design technique is demonstrated through studies of control performance over typical urban/highway driving scenarios. Thesis (Ph.D.) 
-- University of Adelaide, School of Electrical and Electronic Engineering, 202
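
The convex-programming ingredient of the first design can be illustrated with a deliberately simple one-step power split. This sketch is purely illustrative: the quadratic hydrogen-cost proxy, the linear SOC dynamics and every numeric parameter are assumptions, far simpler than the thesis's non-myopic rollout scheme.

```python
# Illustrative one-step convex power split between fuel cell and battery.
# Cost: alpha*p_fc**2 (hydrogen-use proxy) + beta*(SOC deviation after step)**2.
# SOC model: soc_next = soc - k * p_bat, with p_bat = p_dem - p_fc.
# All parameters are made up for illustration.

def split_power(p_dem, soc, soc_ref=0.6, alpha=1e-3, beta=1e3,
                k=1e-3, p_fc_min=0.0, p_fc_max=60.0):
    """Return (p_fc, p_bat) minimising the convex one-step cost."""
    # The unconstrained minimiser of a quadratic has a closed form;
    # clip it to the fuel-cell operating range afterwards.
    p_fc = beta * k * (soc_ref - soc + k * p_dem) / (alpha + beta * k ** 2)
    p_fc = min(max(p_fc, p_fc_min), p_fc_max)
    return p_fc, p_dem - p_fc
```

At the reference SOC the cost trades hydrogen against future SOC deviation and splits the demand; below the reference the fuel cell is pushed toward its upper limit to recharge the battery.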

    Multi-Agent Reinforcement Learning for Connected and Automated Vehicles Control: Recent Advancements and Future Prospects

    Connected and automated vehicles (CAVs) have emerged as a potential solution to the future challenges of developing safe, efficient, and eco-friendly transportation systems. However, CAV control presents significant challenges, given the complexity of the interconnectivity and coordination required among vehicles. To address this, multi-agent reinforcement learning (MARL), with its notable advancements in addressing complex problems in autonomous driving, robotics, and human-vehicle interaction, has emerged as a promising tool for enhancing the capabilities of CAVs. However, there is a notable absence of current reviews of state-of-the-art MARL algorithms in the context of CAVs. Therefore, this paper delivers a comprehensive review of the application of MARL techniques to CAV control. The paper begins by introducing MARL, followed by a detailed explanation of its unique advantages in addressing complex mobility and traffic scenarios that involve multiple agents. It then presents a comprehensive survey of MARL applications across the control dimensions of CAVs, covering critical and typical scenarios such as platooning control, lane changing, and unsignalized intersections. In addition, the paper reviews the prominent simulation platforms used to create reliable environments for MARL training. Lastly, the paper examines the current challenges associated with deploying MARL within CAV control and outlines potential solutions that can effectively overcome these issues. Through this review, the study highlights the tremendous potential of MARL to enhance the performance and collaboration of CAV control in terms of safety, travel efficiency, and economy.

    Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control

    The real-time control optimization of electrified vehicles is one of the most demanding tasks in the innovation of low-emissions mobility. Intelligent energy management systems represent interesting solutions to complex control problems, such as maximizing the fuel economy of hybrid electric vehicles. In recent years, reinforcement-learning-based controllers have been shown to outperform well-established real-time strategies for specific applications. Nevertheless, the effects produced by variations in the reward function have not been thoroughly analyzed, and the potential of a given reinforcement learning (RL) agent under different testing conditions is still to be assessed. In the present paper, the performance of different agents, i.e., Q-learning, deep Q-network (DQN) and double deep Q-network (DDQN), is investigated for a full hybrid electric vehicle throughout multiple driving missions and with two distinct reward functions. The first function aims to guarantee a charge-sustaining policy whilst reducing the fuel consumption (FC) as much as possible; the second aims to minimize the fuel consumption whilst ensuring an acceptable battery state of charge (SOC) by the end of the mission. The novelty of these results lies in the demonstration of a non-trivial incapability of DQN and DDQN to outperform traditional Q-learning when a SOC-oriented reward is considered. On the contrary, optimal fuel consumption reductions are attained by DQN and DDQN when the more complex FC-oriented minimization is deployed. This outcome is particularly evident when the RL agents are trained on regulatory driving cycles and tested on unknown real-world driving missions.
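
The two reward designs compared in the paper can be sketched in a few lines, together with the tabular Q-learning update shared by all three agents. The weights, SOC bounds and hyperparameters below are illustrative assumptions, not the authors' actual formulations.

```python
# Illustrative SOC-oriented vs FC-oriented rewards for an HEV energy
# management agent, plus a standard tabular Q-learning update.
# All numeric constants are assumptions for illustration.

def reward_soc_oriented(fuel_g, soc, soc_target=0.6, w=100.0):
    """Charge-sustaining design: penalise fuel use and any SOC deviation,
    at every step."""
    return -fuel_g - w * abs(soc - soc_target)

def reward_fc_oriented(fuel_g, soc, done, soc_min=0.3, penalty=1000.0):
    """FC-oriented design: minimise fuel; penalise SOC only if it is
    unacceptable at the end of the mission."""
    r = -fuel_g
    if done and soc < soc_min:
        r -= penalty
    return r

def q_update(Q, s, a, r, s_next, lr=0.1, gamma=0.95):
    """One tabular Q-learning step; Q is a dict of dicts: Q[state][action]."""
    Q[s][a] += lr * (r + gamma * max(Q[s_next].values()) - Q[s][a])
```

The structural difference matters to the agents: the SOC-oriented reward is dense (informative at every step), while the FC-oriented reward is sparser and defers the SOC penalty to the end of the mission.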

    Model-free non-invasive health assessment for battery energy storage assets

    Increasing penetration of renewable energy generation in the modern power network introduces uncertainty about the energy available to maintain a balance between generation and demand, due to its time-fluctuating output that is strongly dependent on the weather. With the development of energy storage technology, there is the potential for this technology to become a key element in overcoming this intermittency in generation. However, the increasing penetration of battery energy storage within the power network introduces an additional challenge to asset owners: how to monitor and manage battery health. The accurate estimation of the health of this device is crucial in determining its reliability, power-delivering capability and ability to contribute to the operation of the whole power system. Generally, doing this requires invasive measurements or computationally expensive physics-based models, which do not scale up cost-effectively to a fleet of assets. As storage aggregation becomes more commonplace, there is a need for a health metric able to predict battery health based only on the limited information available, eliminating the need to install extensive telemetry in the system. This work develops a solution to battery health prognostics by providing an alternative, non-invasive approach to the estimation of battery health that estimates the extent to which a battery asset has been maloperated, based only on the operating regime imposed on the device. The model introduced in this work is based on the Hidden Markov Model, which stochastically models the battery limitations imposed by its chemistry as a combination of present and previous sequential charging actions, and articulates the preferred operating regime as a measure of health consequence. 
The resulting methodology is demonstrated on distribution-network-level electrical demand and generation data, accurately predicting maloperation under a number of battery technology scenarios. The effectiveness of the proposed battery maloperation model as a proxy for actual battery degradation for lithium-ion technology was also tested against lab-tested battery degradation data, showing that the proposed health measure in terms of maloperation level reflected that measured in terms of capacity fade. The developed model can support condition monitoring and remaining-useful-life estimates, but in the wider context could also be used as the policy function in an automated scheduler that utilises assets while optimising their health.

    Development of a neural network-based energy management system for a plug-in hybrid electric vehicle

    The high potential of Artificial Intelligence (AI) techniques for effectively solving complex parameterization tasks makes them extremely attractive for the design of the Energy Management Systems (EMS) of Hybrid Electric Vehicles (HEVs). In this framework, this paper aims to design an EMS through deep learning techniques, which can capture the highly non-linear relationships among the data characterizing the problem. In particular, the deep learning model was designed employing two different Recurrent Neural Networks (RNNs). First, a previously developed digital twin of a state-of-the-art plug-in HEV was used to generate a wide portfolio of Real Driving Emissions (RDE)-compliant vehicle missions and traffic scenarios. Then, the AI models were trained off-line to minimize CO2 emissions, using as targets the optimal solutions given by a global optimization control algorithm, namely Dynamic Programming (DP). The proposed methodology was tested on a virtual test rig and proved capable of achieving significant improvements in fuel economy for both charge-sustaining and charge-depleting strategies, with reductions of about 4% and 5% respectively, compared to the baseline Rule-Based (RB) strategy.
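
Dynamic Programming, used here as the off-line optimizer that supplies the training targets, can be illustrated with a deliberately tiny model: at each step the demand is met either by the engine (one unit of fuel) or by the battery (one unit of stored charge, no fuel). The binary action set and unit costs are illustrative assumptions, far simpler than the paper's vehicle model.

```python
# Toy dynamic-programming power split. DP recovers the globally optimal
# engine/battery schedule, i.e. the kind of target a neural EMS is trained
# to imitate. Action set and costs are illustrative assumptions.

def dp_policy(T, soc0):
    """Backward DP over integer SOC levels, then greedy forward rollout.
    Returns (minimum total fuel, per-step action list)."""
    INF = float("inf")
    V = [[0.0] * (soc0 + 1) for _ in range(T + 1)]  # V[t][soc]; V[T][*] = 0
    for t in range(T - 1, -1, -1):
        for soc in range(soc0 + 1):
            use_engine = 1.0 + V[t + 1][soc]            # burn fuel, keep SOC
            use_batt = V[t + 1][soc - 1] if soc >= 1 else INF  # spend charge
            V[t][soc] = min(use_engine, use_batt)
    soc, actions = soc0, []
    for t in range(T):
        use_engine = 1.0 + V[t + 1][soc]
        use_batt = V[t + 1][soc - 1] if soc >= 1 else INF
        if use_batt <= use_engine:
            actions.append("battery")
            soc -= 1
        else:
            actions.append("engine")
    return V[0][soc0], actions
```

For a 5-step mission with 3 units of charge, DP uses the battery for exactly three steps and the engine for the remaining two, the global optimum a learned controller would be trained to reproduce.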

    Project and Development of a Reinforcement Learning Based Control Algorithm for Hybrid Electric Vehicles

    Hybrid electric vehicles are nowadays considered one of the most promising technologies for reducing on-road greenhouse gases and pollutant emissions. Such a goal can be accomplished by developing an intelligent energy management system which leads the powertrain to exploit its maximum energetic performance under real-world driving conditions. According to the latest research in the field of control algorithms for hybrid electric vehicles, Reinforcement Learning has emerged among several Artificial Intelligence approaches, as it has proved capable of producing near-optimal solutions to the control problem even in real-time conditions. Nevertheless, this class of algorithms requires an accurate design of both the agent and the environment. In this paper, the complete design and development of an energy management system based on Q-learning for hybrid powertrains is discussed. An integrated modular software framework for co-simulation has been developed and is thoroughly described. Finally, results are presented from extensive testing of the agent, aimed at assessing the change in its performance when different training parameters are considered.

    Intelligent Transportation Systems, Hybrid Electric Vehicles, Powertrain Control, Cooperative Adaptive Cruise Control, Model Predictive Control

    Information obtainable from Intelligent Transportation Systems (ITS) provides the possibility of improving the safety and efficiency of vehicles at different levels. In particular, such information has the potential to be utilized for the prediction of driving conditions and traffic flow, which allows us to improve the performance of control systems in different vehicular applications, such as Hybrid Electric Vehicle (HEV) powertrain control and Cooperative Adaptive Cruise Control (CACC). In the first part of this work, we study the design of a Model Predictive Control (MPC) controller for a CACC system, an automated application that provides drivers with extra benefits, such as traffic throughput maximization and collision avoidance. CACC systems must be designed in a way that is sufficiently robust against special maneuvers such as interfering vehicles cutting into the CACC platoon or hard braking by leading cars. To address this problem, we first propose a Neural-Network (NN)-based cut-in detection and trajectory prediction scheme. Then, the predicted trajectory of each vehicle in the adjacent lanes is used to estimate the probability of that vehicle cutting into the CACC platoon. To incorporate the calculated probability into control system decisions, a Stochastic Model Predictive Controller (SMPC) could be designed which accounts for this cut-in probability and enhances the reaction to a detected dangerous cut-in maneuver. In this work, however, we propose an alternative way of solving this problem: we model the CACC as a Stochastic Hybrid System (SHS) while still using a deterministic MPC controller running on a single state of the SHS model. Finally, we find the conditions under which the designed deterministic controller is stable and feasible for the proposed SHS model of the CACC platoon. 
In the second part of this work, we propose to improve the performance of one of the most promising real-time powertrain control strategies, the Adaptive Equivalent Consumption Minimization Strategy (AECMS), using predicted driving conditions. Two different real-time powertrain control strategies are proposed for HEVs. The first proposed method, including three different variations, introduces an adjustment factor for the cost of using electrical energy (the equivalence factor) in AECMS. The factor is proportional to the predicted energy requirements of the vehicle, the regenerative braking energy, and the cost of battery charging and discharging in a finite time window. Simulation results using detailed vehicle powertrain models illustrate that the proposed control strategies improve the fuel economy of AECMS by 4%. Finally, we integrate recent developments in reinforcement learning to design a novel multi-level power distribution control. The proposed controller acts on two levels, namely high-level and low-level control. The high-level control decision estimates the most probable driving profile matched to the current (and near-future) state of the vehicle. Then, the corresponding low-level controller for the selected profile is utilized to distribute the requested power between the Electric Motor (EM) and the Internal Combustion Engine (ICE). This is important because no prior work addresses this problem with a controller that can adjust its decisions to the driving pattern. We propose to use two reinforcement learning agents at two levels of abstraction. The first agent selects the most suitable low-level controller (the second agent) based on the overall pattern of the drive cycle in the near past and future, i.e., urban, highway or harsh. Then, the agent selected by the high-level controller (the first agent) decides how to distribute the demanded power between the EM and the ICE. 
We found that by carefully designing a training scheme, it is possible to effectively improve the performance of this data-driven controller. Simulation results show up to 6% improvement in fuel economy compared to the AECMS.
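
The equivalence-factor adjustment at the heart of ECMS-style strategies can be sketched as follows. The adaptation law, the linear fuel model and every numeric constant here are illustrative assumptions, not the authors' formulation; the sketch only shows the mechanism by which a low SOC inflates the cost of electrical energy and steers the split toward the engine.

```python
# Sketch of an adaptive equivalence factor and the resulting ECMS-style
# split. All models and constants are illustrative assumptions.

def adaptive_s(soc, soc_ref=0.6, s0=2.5, k=5.0):
    """Equivalence factor grows as SOC drops below its reference."""
    return s0 * (1.0 + k * (soc_ref - soc))

def ecms_split(p_dem, soc, candidates=(0.0, 10.0, 20.0, 30.0, 40.0),
               fuel_per_kw=0.08, batt_equiv_per_kw=0.02):
    """Pick the engine power minimising the equivalent consumption
    fuel + s * electrical use; the battery covers (or absorbs) the rest."""
    s = adaptive_s(soc)
    best = None
    for p_eng in candidates:
        p_batt = p_dem - p_eng  # negative: the engine recharges the battery
        cost = fuel_per_kw * p_eng + s * batt_equiv_per_kw * p_batt
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_batt)
    return best[1], best[2]
```

At the reference SOC battery energy is "cheap" and covers the whole demand; once the SOC drops, the inflated equivalence factor makes the engine cover the load and recharge the battery.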