2,977 research outputs found

    Power management optimisation for hybrid electric systems using reinforcement learning and adaptive dynamic programming

    This paper presents an online learning scheme based on reinforcement learning and adaptive dynamic programming for the power management of hybrid electric systems. Current methods for power management are conservative and unable to fully account for variations in the system arising from changes in health and operating conditions. These conservative schemes make less efficient use of the available power sources, increasing overall system costs and heightening the risk of failure under such variations. The proposed scheme compensates for modelling uncertainties and gradual system variations by adapting its performance function, using the observed system measurements as reinforcement signals. Because the reinforcement signals are nonlinear, neural networks are employed in the implementation of the scheme. Simulation results for the power management of an autonomous hybrid system show improved system performance using the proposed scheme compared with a conventional offline dynamic programming approach.
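The core mechanism the abstract describes, adapting a neural performance function from observed measurements used as reinforcement signals, can be sketched as a temporal-difference critic update. This is a minimal illustration under assumed values, not the paper's actual scheme: a small random-feature network approximates the cost-to-go, and a self-transition with unit stage cost is used so the correct value, 1/(1 - gamma) = 20, is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_hidden = 2, 8
W1 = rng.normal(size=(n_hidden, n_state))  # fixed random hidden layer
w2 = np.zeros(n_hidden)                    # trained output weights

def critic(w2, x):
    """Neural-network approximation of the cost-to-go J(x)."""
    h = np.tanh(W1 @ x)
    return w2 @ h, h

def td_update(w2, x, cost, x_next, gamma=0.95, lr=0.05):
    """One TD step: move J(x) toward cost + gamma * J(x_next)."""
    j, h = critic(w2, x)
    j_next, _ = critic(w2, x_next)
    delta = cost + gamma * j_next - j      # observed reinforcement signal
    return w2 + lr * delta * h, delta

# Hypothetical measurement (e.g. state of charge, load); a self-transition
# with unit stage cost has known cost-to-go 1 / (1 - gamma) = 20.
x = np.array([0.6, 0.3])
for _ in range(10000):
    w2, delta = td_update(w2, x, 1.0, x)

j_final, _ = critic(w2, x)
```

The update runs online, one measurement at a time, which is what lets such a scheme track gradual system variations without an offline model.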

    Resilience-driven planning and operation of networked microgrids featuring decentralisation and flexibility

    High-impact, low-probability extreme events, both man-made and natural weather events, can cause severe damage to power systems. These events are typically rare but long in duration and large in scale. Considerable research effort has been devoted to the resilience enhancement of modern power systems. In recent years, microgrids (MGs) with distributed energy resources (DERs), including both conventional generation resources and renewable energy sources, have provided a viable solution for the resilience enhancement of such multi-energy systems during extreme events. More specifically, several islanded MGs can be connected with each other after an extreme event as a cluster, which significantly reduces load shedding through energy sharing among them. On the other hand, mobile power sources (MPSs) such as mobile energy storage systems (MESSs), electric vehicles (EVs), and mobile emergency generators (MEGs) have been gradually deployed in current energy systems for resilience enhancement owing to their significant advantages in mobility and flexibility. In this context, a detailed literature review on resilience-driven planning and operation problems featuring MGs is presented, and research limitations are summarised briefly. This thesis then investigates how to develop appropriate planning and operation models for the resilience enhancement of networked MGs via different types of DERs (e.g., MGs, ESSs, EVs, MESSs, etc.). The research is conducted in the following application scenarios: 1. This thesis proposes novel operation strategies for hybrid AC/DC MGs and networked MGs towards resilience enhancement. Three modelling approaches, centralised control, hierarchical control, and distributed control, have been applied to formulate the proposed operation problems. 
A detailed non-linear AC OPF algorithm is employed to model each MG, capturing all the network and technical constraints relating to stability properties (e.g., voltage limits, active and reactive power flow limits, and power losses), while uncertainties associated with renewable energy sources and load profiles are incorporated into the proposed models via stochastic programming. The impacts of limited generation resources, the distinction between critical and non-critical loads, and severe contingencies (e.g., multiple line outages) are appropriately captured to mimic a realistic scenario. 2. This thesis introduces MPSs (e.g., EVs and MESSs) into the suggested networked MGs against the severe contingencies caused by extreme events. Specifically, the time-coupled routing and scheduling characteristics of MPSs inside each MG are modelled to reduce load shedding when MGs suffer severe damage during extreme events. Both transportation networks and power networks are considered in the proposed models, and the travel time of MPSs between different transportation nodes is also appropriately captured. 3. This thesis focuses on developing realistic planning models for the optimal sizing problem of networked MGs, capturing a trade-off between resilience and cost, while both internal uncertainties and external contingencies are considered in the suggested three-level planning model. Additionally, a resilience-driven planning model is developed to solve the coupled optimal sizing and pre-positioning problem of MESSs in the context of decentralised networked MGs. Internal uncertainties are captured in the model via stochastic programming, while external contingencies are included through the three-level structure. 4. This thesis investigates the application of artificial intelligence techniques to power system operations. 
Specifically, a model-free multi-agent reinforcement learning (MARL) approach is proposed for the coordinated routing and scheduling problem of multiple MESSs towards resilience enhancement. The parameterised double deep Q-network method (P-DDQN) is employed to capture a hybrid policy including both discrete and continuous actions. A coupled power-transportation network featuring a linearised AC OPF algorithm is realised as the environment, while uncertainties associated with renewable energy sources, load profiles, line outages, and traffic volumes are incorporated into the proposed data-driven approach through the learning procedure.
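The hybrid policy mentioned above, a discrete action paired with a continuous parameter, is the defining feature of parameterised-action methods such as P-DDQN. The sketch below shows only the action-selection step under assumed linear "networks" (names, shapes, and values are illustrative, not taken from the thesis): each discrete action k (e.g. which node a MESS moves to) carries its own continuous parameter (e.g. charging/discharging power), and the agent picks the pair maximising Q.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_discrete = 4, 3

# Hypothetical linear stand-ins for the two networks: one continuous
# parameter per discrete action, and a Q-value per (state, parameter) pair.
W_param = rng.normal(scale=0.5, size=(n_discrete, n_state))
W_q = rng.normal(scale=0.5, size=(n_discrete, n_state + 1))

def select_action(s):
    """Return the hybrid action (k, x_k) maximising Q(s, k, x_k(s))."""
    params = np.tanh(W_param @ s)  # continuous parts, bounded in (-1, 1)
    q = np.array([W_q[k] @ np.append(s, params[k]) for k in range(n_discrete)])
    k = int(np.argmax(q))          # discrete part: greedy over actions
    return k, float(params[k])

s = np.array([0.2, -0.1, 0.5, 0.0])
k, x = select_action(s)
```

Training then updates the Q-network with a DDQN-style target and the parameter network by gradient ascent on Q; only the selection step is shown here.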

    Optimal and adaptive control frameworks using reinforcement learning for time-varying dynamical systems

    The performance of complex propulsion and power systems is affected by a vast number of varying factors, such as gradual system degradation, engine build differences, and changing operating conditions. Owing to these variations, prior characterisation of system performance metrics such as the fuel efficiency function and constraints is infeasible. Existing model-based control approaches are therefore inherently conservative at the expense of system performance, as they are unable to fully characterise the system variations. The performance characteristics affected by these variations are typically used for health monitoring and maintenance management, but the opportunity to use them to complement control design has received little attention. It is therefore increasingly important to use information about the system performance characteristics in the control system design whilst considering the reliability of its implementation. This thesis accordingly considers the design of direct adaptive frameworks that exploit emerging diagnostic technologies and enable the direct use of complex performance metrics to deliver self-optimising control systems in the face of disturbances and system variations. These frameworks are termed condition-based control techniques, and this thesis extends reinforcement learning (RL) theory, which has achieved significant successes in computing and artificial intelligence, to these new frameworks and applications. Consequently, an online RL framework was developed for the class of complex propulsion and power systems that uses the performance metrics to directly learn and adapt the system control. The RL adaptations were further integrated into existing baseline controller structures whilst maintaining the safety and reliability of the underlying system. 
Furthermore, two online optimal RL tracking control frameworks were developed for time-varying dynamical systems, using a new augmented formulation with integral control. The proposed online RL frameworks advance the state of the art in tracking control applications by avoiding restrictive assumptions on the reference model dynamics, dispensing with discounted tracking costs, and guaranteeing zero steady-state tracking error. Finally, an online power management optimisation scheme for hybrid systems that uses a condition-based RL adaptation was developed. The proposed scheme learns and compensates for gradual system variations and learns online the optimal power management strategy between the hybrid power sources given future load predictions. In this way, improved system performance is delivered along with a through-life adaptation strategy.
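The integral-augmented formulation credited above with zero steady-state error can be illustrated on a toy scalar plant. This is a hand-built sketch, not the thesis's controller: the state is extended with the accumulated tracking error e, so any stabilising feedback on the augmented state z = [x, e] drives the output to the reference without a discounted tracking cost. The plant parameters and gains below are hypothetical, chosen only so the closed loop is stable.

```python
# Plant: x' = a*x + b*u, output y = x, reference r.
a, b, r = 0.9, 0.1, 1.0
# Illustrative fixed gains on the augmented state z = [x, e];
# in the thesis's setting such gains would be learned online via RL.
k1, k2 = 4.0, 2.0

x, e = 0.0, 0.0
for _ in range(100):
    u = -k1 * x + k2 * e   # feedback on the augmented state
    x = a * x + b * u      # plant update
    e = e + (r - x)        # integrator accumulates the tracking error
y_final = x
```

At steady state the integrator only stops changing when r - y = 0, which is why the augmentation yields exact tracking regardless of modest plant mismatch.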

    Near-optimal energy management for plug-in hybrid fuel cell and battery propulsion using deep reinforcement learning

    Plug-in hybrid fuel cell and battery propulsion systems appear promising for decarbonising transportation applications such as road vehicles and coastal ships. However, it is challenging to develop optimal or near-optimal energy management for these systems without exact knowledge of future load profiles. Although efforts have been made to develop strategies in a stochastic environment with a discrete state space using Q-learning and Double Q-learning, the effectiveness of such tabular reinforcement learning agents is limited by the state-space resolution. This article aims to develop an improved energy management system using deep reinforcement learning to achieve enhanced cost savings by extending the discrete state parameters to be continuous. The improved energy management system is based upon the Double Deep Q-Network. Real-world collected stochastic load profiles are applied to train the Double Deep Q-Network for a coastal ferry. The results suggest that the energy management strategy learned by the Double Deep Q-Network achieves a further 5.5% cost reduction with a 93.8% decrease in training time, compared with that produced by the Double Q-learning agent in a discrete state space without function approximation. In addition, this article proposes an adaptive deep reinforcement learning energy management scheme for practical hybrid-electric propulsion systems operating in changing environments.
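The distinguishing step of the Double Deep Q-Network the article builds on is its target: the online network selects the next action and a separate target network evaluates it, which reduces the overestimation bias of a single max. A minimal numpy sketch under assumed linear stand-ins for the two networks (shapes and values are illustrative, not the article's model):

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_action = 3, 4
W_online = rng.normal(size=(n_action, n_state))  # selects the action
W_target = rng.normal(size=(n_action, n_state))  # evaluates the action

def ddqn_target(reward, s_next, gamma=0.99):
    """Double DQN target: argmax from the online net, value from the target net."""
    a_star = int(np.argmax(W_online @ s_next))           # selection
    return reward + gamma * (W_target @ s_next)[a_star]  # evaluation

s_next = np.array([0.5, -0.2, 0.1])
y = ddqn_target(reward=1.0, s_next=s_next)
```

Because the evaluated action need not maximise the target network's output, this target is never larger than the vanilla DQN target, which is the source of the bias reduction.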