
    Management of Distributed Energy Storage Systems for Provisioning of Power Network Services

    For environmental reasons and with advancing technology, a significant number of renewable energy sources (RESs) have been integrated into existing power networks. Increasing penetration and uneven allocation of RESs and load demands can cause power quality issues and system instability in the power networks. Moreover, high RES penetration lowers system inertia, because fewer rotational machines remain online, leading to frequency instability. Consequently, the resilience, stability, and power quality of the power networks deteriorate. This thesis proposes and develops new strategies for energy storage (ES) systems distributed in power networks to compensate for unbalanced active powers and supply-demand mismatches and to improve power quality, while taking the constraints of the ES into consideration. The thesis is divided into two parts. In the first part, unbalanced active powers and supply-demand mismatches, caused by uneven allocation of rooftop PV units and load demands, are compensated by the distributed ES systems using novel frameworks based on distributed control systems and deep reinforcement learning. Few studies have used distributed battery ES systems to mitigate unbalanced active powers in three-phase four-wire and grounded power networks. Distributed control strategies are therefore proposed to compensate for the unbalanced conditions. To group households on the same phase into the same cluster, algorithms based on feature states and labelled phase data are applied. Within each cluster, distributed dynamic active power balancing strategies drive each phase's active power towards the reference average phase power, so the phase active powers become balanced. To alleviate the supply-demand mismatch caused by high PV generation, a distributed active power control system is developed.
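The phase-balancing rule described above can be sketched as a simple proportional controller (the function name, gain, and clipping are illustrative assumptions, not the thesis's actual controller):

```python
# Illustrative sketch of distributed phase active-power balancing
# (function name, the proportional update, and the gain are assumptions,
# not the thesis's actual design).

def balance_phases(phase_powers, battery_limits, gain=0.5):
    """Command each phase's battery cluster so that the phase's net active
    power moves towards the three-phase average.  A positive command means
    discharge (inject power into the phase); commands are clipped to the
    cluster's power limit."""
    avg = sum(phase_powers) / len(phase_powers)
    commands = []
    for p, limit in zip(phase_powers, battery_limits):
        cmd = gain * (p - avg)              # discharge on overloaded phases
        commands.append(max(-limit, min(limit, cmd)))
    return commands

# Phase A heavily loaded, phase C lightly loaded (kW):
cmds = balance_phases([5.0, 3.0, 1.0], [2.0, 2.0, 2.0])   # -> [1.0, 0.0, -1.0]
```

Note that the commands sum to zero here, so the balancing shifts power between phases without changing the total net load.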
The strategy combines supply-demand mismatch compensation with battery state-of-charge (SoC) balancing. Control parameters are designed using Hurwitz matrices and Lyapunov theory. The distributed ES systems minimise the total mismatch between power generation and consumption, so that reverse power flow back to the main grid is decreased; voltage rise and voltage fluctuation are thus reduced. Furthermore, as a model-free alternative, new frameworks based on Markov decision processes and Markov games are developed to compensate for unbalanced active powers. These frameworks require only a proper design of states, actions, and reward functions, together with training and testing on real PV generation and load demand data; dynamic models and control parameter designs are no longer required. The frameworks are solved using the deep deterministic policy gradient (DDPG) and multi-agent DDPG (MADDPG) algorithms. In the second part, the distributed ES systems are employed to improve frequency, inertia, voltage, and active power allocation in both islanded AC and DC microgrids through novel decentralized control strategies. In an islanded DC datacentre microgrid, a novel decentralized control of heterogeneous ES systems is proposed: high- and low-frequency components of the datacentre loads are shared by ultracapacitors and batteries using virtual capacitive and virtual resistance droop controllers, respectively. A decentralized SoC balancing control balances the battery SoCs to a common value, and a stability model ensures the ES devices operate within predefined limits. In an isolated AC microgrid, decentralized frequency control of distributed battery ES systems is proposed. The strategy includes adaptive frequency droop control based on current battery SoCs, virtual inertia control to improve the frequency nadir, and frequency restoration control that returns the system frequency to its nominal value without depending on communication infrastructure. A small-signal model of the proposed strategy is developed for calculating the control parameters.
The proposed strategies in this thesis are verified using MATLAB/Simulink with the Reinforcement Learning and Deep Learning Toolboxes, and RTDS Technologies' real-time digital simulator with accurate power network models, switching-level models of the power electronic converters, and a nonlinear battery model.
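The SoC-adaptive droop and virtual inertia ideas can be sketched as follows (the gains, the linear SoC weighting, and the sign conventions are illustrative assumptions, not the thesis's actual controllers):

```python
# Sketch of SoC-adaptive frequency droop plus virtual inertia for one
# battery ES unit (gains, linear SoC weighting, and signs are
# illustrative assumptions, not the thesis's actual design).

def droop_power(f_meas, f_nom=50.0, soc=0.5, kp_max=10.0):
    """Adaptive frequency droop: units with a higher state of charge take
    a larger share of the discharge during under-frequency events (kW)."""
    kp = kp_max * soc                  # droop gain scales with SoC
    return kp * (f_nom - f_meas)       # under-frequency -> discharge

def virtual_inertia_power(df_dt, kd=5.0):
    """Oppose the rate of change of frequency to improve the nadir."""
    return -kd * df_dt

# For the same 0.2 Hz dip, a fuller battery contributes more power:
p_full = droop_power(49.8, soc=0.9)    # ~1.8 kW
p_low = droop_power(49.8, soc=0.3)     # ~0.6 kW
```

Because each unit's output depends only on its own SoC and the locally measured frequency, no communication is needed, which is the point of the decentralized design described above.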

    Multi-agent reinforcement learning for the coordination of residential energy flexibility

    This thesis investigates whether residential energy flexibility can be coordinated at scale, without sharing personal data, to achieve a positive impact on energy users and the grid. To tackle climate change, energy uses are being electrified at pace, just as electricity is increasingly provided by non-dispatchable renewable energy sources. These shifts increase the need for demand-side flexibility. Despite its potential, residential flexibility has remained largely untapped due to cost, social acceptance, and technical barriers. This thesis investigates the use of multi-agent reinforcement learning (MARL) to overcome these challenges. It presents a novel testing environment, which models electric vehicles, space heating, and flexible household loads in a distribution network. Additionally, a generative adversarial network-based data generator is developed to obtain realistic training and testing data. Experiments conducted in this environment showed that standard independent learners fail to coordinate in the partially observable stochastic environment. To address this, additional coordination mechanisms are proposed that let agents practise coordination in a centralised simulated rehearsal ahead of fully decentralised implementation. Two such mechanisms are proposed: optimisation-informed independent learning, and a centralised but factored critic network. In the former, agents learn from omniscient convex optimisation results ahead of fully decentralised coordination; this enables cooperation at scale where standard independent learners under partial observability could not coordinate. In the latter, agents employ a deep neural factorisation network to learn to assess their impact on global rewards. This approach delivers comparable performance for four or more agents, with a 34-fold speed improvement for 30 agents and only first-order computational time growth.
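The factored-critic idea can be illustrated with a toy additive value decomposition (the purely additive form, the sizes, and the random per-agent utilities are assumptions for illustration; the thesis uses a learned deep factorisation network):

```python
import itertools
import numpy as np

# Toy additive value decomposition in the spirit of the factored critic
# described above (all concrete values here are invented).  Because the
# global value is a sum of per-agent terms, each agent can pick its own
# action greedily and independently, while training still targets the
# global reward -- the source of the scalability gain.

def factored_q(per_agent_q, joint_action):
    """Global Q-value as the sum of each agent's Q for its own action."""
    return sum(q[a] for q, a in zip(per_agent_q, joint_action))

rng = np.random.default_rng(0)
qs = [rng.standard_normal(3) for _ in range(4)]    # 4 agents, 3 actions each
greedy = tuple(int(np.argmax(q)) for q in qs)      # decentralised choices

# Exhaustive check: the decentralised greedy joint action also maximises
# the factored global value.
best = max(itertools.product(range(3), repeat=4),
           key=lambda ja: factored_q(qs, ja))
```

The exhaustive search over 3^4 joint actions is only there to verify the property; the factorisation exists precisely so that no such joint search is needed at execution time.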
Finally, the impacts of implementing implicit coordination with these multi-agent reinforcement learning methodologies are modelled. Even without explicit grid constraint management, cooperating energy users reduce the likelihood of voltage deviations. The MARL policies can promote this cooperative management of voltage constraints further, reducing the likelihood of deviations by 43.08% relative to an uncoordinated baseline, albeit with trade-offs in other costs. However, while this thesis demonstrates the technical feasibility of MARL-based cooperation, further market mechanisms are required to reward all participants for their cooperation.

    Policy-based power consumption management in smart energy community using single agent and multi agent Q learning algorithms

    Power consumption in the residential sector has increased due to population growth, economic growth, and the invention of many electrical appliances, and has therefore become a growing concern in the power industry. Managing residential power consumption without sacrificing user comfort has recently become one of the main research areas. The complexity of the power system keeps growing as alternative sources of electric energy such as solar, hydro, biomass, geothermal, and wind are added to meet the growing demand for electricity. To overcome the challenges posed by this complexity, the power grid needs to be intelligent in all aspects. As the grid gets smarter, considerable efforts are being made to make houses and businesses smarter in consuming electrical energy so as to minimize and level the electricity demand, an approach known as demand-side management (DSM). This also requires that conventional approaches to modelling, control, and energy management in all sectors be enhanced or replaced by intelligent information processing techniques. Our research work was carried out in several stages. We proposed a policy-based framework that allows intelligent and flexible energy management of home appliances in a smart home, which is complex and dynamic, in ways that save energy automatically. We considered the challenges of formalizing the behaviour of the appliances using their states and of managing energy consumption using policies. Policies are rules created and edited by a house agent to deal with situations or power problems that are likely to occur. Each time a power problem arises, the house agent consults the policy and executes one or more rules to overcome the situation. Our policy-based smart home can manage energy efficiently and can contribute significantly to reducing peak energy demand (and may thereby reduce carbon emissions).
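A policy rule of the kind described above can be sketched as follows (the appliance names, powers, priorities, and the greedy shedding rule are invented for illustration, not taken from the thesis):

```python
# Sketch of a peak-shaving policy rule (all concrete appliance data and
# the greedy priority rule are illustrative assumptions).

appliances = [
    # (name, power in kW, priority: lower number = shed first)
    ("water_heater", 3.0, 1),
    ("air_conditioner", 2.0, 2),
    ("oven", 2.5, 3),
    ("refrigerator", 0.2, 9),
]

def shed_for_peak(appliances, total_load, limit):
    """Greedy shedding rule: switch off the lowest-priority appliances
    until the household load falls back under the available-power limit."""
    switched_off = []
    for name, power, _priority in sorted(appliances, key=lambda a: a[2]):
        if total_load <= limit:
            break
        switched_off.append(name)
        total_load -= power
    return switched_off, total_load

# During a 7.7 kW peak with only 5 kW available, the water heater is shed:
off, load = shed_for_peak(appliances, total_load=7.7, limit=5.0)
```

In the framework above, rules like this would be created and edited by the house agent and fired whenever a power problem is detected.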
Our proposed policy-based framework achieves peak shaving so that power consumption adapts to the available power, while ensuring the comfort of the inhabitants and taking device characteristics into account. Our MATLAB simulation results indicate that the proposed policy-driven homes can contribute effectively to demand-side power management by decreasing peak-hour usage of the appliances, and can manage energy in a smart home efficiently and in a user-friendly way. We proposed and developed peak demand management algorithms for a smart energy community using different coordination mechanisms for multiple house agents working in the same environment. These algorithms use a centralized model, a decentralized model, a hybrid model, and a Pareto resource allocation model for resource allocation. We modelled user comfort for each appliance based on user preference, the appliance's power reduction capability, and the important household activities associated with it. Moreover, we compared these algorithms with respect to peak reduction capability, overall community comfort, algorithmic simplicity, and community involvement, and identified the best-performing algorithm among them. Our simulation results show that the proposed coordination algorithms can effectively reduce peak demand while maintaining user comfort. With the help of our proposed algorithms, the electricity demand of a smart community can be managed intelligently and sustainably. This work aims not only at peak reduction but at achieving it while keeping the impact on the inhabitants' comfort to a minimum. The system can learn users' behaviour and establish the set of optimal rules dynamically. If the available power to a house is kept at a certain level, the house agent will learn to use this notional power to operate all the appliances according to the requirements and comfort level of the household.
In this way, consumers are constrained to use power below the set level, so overall power consumption can be maintained at a certain rate or level; sustainability thus becomes possible, and the depletion of the natural resources used for electricity generation can be reduced. Temporal interactions between local users' energy demand and renewable energy sources could also be handled more efficiently with a set of new policy rules for switching between the utility and the renewable source of energy, but this is beyond the scope of this thesis. We applied Q-learning techniques to a home energy management agent, which learns the optimal sequence for turning off appliances so that higher-priority appliances are not switched off during peak demand periods or power consumption management. The policy-based home energy management determines the optimal policy at every instant dynamically, by learning through interaction with the environment using Q-learning, a reinforcement learning approach. The Q-learning formulation of the home power consumption problem, consisting of the state space, the actions, and the reward function, is presented. The simulation results imply that the proposed Q-learning based power consumption management is very effective and causes users minimal discomfort while participating in peak demand management, or whenever power consumption management is essential because the available power is rationed. This work is extended to a group of 10 houses, and three multi-agent Q-learning algorithms are proposed and developed to improve individual and community comfort while keeping power consumption below the available power level, or the electricity price below the set price. The proposed algorithms are the weighted strategy sharing algorithm, the concurrent Q-learning algorithm, and the cooperative distributive learning algorithm.
These algorithms are coded and tested for managing the power consumption of a group of 10 houses, and the performance of all three with respect to power management and community comfort is studied and compared. The actual power consumption of a community, the modified consumption curves obtained with the weighted strategy sharing, concurrent Q-learning, and distributive Q-learning algorithms, and the user comfort results are presented and analysed in this thesis.
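The single-agent Q-learning formulation mentioned above (state space, actions, reward function) can be sketched in tabular form; the encodings, sizes, and hyper-parameters below are illustrative assumptions, not the thesis's actual formulation:

```python
import random

# Toy tabular Q-learning sketch of appliance shedding (all encodings and
# hyper-parameters are invented for illustration).
# State: demand level 0-3; action: which appliance to switch off;
# reward: shedding helps when demand is high, and low-priority
# appliances are cheaper to shed.

random.seed(0)
n_states, n_actions = 4, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1
priority = [1, 2, 3]                 # higher = more important to keep on

def reward(state, action):
    return (state - priority[action]) if state >= 2 else -priority[action]

for _ in range(5000):
    s = random.randrange(n_states)
    if random.random() < eps:
        a = random.randrange(n_actions)                     # explore
    else:
        a = max(range(n_actions), key=lambda x: Q[s][x])    # exploit
    s_next = random.randrange(n_states)                     # toy transition
    Q[s][a] += alpha * (reward(s, a) + gamma * max(Q[s_next]) - Q[s][a])

# After training, high-demand states favour shedding the
# lowest-priority appliance first.
```

The multi-agent variants described above (weighted strategy sharing, concurrent Q-learning) extend this by exchanging or jointly updating such Q-tables across house agents.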

    MAHTM: A Multi-Agent Framework for Hierarchical Transactive Microgrids

    Integrating variable renewable energy into the grid has posed challenges to system operators in achieving optimal trade-offs among energy availability, cost affordability, and pollution controllability. This paper proposes a multi-agent reinforcement learning framework for managing energy transactions in microgrids. The framework addresses the challenges above: it seeks to optimize the usage of available resources by minimizing the carbon footprint while benefiting all stakeholders. The proposed architecture consists of three layers of agents, each pursuing different objectives. The first layer, comprising prosumers and consumers, minimizes the total energy cost. The other two layers control the energy price to decrease the carbon impact while balancing the consumption and production of both renewable and conventional energy. The framework also takes into account fluctuations in energy demand and supply.
    Comment: ICLR 2023 Workshop: Tackling Climate Change with Machine Learning

    Scaling energy management in buildings with artificial intelligence

    The abstract is in the attachment.

    Reinforcement learning in local energy markets

    Local energy markets (LEMs) are well suited to address the challenges of the European energy transition. They incite investment in renewable energy sources (RES), can improve the integration of RES into the energy system, and empower local communities. However, as electricity is a low-involvement good, residential households have neither the expertise nor the willingness to put in the time and effort to trade on short-term LEMs themselves. Thus, machine learning algorithms are proposed to take over the bidding for households under realistic market information. We simulate a LEM with a 15-min merit-order market mechanism and deploy reinforcement learning for the agents' strategic learning. In a multi-agent simulation of 100 households including PV, micro-cogeneration, and demand-shifting appliances, we show how participants in a LEM can achieve a self-sufficiency of up to 30% with trading, and 41.4% with trading and demand response (DR), through the installation of only 5 kWp PV panels in 45% of the households, at affordable energy prices. A sensitivity analysis shows how the results vary with the share of renewable generation and the degree of demand flexibility.
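The merit-order mechanism mentioned above can be sketched for a single 15-min slot (the bid data and the rule that the last matched ask sets a uniform clearing price are assumptions for illustration, not the paper's exact mechanism):

```python
# Sketch of uniform-price merit-order clearing for one market slot
# (bid data and the marginal-ask pricing rule are illustrative
# assumptions).

def clear_market(asks, bids):
    """asks/bids are (price, quantity) offers to sell/buy energy.  Sort
    asks ascending and bids descending, match while the cheapest
    remaining ask does not exceed the highest remaining bid, and let the
    last matched ask set the clearing price."""
    asks = sorted(asks)
    bids = sorted(bids, reverse=True)
    traded, price = 0.0, None
    while asks and bids and asks[0][0] <= bids[0][0]:
        q = min(asks[0][1], bids[0][1])
        traded += q
        price = asks[0][0]
        asks[0] = (asks[0][0], asks[0][1] - q)
        bids[0] = (bids[0][0], bids[0][1] - q)
        if asks[0][1] <= 0:
            asks.pop(0)
        if bids[0][1] <= 0:
            bids.pop(0)
    return traded, price

# Two PV sellers and two buyers (price in EUR/kWh, quantity in kWh):
volume, price = clear_market([(0.05, 2.0), (0.12, 3.0)],
                             [(0.20, 1.5), (0.10, 2.0)])   # -> (2.0, 0.05)
```

In the paper's setting, the reinforcement learning agents would learn what prices and quantities to submit into such a clearing each 15-min slot.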