6 research outputs found

    Data-driven coordinated voltage control method of distribution networks with high DG penetration

    The high penetration of distributed generators (DGs) aggravates voltage violations in active distribution networks (ADNs). Coordinating regulation devices such as on-load tap changers (OLTCs) and DG inverters can effectively address these voltage issues. Because network parameters are often inaccurate and DG output fluctuates rapidly in practical operation, multi-source measurement data can be used to build a data-driven control model. This paper proposes a data-driven coordinated voltage control method that coordinates the OLTC and DG inverters on multiple time-scales without relying on an accurate physical model. First, a data-driven voltage control model is established from the multi-source data, and the OLTC and DG inverters are coordinated on multiple time-scales to keep voltages within the desired range. Then, a critical measurement selection method is proposed to guarantee control performance when only partial measurements are available in practical ADNs. Finally, the proposed method is validated on modified IEEE 33-node and IEEE 123-node test cases. Case studies illustrate the effectiveness of the method and its adaptability to DG uncertainties.
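    As a rough, hypothetical sketch of what "data-driven, model-free" voltage control can look like in practice, the Python snippet below estimates bus-voltage sensitivities to DG inverter reactive power from historical measurements by least squares and then computes reactive-power corrections for buses that violate limits. The linear sensitivity model, the ridge regularization, and all names and numbers are illustrative assumptions, not the paper's actual formulation, and the sketch covers only the fast inverter time-scale, not the OLTC coordination.

```python
import numpy as np

# Hypothetical illustration of a data-driven (model-free) voltage control step.
# Assumption: small voltage changes respond roughly linearly to DG reactive-power
# changes, dV ~= S @ dQ, with S estimated purely from measurement data.

def estimate_sensitivity(delta_v_hist, delta_q_hist):
    """Least-squares estimate of the voltage/reactive-power sensitivity matrix.

    delta_v_hist: (T, n_bus) historical voltage deviations
    delta_q_hist: (T, n_dg)  historical DG reactive-power deviations
    """
    # Solve delta_v ≈ delta_q @ S.T for S (one regression per monitored bus).
    S_T, *_ = np.linalg.lstsq(delta_q_hist, delta_v_hist, rcond=None)
    return S_T.T                                   # shape (n_bus, n_dg)

def reactive_power_correction(v_meas, S, v_min=0.95, v_max=1.05):
    """Reactive-power adjustments that push violated voltages back toward
    the nearest limit (ridge-regularized least squares)."""
    target = np.clip(v_meas, v_min, v_max)          # desired voltages
    dv_needed = target - v_meas                     # zero where no violation
    lam = 1e-3                                      # regularization weight
    A = S.T @ S + lam * np.eye(S.shape[1])
    return np.linalg.solve(A, S.T @ dv_needed)

# Example with synthetic data (purely illustrative numbers).
rng = np.random.default_rng(0)
S_true = rng.uniform(0.01, 0.05, size=(33, 4))      # 33 buses, 4 DG inverters
dq_hist = rng.normal(0.0, 1.0, size=(200, 4))
dv_hist = dq_hist @ S_true.T + rng.normal(0, 1e-4, (200, 33))

S_hat = estimate_sensitivity(dv_hist, dq_hist)
v_now = np.full(33, 1.0)
v_now[20:25] = 1.07                                 # simulated over-voltage
dq = reactive_power_correction(v_now, S_hat)
print("suggested DG reactive-power adjustments (p.u.):", np.round(dq, 3))
```

    In a multi-time-scale scheme such as the one described above, a step like this would run on the fast (inverter) time-scale, while slower decisions such as OLTC tap positions would be updated less frequently; the critical-measurement selection would further restrict which buses feed the regression.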

    Deep Reinforcement Learning for the Optimization of Building Energy Control and Management

    Most current game-theoretic demand-side management methods focus primarily on the scheduling of home appliances, and the related numerical experiments are analyzed under various scenarios to reach the corresponding Nash equilibrium (NE) and optimal results. However, little work has addressed academic or commercial buildings, and the methods for optimizing academic buildings differ from those used for home appliances. In this study, we propose a methodology to control the operation of heating, ventilation, and air conditioning (HVAC) systems. We assume that each building on campus is equipped with a smart meter and the communication infrastructure envisioned in the future smart grid. In academic and commercial buildings, HVAC systems consume considerable electrical energy and affect the occupants' working productivity, which is interpreted as a monetary value in this work. We therefore define the social cost as the combination of the energy expense and the cost of reduced human working productivity. We formulate a game-theoretic control and scheduling problem for the HVAC system in which the players are the building managers and their strategies are the indoor temperature settings of their buildings. The University of Denver campus power system is used as the demonstration smart grid, and the utility company is assumed to adopt a real-time pricing mechanism, demonstrated in this work, that reflects energy usage and power-system conditions in real time. For general scenarios, the globally optimal results in terms of minimizing social cost are reached at the Nash equilibrium of the formulated game. The proposed distributed HVAC control scheme requires each manager to set the indoor temperature to the best-response strategy, and managers are willing to participate because they save energy cost while keeping the indoor temperature in a comfortable range.

    With the development of artificial intelligence and computing technology, reinforcement learning (RL) can be applied to many realistic scenarios and real-world problems. RL connects agents and environments through Markov decision processes or neural networks and has seldom been used in power systems. Once a simulator for a specific environment is built, the algorithm can keep learning from it, so RL can handle constantly changing inputs such as power demand, power-system conditions, and outdoor temperature. Compared with existing distribution-system planning mechanisms and related game-theoretic methodologies, the proposed algorithm can plan and optimize hourly energy usage and can operate on even shorter time windows if needed. The combination of deep neural networks and reinforcement learning has driven rapid progress in deep reinforcement learning, and this work contributes to power and energy management by developing and implementing deep reinforcement learning to control HVAC systems in the distribution power system. Simulation results show that the proposed methodology can set the indoor temperature according to real-time prices and the number of occupants, maintain indoor comfort, and reduce both individual building energy costs and overall campus electricity charges. Compared with the traditional game-theoretic methodology, the RL-based gaming methodology reaches the optimal results much more quickly.
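    To make the reinforcement-learning side of this idea concrete, the sketch below uses tabular Q-learning to pick an hourly indoor temperature setpoint for a single building so as to minimize a social cost combining the energy bill under a real-time price and a penalty for lost occupant productivity. The toy thermal and productivity models, the discretization, and every parameter value are invented for illustration; the actual work uses deep RL over the multi-building game rather than this single-building tabular form.

```python
import numpy as np

rng = np.random.default_rng(1)

SETPOINTS = np.arange(20, 27)           # candidate indoor setpoints in °C (actions)
COMFORT = 22.0                          # hypothetical most-productive temperature

def social_cost(setpoint, price, occupants, outdoor):
    """Hypothetical social cost: energy expense plus productivity-loss penalty."""
    hvac_kwh = 0.5 * abs(outdoor - setpoint)             # toy thermal model
    energy_cost = price * hvac_kwh
    productivity_loss = 0.3 * occupants * (setpoint - COMFORT) ** 2
    return energy_cost + productivity_loss

def state_index(price, occupants):
    """Discretize (real-time price, occupancy) into a small state space."""
    p_bin = min(int(price // 0.1), 4)                     # 5 price bins
    o_bin = min(occupants // 20, 4)                       # 5 occupancy bins
    return p_bin * 5 + o_bin

Q = np.zeros((25, len(SETPOINTS)))                        # tabular Q-values
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(2000):
    for hour in range(24):
        price = rng.uniform(0.05, 0.45)                   # simulated real-time price
        occupants = int(rng.integers(0, 100))
        outdoor = 30.0 + 5.0 * np.sin(hour / 24 * 2 * np.pi)
        s = state_index(price, occupants)

        # epsilon-greedy action selection over setpoints
        a = rng.integers(len(SETPOINTS)) if rng.random() < eps else int(np.argmin(Q[s]))
        cost = social_cost(SETPOINTS[a], price, occupants, outdoor)

        # next state sampled from the same simulator
        s_next = state_index(rng.uniform(0.05, 0.45), int(rng.integers(0, 100)))
        # Q-learning update; costs are minimized, so use min over next actions
        Q[s, a] += alpha * (cost + gamma * Q[s_next].min() - Q[s, a])

# Greedy policy: preferred setpoint for a given price/occupancy state
print("setpoint at high price, low occupancy:",
      SETPOINTS[int(np.argmin(Q[state_index(0.40, 10)]))])
print("setpoint at low price, high occupancy:",
      SETPOINTS[int(np.argmin(Q[state_index(0.06, 90)]))])
```

    A deep RL variant would replace the Q-table with a neural network so the agent can take continuous prices, occupancy, and outdoor temperature as inputs; the tabular form is used here only to keep the sketch short and self-contained.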

    A Hierarchical VLSM-Based Demand Response Strategy for Coordinative Voltage Control Between Transmission and Distribution Systems
