2 research outputs found

    Renewable Energy Integration in Distribution System with Artificial Intelligence

    With increasing attention to renewable energy development in distribution power systems, artificial intelligence (AI) can play an indispensable role. In this thesis, a series of AI-based methods are studied and implemented to further enhance the performance of power system operation and control. Due to the large volume of heterogeneous data provided by both the customer and the grid side, a big data visualization platform is built to extract the hidden, useful knowledge for smart grid (SG) operation, control, and situational awareness. An open-source cluster computing framework, Apache Spark, is used to discover the information hidden in the big data. The data are transmitted over an Open Systems Interconnection (OSI) model to the visualization platform through a high-speed communication architecture. Google Earth and a global Geographic Information System (GIS) are used to design the visualization platform and render the results. The platform above addresses the external manifestation of the data; the following work aims to understand the internal information hidden in the data. A short-term load forecasting approach based on support vector regression (SVR) is designed to provide higher-accuracy load forecasts for network reconfiguration. The nonconvex three-phase balanced optimal power flow (OPF) problem is relaxed with a second-order cone program (SOCP), and the alternating direction method of multipliers (ADMM) is used to compute the OPF in a distributed manner. To reflect the reality of distribution systems, a three-phase unbalanced distribution system model is built, which consists of hourly operation scheduling at the substation level and minute-level power flow operation at the feeder level. The operation cost of the system with renewable generation is minimized at the substation level.
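As a rough illustration of the SVR-based short-term load forecasting step, the sketch below fits a support vector regressor to synthetic hourly load data, using the previous 24 hours as lag features. The data, lag window, and hyperparameters are illustrative assumptions for the example, not the setup used in the thesis.

```python
# Sketch of SVR-based short-term load forecasting on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                       # 30 days of hourly samples
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Lag features: the previous 24 hourly loads predict the next hour's load.
X = np.stack([load[i:i + 24] for i in range(load.size - 24)])
y = load[24:]
X_train, X_test = X[:-24], X[-24:]               # hold out the last day
y_train, y_test = y[:-24], y[-24:]

scaler = StandardScaler().fit(X_train)
y_mean, y_std = y_train.mean(), y_train.std()    # standardize the target too

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(scaler.transform(X_train), (y_train - y_mean) / y_std)

pred = model.predict(scaler.transform(X_test)) * y_std + y_mean
mae = float(np.abs(pred - y_test).mean())
print(f"one-day-ahead test MAE: {mae:.2f} MW")
```

The forecast for the held-out day tracks the daily cycle to within roughly the noise level of the synthetic data.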
The stochastic distribution of renewable generation is modeled with a chance constraint, and the derived deterministic form is modeled with a Gaussian mixture model (GMM) fitted by genetic algorithm-based expectation-maximization (GAEM). The system cost is further reduced with OPF in real-time (RT) scheduling. Semidefinite programming (SDP) is used to relax the nonconvex three-phase unbalanced distribution system problem into a convex one, which helps to achieve the globally optimal result, and ADMM is applied in a parallel manner to obtain the results in a short time. Clouds have a large impact on solar energy forecasting. First, a convolutional neural network (CNN) based method is used to estimate the solar irradiance; second, the regression results are collected to predict the renewable generation. A novel approach is then proposed to capture the global horizontal irradiance (GHI) conveniently and accurately. Considering the nonstationary property of the GHI on cloudy days, GHI capturing is cast as an image regression problem. Traditional approaches treat image regression as two separately optimized parts, feature extraction and regression, with no interconnection between them. Exploiting its nonlinear regression capability, a CNN-based image regression approach is proposed to provide an end-to-end solution to the cloudy-day GHI capturing problem. For data cleaning, a Gaussian mixture model with Bayesian inference is employed to detect and eliminate anomalous data in a nonparametric manner, and the purified data are used as input for the proposed image regression approach. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
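To illustrate how a chance constraint on renewable generation can be turned into a deterministic bound once a GMM has been fitted, the sketch below inverts the CDF of a two-component mixture by bisection. The mixture parameters and the 95% confidence level are assumptions made for the example, not values from the thesis.

```python
# Sketch: deterministic equivalent of a chance constraint under a GMM.
# P(generation >= r) >= 0.95  becomes  r <= q, with q the 5% quantile.
import math

weights = [0.6, 0.4]          # illustrative mixture weights
means   = [5.0, 12.0]         # component means (MW)
stds    = [1.0, 2.0]          # component standard deviations (MW)

def gmm_cdf(x):
    """P(generation <= x) under the Gaussian mixture."""
    return sum(w * 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))
               for w, m, s in zip(weights, means, stds))

def gmm_quantile(p, lo=-50.0, hi=50.0, tol=1e-6):
    """Invert the mixture CDF by bisection (it is monotone increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gmm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_max = gmm_quantile(0.05)
print(f"renewable output dispatchable with 95% confidence: {r_max:.2f} MW")
```

Because a GMM's CDF is a weighted sum of Gaussian CDFs, the quantile has no closed form, which is why a numerical inversion like this bisection is used.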

    Deep Reinforcement Learning for the Optimization of Building Energy Control and Management

    Most current game-theoretic demand-side management methods focus primarily on the scheduling of home appliances, with numerical experiments analyzed under various scenarios to reach the corresponding Nash equilibrium (NE) and optimal results. However, little work has addressed academic or commercial buildings, whose optimization methods differ from those for home appliances. In this study, we present a novel methodology to control the operation of heating, ventilation, and air conditioning (HVAC) systems. We assume that each building on our campus is equipped with a smart meter and the communication system envisioned in the future smart grid. In academic and commercial buildings, HVAC systems consume considerable electrical energy and affect the productivity of the occupants, which is interpreted as monetary value in this article. We therefore define the social cost as the combination of the energy expense and the cost of reduced human working productivity. We apply game theory to formulate a control and scheduling game for the HVAC system, in which the players are the building managers and their strategies are the indoor temperature settings for their buildings. The University of Denver campus power system serves as the demonstration smart grid, and the utility company is assumed to adopt the real-time pricing mechanism demonstrated in this paper, which reflects energy usage and power system conditions in real time. For general scenarios, the globally optimal result in terms of minimizing the social cost is reached at the Nash equilibrium of the formulated objective function. The proposed distributed HVAC control system requires each manager to set the indoor temperature to the best-response strategy to optimize their overall management.
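The best-response dynamics described above can be sketched as follows: each manager repeatedly minimizes their own social cost (energy expense plus a comfort penalty) given the others' current setpoints, and a fixed point of this iteration is a Nash equilibrium. The linear real-time price model, quadratic comfort cost, and all parameter values below are illustrative assumptions, not the formulation used in the paper.

```python
# Sketch of best-response iteration for an HVAC temperature-setting game.
import numpy as np

T_out, T_pref = 35.0, 22.0           # outdoor / preferred indoor temp (deg C)
n_buildings = 4
a = 2.0                              # kW of cooling per deg C below outdoor
p0, p1 = 0.10, 0.002                 # real-time price: p0 + p1 * total demand
b = 0.5                              # comfort (productivity-loss) weight
grid = np.linspace(20.0, 28.0, 801)  # candidate setpoints, 0.01 deg C steps

def demand(T):
    return a * (T_out - T)           # cooling demand of a building at setpoint T

def cost(T_i, others_demand):
    """Social cost for one manager: energy bill plus comfort penalty."""
    price = p0 + p1 * (demand(T_i) + others_demand)
    return price * demand(T_i) + b * (T_i - T_pref) ** 2

T = np.full(n_buildings, 24.0)       # initial setpoints
for _ in range(100):
    T_new = T.copy()
    for i in range(n_buildings):
        others = demand(T).sum() - demand(T[i])
        # Best response: minimize own cost given everyone else's demand.
        T_new[i] = grid[np.argmin([cost(t, others) for t in grid])]
    if np.max(np.abs(T_new - T)) < 1e-3:   # fixed point = Nash equilibrium
        T = T_new
        break
    T = T_new
print("equilibrium setpoints (deg C):", np.round(T, 2))
```

With symmetric buildings the equilibrium is symmetric: each manager settles slightly above the preferred temperature, trading a small comfort loss against the price coupling induced by total demand.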
Building managers will be willing to participate in the proposed game because it saves energy cost while keeping the indoor environment within a comfortable zone. With the development of artificial intelligence and computer technologies, reinforcement learning (RL) can be implemented in many realistic scenarios and help solve a wide range of real-world problems. RL, often regarded as a key ingredient of future AI, builds a bridge between agents and environments through Markov decision processes or neural networks, and it has seldom been used in power systems. The strength of RL is that once a simulator for a specific environment is built, the algorithm can keep learning from that environment; RL is therefore capable of dealing with constantly changing simulator inputs such as power demand, power system conditions, and outdoor temperature. Compared with existing distribution power system planning mechanisms and the related game-theoretic methodologies, our proposed algorithm can plan and optimize hourly energy usage and can operate over even shorter time windows if needed. The combination of deep neural networks and reinforcement learning has accelerated research on deep reinforcement learning, and this manuscript contributes to power energy management research by developing and implementing deep reinforcement learning to control HVAC systems in the distribution power system. Simulation results show that the proposed methodology can set the indoor temperature with respect to real-time pricing and the number of occupants, maintain indoor comfort, and reduce both individual building energy costs and the overall campus electricity charges. Compared with the traditional game-theoretic methodology, the RL-based gaming methodology achieves the optimal results much faster.
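As a toy illustration of applying RL to setpoint control, the sketch below runs tabular Q-learning against a two-state real-time-price environment. The price process, reward weights, and discount factor are assumptions made for the example; the manuscript itself uses deep reinforcement learning with a far richer simulator.

```python
# Minimal tabular Q-learning sketch for HVAC setpoint control under
# real-time pricing (toy environment, not the thesis simulator).
import numpy as np

rng = np.random.default_rng(1)
setpoints = np.array([21.0, 23.0, 25.0])   # actions: indoor setpoints (deg C)
prices = np.array([0.1, 0.6])              # states: low / high real-time price

def reward(state, action):
    energy = 2.0 * (35.0 - setpoints[action])           # cooling energy (kWh)
    discomfort = 0.4 * (setpoints[action] - 23.0) ** 2  # comfort penalty
    return -(prices[state] * energy + discomfort)

Q = np.zeros((prices.size, setpoints.size))
alpha, gamma, eps = 0.05, 0.5, 0.1
state = 0
for _ in range(30000):
    # Epsilon-greedy action selection over the current Q estimates.
    if rng.random() < eps:
        action = int(rng.integers(setpoints.size))
    else:
        action = int(np.argmax(Q[state]))
    r = reward(state, action)
    next_state = int(rng.integers(prices.size))         # price moves randomly
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (r + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

policy = setpoints[np.argmax(Q, axis=1)]
print("learned setpoint at low / high price:", policy)
```

The learned greedy policy chooses a warmer (cheaper) setpoint when the real-time price is high, mirroring the price-responsive behavior the abstract describes.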