5 research outputs found
Integration of Data Driven Technologies in Smart Grids for Resilient and Sustainable Smart Cities: A Comprehensive Review
Modern society demands resilient, reliable, and smart urban infrastructure for effective and intelligent operations and deployment. However, unexpected, high-impact, low-probability events such as earthquakes, tsunamis, tornadoes, and hurricanes make the design of such robust infrastructure more complex. Such events can severely affect power system infrastructure, leading to unprecedented consequences such as blackouts. Nevertheless, integrating smart grids into the existing framework of smart cities adds to their resilience. Designing a resilient and reliable power system network is therefore an inevitable requirement of modern smart city infrastructure. With the deployment of the Internet of Things (IoT), smart city infrastructures have taken a transformational turn towards technologies that not only provide ease and comfort to citizens but are also feasible in terms of sustainability and dependability. This paper presents a holistic view of a resilient and sustainable smart city architecture that utilizes IoT, big data analytics, unmanned aerial vehicles, and smart grids through intelligent integration of renewable energy resources. In addition, the impact of disasters on power system infrastructure is investigated, and different types of optimization techniques that can be used to sustain power flow in the network during disturbances are compared and analyzed. Furthermore, a comparative review of data-driven machine learning techniques for sustainable smart cities is performed, along with a discussion of open research issues and challenges.
Recent Developments in Machine Learning for Energy Systems Reliability Management
This paper reviews recent work applying machine learning techniques in the context of energy systems reliability assessment and control. We showcase both the progress achieved to date and important directions for further research, while providing adequate background in the fields of reliability management and machine learning. The objective is to foster synergy between these two fields and speed up the practical adoption of machine learning techniques for energy systems reliability management. We focus on bulk electric power systems as an example, but we argue that the methods and tools can be extended to other similar systems, such as distribution systems, micro-grids, and multi-energy systems.
Model-based and Model-free Approaches for Power System Security Assessment
Continuous security assessment of a power system is necessary to ensure a reliable, stable, and continuous supply of electrical power to customers. To this end, this dissertation identifies and explores several of the challenges encountered in the field of power system security assessment. Accordingly, several model-based and/or model-free approaches were developed to overcome these challenges.
First, a voltage stability index, named TAVSI, is proposed. This index has three important features: TAVSI applies to general load models including ZIP, exponential, and induction motor loads; TAVSI can be used for both measurement-based and model-based voltage stability assessment; and finally, TAVSI is calculated based on normalized sensitivities which enables identification of weak buses and the definition of a global instability threshold. TAVSI was tested on both the IEEE 14-bus and the 181-bus WECC systems. Results show that TAVSI gives a reliable assessment of system stability.
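The abstract does not give the TAVSI formula, but the general idea of normalized-sensitivity weak-bus identification can be sketched as follows. This is a hypothetical illustration, not the actual TAVSI computation: buses are ranked by the magnitude of their voltage sensitivity to reactive power injection (the diagonal of an assumed reduced dV/dQ Jacobian), normalized so the weakest bus scores 1.0.

```python
import numpy as np

# Hypothetical sketch of sensitivity-based weak-bus identification.
# The actual TAVSI definition is not given in the abstract; this only
# illustrates ranking buses by normalized dV/dQ self-sensitivities.

def weak_bus_ranking(jacobian_vq):
    """Rank buses by normalized voltage sensitivity to reactive injection.

    jacobian_vq : (n, n) array approximating dV/dQ (reduced Jacobian inverse).
    Returns (order, s_norm): bus indices from weakest to strongest, and
    sensitivities normalized so the weakest bus equals 1.0.
    """
    s = np.abs(np.diag(jacobian_vq))   # self-sensitivity of each bus
    s_norm = s / s.max()               # weakest bus normalized to 1.0
    order = np.argsort(-s_norm)        # descending: weakest first
    return order, s_norm

# Toy 3-bus example: bus 2 has the largest self-sensitivity, hence "weakest"
J = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.15, 0.03],
              [0.01, 0.03, 0.40]])
order, s_norm = weak_bus_ranking(J)
```

A normalized score approaching 1.0 at a bus would then flag it for closer model-based or measurement-based analysis, which is the role a global instability threshold plays in the index described above.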
Second, a data-driven and model-based hybrid reinforcement learning approach is proposed for training a control agent to re-dispatch generators' output power in order to relieve stressed branches. For large power systems, the agent's action space is high-dimensional, which challenges the successful training of data-driven agents. Therefore, we propose a hybrid approach in which model-based actions are utilized to help the agent learn an optimal control policy. The proposed approach was tested and compared to the generic data-driven DDPG-based approach on the IEEE 118-bus system and a larger 2749-bus real-world system. Results show that the hybrid approach performs well for large power systems and is superior to the DDPG-based approach.
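One way to picture the hybrid idea is a blended action during training: a model-based re-dispatch suggestion (e.g. from an optimal power flow solution) is mixed with the learned actor's output, guiding exploration in the high-dimensional action space toward physically sensible re-dispatches. The blending scheme below is an assumption for illustration; the dissertation's exact mechanism may differ.

```python
import numpy as np

# Hypothetical sketch: blend a model-based action with the learned policy's
# action. As training progresses, beta decays toward 0 and control is handed
# fully to the learned (DDPG-style) actor.

def hybrid_action(policy_action, model_action, beta):
    """Convex blend of learned and model-based re-dispatch actions."""
    return beta * model_action + (1.0 - beta) * policy_action

policy_a = np.array([0.2, -0.1, 0.0])   # actor output (per-generator deltas)
model_a  = np.array([0.5, -0.5, 0.1])   # e.g. an OPF-derived re-dispatch
a = hybrid_action(policy_a, model_a, beta=0.5)
```

Early in training (beta near 1) the agent mostly imitates the model-based dispatcher; later (beta near 0) it acts on its own learned policy.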
Finally, a Convolutional Neural Network (CNN) based approach is proposed as a faster alternative to classical AC power flow-based contingency screening. The proposed approach is investigated on both the IEEE 118-bus system and the Texas 2000-bus synthetic system. For such large systems, the implementation of the proposed approach came with several challenges, such as computational burden, learning from imbalanced datasets, and performance evaluation of trained models. Accordingly, this work contributes a set of novel techniques and best practices that enable both efficient and successful implementation of CNN-based multi-contingency classifiers for large power systems.
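The screening concept can be sketched with a deliberately tiny 1-D convolutional classifier: a convolution over a bus-voltage profile extracts local features (here, bus-to-bus voltage drops), and a sigmoid head flags contingencies for detailed AC power flow analysis. The architecture and hand-picked weights below are illustrative assumptions, nothing like the trained multi-contingency CNN described above.

```python
import numpy as np

# Minimal illustrative sketch of CNN-style contingency screening.
# Weights are hand-picked for the toy example; a real screen would be
# trained on labeled (secure / insecure) contingency outcomes.

def conv1d(x, kernel):
    """Valid-mode 1-D cross-correlation of x with kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def cnn_screen(bus_voltages, kernel, w, b, threshold=0.5):
    """Return True if the case is flagged for detailed AC simulation."""
    feat = np.maximum(conv1d(bus_voltages, kernel), 0.0)   # conv + ReLU
    score = 1.0 / (1.0 + np.exp(-(np.dot(w, feat) + b)))   # sigmoid head
    return score >= threshold

# Toy example: detect a deep voltage sag in a 4-bus profile
v_insecure = np.array([1.00, 0.95, 0.80, 1.00])
v_secure   = np.array([1.00, 1.00, 1.00, 1.00])
kernel = np.array([1.0, -1.0])                  # responds to voltage drops
w, b = np.array([10.0, 10.0, 10.0]), -1.0
flag_bad  = cnn_screen(v_insecure, kernel, w, b)
flag_good = cnn_screen(v_secure, kernel, w, b)
```

The speed advantage over AC power flow comes from replacing an iterative nonlinear solve per contingency with a single cheap forward pass; the imbalanced-dataset challenge mentioned above arises because insecure cases are typically rare in the training data.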
Anticipating contingencies in power grids using fast neural net screening
We address the problem of maintaining high-voltage power transmission networks secure at all times. This requires that the power flowing through all lines remain below a certain nominal thermal limit, above which lines might melt, break, or cause other damage. Current practice includes enforcing the deterministic "N-1" reliability criterion, namely anticipating any exceedance of the thermal limit under every possible single line disconnection (whatever its cause) by running a slow but accurate physical grid simulator. New conceptual frameworks call for a probabilistic, risk-based security criterion and need new methods to assess this risk. To tackle this difficult assessment, we address in this paper the problem of rapidly ranking higher-order contingencies, including all pairs of line disconnections, to better prioritize simulations. We present a novel method based on neural networks, which ranks "N-1" and "N-2" contingencies in decreasing order of presumed severity. We demonstrate on a classical benchmark problem that the residual risk of contingencies decreases dramatically compared to considering solely all "N-1" cases, at no additional computational cost. We evaluate that our method scales up to power grids of the size of the French high-voltage power grid (over 1000 power lines).
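The prioritization step described above reduces to sorting contingencies by a predicted severity score and feeding the slow, accurate simulator from the top of the queue. The sketch below uses placeholder scores; in the paper these come from the trained neural network.

```python
# Minimal sketch of severity-based contingency prioritization.
# Scores here are hand-written placeholders standing in for the
# neural network's severity predictions.

def rank_contingencies(scores):
    """Return contingency ids sorted worst-first, so the accurate but slow
    physical simulator is run on the riskiest cases before the rest."""
    return sorted(scores, key=scores.get, reverse=True)

predicted = {
    ("L1",): 0.10,          # "N-1": single line disconnection
    ("L4",): 0.55,
    ("L1", "L4"): 0.92,     # "N-2": pair of line disconnections
    ("L2", "L3"): 0.30,
}
queue = rank_contingencies(predicted)
```

Simulating in queue order concentrates the fixed simulation budget on the presumed-severe "N-2" pairs first, which is how the residual risk can drop at no additional computational cost.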