6 research outputs found

    Distributionally Robust Optimal Power Flow with Strengthened Ambiguity Sets

    Uncertainties that result from renewable generation and load consumption complicate the optimal power flow problem. These uncertainties enter the physical constraints stochastically and require special solution methodologies. Hence, a variety of stochastic optimal power flow formulations using chance constraints have been proposed to reduce the risk of physical constraint violations and ensure a reliable dispatch solution under uncertainty. Exactly reformulating the problem requires the true uncertainty distribution, which is generally difficult to obtain. Conventional approaches include randomized techniques (such as scenario-based methods), which provide a priori guarantees on the probability of constraint violation but generally require many scenarios and produce high-cost solutions. Another approach is an analytical reformulation, which assumes that the uncertainties follow specific distributions, such as Gaussian distributions; if the actual uncertainty distributions deviate from the assumed ones, the results often suffer from case-dependent reliability. Recently, researchers have also explored distributionally robust optimization, which requires probabilistic constraints to be satisfied at chosen probability levels for any uncertainty distribution within a pre-defined ambiguity set, constructed from statistical information extracted from historical data. Existing literature applying distributionally robust optimization to the optimal power flow problem indicates that the approach achieves low objective costs as well as high reliability compared with the randomized techniques and the analytical reformulation.

    In this dissertation, we aim to analyze the conventional approaches and further improve the current distributionally robust methods. In Chapter II, we derive the analytical reformulation of a multi-period optimal power flow problem with uncertain renewable generation and load-based reserve, assuming that the capacities of the load-based reserves are affected by outdoor temperatures through non-linear relationships. Case studies compare the analytical reformulation with the scenario-based method and demonstrate that the scenario-based method generates overly conservative results, while the analytical reformulation yields lower-cost solutions but suffers from reliability issues. In Chapters III, IV, and V, we develop new methodologies in distributionally robust optimization by strengthening the moment-based ambiguity set with a combination of moment, support, and structural-property information. Specifically, we consider unimodality and log-concavity, since most practical uncertainties exhibit these properties. The strengthened ambiguity sets are used to develop tractable reformulations, approximations, and efficient algorithms for the optimal power flow problem. Case studies indicate that these strengthened ambiguity sets reduce the conservativeness of the solutions while still producing sufficiently reliable solutions. In Chapter VI, we compare the performance of the conventional approaches and of the distributionally robust approaches that include moment and unimodality information on large-scale systems with high uncertainty dimensions. Through case studies, we evaluate each approach's objective cost, computational scalability, and reliability. Simulation results suggest that distributionally robust optimal power flow including unimodality information produces solutions with better trade-offs between objective cost and reliability than the conventional approaches or the distributionally robust approaches that do not include unimodality assumptions. However, considering unimodality also leads to longer computational times.

    PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/150051/1/libowen_1.pd
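    To make the comparison concrete, here is an illustrative sketch (our notation, not taken from the dissertation) of how a single scalar chance constraint is typically handled under each of the three approaches, assuming the uncertainty ξ enters affinely with known mean μ and variance σ²:

```latex
% Chance constraint: \Pr[\, a^\top x + \xi \le b \,] \ge 1 - \epsilon,
% with E[\xi] = \mu and Var[\xi] = \sigma^2.

% Analytical reformulation (exact if \xi is Gaussian):
a^\top x + \mu + \Phi^{-1}(1-\epsilon)\,\sigma \le b

% Moment-based distributionally robust reformulation
% (Cantelli bound; valid for any distribution with these moments):
a^\top x + \mu + \sqrt{\tfrac{1-\epsilon}{\epsilon}}\,\sigma \le b

% Strengthened with unimodality (one-sided Vysochanskij--Petunin-type
% bound; valid for \epsilon \le 1/6):
a^\top x + \mu + \sqrt{\tfrac{4}{9\epsilon}-1}\,\sigma \le b
```

    For example, at ε = 0.05 the safety factors are roughly 1.64 (Gaussian), 4.36 (moment-based), and 2.81 (moment plus unimodality), which illustrates how structural information tightens the distribution-free margin while retaining distributional robustness. The dissertation's actual formulations are multi-period and vector-valued; this scalar sketch only conveys the mechanism.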

    Distributionally Robust Optimal Power Flow with Contextual Information

    Adrián Esteban-Pérez, Juan M. Morales, "Distributionally Robust Optimal Power Flow with Contextual Information," European Journal of Operational Research (2022). doi: https://doi.org/10.1016/j.ejor.2022.10.024

    In this paper, we develop a distributionally robust chance-constrained formulation of the Optimal Power Flow (OPF) problem whereby the system operator can leverage contextual information. For this purpose, we exploit an ambiguity set based on probability trimmings and optimal transport, through which the dispatch solution is protected against incomplete knowledge of the relationship between the OPF uncertainties and the context, as conveyed by a sample of their joint probability distribution. We provide a tractable reformulation of the proposed distributionally robust chance-constrained OPF problem under the popular conditional-value-at-risk approximation. By way of numerical experiments run on a modified IEEE 118-bus network with wind uncertainty, we show how the power system can substantially benefit from taking into account the well-known statistical dependence between the point forecast of wind power outputs and its associated prediction error. Furthermore, the experiments also reveal that the distributional robustness conferred on the OPF solution by our probability-trimmings-based approach is superior to that bestowed by alternative approaches in terms of expected cost and system reliability.

    Funding: European Research Council (755705); Ministerio de Ciencia e Innovación del Gobierno de España (PID2020-115460GB-I00/AEI/10.13039/501100011033); Junta de Andalucía y fondos FEDER (P20 00153); Universidad de Málaga.
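    For context, the conditional-value-at-risk (CVaR) approximation mentioned above is the standard convex inner approximation of a chance constraint. A generic sketch in the Rockafellar–Uryasev form (our notation, not the paper's) reads:

```latex
% Inner approximation of \Pr[\, g(x,\xi) \le 0 \,] \ge 1 - \epsilon:
\mathrm{CVaR}_{1-\epsilon}\big[ g(x,\xi) \big]
  \;=\; \min_{\tau \in \mathbb{R}} \Big\{ \tau
        + \tfrac{1}{\epsilon}\,\mathbb{E}\big[ (g(x,\xi) - \tau)_+ \big] \Big\}
  \;\le\; 0
```

    Any x with nonpositive CVaR also satisfies the chance constraint, and the condition stays convex whenever g is convex in x. In the distributionally robust setting the expectation is replaced by its worst case over the ambiguity set, which is what the paper's tractable reformulation handles for its trimmings-based set.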

    AI alignment and generalization in deep learning

    This thesis covers a number of works in deep learning aimed at understanding and improving the generalization abilities of deep neural networks (DNNs). DNNs achieve unrivaled performance in a growing range of tasks and domains, yet their behavior during learning and deployment remains poorly understood. They can also be surprisingly brittle: in-distribution generalization can be a poor predictor of behavior or performance under distributional shifts, which typically cannot be avoided in practice. While these limitations are not unique to DNNs -- and indeed are likely to be challenges facing any AI systems of sufficient complexity -- the prevalence and power of DNNs make them particularly worthy of study. I frame these challenges within the broader context of "AI Alignment": a nascent field focused on ensuring that AI systems behave in accordance with their users' intentions. While making AI systems more intelligent or capable can help make them more aligned, it is neither necessary nor sufficient for alignment. However, being able to align state-of-the-art AI systems (e.g. DNNs) is of great social importance in order to avoid undesirable and unsafe behavior from advanced AI systems. Without progress in AI Alignment, advanced AI systems might pursue objectives at odds with human survival, posing an existential risk ("x-risk") to humanity.

    A core tenet of this thesis is that achieving high performance on machine learning benchmarks is often a good indicator of AI systems' capabilities, but not of their alignment. This is because AI systems often achieve high performance in unexpected ways that reveal the limitations of our performance metrics and, more generally, of our techniques for specifying our intentions. Learning about human intentions using DNNs shows some promise, but DNNs are still prone to learning to solve tasks using concepts or "features" very different from those which are salient to humans. Indeed, this is a major source of their poor generalization on out-of-distribution data. By better understanding the successes and failures of DNN generalization and current methods of specifying our intentions, we aim to make progress towards deep-learning-based AI systems that are able to understand users' intentions and act accordingly.

    Distributionally Robust Chance-Constrained Optimal Power Flow Assuming Unimodal Distributions With Misspecified Modes


    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference
