65 research outputs found
Accelerating Optimal Power Flow with GPUs: SIMD Abstraction of Nonlinear Programs and Condensed-Space Interior-Point Methods
This paper introduces a framework for solving alternating current optimal
power flow (ACOPF) problems using graphics processing units (GPUs). While GPUs
have demonstrated remarkable performance in various computing domains, their
application in ACOPF has been limited due to challenges associated with porting
sparse automatic differentiation (AD) and sparse linear solver routines to
GPUs. We address these issues with two key strategies. First, we utilize a
single-instruction, multiple-data abstraction of nonlinear programs. This
approach enables the specification of model equations while preserving their
parallelizable structure and, in turn, facilitates the parallel AD
implementation. Second, we employ a condensed-space interior-point method (IPM)
with an inequality relaxation. This technique involves condensing the
Karush--Kuhn--Tucker (KKT) system into a positive definite system. This
strategy offers the key advantage of being able to factorize the KKT matrix
without numerical pivoting, which has hampered the parallelization of the IPM
algorithm. By combining these strategies, we can perform the majority of
operations on GPUs while keeping all data resident in device memory.
Comprehensive numerical benchmark results showcase the advantage of our
approach. Remarkably, our implementations -- MadNLP.jl and ExaModels.jl --
running on NVIDIA GPUs achieve an order of magnitude speedup compared with
state-of-the-art tools running on contemporary CPUs.
Comment: Accepted for publication in PSCC 202
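To make the condensation step concrete, the display below is a minimal schematic in standard interior-point notation; the symbols (Lagrangian Hessian W, constraint Jacobian A, barrier diagonals Σ_x and Σ_s, residuals r_x, r_y) are not defined in the abstract and are used here only for illustration. Eliminating the second block row of the saddle-point KKT system yields a condensed system in the primal step alone:

\[
\begin{bmatrix} W + \Sigma_x & A^\top \\ A & -\Sigma_s^{-1} \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
= -\begin{bmatrix} r_x \\ r_y \end{bmatrix}
\;\Longrightarrow\;
\big( W + \Sigma_x + A^\top \Sigma_s A \big)\, \Delta x = -\big( r_x + A^\top \Sigma_s\, r_y \big).
\]

With the inequality relaxation described above, the condensed matrix is positive definite, so it can be factorized with a sparse Cholesky routine and no numerical pivoting, which is what makes the factorization amenable to GPUs.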
Homage to André Leroi-Gourhan
Technology, in practice subordinate to science and epistemology since the Greek tradition, has recently acquired the means for autonomous work, on an equal and complementary footing with them. Alongside the natural sciences, room is thus being made for sciences of design, concerned both with understanding and with the initial realization of all kinds of artifacts. Building also on G. Bachelard's observations on mechanical rationality, technicians and systems specialists can offer ethnologists fruitful methods for approaching various systems, ethnologists today probably being among the most perceptive questioners in this field. This approach is illustrated, as a tribute to A. Leroi-Gourhan and for the sake of simplicity, by observations on tools, which are among the simpler systems: in particular on the use of the term « percussion posée » (contact percussion) for their classification, but also on the interest of various pairs of antonyms such as tools/weapons, pull/push, etc.
Upper and Lower Bounds for Large Scale Multistage Stochastic Optimization Problems: Application to Microgrid Management
We consider a microgrid in which prosumers exchange energy through the edges of a given network. Each prosumer is located at a node of the network and combines energy consumption, energy production and storage capacities (battery, electrical hot water tank). The problem is coupled both in time and in space, so that a direct resolution for large microgrids is out of reach (curse of dimensionality). By assigning prices or resources to each node of the network and solving each nodal sub-problem independently by Dynamic Programming, we obtain decomposition algorithms that compute a set of decomposed local value functions in parallel. By summing the local value functions, we obtain, on the one hand, upper and lower bounds for the optimal value of the problem and, on the other hand, global admissible policies for the original system. Numerical experiments are conducted on microgrids of different sizes, derived from data provided by Efficacity, a research and development centre dedicated to urban energy transition. These experiments show that the decomposition algorithms give better results than the standard SDDP method, both in terms of bounds and of policy values. Moreover, the decomposition methods are much faster than SDDP in terms of computation time, allowing us to tackle problem instances with more than 60 state variables in a Dynamic Programming framework.
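As a sketch of the nodal sub-problems mentioned above, and with illustrative notation not taken from the abstract (local state x^i_t, control u^i_t, noise w^i_{t+1}, stage cost L^i_t, dynamics f^i_t, nodal exchange g^i_t, price signal p_t), the price-decomposed value functions solve, independently at each node i, the backward recursion

\[
\underline{V}^i_t(x^i_t) = \min_{u^i_t}\; \mathbb{E}\Big[ L^i_t(x^i_t, u^i_t, w^i_{t+1}) + \big\langle p_t,\, g^i_t(x^i_t, u^i_t, w^i_{t+1}) \big\rangle + \underline{V}^i_{t+1}\big( f^i_t(x^i_t, u^i_t, w^i_{t+1}) \big) \Big],
\]

and the sum \(\sum_i \underline{V}^i_0(x^i_0)\) of the resulting local value functions gives a lower bound on the optimal cost, while the analogous resource-allocated sub-problems give an upper bound.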
Upper and Lower Bounds for Large Scale Multistage Stochastic Optimization Problems: Decomposition Methods
We consider a large scale multistage stochastic optimization problem involving multiple units. Each unit is a (small) control system. Static constraints couple the units at each stage. To tackle such large scale problems, we propose two decomposition methods, handling the coupling constraints either by prices or by resources. We introduce the sequence (one per stage) of global Bellman functions, depending on the collection of local states of all units. We show that every Bellman function is bounded above by a sum of local resource-decomposed value functions, and below by a sum of local price-decomposed value functions, each local decomposed function taking as arguments the corresponding local unit state variables. We provide conditions under which these local value functions can be computed by Dynamic Programming. These conditions are established assuming a centralized information structure, that is, when the information available to each unit consists of the collection of noises affecting all the units. We finally study the case where each unit only observes its own local noise (decentralized information structure).
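The bounds stated above can be written compactly; the notation below (global Bellman function V_t, price-decomposed local functions \underline{V}^i_t, resource-decomposed local functions \overline{V}^i_t, local states x^i_t for units i = 1, ..., N) follows common decomposition usage rather than the paper itself. For every stage t,

\[
\sum_{i=1}^{N} \underline{V}^i_t(x^i_t) \;\le\; V_t(x^1_t, \ldots, x^N_t) \;\le\; \sum_{i=1}^{N} \overline{V}^i_t(x^i_t),
\]

each local function depending only on the corresponding unit's state, so that both bounds can be computed by unit-by-unit Dynamic Programming under the centralized information assumption.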
Batched Second-Order Adjoint Sensitivity for Reduced Space Methods
This paper presents an efficient method for extracting second-order sensitivities from a system of implicit nonlinear equations on upcoming computer systems dominated by graphics processing units (GPUs). We design a custom automatic differentiation (AutoDiff) backend that targets highly parallel architectures by extracting the second-order information in batch. When the nonlinear equations are associated with a reduced space optimization problem, we leverage parallel reverse-mode accumulation in a batched adjoint-adjoint algorithm to efficiently compute the reduced Hessian of the problem. We apply the method to extract the reduced Hessian associated with the balance equations of a power network, and show on the largest instances that a parallel GPU implementation is 30 times faster than a sequential CPU reference based on UMFPACK.
Comment: SIAM-PP2
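The paper's implementation is a Julia AutoDiff backend for GPUs; purely as an illustrative analogue, the short JAX sketch below shows the batching idea behind the adjoint-adjoint accumulation: a second-order adjoint (a Hessian-vector product evaluated forward-over-reverse) is vectorized over many directions at once rather than accumulated one column at a time. The toy objective f stands in for a reduced-space function and is not the power-flow model.

import jax
import jax.numpy as jnp

def hvp(f, x, v):
    # Second-order adjoint: differentiate the reverse-mode gradient of f
    # in the forward direction v (forward-over-reverse accumulation).
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

def batched_hessian(f, x):
    # Batch the Hessian-vector products over all coordinate directions at
    # once with vmap, instead of looping column by column.
    directions = jnp.eye(x.shape[0])
    return jax.vmap(lambda v: hvp(f, x, v))(directions)

def f(u):
    # Toy smooth objective standing in for a reduced-space function.
    return jnp.sum(jnp.sin(u) * u ** 2)

u = jnp.linspace(0.1, 1.0, 5)
H = batched_hessian(f, u)                  # dense 5 x 5 Hessian
print(jnp.allclose(H, jax.hessian(f)(u)))  # sanity check: True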
Looking back on the 2015-2016 Assembly of Hypothèses bloggers
This post presents the proceedings of the 2015-2016 Assembly of Hypothèses bloggers, held on 20 November 2015 at the CNRS headquarters in Paris. https://www.youtube.com/watch?v=slM5az5CMqg General introduction - Didier Torny (InSHS, DAS in charge of IST policy). Hypothèses is an essential platform and a central instrument for the visibility of the humanities and social sciences (SHS). Its content is accessible to a wide audience (academia, professionals and the general public). The plat..
Blogaward 2014 for the German-language blogs #dehypoAward
This post is a translation and adaptation by Sandra Guigonis, based on http://redaktionsblog.hypotheses.org/1971 and http://dhiha.hypotheses.org/1349 -- On 9 March 2014, the de.hypotheses portal will celebrate its second anniversary. For us [the team of the German-language Hypothèses portal], this is a good opportunity to draw attention to academic blogging and the German-speaking Hypothèses community. As last year, we have done so with a prize, the "Blogaward"..
OpenEdition is recruiting an officer in charge of community support and content promotion for Hypothèses
OpenEdition is recruiting an officer in charge of community support and content promotion for the academic blogging platform Hypothèses.
Availability: 1 June 2016
Contract: 12-month fixed-term contract (renewable) at Aix-Marseille University
Salary: 1424 € to 1724 € net/month depending on the candidate's experience
Minimum level of education required: bachelor's degree (licence)
Professional activity branch: F – Information, Documentation, Culture, Communication, Publishing, TICE
Corps: Ingénieur..