561 research outputs found

    Quantities in Games and Modal Transition Systems


    Supplement to “Program evaluation and causal inference with high-dimensional data”

    Supplemental appendices to: A. Belloni, V. Chernozhukov, I. Fernández-Val, C. Hansen. 2017. "Program evaluation and causal inference with high-dimensional data." Econometrica, Volume 85, Issue 1, pp. 233–298. https://doi.org/10.3982/ECTA12723 https://arxiv.org/abs/1311.2645 Supporting documentation.

    Stability Analysis of Converter Control Strategies for Power Electronics-Dominated Power Systems

    The electric power system, whose well-established structure, consolidated over decades of study, comprises large centralized generating units, transmission systems, and distributed loads, is currently undergoing a significant transformation that poses new challenges for its safe operation in the near future. The increasing number of grid-connected power electronics-based converters associated with renewable energy sources is reducing the share of energy produced by conventional generating units, typically large synchronous machines (SMs) directly connected to the grid. As a consequence, declining system inertia is expected, as well as reduced fault currents affecting the short-circuit level and the retained voltage under fault conditions. This has caused concern among system operators (SOs) worldwide about the stability of the future power system, triggering discussions in several countries about the need for new converter control strategies that would allow safe system operation under the expected grid configuration.

    In this scenario, the concept of “grid-forming (GFM) converters” has recently been proposed as a possible solution allowing high penetration of power electronics-based generation. Initially introduced in the context of microgrids, the concept needs to be revisited for applications in wide interconnected systems. Indeed, a well-established formulation is still missing in the literature, and several committees worldwide are currently working on a definition identifying the characteristics of such converters. Owing to the initial concern of SOs over declining system inertia, GFM converters have often been associated with the idea of virtual inertia, namely the emulation of a synthetic inertial response by a power electronics-based converter. Yet this is only one aspect of the increase in power electronics-based generation, and the GFM concept includes other features, which need to be properly specified in order to provide clear guidelines for manufacturers aiming at the development of suitable converter control strategies.

    This thesis addresses GFM converters from a control perspective and aims to characterize their potential features, as well as the relevant issues related to this technology. First, the characteristics of a GFM converter are identified through an extensive literature overview, so that, by reviewing international practice on this technology, a general formulation for a GFM converter control structure is obtained. Particular emphasis is given to the synchronization principle adopted by the converter: contrary to state-of-the-art grid-connected converters, which use a dedicated unit for grid synchronization, a GFM converter generally synchronizes by reproducing the power-synchronization mechanism of an SM. An extensive small-signal stability analysis is then performed to identify the implications of this behaviour for converter stability, as well as the effects of interactions between converters operating nearby. Finally, potential issues related to the implementation of a GFM converter are highlighted and possible solutions are proposed, whose effectiveness is validated by means of hardware-in-the-loop (HIL) simulations as well as experimental tests in a laboratory environment using power-HIL (PHIL) test benches.
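    The power-synchronization mechanism mentioned in this abstract can be illustrated with a minimal sketch: a grid-forming converter mimics the swing equation of a synchronous machine, so its power angle self-synchronizes to the grid without a dedicated synchronization unit. The snippet below is an illustrative toy model, not a controller from the thesis; the inertia, damping, and network parameters are assumed values.

```python
import numpy as np

# Toy model of power-synchronization in a grid-forming converter:
# the converter emulates the swing equation of a synchronous machine.
# All parameter values are illustrative assumptions, not thesis values.
M = 0.2                 # virtual inertia constant [p.u.·s^2] (assumed)
D = 5.0                 # damping coefficient [p.u.] (assumed)
w_n = 2 * np.pi * 50.0  # nominal grid frequency [rad/s]
P_ref = 0.8             # active-power setpoint [p.u.]
X = 0.3                 # coupling reactance to the grid [p.u.]
E = V = 1.0             # converter and grid voltage magnitudes [p.u.]

dt, T = 1e-3, 2.0
theta, dw = 0.0, 0.0    # power angle [rad], frequency deviation [p.u.]
for _ in range(int(T / dt)):
    P = E * V / X * np.sin(theta)          # power injected into the grid
    dw += (P_ref - P - D * dw) / M * dt    # virtual rotor accelerates/decelerates
    theta += dw * w_n * dt                 # angle drifts until P matches P_ref
print(f"settled angle {theta:.3f} rad, power {E * V / X * np.sin(theta):.3f} p.u.")
```

    Because synchronization emerges from the power balance itself, the loop needs no phase-locked loop; this is the behavioural difference from state-of-the-art grid-following converters that the abstract highlights.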

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution.

    Program Evaluation and Causal Inference with High-Dimensional Data

    In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects, including local average (LATE) and local quantile treatment effects (LQTE), in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. (Comment: 118 pages, 3 tables, 11 figures, includes supplementary appendix. This version corrects some typos in Example 2 of the published version.)
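    The role of the orthogonal (doubly robust) moment condition described above can be illustrated with a small sketch: nuisance functions are fit with l1-regularized learners under approximate sparsity, and cross-fitting keeps the resulting inference honest. The following is a minimal illustration for the exogenous-treatment ATE case, assuming scikit-learn is available; it is an instance of the general technique, not the authors' implementation, and all names are ours.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from sklearn.model_selection import KFold

def dr_ate(y, d, X, n_folds=5, seed=0):
    """Cross-fitted doubly robust ATE estimate (illustrative sketch)."""
    psi = np.zeros(len(y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # First stage: regularized (approximately sparse) nuisance estimates.
        m = LogisticRegressionCV(penalty="l1", solver="saga", max_iter=5000)
        m.fit(X[train], d[train])                      # propensity score
        g1 = LassoCV().fit(X[train][d[train] == 1], y[train][d[train] == 1])
        g0 = LassoCV().fit(X[train][d[train] == 0], y[train][d[train] == 0])
        p = np.clip(m.predict_proba(X[test])[:, 1], 0.01, 0.99)
        mu1, mu0 = g1.predict(X[test]), g0.predict(X[test])
        # Orthogonal (doubly robust) score: first-order insensitive to
        # errors in either nuisance estimate.
        psi[test] = (mu1 - mu0
                     + d[test] * (y[test] - mu1) / p
                     - (1 - d[test]) * (y[test] - mu0) / (1 - p))
    ate = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(len(y))
    return ate, se  # point estimate and standard error for a confidence band
```

    Because the score is orthogonal, small regularization and selection errors in the propensity score and outcome regressions do not bias the estimate to first order, which is what makes the band ate ± 1.96·se uniformly valid in the sense described above.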

    Contributions to impedance shaping control techniques for power electronic converters

    Impedance or admittance shaping by control for power electronic converters makes it possible to achieve, among other objectives: enhanced robustness of the designed controls, damped voltage dynamics under load changes, and grid-filter and controller optimization in a single step (co-design). Impedance shaping must always be accompanied by good reference-tracking performance. The main idea is therefore to design controllers with a simple structure that balance the objectives set in each case. This design is carried out using modern techniques whose resolution (controller synthesis) requires optimization tools. The main advantage of these techniques over classical ones, i.e. those based on algebraic solutions, is their ability to deal with complex control problems (high-order plants and/or several objectives) in a considerably systematic way.

    The first impedance-shaping control problem consists of reducing the voltage overshoot under load changes while minimizing the size of the passive filter components in DC-DC converters. Subsequently, current and voltage controllers for a three-phase DC-AC inverter are designed to achieve robust system stability for a wide variety of filters. The least conservative robust stability condition, with the grid impedance being the main source of uncertainty, is the passivity index. In the case of the current controllers, the impact of the higher-level loops on impedance-based stability is also analyzed through an additional index: the maximum singular value. Each index is applied over a given frequency range. Finally, these conditions are included in the one-step design of the controller of a back-to-back converter used to operate the doubly fed induction generators (type-3 wind turbines) present in some wind farms. This solution avoids the sub-synchronous oscillation problems, arising from transmission lines with series compensation capacitors, faced by these wind farms. Simulation and experimental results demonstrate the effectiveness and versatility of the proposal.
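    The passivity index mentioned above admits a simple numerical check: an interconnection with any passive grid impedance cannot be destabilized by the converter over the band where its output admittance satisfies Re{Y(jw)} >= 0. The admittance below is an assumed first-order stand-in with a control delay, not one of the designed controllers, and the parameter values are illustrative.

```python
import numpy as np

# Illustrative sketch: evaluate the passivity index Re{Y(jw)} >= 0 of a
# converter output admittance over a frequency band. The admittance is
# an assumed stand-in; the delay td models control/PWM latency.
def Y(s, k=10.0, tau=1e-3, td=1.5e-4):
    return k / (tau * s + 1) * np.exp(-td * s)

w = 2 * np.pi * np.logspace(1, 4, 2000)   # 10 Hz .. 10 kHz
re_Y = np.real(Y(1j * w))
passive = re_Y >= 0
if passive.all():
    print("Y(jw) is passive over the whole band")
else:
    f_viol = w[~passive][0] / (2 * np.pi)
    print(f"passivity is lost above about {f_viol:.0f} Hz")
```

    The delay term is what typically erodes passivity at high frequency, which is consistent with applying each stability index only over a given frequency range, as described above.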

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’, held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources and costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications, and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
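    The cost figures quoted above are easy to verify; the sketch below simply restates the abstract's assumptions (20 MW for 48 h at an exascale rate of 10^18 flop/s) together with an assumed electricity price of 0.10 Euro/kWh, which is roughly what the abstract's 100k Euro figure implies.

```python
# Back-of-the-envelope check of the abstract's exascale figures.
power_mw = 20             # system power draw [MW] (from the abstract)
hours = 48                # runtime of one large computation [h]
price_eur_per_kwh = 0.10  # assumed electricity price implied by the abstract
flops = 1e18              # exascale rate [flop/s]

energy_kwh = power_mw * 1000 * hours       # 960,000 kWh, about a million kWh
cost_eur = energy_kwh * price_eur_per_kwh  # about 100k Euro
total_ops = flops * hours * 3600           # about 1.7e23 operations
print(f"{energy_kwh:.2e} kWh, {cost_eur:.0f} EUR, {total_ops:.2e} ops")
```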
    Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jézéquel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortí, Francesco Rizzi, Ulrich Rüde, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thönnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).