
    Advanced Wide-Area Monitoring System Design, Implementation, and Application

    Wide-area monitoring systems (WAMSs) provide an unprecedented way to collect, store, and analyze ultra-high-resolution synchrophasor measurements to improve dynamic observability in power grids. This dissertation focuses on designing and implementing a wide-area monitoring system and a series of applications to assist grid operators with various functionalities. The contributions of this dissertation are as follows: First, a synchrophasor data collection system is developed to collect, store, and forward GPS-synchronized, high-resolution, rich-type, and massive-volume synchrophasor data. A distributed data storage system is developed to store the synchrophasor data, and a memory-based cache system is discussed to improve the efficiency of real-time situational awareness. In addition, a synchronization system is developed to synchronize the configurations among the cloud nodes, and the reliability and fault tolerance of the developed system are discussed. Second, a novel lossy synchrophasor data compression approach is proposed. This section first introduces the synchrophasor data compression problem, then proposes a methodology for lossy data compression, and finally presents the evaluation results and discusses the feasibility of the proposed approach. Third, a novel intelligent system, SynchroService, is developed to provide critical functionalities for a synchrophasor system, including data query, event query, device management, and system authentication; the resiliency and security of the developed system are then evaluated. Fourth, a series of synchrophasor-based applications are developed that utilize the high-resolution synchrophasor data to assist power system engineers in monitoring the performance of the grid and investigating the root causes of large power system disturbances. Fifth, a deep learning-based event detection and verification system is developed to provide accurate event detection; this section introduces the data preprocessing, model design, and performance evaluation. Finally, the implementation of the developed system is discussed.
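
    A minimal sketch of the kind of bounded-error lossy compression the second contribution describes. The abstract does not give the actual algorithm, so the quantise-then-delta-encode scheme, the `max_err` bound, and the 30 frames/s test signal below are illustrative assumptions rather than the dissertation's method.

    ```python
    import numpy as np

    def compress(samples: np.ndarray, max_err: float):
        """Quantise to a uniform grid of step 2*max_err, then delta-encode.

        Rounding to the grid guarantees |reconstructed - original| <= max_err;
        the resulting small-integer deltas are what an entropy coder would shrink.
        """
        step = 2.0 * max_err
        codes = np.round(samples / step).astype(np.int64)
        deltas = np.diff(codes, prepend=0)  # mostly zeros for slowly varying signals
        return deltas, step

    def decompress(deltas: np.ndarray, step: float) -> np.ndarray:
        return np.cumsum(deltas) * step

    # Example: a 60 Hz frequency trace sampled at 30 frames/s with a small drift.
    f = 60.0 + 0.002 * np.sin(np.linspace(0.0, 3.0, 90))
    deltas, step = compress(f, max_err=1e-3)
    assert np.max(np.abs(decompress(deltas, step) - f)) <= 1e-3
    ```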

    Scalable and Reliable Sparse Data Computation on Emergent High Performance Computing Systems

    Heterogeneous systems with both CPUs and GPUs have become important system architectures in emergent High Performance Computing (HPC) systems. Heterogeneous systems must address both performance-scalability and power-scalability in the presence of failures. Aggressive power reduction pushes hardware to its operating limit and increases the failure rate. Resilience allows programs to progress when subjected to faults and is an integral component of large-scale systems, but it incurs significant time and energy overhead. Future exascale systems are expected to have higher power consumption and higher fault rates. Sparse data computation is a fundamental kernel in many scientific applications, and its computational characteristics make it well suited for studying scalability and resilience on heterogeneous systems. To deliver the promised performance within a given power budget, heterogeneous computing demands a deep understanding of the interplay between scalability and resilience. Managing scalability and resilience is challenging in heterogeneous systems due to the heterogeneous compute capability, power consumption, and varying failure rates of CPUs and GPUs. Scalability and resilience have traditionally been studied in isolation, and optimizing one typically detrimentally impacts the other. While prior works have proven successful in optimizing scalability and resilience on CPU-based homogeneous systems, simply extending current approaches to heterogeneous systems results in suboptimal performance-scalability and/or power-scalability. To address these research challenges, we propose novel resilience and energy-efficiency technologies to optimize scalability and resilience for sparse data computation on heterogeneous systems with CPUs and GPUs. First, we present generalized analytical and experimental methods to analyze and quantify the time and energy costs of various recovery schemes, and we develop and prototype performance optimization and power management strategies to improve scalability for sparse linear solvers. Our results quantitatively reveal that each resilience scheme has its own advantages depending on the fault rate, system size, and power budget, and that forward recovery can further benefit from our performance and power optimizations for large-scale computing. Second, we design a novel resilience technique that relaxes the requirement of synchronization and identicalness among processes and allows them to run on heterogeneous resources with reduced power. Our results show a significant reduction in energy for unmodified programs in various fault situations compared to exact replication techniques. Third, we propose a novel distributed sparse tensor decomposition that utilizes an asynchronous RDMA-based approach with OpenSHMEM to improve scalability on large-scale systems, and we show that our method works well on heterogeneous systems. Our results show that our irregularity-aware workload partitioning and balanced-asynchronous algorithms are scalable and outperform state-of-the-art distributed implementations. We demonstrate that understanding the different bottlenecks for various types of tensors plays a critical role in improving scalability.
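
    A minimal sketch of checkpoint/rollback recovery wrapped around an iterative sparse solver, the general pattern whose time and energy costs the first contribution analyses. The solver, the checkpoint interval, and the fault-injection probability below are hypothetical stand-ins, not the dissertation's implementation.

    ```python
    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(0)
    n = 1000
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = rng.standard_normal(n)

    x = np.zeros(n)
    checkpoint = (0, x.copy())      # (iteration, state) kept in memory
    CKPT_EVERY = 10                 # checkpoint interval (hypothetical)

    k = 0
    while np.linalg.norm(b - A @ x) > 1e-8 * np.linalg.norm(b):
        x = x + 0.25 * (b - A @ x)  # damped Richardson sweep; converges since
                                    # the eigenvalues of A lie in (2, 6)
        k += 1
        if k % CKPT_EVERY == 0:
            checkpoint = (k, x.copy())   # cheap in-memory copy; real systems
                                         # write to NVM or buddy nodes instead
        if rng.random() < 0.02:          # injected transient fault
            k, x = checkpoint[0], checkpoint[1].copy()  # roll back, recompute
    print(f"converged after {k} sweeps (plus rolled-back work)")
    ```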

    Grid-Connected Energy Storage Systems: State-of-the-Art and Emerging Technologies

    High penetration of renewable energy resources in the power system results in various new challenges for power system operators. One promising solution for sustaining the quality and reliability of the power system is the integration of energy storage systems (ESSs). This article investigates current and emerging trends and technologies for grid-connected ESSs. Different ESS technologies, categorized as mechanical, electrical, electrochemical, chemical, and thermal, are briefly explained. In particular, a detailed review of battery ESSs (BESSs) is provided, as they are attracting much attention owing, in part, to the ongoing electrification of transportation. Then, the services that grid-connected ESSs provide to the grid are discussed. Grid connection of BESSs requires power electronic converters; therefore, a survey of popular power converter topologies, including transformer-based, transformerless with distributed or common dc-link, and hybrid systems, along with some discussion of implementing advanced grid-support functionalities in the BESS control, is presented. Furthermore, the requirements of new standards and grid codes for grid-connected BESSs are reviewed for several countries around the globe. Finally, emerging technologies, including flexible power control of photovoltaic systems, hydrogen, and second-life batteries from electric vehicles, are discussed. This work was supported in part by the Office of Naval Research Global under Grant N62909-19-1-2081, in part by the National Research Foundation of Singapore Investigatorship under Award NRFI2017-08, and in part by the I2001E0069 Industrial Alignment Funding. (Corresponding author: Josep Pou.)
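
    A minimal sketch of one advanced grid-support function of the kind surveyed for BESS converter control: a frequency-droop characteristic that maps the measured grid frequency to an active-power command. The 4 % droop, dead-band, and ratings are illustrative values, not taken from any particular grid code reviewed in the article.

    ```python
    def droop_power(f_hz: float, p_rated_kw: float = 100.0,
                    f_nom: float = 50.0, droop: float = 0.04,
                    deadband_hz: float = 0.02) -> float:
        """Return the active-power command in kW (positive = discharge).

        A droop of 4 % means full rated power for a 4 % frequency deviation.
        """
        df = f_hz - f_nom
        if abs(df) <= deadband_hz:          # ignore normal frequency jitter
            return 0.0
        p = -(df / (droop * f_nom)) * p_rated_kw
        return max(-p_rated_kw, min(p_rated_kw, p))  # clamp to converter rating

    print(droop_power(49.8))  # under-frequency -> discharge (+10.0 kW)
    ```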

    Computational and Near-Optimal Trade-Offs in Renewable Electricity System Modelling

    In the decades to come, the European electricity system must undergo an unprecedented transformation to avert the devastating impacts of climate change. To devise various possibilities for achieving a sustainable yet cost-efficient system, in the thesis at hand, we solve large optimisation problems that coordinate the siting of generation, storage and transmission capacities. In doing so, it is critical to capture the weather-dependent variability of wind and solar power as well as transmission bottlenecks. In addition to modelling at high spatial and temporal resolution, this requires a detailed representation of the electricity grid. However, since the resulting computational challenges limit what can be investigated, compromises on model accuracy must be made, and methods from informatics become increasingly relevant to formulate models efficiently and to compute many scenarios. The first part of the thesis is concerned with justifying such trade-offs between model detail and solving times. The main research question is how to circumvent some of the challenging non-convexities introduced by transmission network representations in joint capacity expansion models while still capturing the core grid physics. We first examine tractable linear approximations of power flow and transmission losses. Subsequently, we develop an efficient reformulation of the discrete transmission expansion planning (TEP) problem based on a cycle decomposition of the network graph, which conveniently also accommodates grid synchronisation options. Because discrete investment decisions aggravate the problem's complexity, we also cover simplifying heuristics that make use of sequential linear programming (SLP) and retrospective discretisation techniques. In the second half, we investigate other trade-offs, namely between least-cost and near-optimal solutions. We systematically explore broad ranges of technologically diverse system configurations that are viable without compromising the system's overall cost-effectiveness. For example, we present solutions that avoid installing onshore wind turbines, bypass new overhead transmission lines, or feature a more regionally balanced distribution of generation capacities. Such alternative designs may be more widely socially accepted, and, thus, knowing about these degrees of freedom is highly policy-relevant. The method we employ to span the space of near-optimal solutions is related to modelling-to-generate-alternatives, a variant of multi-objective optimisation. The robustness of our results is further strengthened by considering technology cost uncertainties. To efficiently sweep the cost parameter space, we leverage multi-fidelity surrogate modelling techniques using sparse polynomial chaos expansion in combination with low-discrepancy sampling and extensive parallelisation on high-performance computing infrastructure.
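
    A minimal sketch of the near-optimal exploration step (modelling-to-generate-alternatives) described above: first solve for least cost, then re-optimise an alternative objective, here minimising onshore wind capacity, subject to total cost staying within a slack epsilon of the optimum. The two-technology toy model and all numbers are illustrative, not taken from the thesis.

    ```python
    from scipy.optimize import linprog

    cost = [60.0, 80.0]        # annualised EUR/kW: [onshore wind, solar+storage]
    cf   = [0.35, 0.20]        # average capacity factors
    demand = 10.0              # average demand in GW

    # Stage 1: least-cost capacities x (GW) meeting average demand.
    base = linprog(c=cost, A_ub=[[-cf[0], -cf[1]]], b_ub=[-demand], method="highs")
    c_opt = base.fun

    # Stage 2: minimise wind capacity within a 5 % cost slack.
    eps = 0.05
    alt = linprog(c=[1.0, 0.0],
                  A_ub=[[-cf[0], -cf[1]],      # still meet demand
                        [cost[0], cost[1]]],   # cost <= (1 + eps) * optimum
                  b_ub=[-demand, (1 + eps) * c_opt],
                  method="highs")
    print(f"least-cost wind: {base.x[0]:.1f} GW -> "
          f"near-optimal minimum: {alt.x[0]:.1f} GW")
    ```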

    On the nature and effect of power distribution noise in CMOS digital integrated circuits

    The thesis reports on the development of a novel simulation method aimed at modelling power distribution noise generated in digital CMOS integrated circuits. The simulation method has yielded new information concerning: 1. The magnitude and nature of the power distribution noise and its dependence on the performance and electrical characteristics of the packaged integrated circuit, with emphasis on the effects of the resistive, capacitive and inductive elements associated with the packaged circuit. 2. Power distribution noise associated with a generic systolic array circuit comprising 1,020,000 transistors, of which 510,000 are synchronously active. The circuit is configured as a linear array which, if fabricated using two-micron bulk CMOS technology, would be over eight centimetres long and three millimetres wide. In principle, the array will perform 1.5 × 10^11 operations per second. 3. Power distribution noise associated with a non-array-based signal processor which, if fabricated in two-micron bulk CMOS technology, would occupy 6.7 sq. cm. The circuit contains about 900,000 transistors, of which 600,000 are functional and about 300,000 are used for yield enhancement. The processor uses the radix-2 algorithm and is designed to achieve 2 × 10^8 floating-point operations per second. 4. The extent to which power distribution noise limits the level of integration and/or performance of such circuits using standard and non-standard fabrication and packaging technology. 5. The extent to which the predicted power distribution noise levels affect circuit susceptibility to transient latch-up and electromigration. The thesis draws conclusions on the nature of CMOS digital integrated circuit power distribution noise and recommends ways in which it may be minimised. It outlines an approach aimed at mechanising the developed simulation methodology so that the performance of power distribution networks may more routinely be assessed. Finally, it questions the long-term suitability of mainly digital techniques for signal processing.
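
    A back-of-the-envelope illustration of the inductive component of the power distribution noise studied here: the supply bounce V = L · dI/dt that appears across the package lead inductance when many gates switch synchronously. Every number below is a hypothetical assumption for illustration; the thesis's simulation models the full packaged-circuit network rather than this single-lump estimate.

    ```python
    L_lead = 5e-9          # assumed effective supply-lead inductance: 5 nH
    i_gate = 0.5e-3        # assumed peak switching current per gate: 0.5 mA
    n_gates = 255_000      # the 510,000 synchronously active transistors,
                           # roughly grouped into two-transistor gates
    t_edge = 2e-9          # assumed current rise time: 2 ns

    di_dt = n_gates * i_gate / t_edge      # aggregate current slew in A/s
    v_single = L_lead * di_dt              # bounce if one lead carried it all
    n_pins = 100                           # assumed parallel supply pin pairs
    print(f"dI/dt = {di_dt:.2e} A/s; single-lead bounce = {v_single:.0f} V; "
          f"shared across {n_pins} pin pairs ~ {v_single / n_pins:.1f} V")
    ```

    The implausibly large single-lead figure illustrates why the inductance of the packaged circuit and the distribution of supply connections matter so much at this level of integration.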

    Resilience for large ensemble computations

    With the increasing power of supercomputers, ever more detailed models of physical systems can be simulated, and ever larger problem sizes can be considered for any kind of numerical system. During the last twenty years the performance of the fastest clusters went from the teraFLOPS domain (ASCI RED: 2.3 teraFLOPS) to the pre-exaFLOPS domain (Fugaku: 442 petaFLOPS), and we will soon have the first supercomputer with a peak performance cracking the exaFLOPS barrier (El Capitan: 1.5 exaFLOPS). Ensemble techniques are experiencing a renaissance with the availability of these extreme scales, and recent techniques such as particle filters will especially benefit from them. Current ensemble methods in climate science, such as ensemble Kalman filters, exhibit a linear dependency between the problem size and the ensemble size, while particle filters show an exponential dependency. Nevertheless, with the prospect of massive computing power come challenges such as power consumption and fault tolerance. The mean time between failures shrinks with the number of components in the system, and failures are expected every few hours at exascale. In this thesis, we explore and develop techniques to protect large ensemble computations from failures. We present novel approaches in differential checkpointing, elastic recovery, fully asynchronous checkpointing, and checkpoint compression. Furthermore, we design and implement a fault-tolerant particle filter with pre-emptive particle prefetching and caching. Finally, we design and implement a framework for the automatic validation and application of lossy compression in ensemble data assimilation. Altogether, we present five contributions in this thesis: the first two improve state-of-the-art checkpointing techniques, and the last three address the resilience of ensemble computations. The contributions are stand-alone fault-tolerance techniques; however, they can also be used to improve one another. For instance, we utilize elastic recovery (2nd contribution) to provide resilience in an online ensemble data assimilation framework (3rd contribution), and we build our validation framework (5th contribution) on top of our particle filter implementation (4th contribution). We further demonstrate that our contributions improve resilience and performance with experiments on various architectures such as Intel, IBM, and ARM processors.
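
    A minimal sketch of the mean-time-between-failures scaling argument in the abstract: with independent, exponentially distributed node failures, the system MTBF is roughly the per-node MTBF divided by the node count, which is why failures every few hours are expected at exascale. The node counts and the five-year per-node MTBF are illustrative assumptions.

    ```python
    node_mtbf_h = 5 * 365 * 24      # assume one node fails every 5 years

    for nodes in (1_000, 10_000, 100_000):   # towards exascale node counts
        system_mtbf_h = node_mtbf_h / nodes  # independent exponential failures
        print(f"{nodes:>7} nodes -> system MTBF ~ {system_mtbf_h:.1f} h")
    ```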

    Open Problems in (Hyper)Graph Decomposition

    Large networks are useful in a wide range of applications. Sometimes problem instances comprise billions of entities. Decomposing and analyzing these structures helps us gain new insights about our surroundings. Even if the final application concerns a different problem (such as traversal, or finding paths, trees, and flows), decomposing large graphs is often an important subproblem for complexity reduction or parallelization. This report summarises the discussions held at Dagstuhl Seminar 23331 on "Recent Trends in Graph Decomposition" and presents currently open problems and future directions in the area of (hyper)graph decomposition.