
    An Analytical Framework for Evaluating the Impact of Distribution-Level LVRT Response on Transmission System Security

    Low voltage ride through (LVRT) is a solution to increase the tolerance of distributed energy resources (DERs) against voltage sags. Even so, DERs may still trip under present grid codes, and such trips can have significant consequences for transmission systems that host DER-penetrated distribution networks (DPDNs). This paper proposes an analytical framework for evaluating the impact of distribution-level LVRT response on transmission system security. The LVRT response refers to the total DER capacity lost during a voltage sag because units fail to meet the LVRT requirement. This generation loss in the distribution sector can expose the transmission network to line overloading after fault clearance. The proposed approach is based on a source contingency analysis that lets transmission system operators (TSOs) conduct an LVRT-oriented security assessment. A mathematical function, the LVRT response function of a DPDN, is defined: it gives the lost DER capacity in response to transmission-level transient faults and is constructed by the distribution system operator (DSO). The TSO can use these functions to assess the loading security of transmission lines in post-clearance conditions. In this analytical framework, LVRT-oriented security is evaluated by calculating the risk of line overloading under a large number of random faults. The proposed approach is implemented in two test power systems with a considerable DER penetration level to obtain the risk of line overloading due to the LVRT response of the distribution networks.
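
    The sketch below illustrates, under strong simplifications, the kind of Monte Carlo risk estimation the abstract describes: the DSO-provided LVRT response function is modeled as a plain callable from retained voltage to lost DER capacity, and the post-clearance line-loading check is reduced to a single linear sensitivity factor. All names and numbers are hypothetical, not taken from the paper.

```python
import random

def overload_risk(lvrt_response, line_limit_mw, base_flow_mw,
                  sensitivity, n_faults=10_000, seed=0):
    """Estimate the probability of post-clearance line overloading.

    lvrt_response : callable mapping retained voltage (p.u.) during a
                    transmission-level fault to lost DER capacity (MW);
                    it plays the role of the DSO-built LVRT response function.
    sensitivity   : extra MW of flow on the monitored line per MW of DER lost
                    (a hypothetical linear power-transfer factor).
    """
    rng = random.Random(seed)
    overloads = 0
    for _ in range(n_faults):
        # Random transmission-level fault: retained voltage seen by the DPDN.
        retained_voltage = rng.uniform(0.0, 0.9)      # p.u., illustrative sag range
        lost_der_mw = lvrt_response(retained_voltage) # DER capacity tripped by LVRT
        # Post-clearance flow on the monitored line after the generation loss.
        post_flow = base_flow_mw + sensitivity * lost_der_mw
        if post_flow > line_limit_mw:
            overloads += 1
    return overloads / n_faults

# Example with a stepwise LVRT response function (all numbers illustrative only).
risk = overload_risk(lambda v: 120.0 if v < 0.5 else 30.0,
                     line_limit_mw=400.0, base_flow_mw=320.0, sensitivity=0.8)
print(f"estimated overload risk: {risk:.3f}")
```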

    A general mathematical model for LVRT capability assessment of DER-penetrated distribution networks

    Low voltage ride through (LVRT) has been one of the key grid-code issues of the past decade. LVRT refers to the ability of a generation facility to stay connected during a voltage dip. Although LVRT is discussed extensively in recent works, those works mostly concentrate on LVRT-based control of distributed energy resources (DERs) integrated into microgrids and on improving that control. What has not yet been addressed, however, is an index to measure the LVRT capability of a DER-penetrated distribution network (DPDN) under different voltage sags. Such an index becomes especially important when the LVRT capability of DPDNs must be evaluated while accounting for the various LVRT categories of DERs mandated in the IEEE 1547 standard. This paper introduces a general framework for LVRT assessment of a DPDN by solving a system of differential-algebraic equations (DAEs). The expected LVRT capability of the DPDN is then evaluated with a proposed LVRT index using the Monte Carlo simulation technique.
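
    As a rough illustration of the Monte Carlo evaluation of an expected LVRT index, the sketch below replaces the paper's DAE model with a simple ride-through check against category-dependent voltage/duration limits in the spirit of IEEE 1547. The limit values, sag sampling ranges, and all names are illustrative assumptions, not the paper's data.

```python
import random

# Hypothetical ride-through limits per DER category: below the voltage
# threshold, the unit stays connected only up to the given sag duration (s).
RIDE_THROUGH = {
    "category_I":   (0.50, 0.16),   # (voltage threshold p.u., max sag duration s)
    "category_II":  (0.45, 0.32),
    "category_III": (0.30, 1.00),
}

def rides_through(category, sag_voltage, sag_duration):
    """True if a DER of this category stays connected for the given sag."""
    v_limit, t_limit = RIDE_THROUGH[category]
    return sag_voltage >= v_limit or sag_duration <= t_limit

def expected_lvrt_index(ders, n_samples=20_000, seed=1):
    """Expected fraction of DER capacity that rides through a random sag.

    ders: list of (capacity_mw, category) tuples describing the DPDN.
    """
    rng = random.Random(seed)
    total_capacity = sum(c for c, _ in ders)
    index_sum = 0.0
    for _ in range(n_samples):
        sag_v = rng.uniform(0.0, 0.9)     # retained voltage, p.u.
        sag_t = rng.uniform(0.05, 2.0)    # sag duration, s
        surviving = sum(c for c, cat in ders
                        if rides_through(cat, sag_v, sag_t))
        index_sum += surviving / total_capacity
    return index_sum / n_samples

ders = [(2.0, "category_I"), (1.5, "category_II"), (3.0, "category_III")]
print(f"expected LVRT index: {expected_lvrt_index(ders):.3f}")
```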

    Proactive cloud management for highly heterogeneous multi-cloud infrastructures

    Various studies have demonstrated that the cloud computing paradigm can help to improve the availability and performance of applications subject to software anomalies. Indeed, the cloud resource provisioning model enables users to rapidly access new processing resources, even distributed over different geographical regions, which can be promptly used in the case of, e.g., crashes or hangs of running machines, as well as to balance the load of overloaded machines. Nevertheless, managing a complex, geographically distributed cloud deployment can be a complex and time-consuming task. The Autonomic Cloud Manager (ACM) Framework is an autonomic framework for proactive management of applications deployed over multiple cloud regions. It uses machine learning models to predict failures of virtual machines and to proactively redirect the load to healthy machines and cloud regions. In this paper, we study different policies for efficient proactive load balancing across cloud regions in order to mitigate the effect of software anomalies. These policies use predictions of the mean time to failure of virtual machines. We consider the case of heterogeneous cloud regions, i.e., regions with different amounts of resources, and we provide an experimental assessment of these policies in the context of the ACM Framework.
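
    The sketch below shows one possible MTTF-driven balancing policy over heterogeneous regions, in the spirit of what the abstract describes: regions whose virtual machines are predicted to fail soon are excluded, and load is split among the remaining regions in proportion to capacity. The policy, thresholds, and names are hypothetical; the actual ACM policies are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    capacity: int            # requests the region can absorb (relative weight)
    predicted_mttf_s: float  # predicted mean time to failure of its VMs (s)

def dispatch(regions, n_requests, mttf_threshold_s=600.0):
    """Spread requests over healthy regions, proportionally to capacity.

    Regions whose predicted MTTF falls below the threshold are considered
    at risk and skipped; their load is redirected to the remaining ones.
    """
    healthy = [r for r in regions if r.predicted_mttf_s >= mttf_threshold_s]
    if not healthy:
        healthy = regions  # degrade gracefully if no region looks healthy
    total_capacity = sum(r.capacity for r in healthy)
    plan, assigned = {}, 0
    for r in healthy:
        share = round(n_requests * r.capacity / total_capacity)
        plan[r.name] = share
        assigned += share
    # Put any rounding remainder on the largest healthy region.
    plan[max(healthy, key=lambda r: r.capacity).name] += n_requests - assigned
    return plan

regions = [Region("eu-west", 40, 1200.0),
           Region("us-east", 80, 300.0),    # predicted to fail soon
           Region("ap-south", 40, 2400.0)]
print(dispatch(regions, n_requests=100))
```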

    A fault-tolerance protocol for parallel applications with communication imbalance

    The predicted failure rates of future supercomputers loom over the groundbreaking research that large machines are expected to foster. Resilient extreme-scale applications are therefore an absolute necessity for using the new generation of supercomputers effectively. Rollback-recovery techniques have traditionally been used in HPC to provide resilience. Among those techniques, message logging offers the appealing features of saving energy, accelerating recovery, and imposing a low performance penalty. Its increased memory consumption is, however, an important downside. This paper introduces memory-constrained message logging (MCML), a general framework for decreasing the memory footprint of message-logging protocols. In particular, we demonstrate the effectiveness of MCML in keeping message logging feasible for applications with substantial communication imbalance, a type of application that appears in many scientific fields. We present experimental results with several parallel codes running on up to 4,096 cores. Using those results and an analytical model, we predict that MCML can reduce execution time by up to 25% and energy consumption by up to 15% at extreme scale.
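
    To illustrate the core trade-off of constraining a message log's memory, the sketch below keeps logged messages under a byte budget and evicts the oldest entries when the budget would be exceeded, marking them as needing an alternative recovery path. The class, names, and eviction rule are hypothetical simplifications; the real MCML protocol is not reproduced here.

```python
from collections import deque

class BoundedMessageLog:
    """Keep logged messages while total size stays under a memory budget.

    When the budget would be exceeded, the oldest messages are evicted and
    recorded as no longer replayable from the log (e.g., recovery for them
    must fall back to a checkpoint), which is the trade-off a
    memory-constrained protocol has to manage.
    """
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.log = deque()   # (msg_id, payload) pairs
        self.evicted = []    # ids no longer recoverable from the log

    def append(self, msg_id, payload):
        size = len(payload)
        while self.log and self.used + size > self.budget:
            old_id, old_payload = self.log.popleft()
            self.used -= len(old_payload)
            self.evicted.append(old_id)
        self.log.append((msg_id, payload))
        self.used += size

    def replayable(self):
        return [msg_id for msg_id, _ in self.log]

log = BoundedMessageLog(budget_bytes=64)
for i in range(10):
    log.append(i, b"x" * 16)          # 16-byte messages, illustrative only
print(log.replayable(), log.evicted)  # only the newest messages fit the budget
```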

    Proactive Scalability and Management of Resources in Hybrid Clouds via Machine Learning

    In this paper, we present a novel framework for supporting the management and optimization of applications that are subject to software anomalies and deployed on large-scale cloud architectures composed of different, geographically distributed cloud regions. The framework uses machine learning models to predict failures caused by the accumulation of anomalies. It introduces a novel workload-balancing approach and a proactive scale-up/scale-down technique. We developed a prototype of the framework and present experiments validating the applicability of the proposed approach.
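
    A minimal sketch of one way a proactive scale-up/scale-down decision could be driven by failure predictions, using hypothetical thresholds and names; it is not the framework's actual algorithm.

```python
def scaling_decision(active_vms, predicted_failures, target_healthy,
                     scale_down_margin=2):
    """Decide how many VMs to add or remove before anomalies cause failures.

    active_vms         : VMs currently serving the application
    predicted_failures : VMs the ML model expects to fail in the next window
    target_healthy     : VMs required to sustain the current workload
    """
    expected_healthy = active_vms - predicted_failures
    if expected_healthy < target_healthy:
        return ("scale_up", target_healthy - expected_healthy)
    if expected_healthy > target_healthy + scale_down_margin:
        return ("scale_down", expected_healthy - target_healthy - scale_down_margin)
    return ("hold", 0)

print(scaling_decision(active_vms=10, predicted_failures=3, target_healthy=9))
print(scaling_decision(active_vms=12, predicted_failures=0, target_healthy=8))
```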

    Doctoral Thesis (Promocijas darbs)

    The electronic version does not contain the appendices. A large section of phage genomic data constitutes the "dark matter of the biosphere", having low similarity to any known sequences. Hafnia virus Enc34, discovered by our laboratory, encodes several enigmatic yet evolutionarily conserved proteins, including the ORF6 hypothetical protein and the ORF39 putative endolysin. We discovered ORF6 to be a single-stranded DNA-binding protein (SSB) and elucidated the structural basis of DNA binding for a major SSB family. The ORF39 product was verified to have muralytic activity, and its three-dimensional structure unveiled a new variation of the lysozyme fold. Altogether, the DUF2815 and PHA02564 descriptors in the genomic dark matter have now been definitively illuminated as SSBs and endolysins, respectively. Keywords: phages, genomic dark matter, X-ray crystallography, single-stranded DNA-binding proteins, endolysins

    Machine Learning for Achieving Self-* Properties and Seamless Execution of Applications in the Cloud

    Software anomalies are recognized as a major problem affecting the performance and availability of many computer systems. The accumulation of anomalies of different natures, such as memory leaks and unterminated threads, may lead a system to fail or to run at suboptimal performance levels. This problem particularly affects web servers, where hosted applications are typically intended to run continuously, which increases the probability, and therefore the associated effects, of anomaly accumulation. Given the unpredictability of when anomalies occur, continuous system monitoring is required to detect possible system failures and/or excessive performance degradation in time to start a recovery procedure. In this paper, we present a Machine Learning-based framework for proactive management of client-server applications in the cloud. Using optimized Machine Learning models and continuously measured system features, the framework predicts the remaining time to the occurrence of some unexpected event (system failure, service level agreement violation, etc.) for a virtual machine hosting a server instance of the application. The framework is able to manage virtual machines in the presence of different types of anomalies and with different anomaly occurrence patterns. We show the effectiveness of the proposed solution by presenting the results of a set of experiments carried out in the context of a real-world-inspired scenario.
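
    The sketch below illustrates the prediction step in broad strokes: a regression model maps monitored system features to a remaining time to failure (RTTF), and a recovery action is triggered when the prediction drops below a threshold. scikit-learn, the feature set, the synthetic training data, and the threshold are all illustrative assumptions; the paper does not specify them.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic training data: each row is a snapshot of monitored system
# features (free memory MB, active threads, swap usage MB), and the target
# is the measured remaining time to failure (RTTF) in seconds.
rng = np.random.default_rng(0)
features = rng.uniform([200, 50, 0], [4000, 800, 2000], size=(500, 3))
# Illustrative ground truth: less free memory and more swap -> shorter RTTF.
rttf = np.clip(0.5 * features[:, 0] - 0.3 * features[:, 2]
               + rng.normal(0, 50, 500), 0, None)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, rttf)

# At run time: predict RTTF for the VM's current state and act proactively
# (e.g., migrate the server instance) if the prediction is below a threshold.
current_state = np.array([[600.0, 700.0, 1500.0]])
predicted_rttf = model.predict(current_state)[0]
THRESHOLD_S = 300.0
if predicted_rttf < THRESHOLD_S:
    print(f"predicted RTTF {predicted_rttf:.0f}s below threshold: trigger recovery")
else:
    print(f"predicted RTTF {predicted_rttf:.0f}s: no action needed")
```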

    Maximizing Service Reliability in Distributed Computing Systems with Random Node Failures: Theory and Implementation

    In distributed computing systems (DCSs) where server nodes can fail permanently with nonzero probability, the system performance can be assessed by means of the service reliability, defined as the probability of serving all the tasks queued in the DCS before all the nodes fail. This paper presents a rigorous probabilistic framework to analytically characterize the service reliability of a DCS in the presence of communication uncertainties and stochastic topological changes due to node deletions. The framework considers a system composed of heterogeneous nodes with stochastic service and failure times and a communication network imposing random tangible delays. The framework also permits arbitrarily specified, distributed load-balancing actions to be taken by the individual nodes in order to improve the service reliability. The presented analysis is based upon a novel use of the concept of stochastic regeneration, which is exploited to derive a system of difference-differential equations characterizing the service reliability. The theory is further utilized to optimize certain load-balancing policies for maximal service reliability; the optimization is carried out by means of an algorithm that scales linearly with the number of nodes in the system. The analytical model is validated using both Monte Carlo simulations and experimental data collected from a DCS testbed.
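
    The sketch below is a minimal Monte Carlo estimator of the service-reliability quantity defined above (the probability that all queued tasks are served before every node fails), assuming exponentially distributed service and failure times and ignoring communication delays and load balancing. The rates and names are illustrative; the paper's regeneration-based analytical equations are not reproduced here.

```python
import random

def service_reliability(n_tasks, service_rates, failure_rates,
                        n_trials=20_000, seed=0):
    """Monte Carlo estimate of P(all tasks are served before all nodes fail).

    Node i completes tasks at rate service_rates[i] and fails permanently at
    rate failure_rates[i]; both exponential assumptions are simplifications.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        remaining = n_tasks
        alive = list(range(len(service_rates)))
        fail_at = {i: rng.expovariate(failure_rates[i]) for i in alive}
        t = 0.0
        while remaining > 0 and alive:
            # Next task completion: merged Poisson process of all alive nodes.
            next_service = t + rng.expovariate(sum(service_rates[i] for i in alive))
            # Next permanent node failure among the alive nodes.
            failing = min(alive, key=lambda i: fail_at[i])
            if next_service < fail_at[failing]:
                t = next_service
                remaining -= 1
            else:
                t = fail_at[failing]
                alive.remove(failing)
        if remaining == 0:
            successes += 1
    return successes / n_trials

print(service_reliability(n_tasks=50,
                          service_rates=[1.0, 0.8, 1.2],
                          failure_rates=[0.01, 0.02, 0.015]))
```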