
    Adequacy Evaluation of an Islanded Microgrid

    The reliability of power converters has been extensively examined at the component and converter levels. However, when multiple generation units are present, evaluating the performance of a power system requires system-level modeling. This paper aims to merge the prior art of reliability modeling of power converters with the adequacy evaluation of power systems through an extensive design and evaluation analysis of a microgrid-based case study. The proposed methodology integrates device-level analysis into conventional power system reliability analysis while outlining the steps needed to handle non-exponentially distributed failures of power-electronic-based generation units. A replacement policy for the power-electronic-based units is adopted by evaluating the system risk of not supplying system loads, and, finally, an approach to ensuring a desired replacement frequency is outlined.
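
    A minimal Monte Carlo sketch of the kind of adequacy study described above, assuming Weibull (hence non-exponential) unit lifetimes and an age-based replacement policy; all capacities, rates, and lead times below are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch, not the paper's model: sequential Monte Carlo adequacy
# assessment of a small microgrid whose converter-based units have
# Weibull (non-exponential) lifetimes and an age-based replacement policy.
import random

N_UNITS, UNIT_MW, LOAD_MW = 4, 0.5, 1.2   # assumed fleet and load
SHAPE, SCALE = 2.5, 8.0                   # assumed Weibull lifetime, years
REPLACE_AT = 5.0                          # assumed replacement age, years
LEAD = 0.05                               # assumed replacement lead time, years
HORIZON, DT, TRIALS = 20.0, 0.02, 1000

def new_life():
    # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
    return random.weibullvariate(SCALE, SHAPE)

def simulate_once():
    """One system history; returns the fraction of time load is unserved."""
    # stagger initial ages so preventive replacements do not all coincide
    age = [i * REPLACE_AT / N_UNITS for i in range(N_UNITS)]
    life = [new_life() for _ in range(N_UNITS)]
    down = [0.0] * N_UNITS                # remaining replacement lead time
    unserved, steps = 0, int(HORIZON / DT)
    for _ in range(steps):
        if sum(UNIT_MW for i in range(N_UNITS) if down[i] <= 0.0) < LOAD_MW:
            unserved += 1
        for i in range(N_UNITS):
            if down[i] > 0.0:
                down[i] -= DT
                if down[i] <= 0.0:        # unit renewed after the swap
                    age[i], life[i] = 0.0, new_life()
            else:
                age[i] += DT
                if age[i] >= life[i] or age[i] >= REPLACE_AT:
                    down[i] = LEAD        # corrective or preventive swap
    return unserved / steps

lolp = sum(simulate_once() for _ in range(TRIALS)) / TRIALS
print(f"Estimated risk of not supplying the load: {lolp:.4f}")
```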

    Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. To ease the provisioning of digital services to a growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of such distributed computing infrastructures are individual computing centers, such as GoeGrid, a Tier 2 centre of the Worldwide LHC Computing Grid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal was achieved by analyzing the GoeGrid monitoring information. The data analysis approach was based on an adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since the availability, reliability, and serviceability of a computing platform can be measured by the constant and stable provisioning of its services. Because large computing facilities widely adopt a service-oriented architecture (SOA), knowing the service state in advance, together with quick and accurate detection of service failures, enables proactive management of the computing facility. Proactive management is considered a core component of computing facility management automation concepts such as Autonomic Computing. Thus, timely, advance, and accurate identification of the status of the provided services can be considered a contribution to computing facility management automation, which is directly related to the provisioning of stable and reliable computing resources. Based on case studies performed on the GoeGrid monitoring data, it is reasonable to consider these approaches as generalized methods for the accurate and fast identification and prediction of service status. Their simplicity and low consumption of computing resources make the methods suitable candidates for an Autonomic Computing component.
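
    As a hedged illustration of the SVM half of this approach (the ANFIS part is omitted), the sketch below trains a linear SVM to flag a degraded service from monitoring metrics; the feature names, thresholds, and data are synthetic stand-ins for the actual GoeGrid monitoring information.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 1000
# synthetic monitoring snapshots: [cpu_load, queued_jobs, error_rate]
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),     # normalized CPU load (assumed feature)
    rng.integers(0, 200, n),      # queued jobs (assumed feature)
    rng.uniform(0.0, 0.2, n),     # failed-request rate (assumed feature)
])
# toy ground truth: the service is "degraded" when load and errors are high
y = ((X[:, 0] > 0.7) & (X[:, 2] > 0.1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```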

    Reliability and Risk Assessment of Networked Urban Infrastructure Systems under Natural Hazards

    Modern societies increasingly depend on the reliable functioning of urban infrastructure systems in the aftermath of natural disasters such as hurricanes and earthquakes. Apart from sizable capital for maintenance and expansion, the reliable performance of infrastructure systems under extreme hazards also requires strategic planning and effective resource assignment. Hence, efficient system reliability and risk assessment methods are needed to give system stakeholders insight into infrastructure performance under different hazard scenarios and to let them make informed decisions in response. Moreover, efficient assignment of limited financial and human resources to maintenance and retrofit actions requires new methods to identify critical system components under extreme events. Infrastructure systems such as highway bridge networks are spatially distributed systems with many linked components, so network models describing them as mathematical graphs with nodes and links naturally apply to the study of their performance. Owing to their complex topology, general system reliability methods are ineffective for evaluating the reliability of large infrastructure systems. This research develops computationally efficient methods, such as a modified Markov chain Monte Carlo simulation algorithm for network reliability, and proposes a network reliability framework (BRAN: Bridge Reliability Assessment in Networks) that is applicable to large and complex highway bridge systems. Since the responses of system components to hazard events are often correlated, the BRAN framework can account for correlated component failure probabilities stemming from different correlation sources. Failure correlations from non-hazard sources are particularly emphasized, as they can have a significant impact on network reliability estimates, yet they have often been ignored or only partially considered in the literature on infrastructure system reliability. The developed network reliability framework is also used for probabilistic risk assessment, with network reliability as the network performance metric. Risk analysis studies may require a prohibitively large number of simulations for large and complex infrastructure systems, as they involve evaluating network reliability for multiple hazard scenarios. This thesis addresses that challenge by developing network surrogate models with statistical learning tools such as random forests. The surrogate models can replace network reliability simulations in a risk analysis framework and significantly reduce computation times. The proposed approach therefore provides an alternative to established methods for improving the computational efficiency of risk assessments: it builds a surrogate model of the complex system at hand rather than reducing the number of analyzed hazard scenarios through hazard-consistent scenario generation or importance sampling. Nevertheless, the application of surrogate models can be combined with scenario reduction methods to improve analysis efficiency even further. To address the problem of prioritizing system components for maintenance and retrofit actions, two advanced metrics are developed in this research to rank the criticality of system components. Both metrics combine component fragilities with the topological characteristics of the network, and provide rankings that are either conditioned on specific hazard scenarios or probabilistic, depending on the preference of infrastructure system stakeholders. Both offer enhanced efficiency and practical applicability compared to existing methods. The developed frameworks for network reliability evaluation, risk assessment, and component prioritization are intended to address important gaps in state-of-the-art management and planning for infrastructure systems under natural hazards. Their application can enhance public safety by informing decision making on expansion, maintenance, and retrofit actions for infrastructure systems.
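
    The sketch below gives a hedged, much-simplified flavour of Monte Carlo network reliability with correlated component failures: a toy four-node graph whose edge failures are correlated through a shared hazard-intensity factor. The topology, probabilities, and correlation model are assumptions, and plain Monte Carlo stands in for the thesis's modified Markov chain Monte Carlo algorithm.

```python
import random

# toy bridge network: (node_u, node_v, marginal failure probability)
EDGES = [(0, 1, 0.10), (0, 2, 0.15), (1, 2, 0.05),
         (1, 3, 0.10), (2, 3, 0.15)]
SOURCE, SINK, N_NODES, TRIALS = 0, 3, 4, 100_000

def connected(surviving):
    """Depth-first search for a SOURCE-SINK path over surviving edges."""
    adj = {n: [] for n in range(N_NODES)}
    for u, v in surviving:
        adj[u].append(v)
        adj[v].append(u)
    stack, seen = [SOURCE], {SOURCE}
    while stack:
        n = stack.pop()
        if n == SINK:
            return True
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

failures = 0
for _ in range(TRIALS):
    shock = random.uniform(0.5, 1.5)   # shared hazard factor -> correlation
    alive = [(u, v) for u, v, p in EDGES
             if random.random() > min(1.0, p * shock)]
    failures += not connected(alive)
print(f"estimated network failure probability: {failures / TRIALS:.4f}")
```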

    The Performability Manager

    The authors describe the performability manager, a distributed system component that contributes to more effective and efficient use of system components and prevents quality of service (QoS) degradation. The performability manager dynamically reconfigures distributed systems whenever needed, to recover from failures and to let the system evolve over time and include new functionality. Large systems require dynamic reconfiguration to support change without shutting the complete system down. A distributed system monitor is needed to verify QoS. Monitoring a distributed system is difficult because of synchronization problems and minor differences in clock speeds. The authors describe the functionality and operation of the performability manager, both informally and formally. Throughout the paper they illustrate the approach with an example distributed application: an ANSAware-based number translation service (NTS) from the intelligent networks (IN) area.
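
    A toy sketch of the monitor-and-reconfigure loop implied above, assuming an invented QoS metric, threshold, and reconfiguration action; the paper's actual performability manager and its ANSAware NTS example are considerably richer.

```python
import random

QOS_THRESHOLD = 0.95          # assumed minimum acceptable success rate

def measure_qos() -> float:
    """Stand-in for the distributed system monitor."""
    return random.uniform(0.90, 1.00)

def reconfigure() -> None:
    """Stand-in for a reconfiguration, e.g. moving a service to a spare node."""
    print("QoS degraded -> reconfiguring (stub)")

for step in range(10):        # a real manager would run continuously
    if measure_qos() < QOS_THRESHOLD:
        reconfigure()
```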

    Reliability analysis of distribution systems with photovoltaic generation using a power flow simulator and a parallel Monte Carlo approach

    This paper presents a Monte Carlo approach for reliability assessment of distribution systems with distributed generation using parallel computing. The calculations are carried out with a royalty-free power flow simulator, OpenDSS (Open Distribution System Simulator). The procedure has been implemented in an environment in which OpenDSS is driven from MATLAB. The test system is an overhead distribution system represented by a three-phase model that includes protective devices. The paper details the implemented procedure, which can be applied to systems with or without distributed generation, includes an illustrative case study, and summarizes the results derived from analyzing the test system over one year. The goal is to evaluate the test system's performance under scenarios with different levels of system automation and reconfiguration, and to assess the impact that distributed photovoltaic generation can have on that performance. Several reliability indices, including those related to the impact of distributed generation, are obtained for every scenario.
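
    A hedged skeleton of the parallel Monte Carlo idea in Python (the paper itself drives OpenDSS from MATLAB): yearly outage histories are simulated in worker processes and aggregated into SAIFI/SAIDI-style indices. The failure rates, repair times, and customer counts are illustrative assumptions, and a closed-form outage draw stands in for the power-flow engine.

```python
from multiprocessing import Pool

import numpy as np

N_FEEDERS, LAMBDA, MTTR_H, CUST = 10, 0.2, 4.0, 100   # assumed, per feeder

def simulate_year(seed):
    """One synthetic year: (customer interruptions, customer-hours lost)."""
    rng = np.random.default_rng(seed)
    n_fail = rng.poisson(LAMBDA, N_FEEDERS)              # outages per feeder
    repair = rng.exponential(MTTR_H, int(n_fail.sum())) # repair durations
    return int(n_fail.sum()) * CUST, float(repair.sum()) * CUST

if __name__ == "__main__":
    YEARS, TOTAL_CUST = 10_000, N_FEEDERS * CUST
    with Pool() as pool:                                 # parallel MC years
        results = pool.map(simulate_year, range(YEARS))
    saifi = sum(r[0] for r in results) / (YEARS * TOTAL_CUST)
    saidi = sum(r[1] for r in results) / (YEARS * TOTAL_CUST)
    print(f"SAIFI ~ {saifi:.3f} int./cust.-yr, SAIDI ~ {saidi:.3f} h/cust.-yr")
```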

    Can Component/Service-Based Systems Be Proved Correct?

    Component-oriented and service-oriented approaches have gained strong enthusiasm in industry and academia, with particular interest in service-oriented approaches. A component is a software entity with given functionalities, made available by a provider and used to build other applications within which it is integrated. The service concept and its use in web-based application development have a huge impact on reuse practices. Accordingly, a considerable part of software architecture is affected, and architectures are moving towards service-oriented architectures. Applications therefore (re)use services that are available elsewhere, and many applications interact without knowing each other, using services available via service servers and their published interfaces and functionalities. Industry proposes languages, technologies, and standards through various consortia. More academic work is also under way on the semantics and formalisation of components and service-based systems. We consider both streams of work here in order to raise research concerns that will help in building quality software. Are there new challenging problems with respect to service-based software construction? And what are the links and advances compared to distributed systems?

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of the dependability measures of these models requires that the simulation frequently observe system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
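
    A minimal generic importance-sampling example in the spirit of the survey (not a specific technique from it): the probability that both units of a highly reliable parallel pair fail within a mission time is estimated by sampling from a much higher "biased" failure rate and reweighting with the likelihood ratio. The rates and mission time are illustrative assumptions.

```python
import math
import random

LAM = 1e-4         # true per-unit failure rate (assumed)
LAM_IS = 0.2       # inflated sampling rate, i.e. failure biasing (assumed)
T_MISSION = 10.0   # mission time (assumed)
N = 100_000

def lr(x):
    """Likelihood ratio f_true(x) / f_biased(x) for one exponential draw."""
    return (LAM / LAM_IS) * math.exp((LAM_IS - LAM) * x)

est = 0.0
for _ in range(N):
    t1, t2 = random.expovariate(LAM_IS), random.expovariate(LAM_IS)
    if max(t1, t2) <= T_MISSION:              # both units fail in time
        est += lr(t1) * lr(t2)
est /= N

exact = (1.0 - math.exp(-LAM * T_MISSION)) ** 2
print(f"IS estimate: {est:.3e}   exact: {exact:.3e}")
```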

    A model driven approach for software systems reliability

    A model-driven approach to assuring the reliability of software systems from the design level to the deployment level through transformation techniques is described. Once the reliability mechanisms provided by current component-based development architectures (CBDAs) are designed in a platform-independent way, the platform-specific design and implementation models must be extended accordingly. Current CBDAs, such as Enterprise JavaBeans, provide a considerable range of features to support system reliability. The evaluation aims to test the maturity of the approach, its applicability, and the effectiveness of the reliability models. By contrast, techniques such as process algebras are generally considered too time-consuming for routine software development.
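
    The sketch below is a toy rendering of the model-driven idea, not the paper's actual transformation: a platform-independent reliability annotation is mechanically mapped to a platform-specific deployment descriptor. Both the model classes and the target format are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ReliabilityAnnotation:
    """Platform-independent reliability model (PIM) for one component."""
    component: str
    replicas: int
    retry_limit: int

def to_descriptor(a: ReliabilityAnnotation) -> str:
    """Transform the PIM into a deployment-descriptor string (a stand-in
    for a CBDA-specific artifact such as an EJB descriptor)."""
    return (
        f"<component name='{a.component}'>\n"
        f"  <cluster size='{a.replicas}'/>\n"
        f"  <retry max='{a.retry_limit}'/>\n"
        f"</component>"
    )

print(to_descriptor(ReliabilityAnnotation("NumberTranslation", 3, 5)))
```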