58,775 research outputs found

    Model reduction for analysis of cascading failures in power systems

    Get PDF
    In this paper, we apply a principal-orthogonal-decomposition-based method to the model reduction of a hybrid, nonlinear model of a power network. The results demonstrate that the sequence of fault events can be evaluated and predicted without necessarily simulating the whole system.
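
    As a rough illustration of the kind of projection-based reduction the abstract describes, the sketch below builds an orthogonal basis from simulation snapshots and projects the state onto it. It assumes a snapshot matrix is available from a full simulation; the function names and the energy threshold are illustrative, not taken from the paper.

        # Sketch of snapshot-based orthogonal-decomposition model reduction.
        # `snapshots` is assumed to be an (n_states x n_snapshots) array taken
        # from a full nonlinear power-network simulation; names are illustrative.
        import numpy as np

        def reduction_basis(snapshots, energy=0.99):
            """Return an orthonormal basis capturing `energy` of the snapshot variance."""
            X = snapshots - snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(X, full_matrices=False)
            cum = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(cum, energy)) + 1
            return U[:, :r]                      # n_states x r projection basis

        def reduce_state(Phi, x):
            """Project a full state vector onto the reduced coordinates."""
            return Phi.T @ x

        def lift_state(Phi, z):
            """Map reduced coordinates back to the full state space."""
            return Phi @ z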

    The failure tolerance of mechatronic software systems to random and targeted attacks

    Full text link
    This paper describes a complex networks approach to studying the failure tolerance of mechatronic software systems under various types of hardware and/or software failures. We produce synthetic system architectures based on evidence of modular and hierarchical modular product architectures and known motifs for the interconnection of physical components to software. The system architectures are then subjected to various forms of attack. The attacks simulate failure of critical hardware or software. Four types of attack are investigated: degree centrality, betweenness centrality, closeness centrality, and random attack. Failure tolerance of the system is measured by a 'robustness coefficient', a topological 'size' metric of the connectedness of the attacked network. We find that the betweenness centrality attack results in the most significant reduction in the robustness coefficient, confirming betweenness centrality, rather than the number of connections (i.e., degree), as the most conservative metric of component importance. A counter-intuitive finding is that "designed" system architectures, including bus, ring, and star architectures, are not significantly more failure-tolerant than interconnections with no prescribed architecture, that is, a random architecture. Our research provides a data-driven approach to engineering the architecture of mechatronic software systems for failure tolerance. Comment: Proceedings of the 2013 ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2013, August 4-7, 2013, Portland, Oregon, USA (In Print)
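
    A minimal sketch of the kind of attack experiment described above is given below, using NetworkX. The robustness measure here, the fraction of nodes remaining in the largest connected component after the attack, is a stand-in for the paper's robustness coefficient, and the synthetic graph generator is only a placeholder for the evidence-based architectures used in the study.

        # Centrality-based and random attacks on a synthetic architecture graph.
        # The surviving-giant-component fraction is a stand-in robustness measure,
        # not the paper's exact robustness coefficient.
        import random
        import networkx as nx

        def attack(G, strategy="betweenness", fraction=0.2):
            """Remove a fraction of nodes by the given strategy; return robustness."""
            H = G.copy()
            n_original = H.number_of_nodes()
            n_remove = int(fraction * n_original)
            if strategy == "random":
                targets = random.sample(list(H.nodes), n_remove)
            else:
                scores = {
                    "degree": nx.degree_centrality,
                    "betweenness": nx.betweenness_centrality,
                    "closeness": nx.closeness_centrality,
                }[strategy](H)
                targets = sorted(scores, key=scores.get, reverse=True)[:n_remove]
            H.remove_nodes_from(targets)
            if H.number_of_nodes() == 0:
                return 0.0
            giant = max(nx.connected_components(H), key=len)
            return len(giant) / n_original

        G = nx.barabasi_albert_graph(200, 2)    # placeholder synthetic architecture
        for s in ("degree", "betweenness", "closeness", "random"):
            print(s, attack(G, s))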

    Planning and managing the cost of compromise for AV retention and access

    No full text
    Long-term retention of and access to audiovisual (AV) assets as part of a preservation strategy inevitably involve some form of compromise in order to achieve acceptable levels of cost, throughput, quality, and many other parameters. Examples include quality control and throughput in media transfer chains; data safety and accessibility in digital storage systems; and service levels for ingest and access for archive functions delivered as services. We present new software tools and frameworks developed in the PrestoPRIME project that allow these compromises to be quantitatively assessed, planned, and managed for file-based AV assets. Our focus is on how to give an archive assurance that, when it designs and operates a preservation strategy as a set of services, the strategy will function as expected and will cope with the inevitable and often unpredictable variations that occur in operation. This includes the ability to make cost projections, perform sensitivity analysis, simulate “disaster scenarios,” and govern preservation services using service-level agreements and policies.
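
    To make the idea of a "disaster scenario" projection concrete, the sketch below runs a toy Monte Carlo estimate of the probability of losing an asset over a retention period, given a number of independent replicas and a per-copy annual failure rate. The model and the numbers are assumptions for illustration only, not figures or tools from PrestoPRIME.

        # Toy Monte Carlo projection of asset loss over a retention period.
        # Rates, replica counts, and the no-repair assumption are illustrative only.
        import random

        def loss_probability(years=10, copies=2, annual_fail=0.01, trials=100_000):
            losses = 0
            for _ in range(trials):
                for _ in range(years):
                    # The asset is lost in a year only if every replica fails that
                    # year (a deliberately pessimistic model with no repair step).
                    if all(random.random() < annual_fail for _ in range(copies)):
                        losses += 1
                        break
            return losses / trials

        for c in (1, 2, 3):
            print(c, "copies ->", loss_probability(copies=c))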

    Simulating Wide-area Replication

    Get PDF
    We describe our experiences with simulating replication algorithms for use in far-flung distributed systems. The algorithms under scrutiny mimic epidemics. Epidemic algorithms appear to scale well and to adapt well to change (such as varying replica sets). The loose consistency guarantees they provide seem best suited to applications where availability strongly outweighs correctness, e.g., a distributed name service.
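
    The toy sketch below illustrates the epidemic-style propagation idea: in each round, every replica that already holds an update gossips it to one randomly chosen peer, so propagation typically completes in a number of rounds that grows roughly logarithmically with the replica count. The parameters are arbitrary and not from the paper.

        # Toy epidemic (gossip) propagation of an update among replicas.
        # Parameters are arbitrary and only illustrate the mechanism.
        import random

        def gossip_rounds(n_replicas=100, seed_replica=0):
            infected = {seed_replica}
            rounds = 0
            while len(infected) < n_replicas:
                newly = set()
                for r in infected:
                    peer = random.randrange(n_replicas)   # may pick itself; harmless
                    newly.add(peer)
                infected |= newly
                rounds += 1
            return rounds

        print("rounds to full propagation:", gossip_rounds())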

    An Interaction Model for Simulation and Mitigation of Cascading Failures

    Full text link
    In this paper, the interactions between component failures are quantified, and the interaction matrix and interaction network are obtained. The quantified interactions can capture the general propagation patterns of cascades observed in utility data or simulation, thus helping to better understand how cascading failures propagate and to identify the key links and key components that are crucial for cascading failure propagation. By utilizing these interactions, a high-level probabilistic model called the interaction model is proposed to study the influence of interactions on cascading failure risk and to support online decision-making. It is much more time-efficient to first quantify the interactions between component failures with fewer original cascades from a more detailed cascading failure model and then perform the interaction model simulation than it is to directly simulate a large number of cascades with a more detailed model. Interaction-based mitigation measures are suggested to mitigate cascading failure risk by weakening key links, which can be achieved in real systems by wide-area protection such as blocking of specific protective relays. The proposed interaction quantifying method and interaction model are validated with line outage data generated by AC OPA cascading simulations on the IEEE 118-bus system. Comment: Accepted by IEEE Transactions on Power Systems.
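
    The sketch below illustrates the interaction-matrix idea at a high level: estimate, from a set of recorded cascades, how often the failure of component i in one generation is followed by the failure of component j in the next, then reuse those empirical probabilities to run fast probabilistic cascades. The data layout and propagation rule are illustrative simplifications, not the paper's exact formulation.

        # Estimate pairwise failure interactions from recorded cascades and reuse
        # them for fast probabilistic cascade simulation. Simplified illustration.
        import random
        from collections import defaultdict

        def estimate_interactions(cascades):
            """cascades: list of cascades, each a list of generations (sets of components)."""
            follow = defaultdict(int)   # (i, j) -> times j failed right after i
            count = defaultdict(int)    # i -> times i failed with a following generation
            for generations in cascades:
                for g in range(len(generations) - 1):
                    for i in generations[g]:
                        count[i] += 1
                        for j in generations[g + 1]:
                            follow[(i, j)] += 1
            return {(i, j): c / count[i] for (i, j), c in follow.items()}

        def simulate_cascade(interactions, initial, max_generations=20):
            """Propagate an initial outage set using the estimated probabilities."""
            failed, frontier = set(initial), set(initial)
            for _ in range(max_generations):
                nxt = {j for i in frontier
                         for (ii, j), p in interactions.items()
                         if ii == i and j not in failed and random.random() < p}
                if not nxt:
                    break
                failed |= nxt
                frontier = nxt
            return failed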