
    Performability: a retrospective and some pointers to the future

    As computing and communication systems become physically and logically more complex, their evaluation calls for continued innovation in measure definition, model construction and solution, and tool development. In particular, the performance of such systems is often degradable: internal or external faults can reduce the quality of a delivered service even though that service, according to its specification, remains proper (failure-free). The need to accommodate this property using model-based evaluation methods was the raison d'être for the concept of performability. To set the stage for further progress in its development, we present a retrospective of the associated theory, techniques, and applications resulting from work in this area over the past decade and a half. Based on what has been learned, we offer some pointers to future directions that might further enhance the effectiveness of these methods and broaden their scope of applicability.
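
    As a concrete illustration of the performability measure (not drawn from the paper; the states, rates, and reward values below are all assumptions), the following sketch attaches a reward rate to each state of a small continuous-time Markov chain for a degradable system and estimates the expected reward accumulated over a mission interval:

```python
# Minimal performability sketch: a three-state degradable system as a CTMC
# with a reward rate per state. All numeric rates are illustrative only.
import numpy as np
from scipy.linalg import expm

# States: 0 = fully operational, 1 = degraded, 2 = failed (absorbing).
# Generator matrix Q; each row sums to zero.
Q = np.array([
    [-0.02,  0.02,  0.00],   # full -> degraded at 0.02/h
    [ 0.10, -0.11,  0.01],   # degraded -> repaired (0.10/h) or failed (0.01/h)
    [ 0.00,  0.00,  0.00],
])
r = np.array([1.0, 0.6, 0.0])  # reward rate: fraction of nominal service

def expected_accumulated_reward(t, steps=200):
    """E[Y(t)]: expected reward accumulated over [0, t], starting in state 0."""
    p0 = np.array([1.0, 0.0, 0.0])
    times = np.linspace(0.0, t, steps)
    inst = np.array([p0 @ expm(Q * s) @ r for s in times])  # E[r(X(s))]
    dt = times[1] - times[0]
    # trapezoidal rule over the instantaneous expected reward
    return float(dt * (inst.sum() - 0.5 * (inst[0] + inst[-1])))

print(f"E[Y(1000 h)] = {expected_accumulated_reward(1000.0):.1f} reward units")
```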

    Adaptive Fault Tolerance and Graceful Degradation Under Dynamic Hard Real-time Scheduling

    Static redundancy allocation is inappropriate in hard real-time systems that operate in variable and dynamic environments (e.g., radar tracking, avionics). Adaptive Fault Tolerance (AFT) can assure adequate reliability of critical modules under temporal and resource constraints by allocating only as much redundancy to less critical modules as can be afforded, thus gracefully reducing their resource requirements. In this paper, we propose a mechanism for supporting adaptive fault tolerance in a real-time system. Adaptation is achieved by choosing a suitable redundancy strategy for each dynamically arriving computation so as to assure the required reliability and to maximize the potential for fault tolerance while ensuring that deadlines are met. The proposed approach is evaluated using a real-life workload simulating radar tracking software in an AWACS early-warning aircraft. The results demonstrate that our technique outperforms static fault-tolerance strategies in terms of tasks meeting their timing constraints. Further, we show that the gain in this timing-centric performance metric does not reduce the fault tolerance of the executing tasks below a predefined minimum level. Overall, the evaluation indicates that the proposed ideas result in a system that dynamically provides QoS guarantees along the fault-tolerance dimension.
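
    As a rough illustration of the adaptation step described above (a minimal sketch; the task model, the utilization-based feasibility test, and all names are our assumptions, not the paper's mechanism), an admission routine can pick the highest replication level the schedule can still afford, never dropping below the floor imposed by the task's criticality:

```python
# Hedged sketch: choose the largest replication level that a simple
# utilization-based schedulability check still admits, subject to a
# criticality-imposed minimum. Not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Task:
    exec_time: float   # execution time of one replica
    period: float      # period (= relative deadline)
    min_replicas: int  # floor required by the task's criticality
    max_replicas: int  # ceiling worth paying for

def admit(task: Task, current_util: float, util_bound: float = 1.0) -> int:
    """Return the chosen replication level, or 0 if even the minimum
    level would overload the processor (task must be rejected)."""
    for k in range(task.max_replicas, task.min_replicas - 1, -1):
        if current_util + k * task.exec_time / task.period <= util_bound:
            return k   # the most redundancy the schedule can afford
    return 0

util = 0.55  # utilization of already-admitted tasks
t = Task(exec_time=2.0, period=10.0, min_replicas=2, max_replicas=3)
k = admit(t, util)
print(f"admitted with {k} replicas" if k else "rejected")
```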

    Early Dependability Analysis of FPGA-Based Space Applications Using Formal Verification

    SRAM-based FPGAs are increasingly attractive in the aerospace industry for their field programmability and low cost. Unfortunately, they suffer from cosmic-radiation-induced Single Event Effects (SEEs). In safety-critical applications, the dependability of the design is a prime concern, since failures may have catastrophic consequences. An early analysis of the dependability of such safety-critical applications therefore enables designers to develop systems that meet high dependability requirements, such as those of the DO-254 standard. In this thesis, we propose a high-level dependability and performability analysis methodology based on probabilistic model checking. Compared to paper-and-pencil and discrete-event simulation approaches, our methodology is more accurate because it uses an automated formal verification technique. Moreover, compared to fault injection or beam testing, analysis at early design stages can guide designers to build more reliable designs, reducing overall cost and effort. The proposed methodology supports three types of analysis: evaluation of available design options, optimization of scrub intervals while satisfying design assurance level requirements, and optimal partitioning of Triple Modular Redundant (TMR) systems. Such analysis can also guide designers to adopt appropriate mitigation techniques, such as rescheduling, TMR, or TMR with less frequent scrubbing, and can even help to decide the number of TMR partitions for a given scrub interval. Starting from a high-level description of a system, and depending on the preferred analysis, a Markov model or Markov reward model is constructed from the extracted Control Data Flow Graph (CDFG) and the failure/mitigation parameters for the targeted FPGA. Such modeling, together with the exhaustive analysis carried out by a probabilistic model checker, can capture all the failures and repairs possible (according to some general model) in the system within the radiation environment. To illustrate the applicability of the proposed approach, we present quantitative analyses of DSP benchmark circuits.
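
    The following is a much-simplified sketch of the style of Markov model such a methodology constructs (the rates, state space, and failure condition are illustrative assumptions; the thesis uses probabilistic model checking, e.g. a PRISM-style property such as P=? [ F<=T "failed" ], rather than the direct transient solution shown here):

```python
# Toy scrubbed-TMR model: states count how many TMR domains hold an
# SEU-corrupted configuration; scrubbing repairs at rate mu, and we
# pessimistically treat two corrupted domains as defeating the voter.
# All rates are assumptions for illustration.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-5, 1.0 / 3600.0   # SEU rate per domain (1/s), scrub rate (1/s)
# States: 0 or 1 corrupted domains; 2 = voter defeated (absorbing).
Q = np.array([
    [-3 * lam,         3 * lam,     0.0],
    [      mu, -(mu + 2 * lam), 2 * lam],
    [     0.0,             0.0,     0.0],
])

T = 30 * 24 * 3600.0           # 30-day mission, in seconds
p = np.array([1.0, 0.0, 0.0]) @ expm(Q * T)  # transient distribution at T
print(f"P(TMR defeated within 30 days) ~ {p[2]:.3e}")
```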

    A formalism for describing and simulating systems with interacting components.

    This thesis addresses the problem of descriptive complexity presented by systems involving a large number of interacting components. It investigates the evaluation measure of performability and its application to such systems. A new description and simulation language, ICE (Interacting ComponEnts), and its application to performability modelling are presented. ICE is based upon an earlier description language that was first proposed for defining reliability problems. It is declarative in style and has a limited number of keywords. The ethos in the development of the language has been to provide an intuitive formalism with a powerful descriptive space. The full syntax of the language is presented, with discussion of its philosophy. The implementation of a discrete-event simulator with an ICE interface is described, with examples used to illustrate the functionality of the code and the semantics of the language. Random numbers provide the required stochastic behaviour within the simulator. The behaviour of an industry-standard generator within the simulator, and different methods of number allocation, are examined. A new generator is proposed that develops a fast hardware shift-register generator and is demonstrated to possess good statistical properties and operational speed. To provide a rigorous description of the language and clarify its semantics, a computational model is developed using the formalism of extended coloured Petri nets. This model also gives an indication of the language's descriptive power relative to that of a recognised and well-developed technique. Some recognised temporal and structural problems of system event modelling are identified, and ICE solutions are given. The growing research area of ATM communication networks is introduced and a sophisticated top-down model of an ATM switch is presented. This model is simulated and interesting results are given. A generic ICE framework for performability modelling is developed and demonstrated; this is offered as a positive contribution to the general field of performability research.
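
    The thesis's proposed generator is not reproduced here; as a minimal sketch of the shift-register family it builds upon, the following implements Marsaglia's classic 64-bit xorshift generator, whose core is exactly the XOR-and-shift update that a hardware shift register performs:

```python
# Marsaglia's xorshift64 (shifts 13, 7, 17): a fast generator from the
# linear-feedback shift-register family. Shown for illustration only; it
# is not the generator proposed in the thesis.
class XorShift64:
    def __init__(self, seed: int = 0x9E3779B97F4A7C15):
        self.state = (seed & 0xFFFFFFFFFFFFFFFF) or 1  # state must be nonzero

    def next_u64(self) -> int:
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        self.state = x
        return x

    def random(self) -> float:
        """Uniform float in [0, 1), e.g. to drive stochastic event delays."""
        return self.next_u64() / 2**64

rng = XorShift64(seed=42)
print([round(rng.random(), 4) for _ in range(5)])
```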

    Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions, while still obtaining solutions in a reasonable amount of time; and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. The corresponding Markov model thus becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to store the matrix explicitly at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need to generate rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as for other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
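
    A minimal sketch of the on-the-fly idea follows (the modified adaptive variant and the row-avoidance technique from the paper are not reproduced; the function names and the toy matrix are assumptions): Gauss-Seidel for Ax = b where rows of A are regenerated on demand by a callback, standing in for derivation from the high-level model, instead of being stored explicitly.

```python
# Gauss-Seidel with rows produced on demand: the matrix is never stored;
# each row is generated, consumed, and discarded within one sweep.
from typing import Callable, Iterable, Tuple

Row = Iterable[Tuple[int, float]]  # sparse row: (column index, value) pairs

def gauss_seidel_otf(row_of: Callable[[int], Row], b, x, sweeps=100, tol=1e-10):
    n = len(b)
    for _ in range(sweeps):
        delta = 0.0
        for i in range(n):
            diag, acc = 0.0, b[i]
            for j, a_ij in row_of(i):      # row generated here, then discarded
                if j == i:
                    diag = a_ij
                else:
                    acc -= a_ij * x[j]
            new = acc / diag
            delta = max(delta, abs(new - x[i]))
            x[i] = new                     # in-place update: Gauss-Seidel
        if delta < tol:
            break
    return x

# Toy 3x3 system; a real solver would derive row_of from the SAN itself.
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
row_of = lambda i: [(j, v) for j, v in enumerate(A[i]) if v != 0.0]
print(gauss_seidel_otf(row_of, b=[1.0, 2.0, 3.0], x=[0.0, 0.0, 0.0]))
```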

    Autonomous spacecraft maintenance study group

    A plan to incorporate autonomous spacecraft maintenance (ASM) capabilities into Air Force spacecraft by 1989 is outlined. It includes the successful operation of the spacecraft without ground-operator intervention for extended periods of time. Mechanisms, along with a fault-tolerant data processing system (including a nonvolatile backup memory) and an autonomous navigation capability, are needed to replace the routine servicing presently performed by the ground system. The state-of-the-art fault-handling capabilities of various spacecraft and computers are described, and a set of conceptual design requirements needed to achieve ASM is established. Implementation plans for the near-term technology development needed for an ASM proof-of-concept demonstration by 1985, and a research agenda addressing long-range academic research for an advanced ASM system for the 1990s, are also established.

    Survivability modeling for cyber-physical systems subject to data corruption

    Cyber-physical critical infrastructures are created when traditional physical infrastructure is supplemented with advanced monitoring, control, computing, and communication capability. More intelligent decision support and improved efficacy, dependability, and security are expected. Quantitative models and evaluation methods are required to determine the extent to which a cyber-physical infrastructure improves on its physical predecessors. It is essential that these models reflect both the cyber and the physical aspects of operation and failure. In this dissertation, we propose quantitative models for dependability attributes, in particular survivability, of cyber-physical systems. Any malfunction or security breach, whether cyber or physical, that causes the system's operation to depart from its specification will affect these dependability attributes. Our focus is on data corruption, which compromises decision support, the fundamental role played by the cyber infrastructure. The first research contribution of this work is a Petri net model for information exchange in cyber-physical systems, which i) facilitates evaluation of the extent of data corruption at a given time, and ii) illuminates the service degradation caused by the propagation of corrupt data through the cyber infrastructure. In the second research contribution, we propose metrics and an evaluation method for survivability, which captures the extent of functionality retained by a system after a disruptive event. We illustrate the application of our methods through case studies on smart grids, intelligent water distribution networks, and intelligent transportation systems. Data, cyber infrastructure, and intelligent control are part and parcel of nearly every critical infrastructure that underpins daily life in developed countries. Our work provides a means of quantifying and predicting the service degradation caused when cyber infrastructure fails to serve its intended purpose; it can also serve as the foundation for efforts to fortify critical systems and mitigate inevitable failures.
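
    One simple way to turn "extent of functionality retained after a disruptive event" into a number (a hedged sketch; the service curve and the normalization are our assumptions, not the dissertation's metric definition) is to track delivered service as a fraction of demand and average it over a recovery window:

```python
# Illustrative survivability score: the time-averaged fraction of demand
# served during a 10-hour window after a disruption. The service curve
# (drop to 40%, linear recovery, full restoration) is invented for the sketch.
import numpy as np

t = np.linspace(0.0, 10.0, 101)   # hours since the disruptive event
served = np.piecewise(
    t,
    [t < 2, (t >= 2) & (t < 6), t >= 6],
    [0.4, lambda s: 0.4 + 0.15 * (s - 2), 1.0],
)

dt = t[1] - t[0]
window = t[-1] - t[0]
# trapezoidal average of the served fraction over the window
survivability = dt * (served.sum() - 0.5 * (served[0] + served[-1])) / window
print(f"survivability over the 10 h window: {survivability:.2f}")
```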

    Modelling and Design of Resilient Networks under Challenges

    Communication networks, in particular the Internet, face a variety of challenges that can disrupt our daily lives and, in the worst cases, result in the loss of human life and significant financial cost. We define challenges as external events that trigger faults which eventually result in service failures. Understanding these challenges is essential for improving current networks and for designing Future Internet architectures. This dissertation presents a taxonomy of challenges that can help evaluate design choices for the current and Future Internet. Graph models for analysing critical infrastructures are examined, and a multilevel graph model is developed to study interdependencies between different networks. Furthermore, graph-theoretic heuristic optimisation algorithms are developed. These heuristics add links to increase the resilience of networks in the least costly manner and are computationally less expensive than an exhaustive search. The performance of networks under random failures, targeted attacks, and correlated area-based challenges is evaluated with the challenge simulation module we developed. The GpENI Future Internet testbed is used to conduct experiments evaluating the performance of the heuristic algorithms.
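
    A small sketch in the spirit of such heuristics follows (greedy link addition scored by algebraic connectivity gained per unit cost; the scoring function, the uniform cost model, and the function names are assumptions, not the dissertation's algorithms). The greedy loop evaluates each candidate link once per round instead of exhaustively searching over all k-subsets of links:

```python
# Greedy augmentation: repeatedly add the non-existing edge that yields the
# best resilience score per unit cost. Requires networkx and scipy.
import itertools
import networkx as nx

def greedy_augment(G: nx.Graph, k: int, cost=lambda u, v: 1.0) -> list:
    added = []
    for _ in range(k):
        candidates = [(u, v) for u, v in itertools.combinations(G.nodes, 2)
                      if not G.has_edge(u, v)]
        if not candidates:
            break
        def gain(e):
            G.add_edge(*e)                        # tentatively add the link
            score = nx.algebraic_connectivity(G)  # resilience proxy
            G.remove_edge(*e)
            return score / cost(*e)
        best = max(candidates, key=gain)
        G.add_edge(*best)
        added.append(best)
    return added

G = nx.path_graph(6)             # a deliberately fragile chain topology
print(greedy_augment(G, k=2))    # links chosen to stiffen the weakest cuts
```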