
    An experimental evaluation of software redundancy as a strategy for improving reliability

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although it is generally accepted that the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given, together with an analysis of the twenty versions.
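
    To make the comparison concrete, the sketch below (not the paper's estimator) simulates per-demand failure outcomes for three hypothetical versions whose failures are correlated through a shared demand difficulty, and contrasts the observed failure probability of a 2-out-of-3 majority-voted system with the prediction obtained by assuming independent failures. All names and numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical per-demand outcomes for three independently developed
        # versions: rows are demands, columns are versions, True = failure.
        # A shared per-demand "difficulty" induces correlated failures.
        n_demands, n_versions = 100_000, 3
        difficulty = rng.beta(0.5, 20.0, size=(n_demands, 1))
        failures = rng.random((n_demands, n_versions)) < difficulty

        p = failures.mean(axis=0)          # per-version failure probabilities

        # A 2-out-of-3 majority-voted system fails when at least two versions fail.
        observed = (failures.sum(axis=1) >= 2).mean()

        # Prediction for the same system under the independence assumption.
        independent = p[0]*p[1] + p[0]*p[2] + p[1]*p[2] - 2*p[0]*p[1]*p[2]

        print(f"observed system failure probability     : {observed:.5f}")
        print(f"prediction assuming independent failures: {independent:.5f}")

    Because hard demands tend to defeat several versions at once, the observed system failure probability exceeds the independence-based prediction, which is the kind of comparison the study carries out on real versions.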

    Design diversity: an update from research on reliability modelling

    Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions in applying diversity and in assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can aid conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.

    N-version Design vs. One Good Version

    Evidence indicates that n-version development techniques are more reliable than producing one "good" version, and cost-effective in the long run. The author concludes that diverse, independent channels used in parallel are significantly superior to even the current state of the art, especially in situations where the cost of failure is high.

    The impact of diversity upon common mode failures

    Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect the overall system reliability to be lower than it would be if the components could be assumed to fail independently. We generalise a model of hardware redundancy due to Hughes [Hughes 1987], and show that with forced diversity this unwelcome result no longer applies: in fact it becomes theoretically possible to do better than would be the case under independence of failures.
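
    A minimal numerical sketch of this argument, using a hypothetical demand profile and illustrative "difficulty functions" theta_A and theta_B (the probability that a version from development process A or B fails on each demand): two versions from the same process fail together with probability E[theta_A^2], which is never below (E[theta_A])^2, whereas forced diversity gives E[theta_A * theta_B], which can fall below the independence value when the two processes find different demands hard.

        import numpy as np

        # Illustrative difficulty functions over six demands; values are made up.
        prob_demand = np.full(6, 1/6)                                  # demand profile
        theta_A = np.array([0.30, 0.10, 0.01, 0.01, 0.01, 0.01])
        theta_B = np.array([0.01, 0.01, 0.01, 0.01, 0.10, 0.30])      # hard demands differ

        pA = np.sum(prob_demand * theta_A)         # marginal failure prob. of an A-version
        pB = np.sum(prob_demand * theta_B)

        # A 1-out-of-2 system fails only if both versions fail on the same demand.
        p_same  = np.sum(prob_demand * theta_A**2)         # two versions from process A
        p_mixed = np.sum(prob_demand * theta_A * theta_B)  # forced diversity: A with B

        print(f"two A-versions : {p_same:.5f}  (independence would give {pA*pA:.5f})")
        print(f"A with B       : {p_mixed:.5f}  (independence would give {pA*pB:.5f})")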

    Assessing Postural Stability Via the Correlation Patterns of Vertical Ground Reaction Force Components

    Background: Many methods have been proposed to assess the stability of human postural balance by using a force plate. While most of these approaches characterize postural stability by extracting features from the trajectory of the center of pressure (COP), this work develops stability measures derived from components of the ground reaction force (GRF).
    Methods: In comparison with previous GRF-based approaches that extract stability features from the GRF resultant force, this study proposes three feature sets derived from the correlation patterns among the vertical GRF (VGRF) components. The first and second feature sets quantitatively assess the strength and changing speed of the correlation patterns, respectively. The third feature set is used to quantify the stabilizing effect of the GRF coordination patterns on the COP.
    Results: In addition to experimentally demonstrating the reliability of the proposed features, the efficacy of the proposed features has also been tested by using them to classify two age groups (18–24 and 65–73 years) in quiet standing. The experimental results show that the proposed features are considerably more sensitive to aging than one of the most effective conventional COP features and two recently proposed center-of-mass (COM) features.
    Conclusions: By extracting information from the correlation patterns of the VGRF components, this study proposes three sets of features to assess human postural stability during quiet standing. As demonstrated by the experimental results, the proposed features are not only robust to inter-trial variability but also more accurate than the tested COP and COM features in classifying the older and younger age groups. An additional advantage of the proposed approach is that it reduces the force sensing requirement from 3D to 1D, substantially reducing the cost of the force plate measurement system.
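
    A rough sketch of how such correlation-pattern features might be computed, assuming the VGRF components are the vertical forces from a force plate's four load cells sampled at a known rate; the sliding-window scheme and the two summary statistics below are illustrative assumptions, not the paper's exact feature definitions.

        import numpy as np

        def correlation_features(vgrf, fs, win_s=2.0, step_s=0.5):
            """Illustrative features from correlation patterns of VGRF components.

            vgrf: array of shape (n_samples, n_components), e.g. the vertical
            force at each of a force plate's four load cells; fs: sampling rate
            in Hz.  Returns (correlation strength, correlation changing speed).
            """
            win, step = int(win_s * fs), int(step_s * fs)
            n, k = vgrf.shape
            iu = np.triu_indices(k, 1)                     # unique component pairs
            series = []
            for start in range(0, n - win + 1, step):
                seg = vgrf[start:start + win]
                series.append(np.corrcoef(seg, rowvar=False)[iu])   # pairwise Pearson r
            series = np.asarray(series)                    # (n_windows, n_pairs)
            strength = np.mean(np.abs(series))             # how strongly components co-vary
            speed = np.mean(np.abs(np.diff(series, axis=0))) / step_s   # how fast patterns change
            return strength, speed

        # Synthetic data standing in for a 100 Hz, 30 s quiet-standing trial.
        rng = np.random.default_rng(1)
        fake_vgrf = rng.standard_normal((3000, 4)).cumsum(axis=0)
        print(correlation_features(fake_vgrf, fs=100))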

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable metadata, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed that service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
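
    As a simplified illustration of provisioning by predicted performance (not the paper's heuristic, which also weighs cost, deadlines and re-provisioning of partially failed workflows), the sketch below chooses how many redundant providers to invoke in parallel for each task from its predicted failure probability, assuming provider failures are independent; all task names and probabilities are hypothetical.

        import math

        def redundancy_needed(p_fail, target_success=0.99, max_parallel=5):
            """Number of providers to invoke in parallel so that the chance of
            at least one succeeding reaches target_success, assuming independent
            provider failures with predicted failure probability p_fail."""
            if p_fail <= 0.0:
                return 1
            if p_fail >= 1.0:
                return max_parallel
            # Smallest n with 1 - p_fail**n >= target_success.
            n = math.ceil(math.log(1.0 - target_success) / math.log(p_fail))
            return max(1, min(n, max_parallel))

        # Hypothetical per-task predicted failure probabilities for a small workflow.
        workflow = {"geocode": 0.05, "route": 0.30, "render": 0.60}
        plan = {task: redundancy_needed(p) for task, p in workflow.items()}
        print(plan)   # less reliable tasks get more redundant providers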