An experimental evaluation of software redundancy as a strategy for improving reliability
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although it is generally accepted that the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides a large enough increase in reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, the development and certification processes, and the operational evaluation are given, together with an analysis of the twenty versions.
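A minimal numerical sketch of the kind of comparison the abstract describes, using invented (and deliberately exaggerated) failure data rather than the experiment's twenty versions: it contrasts the failure probability of a 2-out-of-3 majority-voted system computed under an independence assumption with an estimate taken directly from simulated coincident failures.

```python
import itertools
import random

random.seed(0)

N_DEMANDS = 100_000
N_VERSIONS = 3

# Hypothetical per-demand "difficulty": some demands are hard for every version,
# which is the mechanism that produces dependent (coincident) failures.
# Failure probabilities are exaggerated purely for illustration.
difficulty = [random.random() ** 4 for _ in range(N_DEMANDS)]

# Simulate which versions fail on which demands.
fails = [[random.random() < d for _ in range(N_VERSIONS)] for d in difficulty]

# Individual version failure probabilities estimated from the same data.
p = [sum(f[v] for f in fails) / N_DEMANDS for v in range(N_VERSIONS)]

def system_pfd_independent(p):
    """PFD of 2-out-of-3 voting if versions are assumed to fail independently."""
    total = 0.0
    for outcome in itertools.product([False, True], repeat=len(p)):
        if sum(outcome) >= 2:  # majority is wrong only if at least two versions fail
            prob = 1.0
            for failed, pv in zip(outcome, p):
                prob *= pv if failed else 1 - pv
            total += prob
    return total

# Direct estimate: fraction of demands on which two or more versions failed together.
empirical_pfd = sum(1 for f in fails if sum(f) >= 2) / N_DEMANDS

print(f"estimate assuming independence:    {system_pfd_independent(p):.2e}")
print(f"estimate from coincident failures: {empirical_pfd:.2e}")
```

When failures cluster on hard demands, the coincident-failure estimate exceeds the independence-based one, which is the effect the experiment was designed to measure.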
Evaluation of software dependability
It has been said that the term software engineering is an aspiration not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
Design diversity: an update from research on reliability modelling
Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions in applying diversity and in assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can aid conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.
N-version Design vs. One Good Version
Evidence indicates that n-version development techniques are more reliable than producing one "good" version, and cost-effective in the long run. The author concludes that diverse, independent channels used in parallel are significantly superior to even the current state of the art, especially in situations where the cost of failure is high.
An Empirical Study of the Effectiveness of 'Forcing Diversity' Based on a Large Population of Diverse Programs
Use of diverse software components is a viable defence against common-mode failures in redundant software-based systems. Various forms of "Diversity-Seeking Decisions" ("DSDs") can be applied to the process of developing, or procuring, redundant components, to improve the chances of the resulting components not failing on the same demands. An open question is how effective these decisions, and their combinations, are for achieving large enough reliability gains. Using a large population of software programs, we experimentally studied the effectiveness of specific "DSDs" (and their combinations) mandating differences between redundant components. Some of these combinations produced much better improvements in system probability of failure per demand (PFD) than "uncontrolled" diversity did. Yet, our findings suggest that the gains from such "DSDs" vary significantly between DSDs and between the application problems studied. The relationship between DSDs and system PFD is complex and does not allow simple universal rules (e.g. "the more diversity the better") to apply.
The impact of diversity upon common mode failures
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common-mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect that the overall system reliability will be lower than we would expect if the components could be assumed to fail independently. We generalise a model of hardware redundancy due to Hughes [Hughes 1987], and show that with forced diversity this unwelcome result no longer applies: in fact, it becomes theoretically possible to do better than would be the case under independence of failures.
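A minimal numeric sketch of the argument, with invented difficulty values rather than anything from the paper: when both channels share one "difficulty function" over demands, the 1-out-of-2 system PFD exceeds the product of the channel PFDs (the unwelcome result), whereas if forced diversity makes the two channels find different demands hard, the system PFD can fall below that product.

```python
# Demand profile: four demand classes, equally likely (invented for illustration).
demands = ["x1", "x2", "x3", "x4"]
prob_x = {"x1": 0.25, "x2": 0.25, "x3": 0.25, "x4": 0.25}

# theta(x): probability that a randomly developed version fails on demand x.
# Same methodology for both channels: the difficulties coincide.
theta_same = {"x1": 0.001, "x2": 0.001, "x3": 0.001, "x4": 0.04}

# Forced diversity: the two methodologies find different demands hard.
theta_A = {"x1": 0.001, "x2": 0.001, "x3": 0.001, "x4": 0.04}
theta_B = {"x1": 0.04, "x2": 0.001, "x3": 0.001, "x4": 0.001}

def mean_pfd(theta):
    """Single-channel PFD: expected difficulty over the demand profile."""
    return sum(prob_x[x] * theta[x] for x in demands)

def pair_pfd(tA, tB):
    """1-out-of-2 system PFD: both independently developed channels fail on x."""
    return sum(prob_x[x] * tA[x] * tB[x] for x in demands)

print("single-channel PFD:         ", mean_pfd(theta_same))
print("independence assumption:    ", mean_pfd(theta_same) ** 2)
print("same methodology (A = B):   ", pair_pfd(theta_same, theta_same))  # worse than independence
print("forced diversity (A != B):  ", pair_pfd(theta_A, theta_B))        # can beat independence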
Assessing Postural Stability Via the Correlation Patterns of Vertical Ground Reaction Force Components
Background: Many methods have been proposed to assess the stability of human postural balance by using a force plate. While most of these approaches characterize postural stability by extracting features from the trajectory of the center of pressure (COP), this work develops stability measures derived from components of the ground reaction force (GRF). Methods: In comparison with previous GRF-based approaches that extract stability features from the GRF resultant force, this study proposes three feature sets derived from the correlation patterns among the vertical GRF (VGRF) components. The first and second feature sets quantitatively assess the strength and changing speed of the correlation patterns, respectively. The third feature set is used to quantify the stabilizing effect of the GRF coordination patterns on the COP. Results: In addition to experimentally demonstrating the reliability of the proposed features, the efficacy of the proposed features has also been tested by using them to classify two age groups (18–24 and 65–73 years) in quiet standing. The experimental results show that the proposed features are considerably more sensitive to aging than one of the most effective conventional COP features and two recently proposed COM features. Conclusions: By extracting information from the correlation patterns of the VGRF components, this study proposes three sets of features to assess human postural stability during quiet standing. As demonstrated by the experimental results, the proposed features are not only robust to inter-trial variability but also more accurate than the tested COP and COM features in classifying the older and younger age groups. An additional advantage of the proposed approach is that it reduces the force sensing requirement from 3D to 1D, substantially reducing the cost of the force plate measurement system.
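As a rough illustration of the kind of feature extraction the abstract describes (not the paper's exact definitions), the sketch below assumes several simultaneously sampled vertical-force channels from a force plate and computes, in sliding windows, the strength of their pairwise correlations and how quickly those correlations change. The synthetic signal, channel layout, and window lengths are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)  # 30 s quiet-standing trial (assumed)

# Synthetic stand-in for vertical GRF channels: rows = samples, columns = channels.
sway = 0.5 * np.sin(2 * np.pi * 0.3 * t)
vgrf = np.column_stack([
    400 + sway + rng.normal(0, 0.2, t.size),
    380 - sway + rng.normal(0, 0.2, t.size),
    390 + 0.3 * sway + rng.normal(0, 0.2, t.size),
])

def windowed_correlations(x, win=2 * fs, step=fs):
    """Pearson correlations of every channel pair, computed in sliding windows."""
    feats = []
    for start in range(0, x.shape[0] - win + 1, step):
        c = np.corrcoef(x[start:start + win].T)
        iu = np.triu_indices_from(c, k=1)       # upper triangle: one value per pair
        feats.append(c[iu])
    return np.asarray(feats)                    # shape: (n_windows, n_pairs)

corr = windowed_correlations(vgrf)
strength = np.abs(corr).mean(axis=0)                  # how strongly channels co-vary
speed = np.abs(np.diff(corr, axis=0)).mean(axis=0)    # how fast the pattern changes

print("correlation strength per channel pair:", strength)
print("correlation change speed per channel pair:", speed)
```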
Flexible provisioning of Web service workflows
Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
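The abstract does not spell out the heuristic, so the following is only a toy sketch of flexible provisioning under stated assumptions: each workflow task has a predicted per-invocation failure probability, and redundant providers are invoked in parallel until the predicted chance that at least one succeeds reaches a target. The function names, workflow tasks, and probabilities are hypothetical, not the authors' algorithm.

```python
import math

def providers_needed(predicted_fail_prob, target_success=0.99):
    """Smallest number of independent parallel invocations whose combined
    success probability reaches the target."""
    if predicted_fail_prob <= 0:
        return 1
    n = math.log(1 - target_success) / math.log(predicted_fail_prob)
    return max(1, math.ceil(n))

# Hypothetical workflow: task name -> predicted per-invocation failure probability.
workflow = {"fetch_quote": 0.30, "convert_currency": 0.05, "book_order": 0.15}

# Provision more redundancy for the less reliable tasks.
plan = {task: providers_needed(p) for task, p in workflow.items()}

# Predicted probability that every task has at least one successful invocation.
success = 1.0
for task, p in workflow.items():
    success *= 1 - p ** plan[task]

print("provisioning plan:", plan)
print(f"predicted workflow success probability: {success:.3f}")
```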