Empirical assessment of architecture-based reliability of open-source software
A number of analytical models have been proposed for quantifying software reliability. Some of these models estimate the failure behavior of the software using black-box testing, which treats the software as a monolithic whole. With the evolution of component-based software development, the need for white-box approaches has grown. A few architecture-based reliability models that take a white-box approach were proposed earlier; they have been validated on several small case studies and shown to be accurate. However, there is a dearth of large-scale empirical data for reliability analysis. This thesis enriches the empirical knowledge in software reliability engineering. We use a real, large-scale case study, the GCC compiler, for our experiments. To the best of our knowledge, this is the most comprehensive case study ever used for software reliability analysis. The software is instrumented with a profiler to extract the execution profiles of the test cases. The execution profiles form the basis for building the operational profile of the system, which describes the software usage. The test case failures are traced back to faults in the source code to analyze the failure behavior of the components. These results are used to estimate the reliability of the software, as well as the uncertainty in the reliability analysis, using entropy.
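In the standard formulation, the entropy measure referred to here is the Shannon entropy of the operational profile: the more evenly usage is spread across components, the greater the uncertainty in the reliability estimate. A minimal sketch of that calculation, assuming a hypothetical profile (the component names and probabilities are illustrative, not taken from the thesis):

```python
import math

def operational_profile_entropy(profile):
    """Shannon entropy (in bits) of an operational profile, where
    `profile` maps each component to the probability that an execution
    exercises it; the probabilities are assumed to sum to 1."""
    return -sum(p * math.log2(p) for p in profile.values() if p > 0)

# Hypothetical profile, e.g. as built from the execution profiles of a
# compiler's test suite (illustrative values only).
profile = {"parser": 0.50, "optimizer": 0.30, "codegen": 0.15, "driver": 0.05}
print(f"profile uncertainty: {operational_profile_entropy(profile):.3f} bits")
```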
Reliability demonstration for safety-critical systems
This paper suggests a new model for reliability demonstration of safety-critical systems, based on the TRW Software Reliability Theory. The paper describes the model, the test equipment required, and test strategies based on the various constraints arising during software development. The paper also compares a new testing method, Single Risk Sequential Testing (SRST), with the standard Probability Ratio Sequential Testing (PRST) method, and concludes that:
• SRST provides higher chances of success than PRST
• SRST takes less time to complete than PRST
• SRST satisfies the consumer risk criterion, whereas PRST provides a much smaller consumer risk than the requirement
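The SRST procedure itself is specific to the paper, but the PRST baseline follows Wald's sequential probability ratio test. A minimal sketch of that baseline, assuming a Bernoulli per-demand failure model (all parameter values illustrative):

```python
import math

def prst_decision(n, f, p0, p1, alpha=0.05, beta=0.05):
    """Wald-style probability ratio sequential test after n test runs
    with f observed failures.

    H0: failure probability <= p0 (acceptable, accept the software)
    H1: failure probability >= p1 (unacceptable, reject), with p1 > p0.
    alpha and beta are the producer and consumer risks.
    Returns 'accept', 'reject', or 'continue'.
    """
    log_upper = math.log((1 - beta) / alpha)   # crossing it -> reject
    log_lower = math.log(beta / (1 - alpha))   # crossing it -> accept
    llr = (f * math.log(p1 / p0)
           + (n - f) * math.log((1 - p1) / (1 - p0)))
    if llr >= log_upper:
        return "reject"
    if llr <= log_lower:
        return "accept"
    return "continue"

print(prst_decision(n=3000, f=0, p0=1e-3, p1=5e-3))  # -> 'accept'
```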
Robust Dynamic Selection of Tested Modules in Software Testing for Maximizing Delivered Reliability
Software testing aims to improve the reliability that the software delivers to its users. Delivered reliability is the reliability of the software in use after it is delivered. Since software usually consists of many modules, the delivered reliability depends on the operational profile, which specifies how users will exercise those modules, as well as on the number of defects remaining in each module. A good testing policy should therefore take the operational profile into account and dynamically select the modules to test according to the current state of the software during the testing process. This paper discusses how to dynamically select tested modules in order to maximize delivered reliability, formulating the selection problem as a dynamic programming problem. Because the testing process is performed only once, risk must be considered during testing; it is described here by the tester's utility function. Moreover, since the tester usually has no accurate estimate of the operational profile, we employ robust optimization techniques to analyze the selection problem in the worst case over a given uncertainty set of operational profiles. Numerical examples show the necessity of maximizing delivered reliability directly and of using robust optimization when the tester has no clear idea of the operational profile. It is also shown that the risk-averse behavior of the tester has a major influence on the delivered reliability.
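As background for the formulation, here is a minimal risk-neutral sketch of such a dynamic program under strong simplifying assumptions: per-module usage probabilities are known, every remaining defect fails a user run with fixed probability Q, and each test session on a module removes one defect with probability P (all names and values hypothetical; the paper additionally treats the tester's utility function and robust optimization over profile uncertainty, which this sketch omits):

```python
from functools import lru_cache

# Hypothetical instance: operational profile (per-module usage
# probabilities), per-defect failure probability Q, and probability P
# that one test session on a module finds and removes one of its defects.
USAGE = (0.5, 0.3, 0.2)
Q, P = 0.05, 0.6

def delivered_reliability(defects):
    """Probability that a user run succeeds: module i is exercised with
    probability USAGE[i] and succeeds iff none of its remaining defects
    fires, i.e. with probability (1 - Q) ** defects[i]."""
    return sum(u * (1 - Q) ** d for u, d in zip(USAGE, defects))

@lru_cache(maxsize=None)
def best(budget, defects):
    """Maximal expected delivered reliability with `budget` test
    sessions left, dynamically choosing which module to test next."""
    if budget == 0:
        return delivered_reliability(defects)
    value = 0.0
    for i, d in enumerate(defects):
        fewer = list(defects)
        if d > 0:
            fewer[i] -= 1                      # a defect was found and fixed
        value = max(value,
                    P * best(budget - 1, tuple(fewer))
                    + (1 - P) * best(budget - 1, defects))
    return value

print(best(5, (3, 2, 1)))  # expected delivered reliability after 5 sessions
```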
Using a Log-normal Failure Rate Distribution for Worst Case Bound Reliability Prediction
Prior research has suggested that the failure rates of faults follow a log-normal distribution. We propose a specific model in which distributions close to a log-normal arise naturally from the program structure. The log-normal distribution presents a problem when used in reliability growth models, as it is not mathematically tractable. However, we demonstrate that a worst-case bound can be estimated that is less pessimistic than our earlier worst-case bound theory.
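For context: in the classic distribution-free argument, a single fault's contribution to the post-test failure rate, λe^(−λt), is maximized at λ = 1/t, so N faults give a worst-case rate of N/(e·t). A minimal Monte Carlo sketch comparing that bound with the expectation under a log-normal rate distribution (the parameters are illustrative, not taken from the paper):

```python
import math
import random

def worst_case_rate(n_faults, t):
    """Distribution-free worst case: each fault's residual rate
    lam * exp(-lam * t) peaks at lam = 1/t, so the total post-test
    failure rate is at most n_faults / (e * t)."""
    return n_faults / (math.e * t)

def lognormal_rate(n_faults, t, mu, sigma, samples=100_000, seed=1):
    """Monte Carlo estimate of the expected residual failure rate when
    individual fault rates are drawn from LogNormal(mu, sigma)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        lam = rng.lognormvariate(mu, sigma)
        total += lam * math.exp(-lam * t)   # rate of faults surviving test
    return n_faults * total / samples

t = 1_000.0                    # testing time, illustrative units
print(worst_case_rate(10, t))  # pessimistic distribution-free bound
print(lognormal_rate(10, t, mu=-8.0, sigma=2.0))  # <= the bound, pointwise
```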
On the use of testability measures for dependability assessment
Program “testability” is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to “ultra-high reliability” requirements), measures of testability can, in theory, be used to draw inferences about program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that it is more useful than deriving a classical “confidence level”. We also show that high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
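In the simplest two-hypothesis case, the inference reduces to Bayes' rule: with prior probability p that the program is correct and testability q, the posterior after n failure-free tests is p / (p + (1 − p)(1 − q)^n). A minimal sketch of that mechanics (the paper's treatment is considerably more general; all numbers illustrative):

```python
def prob_correct(prior_correct, testability, n_passed):
    """Posterior probability that the program is fault-free after
    n_passed failure-free tests, where `testability` is the probability
    that a faulty program fails any single test (two-hypothesis Bayes'
    rule; independence between test executions is assumed)."""
    p, q = prior_correct, testability
    return p / (p + (1 - p) * (1 - q) ** n_passed)

# Illustrative values: 10% prior belief in correctness, testability
# 0.001, ten thousand failure-free test executions.
print(prob_correct(0.10, 1e-3, 10_000))  # ~0.9996
```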
Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results
Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be described more simply in terms of assumptions on the probability distribution of the program's failure intensity. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one for software whose reliability requirement is that it must be completely fault-free, and another for requirements stated as an upper bound on the acceptable failure probability.
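For the second scenario, a standard closed-form acceptance check exists when the prior on the per-demand failure probability θ is Beta(1, b): after n failure-free demands the posterior is Beta(1, b + n), whose CDF is 1 − (1 − θ)^(b+n). A minimal sketch under that assumption (the paper's exact acceptance conditions may differ; values illustrative):

```python
def accept(bound, confidence, n_passed, prior_b=1.0):
    """Accept iff, under a Beta(1, prior_b) prior on the per-demand
    failure probability theta and n_passed failure-free demands, the
    posterior probability P(theta <= bound) meets `confidence`.
    The posterior is Beta(1, prior_b + n_passed), with closed-form CDF
    P(theta <= x) = 1 - (1 - x) ** (prior_b + n_passed)."""
    posterior_prob = 1 - (1 - bound) ** (prior_b + n_passed)
    return posterior_prob >= confidence

# Illustrative: demonstrating theta <= 1e-4 at 99% confidence needs
# roughly 4.6 / 1e-4 ~ 46,000 failure-free demands.
print(accept(1e-4, 0.99, n_passed=46_050))  # True
print(accept(1e-4, 0.99, n_passed=40_000))  # False
```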
An overview on the obsolescence of physical assets for the defence facing the challenges of industry 4.0 and the new operating environments
Open-access book. This contribution examines special features of physical assets for defence. In particular, the management of defence assets has to consider not only reliability, availability, maintainability, and the other factors commonly used in asset management; such systems must also take into account their adaptation to changing operating environments and their capacity to absorb changes in the technological context. This study addresses the current situation in which, owing to the diversity of conflicts in the international context, the same type of defence system must be able to provide services under different boundary conditions in different areas of the globe. At the same time, new concepts from Industry 4.0 introduce rapid changes that should be considered along the life cycle of a defence asset. As a consequence, these variations in operating conditions and in technology may accelerate asset degradation by modifying the asset's reliability, its up-to-date status and, in general terms, its end-of-life estimation, depending of course on a diversity of factors. This accelerated deterioration of the asset is often known as “obsolescence”, and its implications are evaluated (when possible) in terms of costs of different natures. The originality of this contribution is the introduction of a discussion on how a proper analysis may help to reduce errors in the decision-making process regarding whether to repair, replace, or modernize the asset or system under study. In other words, obsolescence analysis, from a reliability and technological point of view, could be used to decide whether to retain a specific asset fleet, in order to understand the effects of variation in operational and technological factors on the functionality and life cycle cost of physical assets for defence.