Empirical assessment of architecture-based reliability of open-source software
A number of analytical models have been proposed for quantifying software reliability. Some of these models estimate the failure behavior of the software using black-box testing, which treats the software as a monolithic whole. With the evolution of component-based software development, the need for white-box approaches has grown. A few architecture-based reliability models, which take a white-box approach, were proposed earlier; they have been validated on several small case studies and shown to be sound. However, there is a dearth of large-scale empirical data for reliability analysis. This thesis enriches the empirical knowledge in software reliability engineering. We use a real, large-scale case study, the GCC compiler, for our experiments. To the best of our knowledge, this is the most comprehensive case study yet used for software reliability analysis. The software is instrumented with a profiler to extract the execution profiles of the test cases. These execution profiles form the basis for building the operational profile of the system, which describes the software usage. Test case failures are traced back to faults in the source code to analyze the failure behavior of the components. These results are used to estimate the reliability of the software, as well as the uncertainty in the reliability analysis, using entropy.
Three-loop Monte Carlo simulation approach to Multi-State Physics Modeling for system reliability assessment
Multi-State Physics Modeling (MSPM) provides a physics-based semi-Markov modeling framework for a more detailed reliability assessment. In this work, a three-loop Monte Carlo (MC) simulation scheme is proposed to operationalize the MSPM approach, quantifying and controlling the uncertainty affecting the system reliability model. The proposed MC simulation scheme involves three steps: (i) the identification of the system components that deserve MSPM, (ii) the quantification of the uncertainties in the MSPM component models and their propagation onto the system-level model, and (iii) the selection of the most suitable modeling alternative that balances the computational demand of the system model solution against the robustness of the system reliability estimates. A Reactor Protection System (RPS) of a Nuclear Power Plant (NPP) is considered as a case study for numerical evaluation.
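The nested uncertainty propagation described in steps (ii) and (iii) can be illustrated with a minimal two-loop sketch, a simplified stand-in for the full three-loop MSPM scheme: an inner loop estimates reliability for one draw of an uncertain failure rate, and an outer loop propagates that epistemic uncertainty onto the estimate and its spread. The exponential failure model and the rate interval below are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def component_reliability(t_mission, rate, n_inner):
    # inner loop: MC estimate of P(failure time > t_mission)
    # for one sampled value of the failure rate
    survive = sum(random.expovariate(rate) > t_mission for _ in range(n_inner))
    return survive / n_inner

def nested_mc(t_mission, n_outer=200, n_inner=500):
    # outer loop: propagate epistemic uncertainty in the failure rate
    # onto the reliability estimate and its spread
    estimates = []
    for _ in range(n_outer):
        rate = random.uniform(0.05, 0.15)  # assumed uncertain rate
        estimates.append(component_reliability(t_mission, rate, n_inner))
    return statistics.mean(estimates), statistics.pstdev(estimates)

random.seed(1)
mean_R, spread = nested_mc(5.0)
print(round(mean_R, 3), round(spread, 3))
```

The spread reported by the outer loop is what distinguishes this scheme from a single-loop estimate: it separates epistemic uncertainty in the model parameters from the aleatory variability simulated in the inner loop.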
Evaluation of elicitation methods to quantify Bayes linear models
The Bayes linear methodology allows decision makers to express their subjective beliefs and adjust these beliefs as observations are made. It is similar in spirit to probabilistic Bayesian approaches, but differs in that it uses expectation as its primitive. While substantial work has been carried out in Bayes linear analysis, both in terms of theory development and application, there is little published material on the elicitation of structured expert judgement to quantify models. This paper investigates different methods that could be used by analysts when creating an elicitation process. The theoretical underpinnings of the elicitation methods developed are explored and an evaluation of their use is presented. This work was motivated by, and is a precursor to, an industrial application of Bayes linear modelling of the reliability of defence systems. An illustrative example demonstrates how the methods can be used in practice.
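As background, the belief adjustment that sits at the core of Bayes linear analysis is the adjusted expectation E_d(B) = E(B) + Cov(B, D) Var(D)^(-1) (d - E(D)), which updates prior expectations from observed data using only first- and second-order moments. A minimal numerical sketch follows; the prior moments and observations are made up for illustration, not drawn from the paper.

```python
import numpy as np

def adjust(EB, ED, cov_BD, var_D, d):
    # Bayes linear adjusted expectation:
    #   E_d(B) = E(B) + Cov(B, D) Var(D)^{-1} (d - E(D))
    return EB + cov_BD @ np.linalg.solve(var_D, d - ED)

# hypothetical prior beliefs about a quantity of interest B,
# adjusted by two observed quantities D
EB = 3.0                        # prior expectation of B
ED = np.array([0.0, 0.0])       # prior expectations of D
cov_BD = np.array([0.5, 0.0])   # prior covariances Cov(B, D)
var_D = np.eye(2)               # prior variance matrix Var(D)
d = np.array([2.0, 0.0])        # observed values of D

print(adjust(EB, ED, cov_BD, var_D, d))  # 4.0
```

Elicitation, the subject of the paper, is precisely the problem of obtaining defensible values for the moments EB, ED, cov_BD and var_D from experts.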
Quantifying dependencies for sensitivity analysis with multivariate input sample data
We present a novel method for quantifying dependencies in multivariate datasets, based on estimating the Rényi entropy by minimum spanning trees (MSTs). The length of the MSTs can be used to order pairs of variables from strongly to weakly dependent, making it a useful tool for sensitivity analysis with dependent input variables. It is well suited for cases where the input distribution is unknown and only a sample of the inputs is available. We introduce an estimator to quantify dependency based on the MST length, and investigate its properties with several numerical examples. To reduce the computational cost of constructing the exact MST for large datasets, we explore methods to compute approximations to the exact MST, and find the multilevel approach introduced recently by Zhong et al. (2015) to be the most accurate. We apply our proposed method to an artificial test case based on the Ishigami function, as well as to a real-world test case involving sediment transport in the North Sea. The results are consistent with prior knowledge and heuristic understanding, as well as with variance-based analysis using Sobol indices in the case where these indices can be computed.
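The core idea, that a shorter standardized MST over a pair of variables indicates stronger dependence (the points concentrate on a lower-dimensional set), can be sketched in a few lines. The Prim implementation and the toy data below are illustrative, not the authors' code or estimator.

```python
import numpy as np

def mst_length(x, y):
    # standardize each variable so scale differences do not dominate distances
    pts = np.column_stack([(x - x.mean()) / x.std(),
                           (y - y.mean()) / y.std()])
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    # Prim's algorithm on the complete Euclidean graph
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()          # cheapest known edge from the tree to each node
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return total

rng = np.random.default_rng(0)
n = 400
u = rng.uniform(size=n)
dep = mst_length(u, u + 0.01 * rng.normal(size=n))  # near-functional dependence
ind = mst_length(u, rng.uniform(size=n))            # independent pair
print(dep < ind)  # True: the dependent pair yields the shorter tree
```

Ranking all variable pairs by this length reproduces the strong-to-weak dependency ordering the abstract describes; the paper's actual estimator additionally normalizes the length into a Rényi-entropy-based dependency measure.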
Assessing the environmental risks associated with contaminated sites
A risk assessment strategy considering the impact of chemicals on the whole ecosystem has been developed in order to create a sound and useful method for quantifying and comparing the global risks posed by the main hazardous chemicals found in the environment. This index, called the Environmental Risk Index for a Complete Assessment (ERICA), merges into a single number the environmental assessment, the human health risk assessment and the uncertainty caused by missing or unreliable data. ERICA uses a scoring system with parameters for the main characteristics of the pollutants. Its main advantage is that it preserves a simple approach by condensing the analysis of the risk for the area under observation into this single value.
The availability and reliability of the data are an important part of the work done to build the index. Experimental and predictive data were compared to evaluate the reliability. Data were derived both from literature sources (mainly experimental models) and from predictive models. ERICA can be considered a diagnostic and prognostic tool for environmental contaminants in critical and potentially dangerous sites, such as incinerators, landfills and industrial areas, or in broader geographical areas. The application of the proposed integrated index provides a preliminary quantitative analysis of possible environmental alerts due to the presence of one or more pollutants in the investigated site.
This thesis presents the method and the equations behind the index, a first case study based on the Italian legislation, and a pilot study on a location on the Italian seacoast.
Quantifying Information Leaks Using Reliability Analysis
acmid: 2632367; keywords: Model Counting, Quantitative Information Flow, Reliability Analysis, Symbolic Execution; location: San Jose, CA, USA; numpages: 4
We report on our work in progress on the use of reliability analysis to quantify information leaks. In recent work we proposed a software reliability analysis technique that uses symbolic execution and model counting to quantify the probability of reaching designated program states, e.g. assert violations, under uncertainty conditions in the environment. The technique has many applications beyond reliability analysis, ranging from program understanding and debugging to analysis of cyber-physical systems. In this paper we report on a novel application of the technique, namely Quantitative Information Flow analysis (QIF). The goal of QIF is to measure the information leakage of a program using information-theoretic metrics such as Shannon entropy or Rényi entropy. We exploit the model counting engine of the reliability analyzer over symbolic program paths to compute an upper bound on the maximum leakage over all possible distributions of the confidential data. We have implemented our approach in a prototype tool, called QILURA, and explore its effectiveness on a number of case studies.
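The kind of bound described here can be illustrated with a toy sketch: for a deterministic program, the leakage over every possible prior on the secret is at most log2 of the number of distinct observable outputs, which is why counting outputs (or feasible symbolic paths) yields an upper bound. Brute-force enumeration below stands in for the model counting engine, and `check` is a made-up example program, not taken from the paper.

```python
import math

def leakage_upper_bound_bits(program, secrets):
    # channel-capacity-style bound on quantitative information flow:
    # over all priors on the secret, leakage <= log2(#distinct outputs).
    # Here the outputs are enumerated directly, standing in for model
    # counting over symbolic program paths.
    outputs = {program(secret) for secret in secrets}
    return math.log2(len(outputs))

def check(secret):
    # toy program: the only observable output is the comparison result
    return secret == 42

print(leakage_upper_bound_bits(check, range(256)))  # 1.0
```

An 8-bit secret fed through `check` can leak at most one bit per run, regardless of how the secret is distributed, which is exactly the distribution-free guarantee the abstract's upper bound provides.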