
    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
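
    The abstract above mentions importance sampling as a technique for accelerating Monte Carlo estimation of rare failure events. The following is a minimal illustrative sketch of that general idea, not code from the paper; the Gaussian model, the failure threshold, and the shifted proposal distribution are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rare event: P(X > 5) for X ~ N(0, 1); the true value is about 2.87e-7,
# so plain Monte Carlo with 10^5 samples almost never sees a "failure".
threshold = 5.0
n = 100_000

x = rng.standard_normal(n)
p_mc = np.mean(x > threshold)

# Importance sampling: draw from a proposal N(threshold, 1) centred on the
# rare region and reweight each sample by the likelihood ratio f(y)/g(y)
# of the target density over the proposal density.
y = rng.normal(loc=threshold, scale=1.0, size=n)
log_w = -0.5 * y**2 + 0.5 * (y - threshold) ** 2   # log f(y) - log g(y)
p_is = np.mean((y > threshold) * np.exp(log_w))

print(f"plain Monte Carlo estimate:   {p_mc:.3e}")
print(f"importance sampling estimate: {p_is:.3e}")
```

    With the same sample budget, the reweighted estimate concentrates near the true probability while the plain estimate is typically zero, which is the acceleration effect the survey refers to.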

    Is the Stack Distance Between Test Case and Method Correlated With Test Effectiveness?

    Mutation testing is a means to assess the effectiveness of a test suite, and its outcome is considered more meaningful than code coverage metrics. However, despite several optimizations, mutation testing requires significant computational effort and has not been widely adopted in industry. Therefore, we study in this paper whether test effectiveness can be approximated using a more lightweight approach. We hypothesize that a test case is more likely to detect faults in methods that are close to the test case on the call stack than in methods that the test case accesses indirectly through many other methods. Based on this hypothesis, we propose the minimal stack distance between test case and method as a new test measure, which expresses how close any test case comes to a given method, and study its correlation with test effectiveness. We conducted an empirical study with 21 open-source projects, which comprise in total 1.8 million LOC, and show that a correlation exists between stack distance and test effectiveness. The correlation reaches a strength of up to 0.58. We further show that a classifier using the minimal stack distance along with additional easily computable measures can predict the mutation testing result of a method with 92.9% precision and 93.4% recall. Hence, such a classifier can be considered as a lightweight alternative to mutation testing or as a preceding, less costly step. Comment: EASE 201
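
    The measure proposed above, the minimal stack distance between a test case and a method, can be sketched as a small computation over call stacks recorded while the test runs. The sketch below is an assumption about how such a measure could be computed, not the authors' tooling; the function name and the input format are hypothetical.

```python
from collections import defaultdict

def minimal_stack_distances(call_stacks):
    """Return, for every method seen above the test method on some recorded
    call stack, the smallest number of frames separating it from the test.

    `call_stacks` is an iterable of lists such as
    ["TestFoo.test_bar", "Service.run", "Repo.load"], ordered outward from
    the test method; this input format is an illustrative assumption.
    """
    best = defaultdict(lambda: float("inf"))
    for stack in call_stacks:
        for depth, method in enumerate(stack[1:], start=1):
            best[method] = min(best[method], depth)
    return dict(best)

# Two stacks recorded while one (hypothetical) test executes.
stacks = [
    ["TestFoo.test_bar", "Service.run", "Repo.load"],
    ["TestFoo.test_bar", "Repo.load"],
]
print(minimal_stack_distances(stacks))   # {'Service.run': 1, 'Repo.load': 1}
```

    A method called directly by the test ends up at distance 1; a method reached only through intermediaries gets a larger distance, and the minimum over all observed stacks is kept, matching the paper's intuition that closer methods are more likely to have their faults detected.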

    A new panel dataset for cross-country analyses of national systems, growth and development (CANA)

    Missing data represent an important limitation for cross-country analyses of national systems, growth and development. This paper presents a new cross-country panel dataset with no missing values. We make use of a new method of multiple imputation that has recently been developed by Honaker and King (2010) to deal specifically with time-series cross-section data at the country level. We apply this method to construct a large dataset containing a great number of indicators measuring six key country-specific dimensions: innovation and technological capabilities, education system and human capital, infrastructures, economic competitiveness, political-institutional factors, and social capital. The CANA panel dataset thus obtained provides a rich and complete set of 41 indicators for 134 countries in the period 1980-2008 (for a total of 3886 country-year observations). The empirical analysis shows the reliability of the dataset and its usefulness for cross-country analyses of national systems, growth and development. The new dataset is publicly available. Keywords: missing data; multiple imputation methods; national systems of innovation; social capabilities; economic growth and development; composite indicators.
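
    The method referenced above (Honaker and King 2010) is a bootstrap-EM multiple-imputation approach designed for time-series cross-section data. The sketch below only illustrates the general multiple-imputation workflow on a toy country-year panel, using scikit-learn's IterativeImputer rather than that method; the column names, panel size, and number of imputations are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy country-year panel with some missing indicator values
# (column names are illustrative, not the CANA variable names).
rng = np.random.default_rng(1)
panel = pd.DataFrame({
    "country": np.repeat(["A", "B", "C"], 5),
    "year": np.tile(range(2000, 2005), 3),
    "patents_pc": rng.normal(10, 2, 15),
    "tertiary_enrol": rng.normal(50, 10, 15),
})
panel.loc[rng.choice(15, size=4, replace=False), "patents_pc"] = np.nan

features = panel[["year", "patents_pc", "tertiary_enrol"]]

# Multiple imputation: draw m completed datasets by sampling from the
# imputer's posterior predictive, then pool results across them.
m = 5
completed = []
for seed in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    filled = pd.DataFrame(imputer.fit_transform(features),
                          columns=features.columns)
    completed.append(filled)

# Pooled point estimate for the imputed variable (a full analysis would
# combine within- and between-imputation variances via Rubin's rules).
print(np.mean([df["patents_pc"].mean() for df in completed]))
```

    The key point is that several plausible completed datasets are generated and analysed jointly, so the uncertainty introduced by the missing values is carried through to the final estimates rather than hidden by a single fill-in.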

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 2: Application to TRACE

    Inverse Uncertainty Quantification (UQ) is a process to quantify the uncertainties in random input parameters while achieving consistency between code simulations and physical observations. In this paper, we performed inverse UQ using an improved modular Bayesian approach based on Gaussian Process (GP) for TRACE physical model parameters, using the BWR Full-size Fine-Mesh Bundle Tests (BFBT) benchmark steady-state void fraction data. The model discrepancy is described with a GP emulator. Numerical tests have demonstrated that such treatment of model discrepancy can avoid over-fitting. Furthermore, we constructed a fast-running and accurate GP emulator to replace the TRACE full model during Markov Chain Monte Carlo (MCMC) sampling. The computational cost was demonstrated to be reduced by several orders of magnitude. A sequential approach was also developed for efficient test source allocation (TSA) for inverse UQ and validation. This sequential TSA methodology first selects experimental tests for validation that provide full coverage of the test domain, so as to avoid extrapolation of the model discrepancy term when it is evaluated at the input settings of the tests used for inverse UQ. It then selects tests that tend to reside in the unfilled zones of the test domain for inverse UQ, so that one can extract the most information for the posterior probability distributions of the calibration parameters using only a relatively small number of tests. This research addresses the "lack of input uncertainty information" issue for TRACE physical input parameters, which was usually ignored or described using expert opinion or user self-assessment in previous work. The resulting posterior probability distributions of TRACE parameters can be used in future uncertainty, sensitivity and validation studies of the TRACE code for nuclear reactor system design and safety analysis.
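
    The workflow described above, replacing an expensive simulator with a GP emulator and calibrating input parameters via MCMC, can be illustrated in miniature. The sketch below is not the paper's modular Bayesian implementation and does not involve TRACE itself; the toy simulator, the uniform prior, the observation noise, and the random-walk Metropolis sampler are all assumptions used only to show the general pattern.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Stand-in for an expensive simulator; TRACE itself is not reproduced here.
def simulator(theta):
    return theta**2 + np.sin(theta)

# Step 1: train a fast GP emulator on a small design of simulator runs.
design = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
runs = simulator(design).ravel()
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(design, runs)

# Synthetic "measurement" at an assumed true parameter value of 1.2, with an
# assumed observation noise of sigma = 0.05.
sigma = 0.05
y_obs = simulator(1.2) + rng.normal(0.0, sigma)

def log_post(theta):
    if not 0.0 <= theta <= 2.0:            # uniform prior on [0, 2]
        return -np.inf
    pred = gp.predict(np.array([[theta]])).item()
    return -0.5 * ((y_obs - pred) / sigma) ** 2

# Step 2: random-walk Metropolis sampling against the cheap emulator instead
# of the expensive simulator.
theta, lp, samples = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean of calibrated parameter:", np.mean(samples[1000:]))
```

    In the paper's setting, the emulator is trained on TRACE runs and the posterior also carries the GP model-discrepancy term mentioned in the abstract; that term is omitted here for brevity.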
