
    Cross-layer system reliability assessment framework for hardware faults

    System reliability estimation during early design phases facilitates informed decisions for the integration of effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports are bound to be excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate yet fast estimation of computing system reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components while considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault injection campaigns at the microarchitecture level.
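
    As a rough illustration of the cross-layer masking idea (not the paper's Bayesian model, which also captures interactions between components), a naive derating sketch: a raw fault rate is attenuated by each layer's probability of masking the fault before it becomes a system failure. All names and numbers below are illustrative.

```python
# Minimal derating sketch: a raw fault becomes a system failure only if
# every abstraction layer fails to mask it. Values are illustrative.

raw_fit = 1000.0  # raw fault rate (FIT: failures per 10^9 device-hours)

# Per-layer probability that a fault is masked before propagating further.
masking = {
    "circuit": 0.60,
    "microarchitecture": 0.85,
    "software": 0.70,
}

derating = 1.0
for layer, p_mask in masking.items():
    derating *= (1.0 - p_mask)  # fault survives this layer unmasked

system_fit = raw_fit * derating
print(f"Estimated system FIT rate: {system_fit:.2f}")
```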

    A Model to Estimate First-Order Mutation Coverage from Higher-Order Mutation Coverage

    The test suite is essential for fault detection during software development. First-order mutation coverage is an accurate metric for quantifying the quality of a test suite; however, it is computationally expensive, which limits its adoption. In this study, we address this issue by proposing a realistic model that estimates first-order mutation coverage using only higher-order mutation coverage. Our study shows how the estimate evolves with the order of mutation. We validate the model with an empirical study based on 17 open-source projects. (2016 IEEE International Conference on Software Quality, Reliability and Security; 9 pages.)
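
    One simple way to see how such an estimate can work, under a strong independence assumption that is not necessarily the paper's model: if each first-order mutant is killed with probability p, a mutant of order k is killed when any of its k constituent faults is detected, and that relation can be inverted.

```python
# Illustrative inversion under an independence assumption: if each
# first-order mutant is killed with probability p, a k-order mutant built
# from k independent faults is killed when any constituent is detected,
# so c_k ~= 1 - (1 - p)**k.  This is NOT necessarily the paper's model.

def estimate_first_order_coverage(c_k: float, k: int) -> float:
    """Invert c_k = 1 - (1 - p)**k to recover p from higher-order coverage."""
    return 1.0 - (1.0 - c_k) ** (1.0 / k)

# Example: 97% second-order coverage suggests roughly 83% first-order coverage.
print(estimate_first_order_coverage(0.97, k=2))
```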

    Evaluating testing methods by delivered reliability

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects, as in debug testing, or to assess reliability directly, as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
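
    The flavor of the comparison can be reproduced with a toy Monte Carlo model (a simplification for illustration, not the authors' formal model): operational testing triggers defects in proportion to their user-visible failure rates, while debug testing probes for defects at a rate unrelated to those rates.

```python
# Toy Monte Carlo comparison of debug vs. operational testing on delivered
# reliability. The program model here is a simplification for illustration.
import random

random.seed(1)

def operational_campaign(rates, budget):
    """Each test is an input drawn from the operational profile; defect i
    causes a visible failure (and is then fixed) with probability rates[i]."""
    live = list(rates)
    for _ in range(budget):
        for i, r in enumerate(live):
            if random.random() < r:
                live.pop(i)   # failure observed -> defect repaired
                break
    return sum(live)          # residual failure rate after testing

def debug_campaign(rates, budget, hit=0.01):
    """Each test probes for some defect directly; it succeeds with a fixed
    probability that ignores how often users would trigger the defect."""
    live = list(rates)
    for _ in range(budget):
        if live and random.random() < hit:
            live.pop(random.randrange(len(live)))
    return sum(live)

# One frequently triggered defect plus many rare ones.
rates = [0.02] + [0.0001] * 50
print("residual rate, operational:", operational_campaign(rates, 3000))
print("residual rate, debug:      ", debug_campaign(rates, 3000))
```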

    Software quality and reliability prediction using Dempster-Shafer theory

    As software systems are increasingly deployed in mission-critical applications, accurate quality and reliability predictions are becoming a necessity. Most accurate prediction models require extensive testing effort, implying increased cost and slowing down the development life cycle. We developed two novel statistical models based on Dempster-Shafer theory, which provide accurate predictions from relatively small data sets of direct and indirect software reliability and quality predictors. The models are flexible enough to incorporate information generated throughout the development life cycle to improve prediction accuracy.

    Our first contribution is an original algorithm for building Dempster-Shafer Belief Networks using prediction logic. This model has been applied to software quality prediction. We demonstrated that the prediction accuracy of Dempster-Shafer Belief Networks is higher than that achieved by logistic regression, discriminant analysis, random forests, and the algorithms in two machine learning software packages, See5 and WEKA. The performance advantage of the Dempster-Shafer Belief Networks over the other methods is statistically significant.

    Our second contribution is also based on a practical extension of Dempster-Shafer theory. The major limitation of Dempster's rule and other known rules of evidence combination is the inability to handle information coming from correlated sources. Motivated by the inherently high correlations between early life-cycle predictors of software reliability, we extended Murphy's rule of combination to account for these correlations. When used as part of a methodology that fuses various software reliability prediction systems, this rule provided more accurate predictions than previously reported methods. In addition, we proposed an algorithm that defines the upper and lower bounds of the belief function of the combination results. To demonstrate its generality, we successfully applied it in the design of the Online Safety Monitor, which fuses multiple correlated, time-varying estimations of the convergence of neural network learning in an intelligent flight control system.
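
    For reference, a minimal sketch of classical Dempster's rule of combination, the baseline whose correlated-sources limitation the second contribution addresses. The frame of discernment and mass values below are illustrative.

```python
# Classical Dempster's rule of combination for two mass functions over
# frozenset focal elements (standard textbook form).
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + p * q
        else:
            conflict += p * q  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two sources of evidence over the frame {faulty, ok}.
m1 = {frozenset({"faulty"}): 0.7, frozenset({"faulty", "ok"}): 0.3}
m2 = {frozenset({"faulty"}): 0.6, frozenset({"ok"}): 0.3,
      frozenset({"faulty", "ok"}): 0.1}
print(dempster_combine(m1, m2))
```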

    Predicting Software Reliability Using Ant Colony Optimization Technique with Travelling Salesman Problem for Software Process – A Literature Survey

    Computer software has become an essential foundation in several versatile domains, including medicine and engineering. With such widespread application of software, there is a need to ensure software reliability and quality. To measure these directly, one must wait until the software is implemented, tested, and put to use for a certain period of time. Several software metrics have been proposed in the literature to avoid this lengthy and costly process, and they have proved to be a good means of estimating software reliability; for this purpose, software reliability prediction models are built. Software reliability, one of the important software quality attributes, is defined as the probability that the software will operate without failure for a specific period of time in a specified environment. When estimated in the early phases of the software development life cycle, it saves considerable money and time by preventing large expenditures on fixing defects after the software has been deployed to the client; yet predicting reliability in those early phases is very challenging. Software reliability estimation has thus become an important research area, as every organization aims to produce reliable, high-quality, defect-free software. There are many software reliability growth models used to assess or predict the reliability of software, and these models help in developing robust and fault-tolerant systems. However, developing accurate reliability prediction models is difficult due to frequent changes in data in the software engineering domain; as a result, models built on one dataset show a significant decrease in accuracy when used with new data. The main aim of this paper is to introduce a new approach that optimizes the accuracy of software reliability prediction models when used with raw data. An Ant Colony Optimization Technique (ACOT) is proposed to predict software reliability based on data collected from the literature. An ant colony system combined with a Travelling Salesman Problem (TSP) algorithm has been used, modified by implementing different algorithms and extra functionality, in an attempt to achieve better software reliability results with new data for the software process. The collective behavior of the colony of cooperating artificial ants yields very promising results.

    Keywords: Software Reliability, Reliability Predictive Models, Bio-inspired Computing, Ant Colony Optimization Technique, Ant Colony
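
    To make the mechanics concrete, here is a generic textbook ant colony system on a toy TSP instance, showing the pheromone-update loop such approaches build on; it is not the authors' modified algorithm, and the parameters and cities are illustrative.

```python
# Minimal ant colony system on a toy TSP: probabilistic tour construction
# biased by pheromone (tau) and inverse distance, then evaporation/deposit.
import random

random.seed(0)

CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
N = len(CITIES)
dist = [[((CITIES[i][0] - CITIES[j][0]) ** 2 +
          (CITIES[i][1] - CITIES[j][1]) ** 2) ** 0.5
         for j in range(N)] for i in range(N)]
tau = [[1.0] * N for _ in range(N)]           # pheromone on each edge
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0     # textbook parameter choices

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    tour, unvisited = [0], list(range(1, N))
    while unvisited:
        i = tour[-1]
        weights = [tau[i][j] ** ALPHA * (1.0 / dist[i][j]) ** BETA
                   for j in unvisited]
        nxt = random.choices(unvisited, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

best = None
for _ in range(50):                            # colony iterations
    tours = [build_tour() for _ in range(10)]  # one tour per ant
    tau = [[(1.0 - RHO) * t for t in row] for row in tau]  # evaporation
    for tr in tours:
        L = tour_length(tr)
        for i in range(N):
            a, b = tr[i], tr[(i + 1) % N]
            tau[a][b] += Q / L                 # deposit: shorter tour, more pheromone
            tau[b][a] += Q / L
        if best is None or L < tour_length(best):
            best = tr
print("best tour:", best, "length:", round(tour_length(best), 3))
```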

    Using Neural Networks for Estimating Cruise Missile Reliability

    ACC believes its current methodology for predicting the reliability of its Air Launched Cruise Missile (ALCM) and Advanced Cruise Missile (ACM) stockpiles could be improved. It requires a predictive model that delivers the best possible 24-month projection of cruise missile reliability using existing data sources, collection methods, and software. The model should be easily maintainable and designed so that a layperson can enter updated data and receive an accurate reliability prediction. The focus of this thesis is to improve free-flight reliability, although the techniques could also be applied to the captive-carry portion of the missile reliability equation. The following steps were taken to ensure maximum accuracy in the model results:
    1. Add more detail to the flight test reliability calculation.
    2. Convert the ground test data into a usable form (reduce).
    3. Engage in an exercise in feature selection.
    4. Develop a MATLAB model prototype.
    5. Validate the model via problems with known solutions.
    6. Apply an appropriate data fusion technique to the different network outputs (logistic regression, feed-forward, and radial basis function).
    7. Put the model into the form of a usable tool for the end user.
    The end product is the ALCM/ACM Reliability Estimation System (AARES), a VEA-based model that meets all user criteria.
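
    Step 6's fusion of network outputs could, for instance, resemble the generic stacking sketch below, in which a logistic-regression combiner weighs the probability outputs of several base networks; the features, models, and combiner here are stand-ins, not AARES internals.

```python
# Hypothetical stacking sketch: fuse outputs of base neural predictors with
# a logistic-regression combiner. All data and models are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # stand-in test/telemetry features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in pass/fail outcome

# Base predictors (stand-ins for the feed-forward and RBF networks).
base = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=i)
        for i in range(2)]
for m in base:
    m.fit(X[:150], y[:150])

# Combiner: logistic regression over the base models' probability outputs.
Z = np.column_stack([m.predict_proba(X)[:, 1] for m in base])
fuser = LogisticRegression().fit(Z[:150], y[:150])
print("fused accuracy on held-out data:", fuser.score(Z[150:], y[150:]))
```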

    Reliability studies on the influence of joint clearance on the kinematics of the nose landing gear mechanism of a transport aircraft using contact theory

    Contact between two objects is an important facet of multibody dynamics. It is a discontinuous, non-linear phenomenon and consequently requires iterative simulations. The paper presents the reliability evaluation of the landing gear retraction mechanism using three contact models, viz. the impact function model, the coefficient of restitution model, and the clearance link model. The simulations have been performed using the standard commercial multibody dynamics software ADAMS. The precision of these simulations depends on user-defined parameters such as stiffness, damping, penetration depth, force exponent, penalty, and restitution coefficient, which affect the overall reliability of the mechanism. The optimal values of these parameters have been obtained through an optimization process using the Design of Experiments tool available in ADAMS, to match the nominal values without any clearance. The overall reliability of the mechanism has been evaluated at different instants of the retraction cycle using response-surface-based Monte Carlo simulation and direct Monte Carlo simulation with in-house codes written in MATLAB. The comparison, significance, and accuracy of the results obtained using the above-mentioned approaches are discussed; the impact-based contact modelling of the clearance appears to be accurate and realistic for practical applications.
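
    For orientation, a sketch of the kind of continuous impact-function force law such contact models use: a stiffness term in the penetration depth plus a damping term that ramps in smoothly, so the force stays continuous at contact onset. Parameter values are illustrative, not the paper's calibrated ones.

```python
# Continuous impact-function contact force: F = k*delta**e + c(delta)*delta_dot,
# with damping ramped in over the penetration depth so F is continuous at
# first touch. Values are illustrative.

def smooth_step(x, x0, h0, x1, h1):
    """Cubic blend between (x0, h0) and (x1, h1)."""
    if x <= x0:
        return h0
    if x >= x1:
        return h1
    u = (x - x0) / (x1 - x0)
    return h0 + (h1 - h0) * u * u * (3.0 - 2.0 * u)

def impact_force(delta, delta_dot, k=1e5, e=1.5, c_max=50.0, d_max=1e-4):
    """Contact force for penetration delta (m) and penetration rate (m/s)."""
    if delta <= 0.0:
        return 0.0  # bodies separated: no contact force
    stiffness_term = k * delta ** e
    damping_term = smooth_step(delta, 0.0, 0.0, d_max, c_max) * delta_dot
    # Clamp so damping never produces a tensile (sticking) total force.
    return stiffness_term + max(damping_term, -stiffness_term)

print(impact_force(2e-4, 0.1))
```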

    Estimation of system reliability using a semiparametric model

    An important problem in reliability engineering is to predict the failure rate, that is, the frequency with which an engineered system or component fails. This paper presents a new method of estimating failure rate using a semiparametric model with Gaussian process smoothing. The method is able to provide accurate estimates based on historical data, and it does not make strong a priori assumptions about the failure rate pattern (e.g., constant or monotonic). Our experiments applying this method to power system failure data, compared with other models, show its efficacy and accuracy. This method can be used to estimate reliability for many other systems, such as software systems or components.
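
    A minimal illustration of the Gaussian-process smoothing ingredient, fitting a failure-rate curve without imposing a constant or monotonic shape; this generic sketch stands in for, and is simpler than, the paper's semiparametric model, and the data below are synthetic.

```python
# Generic GP smoothing of an empirical failure-rate curve (synthetic data;
# a simplified stand-in for the paper's semiparametric model).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
age = np.linspace(0, 10, 40)[:, None]              # component age in years
true_rate = 0.05 + 0.02 * age.ravel()              # slowly increasing hazard
observed = rng.poisson(true_rate * 100) / 100.0    # noisy empirical rates

# RBF kernel captures the smooth trend, WhiteKernel the observation noise;
# no constancy or monotonicity is imposed on the estimated rate.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(age, observed)

grid = np.linspace(0, 10, 5)[:, None]
mean, std = gp.predict(grid, return_std=True)
for t, m, s in zip(grid.ravel(), mean, std):
    print(f"age {t:4.1f}y: rate {m:.3f} +/- {2 * s:.3f}")
```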