
    Prediction intervals for reliability growth models with small sample sizes

    Engineers and practitioners contribute to society through their ability to apply basic scientific principles to real problems effectively and efficiently. They must collect data to test their products every day as part of the design and testing process, and also after a product or process has been rolled out, to monitor its effectiveness. Model building, data collection, data analysis and data interpretation form the core of sound engineering practice. After the data have been gathered, the engineer must be able to sift and interpret them correctly so that meaning can be extracted from a mass of undifferentiated numbers or facts. To do this he or she must be familiar with the fundamental concepts of correlation, uncertainty, variability and risk in the face of uncertainty. In today's global and highly competitive environment, continuous improvement in the processes and products of any field of engineering is essential for survival. Many organisations have shown that the first step towards continuous improvement is to integrate the widespread use of statistics and basic data analysis into the manufacturing development process, as well as into the day-to-day business decisions taken about engineering processes. The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning, and to give them realistic predictions of how these could be improved.
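
    The small-sample prediction intervals named in the title can be illustrated with a textbook construction. The sketch below is not taken from the paper; it assumes approximately normal data and uses the classical Student-t prediction interval for a single future observation, whose extra sqrt(1 + 1/n) factor (versus a confidence interval for the mean) widens the interval appropriately when n is small. The failure-time data are hypothetical.

    import numpy as np
    from scipy import stats

    def prediction_interval(sample, alpha=0.05):
        """Two-sided (1 - alpha) prediction interval for one future
        observation, assuming approximately normal data:
            x_bar +/- t_{alpha/2, n-1} * s * sqrt(1 + 1/n)
        """
        x = np.asarray(sample, dtype=float)
        n = x.size
        x_bar = x.mean()
        s = x.std(ddof=1)  # sample standard deviation
        t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
        half_width = t_crit * s * np.sqrt(1 + 1 / n)
        return x_bar - half_width, x_bar + half_width

    # Hypothetical small sample of times between failures (hours).
    times = [12.1, 15.4, 9.8, 14.2, 11.7]
    lo, hi = prediction_interval(times)
    print(f"95% prediction interval for next failure time: ({lo:.1f}, {hi:.1f})")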

    Operational Calibration: Debugging Confidence Errors for DNNs in the Field

    Trained DNN models are increasingly adopted as integral parts of software systems, but they often perform deficiently in the field. A particularly damaging problem is that DNN models often give false predictions with high confidence, due to the unavoidable slight divergences between operation data and training data. To minimize the loss caused by inaccurate confidence, operational calibration, i.e., calibrating the confidence function of a DNN classifier against its operation domain, becomes a necessary debugging step in the engineering of the whole system. Operational calibration is difficult, considering the limited budget for labeling operation data and the weak interpretability of DNN models. We propose a Bayesian approach to operational calibration that gradually corrects the confidence given by the model under calibration, using a small number of labeled operation data deliberately selected from a larger set of unlabeled operation data. The approach is made effective and efficient by leveraging the locality of the learned representation of the DNN model and modeling the calibration as Gaussian Process Regression. Comprehensive experiments with various practical datasets and DNN models show that it significantly outperformed alternative methods, and in some difficult tasks it eliminated about 71% to 97% of high-confidence (>0.9) errors with only about 10% of the minimal amount of labeled operation data needed for practical learning techniques to barely work. (Comment: Published in the Proceedings of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2020.)
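
    As an illustration of the general mechanism only (not the authors' implementation; the residual formulation, kernel choice, and use of penultimate-layer features as the representation are assumptions), one can fit a Gaussian Process regressor on the learned representations of a few labeled operation inputs and use it to correct raw confidences on the rest:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def calibrate_confidence(feats_labeled, conf_labeled, correct_labeled,
                             feats_unlabeled, conf_unlabeled):
        """Sketch of GPR-based operational calibration.

        feats_*   : penultimate-layer features of the DNN (the locality
                    intuition: nearby representations tend to share
                    similar calibration error).
        conf_*    : the classifier's raw confidence for its prediction.
        correct_* : 1 if the prediction on a labeled input was right, else 0.

        The GP learns the residual (correctness - confidence) on the small
        labeled set and predicts it for unlabeled inputs; adding the
        predicted residual back yields a corrected confidence.
        """
        residual = correct_labeled - conf_labeled
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(feats_labeled, residual)
        correction = gp.predict(feats_unlabeled)
        return np.clip(conf_unlabeled + correction, 0.0, 1.0)

    In the paper's setting the labeled inputs are deliberately selected from the unlabeled pool; here any small labeled sample is assumed for simplicity.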

    Simulation of Reliability of Software Component

    Component-Based Software Engineering (CBSE) is increasingly accepted worldwide for software development across most industries. Software reliability is defined as the probability that a software system operates without failure for a specified time under specified operating conditions. Component reliability and failure intensity are two important parameters for estimating the reliability of a system after its components are integrated. Estimating software reliability can avoid losses of time, cost, and even life. In this paper, software reliability is estimated by analysing failure data. Imperfect-debugging Software Reliability Growth Models (SRGMs) are used to simulate software reliability by estimating the number of remaining faults and the parameters of the fault content rate function. We aim to simulate software reliability by combining imperfect debugging with the Goel-Okumoto model. The reliability estimate indicates when to stop the otherwise open-ended testing of a component, i.e., the release time of the software component.
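
    For concreteness, here is a minimal sketch (not the paper's code; the synthetic failure counts and the use of least squares rather than maximum likelihood are assumptions) of fitting the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative failure data, from which the remaining-fault estimate that drives the release decision can be read off:

    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Goel-Okumoto mean value function: expected cumulative failures
        by time t, with a = total fault content, b = fault detection rate."""
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical failure data: cumulative failures observed at test times t.
    t = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    cum_failures = np.array([8, 15, 20, 24, 27, 29, 30, 31], dtype=float)

    # Least-squares fit of the model parameters (p0 is a rough initial guess).
    (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cum_failures,
                                  p0=[40.0, 0.02], maxfev=10000)

    remaining = a_hat - cum_failures[-1]  # estimated faults still latent
    print(f"a = {a_hat:.1f}, b = {b_hat:.4f}, remaining faults ~ {remaining:.1f}")

    An imperfect-debugging variant would additionally let the fault content a grow over time as repairs introduce new faults; the basic fit above is kept deliberately simple.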