58,438 research outputs found

    Analysis of Errors in Software Reliability Prediction Systems and Application of Model Uncertainty Theory to Provide Better Predictions

    Models are the medium by which we reflect and express our understanding of some aspect of reality, a particular unknown of interest. As it is virtually impossible to grasp any situation in its entire complexity, models are representations of reality that are always partial, resulting in a state of uncertainty or error. From a pragmatic point of view, however, the question of model error is not one of accounting for the difference between models and reality at a fundamental level, as such a difference always exists. Rather, the question is whether the prediction or performance of the model is correct at some practically acceptable level, within the model's domain of application. Here lies the importance of assessing the impact of uncertainties on the predictions of a model, modeling the error, and reducing the associated uncertainties as much as possible to provide better estimates. While methods for assessing the impact of errors on the performance of a model, and for error modeling, are well established in various scientific and engineering disciplines, to the best of our knowledge no substantial work has been done in the field of Software Reliability Modeling, despite the fact that the inadequacy of the present state and techniques of software reliability estimation has been recognized by industry and government agencies. In summary, even though hundreds of software reliability models have been developed, the software reliability discipline is still struggling to establish a software reliability prediction framework. This work intends to improve the performance of software reliability models through error modeling. It analyzes the errors associated with a set of five software Reliability Prediction Systems (RePSs) and attempts to improve their prediction accuracy using a model uncertainty framework. In the process, this work also statistically validates the performance of the RePSs. It also provides a time- and cost-effective alternative to performing the experiments required to assess the error form, which is integral to the application of the model uncertainty framework.
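    As a rough illustration of the recalibration idea, the Python sketch below fits a simple software reliability growth model (Goel-Okumoto, used here only as a stand-in for a RePS) to synthetic failure data, estimates the model's error on held-out observations, and applies the resulting correction factor to a later prediction. The data, the model choice, and the multiplicative correction scheme are assumptions for illustration, not the paper's framework.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function: expected cumulative failures by time t.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Synthetic cumulative failure counts (week vs. failures); illustrative only.
t_obs = np.arange(1, 11, dtype=float)
n_obs = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23], dtype=float)

# Fit the model on the first 7 weeks; hold out the rest to estimate its error.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, t_obs[:7], n_obs[:7], p0=[30.0, 0.3])

# Model the prediction error as a multiplicative bias: the mean ratio of
# observed to predicted failures on the hold-out window.
correction = np.mean(n_obs[7:] / goel_okumoto(t_obs[7:], a_hat, b_hat))

# Recalibrated prediction for a future time.
t_future = 12.0
raw = goel_okumoto(t_future, a_hat, b_hat)
print(f"raw prediction: {raw:.1f} failures, corrected: {correction * raw:.1f}")
```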

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context that are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process. Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
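    A standard way to combine several experts' elicited beliefs about an uncertain quantity is a linear opinion pool. The Python sketch below illustrates that generic aggregation step, pooling three hypothetical lognormal failure-rate elicitations by weighted mixture sampling; the medians, error factors, and weights are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each expert gives a lognormal belief about a component failure rate,
# stated as a median and a 95th/50th percentile error factor (hypothetical).
experts = [
    {"median": 1e-4, "error_factor": 3.0, "weight": 0.5},
    {"median": 3e-4, "error_factor": 5.0, "weight": 0.3},
    {"median": 5e-5, "error_factor": 2.0, "weight": 0.2},
]

def sample_expert(e, n):
    # Lognormal parameterized by median and error factor.
    mu = np.log(e["median"])
    sigma = np.log(e["error_factor"]) / 1.645  # z-score of the 95th percentile
    return rng.lognormal(mu, sigma, n)

# Linear opinion pool: a weighted mixture of the experts' distributions.
n = 100_000
weights = np.array([e["weight"] for e in experts])
choice = rng.choice(len(experts), size=n, p=weights)
pooled = np.concatenate([sample_expert(experts[i], np.sum(choice == i))
                         for i in range(len(experts))])

print(f"pooled mean: {pooled.mean():.2e}, 95th pct: {np.quantile(pooled, 0.95):.2e}")
```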

    Post-Band Merge Utilities Applied to Spitzer Pleiades Data

    Band merging of point sources extracted in multiple wavelength bands is generally done purely on the basis of positional information, in order to avoid photometric biases. Automated merge decisions can be made closer to optimal with better position estimation and more realistic modeling of positional estimation errors. Unfortunately, extraction software often does not provide the most accurate positional information possible, so post-band-merge utilities have been developed and implemented to refine both the source positions and the error modeling. Subsequent band merging of the refined detections improves the completeness and reliability of the multi-band source catalog. Application to Spitzer Space Telescope mapping observations of the Pleiades star cluster demonstrates some aspects of the improved band merging.
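    A minimal Python sketch of the positional test that typically drives such merge decisions: a chi-square-like score built from the two detections' positional uncertainties. The circular, uncorrelated error model and the threshold are simplifying assumptions; the utilities described above refine fuller positional error models.

```python
import numpy as np

def merge_score(ra1, dec1, sig1, ra2, dec2, sig2):
    """Normalized positional distance between two single-band detections.

    Positions in degrees, 1-sigma positional uncertainties in arcsec.
    Assumes uncorrelated, circular position errors (a simplification).
    """
    cosd = np.cos(np.radians(0.5 * (dec1 + dec2)))
    d_ra = (ra1 - ra2) * cosd * 3600.0   # offset in arcsec on the sky
    d_dec = (dec1 - dec2) * 3600.0
    var = sig1**2 + sig2**2              # combined positional variance
    return (d_ra**2 + d_dec**2) / var    # ~chi-square with 2 dof if same source

# Merge when the score is below a chi-square threshold (5.99 for 95%, 2 dof).
score = merge_score(56.75083, 24.11667, 0.3, 56.75080, 24.11669, 0.4)
print(f"score = {score:.2f} -> {'merge' if score < 5.99 else 'keep separate'}")
```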

    Mechanism Deduction from Noisy Chemical Reaction Networks

    We introduce KiNetX, a fully automated meta-algorithm for the kinetic analysis of complex chemical reaction networks derived from semi-accurate but efficient electronic structure calculations. It is designed to (i) accelerate the automated exploration of such networks, and (ii) cope with model-inherent errors in electronic structure calculations on elementary reaction steps. We developed and implemented KiNetX to possess three features. First, KiNetX evaluates the kinetic relevance of every species in a (yet incomplete) reaction network to confine the search for new elementary reaction steps to those species that are considered possibly relevant. Second, KiNetX identifies and eliminates all kinetically irrelevant species and elementary reactions to reduce a complex network graph to a comprehensible mechanism. Third, KiNetX estimates the sensitivity of species concentrations toward changes in individual rate constants (derived from relative free energies), which allows us to systematically select the most efficient electronic structure model for each elementary reaction given a predefined accuracy. The novelty of KiNetX consists in the rigorous propagation of correlated free-energy uncertainty through all steps of our kinetic analysis. To examine the performance of KiNetX, we developed AutoNetGen. It semirandomly generates chemistry-mimicking reaction networks by encoding chemical logic into their underlying graph structure. AutoNetGen allows us to consider a vast number of distinct chemistry-like scenarios and, hence, to assess the importance of rigorous uncertainty propagation in a statistical context. Our results reveal that KiNetX reliably supports the deduction of product ratios, dominant reaction pathways, and possibly other network properties from semi-accurate electronic structure data. Comment: 36 pages, 4 figures, 2 tables.
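    To make the sensitivity step concrete, here is a minimal Python sketch that perturbs the rate constants of a toy A -> B -> C network and measures the response of a product concentration by finite differences. The network, rate constants, and perturbation size are illustrative assumptions, not KiNetX's actual algorithm.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy network A -> B -> C with rate constants k = (k1, k2), standing in
# for a network graph produced by automated exploration.
def rhs(t, c, k1, k2):
    a, b, p = c
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def final_concentrations(k, t_end=10.0):
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0], args=tuple(k),
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# Finite-difference sensitivity of [C](t_end) to each log rate constant,
# mimicking the question "which barrier needs a more accurate method?".
k0 = np.array([1.0, 0.2])
base = final_concentrations(k0)[2]
for i in range(len(k0)):
    k = k0.copy()
    k[i] *= 1.01  # +1% perturbation of k_i
    sens = (final_concentrations(k)[2] - base) / 0.01  # ~ d[C]/d(ln k_i)
    print(f"d[C]/d ln k{i+1} ~= {sens:+.4f}")
```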

    Open TURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance for complex systems and to meet tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an open-source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties: transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is open-source software under the LGPL license that presents itself as a C++ library and a Python TUI, and runs under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke protecting industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
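    A minimal sketch of how such a study looks in the Python TUI, using a simplified version of the river/dyke illustration. The hydraulic formula, the constants, the distribution choices, and the dyke level below are assumptions made for this sketch rather than the exact setup from the paper.

```python
import math
import openturns as ot

# Water level from flow Q, Strickler coefficient Ks and bank levels Zv, Zm;
# river width and reach length are fixed, illustrative constants.
def flood_height(x):
    q, ks, zv, zm = x
    b, l = 300.0, 5000.0
    h = (q / (ks * b * math.sqrt((zm - zv) / l))) ** 0.6
    return [zv + h]                      # resulting water level (m)

model = ot.PythonFunction(4, 1, flood_height)

# Joint input distribution (independent components), truncated where a
# negative draw would be unphysical.
inputs = ot.ComposedDistribution([
    ot.TruncatedDistribution(ot.Gumbel(558.0, 1013.0), 0.0,
                             ot.TruncatedDistribution.LOWER),  # flow Q (m^3/s)
    ot.TruncatedDistribution(ot.Normal(30.0, 7.5), 0.0,
                             ot.TruncatedDistribution.LOWER),  # Strickler Ks
    ot.Uniform(49.0, 51.0),                                    # downstream Zv (m)
    ot.Uniform(54.0, 56.0),                                    # upstream Zm (m)
])

# Plain Monte Carlo propagation of input uncertainty through the model.
sample = inputs.getSample(10000)
levels = model(sample)
dyke_crest = 55.5  # hypothetical dyke level (m)
p_overflow = sum(1 for v in levels if v[0] > dyke_crest) / len(levels)
print("mean water level:", levels.computeMean()[0], "P(overflow):", p_overflow)
```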

    Selection and Validation of Health Indicators in Prognostics and Health Management System Design

    Health monitoring is the science of evaluating the health status of a system. In the modern industrial world, it is gaining ever more importance because it is a powerful tool for increasing system dependability. It is based on the observation of variables, extracted in operation, that reflect the condition of a system. The quality of health monitoring strongly depends on the selection of these variables, named health indicators. However, their selection is often underestimated, and their validation is, to the best of our knowledge, an untreated subject. In this paper, the authors introduce a complete methodology for the selection and validation of health indicators in the design of health monitoring systems. Although it can be applied either downstream on real measured data or upstream on simulated data, the true interest of the method lies in the latter application. Indeed, a model-based validation can be integrated into the design phases of the system development process, thereby reducing potential controller retrofit costs and useless data storage. In order to simulate the distribution of health indicators, a well-known surrogate model, Kriging, is used. Eventually, the method is tested on a benchmark system: the high-pressure pump of aircraft engine fuel systems. Thanks to the method, the set of health indicators was validated in the system design phases, and the monitoring is now ready to be implemented for in-service operation.
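    Kriging is Gaussian-process regression, so a surrogate of this kind can be sketched in a few lines of Python with scikit-learn (used here as a generic stand-in; the paper does not specify an implementation). The one-dimensional degradation parameter and the toy simulator below are illustrative assumptions, not the benchmark pump model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Stand-in simulator: a health indicator as a function of a single
# degradation parameter; purely illustrative.
def simulator(x):
    return np.sin(3.0 * x) + 0.5 * x

# A few expensive simulator runs to train the surrogate.
X_train = rng.uniform(0.0, 2.0, size=(15, 1))
y_train = simulator(X_train).ravel() + rng.normal(0.0, 0.02, 15)

# Kriging = Gaussian-process regression with a stationary covariance kernel.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Cheap surrogate predictions, with uncertainty, across the degradation range:
# this is what allows the indicator's distribution to be checked at design time.
X_new = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:.2f}: indicator ~ {m:+.3f} +/- {2*s:.3f}")
```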