1,002 research outputs found

    Planning and inference of sequential accelerated life tests

    Ph.D. (Doctor of Philosophy)

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process. Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
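    Where the paper discusses methods aimed at rational consensus about uncertainties, a basic building block (not specific to this paper) is combining experts' elicited distributions into a single judgement. The minimal Python sketch below pools three hypothetical lognormal failure-rate judgements with an equal-weight linear opinion pool; the medians, error factors, and exceedance threshold are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: equal-weight linear opinion pool of expert-elicited
# lognormal failure-rate distributions.  All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

# Each expert gives a median failure rate (per hour) and an error factor
# (95th percentile divided by the median).
experts = [(1e-5, 3.0), (2e-5, 5.0), (5e-6, 2.0)]
weights = np.full(len(experts), 1.0 / len(experts))          # equal weights

def lognormal_params(median, error_factor):
    """Convert (median, 95th-percentile error factor) to lognormal (mu, sigma)."""
    return np.log(median), np.log(error_factor) / 1.645

params = [lognormal_params(m, ef) for m, ef in experts]

# The linear pool is a mixture, so pooled summaries are weighted averages
# of the per-expert summaries.
means = [np.exp(mu + 0.5 * s ** 2) for mu, s in params]       # lognormal means
pooled_mean = float(np.dot(weights, means))

threshold = 3e-5                                              # illustrative requirement
exceed = [1.0 - norm.cdf((np.log(threshold) - mu) / s) for mu, s in params]
pooled_exceed = float(np.dot(weights, exceed))

print(f"pooled mean failure rate: {pooled_mean:.2e} per hour")
print(f"pooled P(rate > {threshold:.0e}/hour): {pooled_exceed:.2f}")
```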

    Big Data and Reliability Applications: The Complexity Dimension

    Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can generate big data usable in reliability analysis. Meeker and Hong (2014) focused on large-scale system operating and environment data (i.e., high-frequency multivariate time series data), and provided examples of how to link such data as covariates to traditional reliability responses such as time to failure, time to recurrence of events, and degradation measurements. This paper extends that discussion by focusing on how to use data with complicated structures in reliability analysis. Such data types include high-dimensional sensor data, functional curve data, and image streams. We first review recent developments in those directions, and then discuss how analytical methods can be developed to tackle the challenges that arise from the complexity of big data in reliability applications. The use of modern statistical methods such as variable selection, functional data analysis, scalar-on-image regression, spatio-temporal data models, and machine learning techniques is also discussed. Comment: 28 pages, 7 figures.
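    To make the covariate-linking idea concrete, the hedged Python sketch below (not code from the paper; the simulated sensor data, the summary covariates, and the ridge penalty are all illustrative assumptions) summarizes simulated high-frequency sensor streams into per-unit covariates and fits a penalized Weibull accelerated-failure-time model to right-censored failure times.

```python
# Sketch: high-frequency sensor streams -> summary covariates -> penalized
# Weibull AFT regression on right-censored failure times.  Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated operating data: each unit has a latent usage severity that shifts
# its sensor readings; channels are sampled many times per unit.
n_units, n_channels, n_readings = 200, 5, 500
severity = rng.normal(size=n_units)
sensor = severity[:, None, None] + rng.normal(size=(n_units, n_channels, n_readings))

X = np.column_stack([sensor.mean(axis=2), sensor.max(axis=2)])    # summary covariates
X = np.column_stack([np.ones(n_units), X])                        # intercept column

# Ground truth for the simulation: only the first summary covariate matters.
beta_true = np.zeros(X.shape[1])
beta_true[0], beta_true[1] = 3.0, -0.5
sigma_true = 0.4
log_t = X @ beta_true - sigma_true * rng.gumbel(size=n_units)     # Weibull AFT lifetimes
log_c = np.log(rng.uniform(20.0, 60.0, size=n_units))             # right-censoring times
y, event = np.minimum(log_t, log_c), log_t <= log_c

def neg_penalized_loglik(params, X, y, event, lam):
    """Negative Weibull-AFT log-likelihood with a ridge penalty on the slopes."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    z = np.clip((y - X @ beta) / sigma, -50.0, 50.0)
    ll = np.where(event, -np.log(sigma) + z - np.exp(z), -np.exp(z))
    return -ll.sum() + lam * np.sum(beta[1:] ** 2)

start = np.zeros(X.shape[1] + 1)
fit = minimize(neg_penalized_loglik, start, args=(X, y, event, 1.0), method="L-BFGS-B")
beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
print("estimated sigma:", round(float(sigma_hat), 3))
print("largest-magnitude slope is covariate", int(np.argmax(np.abs(beta_hat[1:]))) + 1)
```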

    Trends in the Statistical Assessment of Reliability

    Changes in technology have had, and will continue to have, a strong effect on the statistical assessment of reliability data. These changes include higher levels of integration in electronics, improvements in measurement technology and the deployment of sensors and smart chips into more products, dramatically improved computing power and storage technology, and the development of new, powerful statistical methods for graphics, inference, and experimental design and reliability test planning. This paper traces some of the history of the development of statistical methods for reliability assessment and makes some predictions about the future.

    Reliability Analysis And Optimal Maintenance Planning For Repairable Multi-Component Systems Subject To Dependent Competing Risks

    Modern engineering systems generally consist of multiple components that interact in a complex manner. Reliability analysis of multi-component repairable systems plays a critical role in system safety and cost reduction. Establishing reliability models and scheduling optimal maintenance plans for multi-component repairable systems, however, remains a major challenge when component failures are dependent. Existing models commonly make prior assumptions, without statistical verification, as to whether different component failures are independent. In this dissertation, data-driven systematic methodologies for characterizing the component failure dependency of complex systems are proposed. In CHAPTER 2, a parametric reliability model is proposed to capture the statistical dependency among different component failures under the partially perfect repair assumption. Based on the proposed model, statistical hypothesis tests are developed to test the dependency of component failures. In CHAPTER 3, two reliability models for multi-component systems with dependent competing risks under imperfect repair assumptions are proposed: a generalized dependent latent age model and a copula-based trend-renewal process model. The generalized dependent latent age model generalizes the partially perfect repair model by incorporating an extended virtual age concept. The copula-based trend-renewal process model uses multiple trend functions to transform the failure times from the original time domain into a transformed time domain in which the repairs can be treated as partially perfect. Parameter estimation methods for both models are developed. In CHAPTER 4, based on the generalized dependent latent age model, two periodic inspection-based maintenance policies are developed for a multi-component repairable system subject to dependent competing risks. The first maintenance policy assumes that all components are restored to as good as new once a failure is detected, i.e., the whole system is replaced. The second maintenance policy considers partially perfect repair, i.e., only the failed component is replaced after a failure is detected. Both maintenance policies are optimized to minimize the expected average maintenance cost per unit time. The developed methodologies are demonstrated using applications to real engineering systems.
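    As a heavily simplified illustration of the kind of optimization considered in CHAPTER 4, the Python sketch below estimates the long-run cost rate of a periodic-inspection, replace-the-whole-system-on-detection policy by renewal-reward simulation. It substitutes a Gaussian copula linking two Weibull component lifetimes for the dissertation's generalized dependent latent age model, and all cost figures and distribution parameters are made-up assumptions.

```python
# Sketch: choose an inspection interval by renewal-reward simulation for a
# two-component system with copula-dependent lifetimes.  Illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def dependent_failure_times(n, rho=0.6):
    """System failure times: first failure of two Weibull components whose
    lifetimes are linked through a Gaussian copula with correlation rho."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = norm.cdf(z)                                    # correlated uniforms
    t1 = 10.0 * (-np.log(1.0 - u[:, 0])) ** (1 / 2.0)  # Weibull(shape=2.0, scale=10)
    t2 = 14.0 * (-np.log(1.0 - u[:, 1])) ** (1 / 1.5)  # Weibull(shape=1.5, scale=14)
    return np.minimum(t1, t2)

def cost_rate(tau, n_cycles=20000, c_insp=1.0, c_repl=50.0, c_down=5.0):
    """Long-run expected cost per unit time for inspection interval tau,
    replacing the whole system at the inspection that detects the failure."""
    t_fail = dependent_failure_times(n_cycles)
    k = np.floor(t_fail / tau) + 1.0        # inspection index at which failure is detected
    cycle_len = k * tau                     # system renewed at that inspection
    downtime = cycle_len - t_fail           # time spent in the undetected failed state
    cost = c_insp * k + c_repl + c_down * downtime
    return cost.sum() / cycle_len.sum()     # renewal-reward ratio estimate

taus = np.linspace(0.5, 8.0, 16)
rates = [cost_rate(t) for t in taus]
print(f"approximately optimal inspection interval: {taus[int(np.argmin(rates))]:.2f}")
```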

    Accelerated degradation tests planning with competing failure modes

    Accelerated degradation tests (ADT) have been widely used to assess the reliability of products with long lifetimes. For many products, environmental stress not only accelerates the degradation rate but also elevates the probability of traumatic shocks. When random traumatic shocks occur during an ADT, it is possible that degradation measurements cannot be taken afterward, which complicates reliability assessment. In this paper, we propose an ADT optimization approach for products subject to both degradation failures and random shock failures. The degradation path is modeled by a Wiener process. Under various stress levels, the arrival process of random shocks is assumed to follow a nonhomogeneous Poisson process. Parameters of the acceleration models for both failure modes need to be estimated from the ADT. Three common optimality criteria based on the Fisher information are considered and compared to optimize the ADT plan under a given number of test units and a predetermined test duration. Optimal two- and three-level ADT plans are obtained by numerical methods, and general equivalence theorems are used to verify their global optimality. A numerical example is presented to illustrate the proposed methods. The results show that the optimal ADT plans in the presence of random shocks differ significantly from traditional ADT plans. Sensitivity analysis is carried out to study the robustness of the optimal ADT plans with respect to changes in the planning inputs.
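    To make the competing failure modes concrete, the hedged Python sketch below (not the paper's Fisher-information-based planning procedure) simulates a single test unit whose Wiener degradation drift and nonhomogeneous Poisson shock intensity both increase with a standardized stress level; the unit fails at whichever comes first, threshold crossing or the first traumatic shock. The power-law intensity form and all parameter values are illustrative assumptions.

```python
# Sketch: one ADT unit with Wiener-process degradation plus NHPP traumatic
# shocks, both accelerated by stress.  Illustrative parameters throughout.
import numpy as np

rng = np.random.default_rng(2)

def simulate_unit(stress, dt=0.2, horizon=150.0, threshold=30.0,
                  drift0=0.05, accel=2.0, sigma=0.5,
                  shock_a=1e-3, shock_b=1.5):
    """Return (failure_or_censoring_time, mode) for one unit at a given stress."""
    drift = drift0 * np.exp(accel * stress)          # stress-accelerated degradation rate
    shock_scale = shock_a * np.exp(accel * stress)   # stress-elevated shock intensity
    t, x = 0.0, 0.0
    while t < horizon:
        # Power-law NHPP intensity, discretized into Bernoulli steps of length dt.
        lam = shock_scale * shock_b * max(t, dt) ** (shock_b - 1.0)
        if rng.random() < lam * dt:
            return t, "shock"
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()   # Wiener increment
        t += dt
        if x >= threshold:
            return t, "degradation"
    return horizon, "censored"

# Crude check that higher stress shortens observed lifetimes under both modes.
for s in (0.0, 0.5, 1.0):
    times = [simulate_unit(s)[0] for _ in range(300)]
    print(f"stress {s:.1f}: mean observed time {np.mean(times):6.1f}")
```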

    ISBIS 2016: Meeting on Statistics in Business and Industry

    This book includes the abstracts of the talks presented at the 2016 International Symposium on Business and Industrial Statistics, held in Barcelona, June 8-10, 2016, and hosted by the Department of Statistics and Operations Research at the Universitat Politècnica de Catalunya - Barcelona TECH. The meeting took place in the ETSEIB Building (Escola Tècnica Superior d'Enginyeria Industrial) at Avda. Diagonal 647. The organizers celebrated the continued success of the ISBIS and ENBIS societies, and the meeting drew together the international community of statisticians, both academics and industry professionals, who share the goal of making statistics the foundation for decision making in business and related applications. The Scientific Program Committee was constituted by:
    David Banks, Duke University
    Amílcar Oliveira, DCeT - Universidade Aberta and CEAUL
    Teresa A. Oliveira, DCeT - Universidade Aberta and CEAUL
    Nalini Ravishankar, University of Connecticut
    Xavier Tort Martorell, Universitat Politècnica de Catalunya, Barcelona TECH
    Martina Vandebroek, KU Leuven
    Vincenzo Esposito Vinzi, ESSEC Business School