
    Big Data and Reliability Applications: The Complexity Dimension

    Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can generate big data usable in reliability analysis. Meeker and Hong (2014) focused on large-scale system operating and environment data (i.e., high-frequency multivariate time series data), and provided examples of how to link such data as covariates to traditional reliability responses such as time to failure, time to recurrence of events, and degradation measurements. This paper extends that discussion by focusing on how to use data with complicated structures in reliability analysis. Such data types include high-dimensional sensor data, functional curve data, and image streams. We first review recent developments in these directions, and then discuss how analytical methods can be developed to tackle the challenges that arise from the complexity of big data in reliability applications. The use of modern statistical methods such as variable selection, functional data analysis, scalar-on-image regression, spatio-temporal data models, and machine learning techniques is also discussed. (Comment: 28 pages, 7 figures)
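    The abstract names variable selection among the tools for linking high-dimensional sensor data to reliability responses. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's method: the simulated data, the unit and sensor counts, and the degradation response are all assumptions.

        # Illustrative sketch (assumptions throughout): lasso variable selection
        # to find which of many sensor channels predict a degradation response.
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n_units, n_sensors = 200, 500               # many channels, few relevant
        X = rng.normal(size=(n_units, n_sensors))   # sensor summary features per unit
        beta = np.zeros(n_sensors)
        beta[:5] = [1.5, -2.0, 1.0, 0.5, -1.0]      # only 5 sensors truly matter
        y = X @ beta + rng.normal(scale=0.5, size=n_units)  # degradation response

        # Cross-validated lasso shrinks irrelevant sensor coefficients to zero
        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_)
        print("sensors selected as covariates:", selected)

    In a real application the response would be a reliability quantity such as a degradation level or transformed lifetime, and the selected channels would feed a downstream reliability model.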

    A Review of Accelerated Test Models

    Engineers in the manufacturing industries have used accelerated test (AT) experiments for many decades. The purpose of AT experiments is to acquire reliability information quickly. Test units of a material, component, subsystem, or entire system are subjected to higher-than-usual levels of one or more accelerating variables such as temperature or stress. The AT results are then used to predict the life of the units at use conditions. The extrapolation is typically justified (correctly or incorrectly) on the basis of physically motivated models or a combination of empirical model fitting with a sufficient amount of previous experience in testing similar units. The need to extrapolate in both time and the accelerating variables generally necessitates the use of fully parametric models. Statisticians have made important contributions in the development of appropriate stochastic models for AT data (typically a distribution for the response and regression relationships between the parameters of this distribution and the accelerating variables), statistical methods for AT planning (choice of accelerating-variable levels and allocation of available test units to those levels), and methods of estimation of suitable reliability metrics. This paper provides a review of many of the AT models that have been used successfully in this area. (Comment: Published at http://dx.doi.org/10.1214/088342306000000321 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org))
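    To make the model structure concrete, here is a hedged sketch of one classic AT model of the kind such reviews cover: a lognormal lifetime distribution whose location parameter depends on temperature through the Arrhenius relationship. The simulated lifetimes, test temperatures, and parameter values are illustrative assumptions, not values from the paper.

        # Sketch of an Arrhenius-lognormal AT model: log lifetime is normal
        # with location linear in 11605/T(K). All data here are simulated.
        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(1)
        temps_C = np.repeat([120.0, 100.0, 80.0], 30)   # accelerated temperatures
        x = 11605.0 / (temps_C + 273.15)                # Arrhenius covariate, 1/kT in 1/eV
        beta0, beta1, sigma = -12.0, 0.7, 0.5           # "true" parameters (assumed)
        log_t = beta0 + beta1 * x + rng.normal(scale=sigma, size=x.size)

        def negloglik(theta):
            b0, b1, log_sig = theta
            mu = b0 + b1 * x                            # regression on the accelerating variable
            return -stats.norm.logpdf(log_t, loc=mu, scale=np.exp(log_sig)).sum()

        fit = optimize.minimize(negloglik, x0=[0.0, 0.5, 0.0], method="Nelder-Mead")
        b0, b1, log_sig = fit.x

        # Extrapolate to a 40 C use condition, the step ATs exist for
        x_use = 11605.0 / (40.0 + 273.15)
        print("estimated median log-life at use:", b0 + b1 * x_use)

    The factor 11605 is the reciprocal of Boltzmann's constant in eV units, so beta1 plays the role of an activation energy.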

    Accelerated Destructive Degradation Tests Robust to Distribution Misspecification

    Accelerated repeated-measures degradation tests (ARMDTs) take measurements of degradation or performance on a sample of units over time. For certain products, the measurements are destructive, leading to accelerated destructive degradation test (ADDT) data. For example, testing an adhesive bond requires breaking the test specimen to measure the strength of the bond. Lognormal and Weibull distributions are often used to describe the distribution of product characteristics in life and degradation tests. When the distribution is misspecified, the lifetime quantile, often of interest to the practitioner, may differ significantly between these two distributions. In this study, under a specific ADDT, we investigate the bias and variance due to distribution misspecification. We suggest robust test plans under the criterion of minimizing the approximate mean square error.
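    A quick way to see why misspecification matters for quantiles is to fit both candidate distributions to the same sample and compare a lower-tail quantile. The following sketch does this on simulated strength data; the sample size, parameters, and choice of the 1% quantile are assumptions for illustration, not from the paper.

        # Illustrative sketch: fit lognormal and Weibull to the same data
        # and compare a lower-tail quantile, where misspecification bites.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        data = stats.lognorm.rvs(s=0.4, scale=np.exp(3.0), size=150, random_state=rng)

        # Fit both candidate models by maximum likelihood (location fixed at 0)
        s, _, scale_ln = stats.lognorm.fit(data, floc=0)
        c, _, scale_wb = stats.weibull_min.fit(data, floc=0)

        p = 0.01  # a 1% quantile, the kind of metric practitioners report
        print("lognormal q01:", stats.lognorm.ppf(p, s, scale=scale_ln))
        print("weibull   q01:", stats.weibull_min.ppf(p, c, scale=scale_wb))

    Even when the two fits look similar near the center of the data, the lower-tail quantiles can differ substantially, which is the bias the robust test plans guard against.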

    Continuous maintenance and the future – Foundations and technological challenges

    High-value and long-life products require continuous maintenance throughout their life cycle to achieve the required performance at optimum through-life cost. This paper presents the foundations and technologies required to offer such a maintenance service. Component- and system-level degradation science, assessment, and modelling, along with life cycle ‘big data’ analytics, are the two most important knowledge and skill bases required for continuous maintenance. Advanced computing and visualisation technologies will improve the efficiency of maintenance and reduce the through-life cost of the product. The future of continuous maintenance within the Industry 4.0 context also encompasses the roles of IoT, standards, and cyber security.

    Methods for planning repeated measures degradation tests

    The failure mechanism of an item can often be linked directly to some sort of degradation process. This degradation process eventually weakens the item, which then induces a failure. As system components have become highly reliable, traditional life tests, where the response is time to failure, provide few or no failures during the life of a study. For such situations, degradation data can sometimes provide more information for assessing an item's reliability. Repeated measures degradation testing is a form of degradation testing in which the engineers are able to make multiple nondestructive measurements of an item's level of degradation. For some items, however, the degradation rates at nominal use conditions are so low that no meaningful information can be extracted. Thus the engineers use accelerating methods to increase the degradation rate. Before a test can be performed, the engineers need to know the number of items to test, the points in time at which to make the measurements, and the levels of the accelerating variable to which the units should be exposed in order to achieve the best estimation precision possible. In this thesis we study test planning methods for designing repeated measures degradation and accelerated degradation tests. First, Chapter 2 provides methods for selecting the number of units and the number of measurements per unit for repeated measures degradation tests without acceleration. Selection of these testing parameters is based on the asymptotic standard error of an estimator of a function of the model parameters. These methods can also be used to assess how the estimation precision changes as a function of the number of units and measurements per item. Chapter 3 describes methods for planning repeated measures accelerated degradation tests (RMADTs), where the engineers need to know the accelerated conditions at which the items should be tested. Chapter 4 is similar to Chapter 3, but uses a Bayesian approach for planning RMADTs.
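    As a hedged illustration of the kind of planning computation Chapter 2 describes (not the thesis's actual model or values), consider a simple random-slope degradation model and the approximate standard error of the mean degradation slope as a function of the number of units and the number of measurements per unit. All parameter values below are assumptions.

        # Minimal planning sketch for the random-slope model
        #   y_ij = (beta + b_i) * t_j + eps_ij,
        #   b_i ~ N(0, sb^2),  eps_ij ~ N(0, se^2):
        # approximate SE of the mean slope vs. units n and measurements m.
        import numpy as np

        def approx_se(n_units, n_meas, sb=0.1, se=0.3, t_end=10.0):
            t = np.linspace(t_end / n_meas, t_end, n_meas)  # equally spaced times
            var_slope_per_unit = sb**2 + se**2 / np.sum(t**2)
            return np.sqrt(var_slope_per_unit / n_units)

        for n in (10, 20, 40):
            for m in (3, 6, 12):
                print(f"n={n:3d}, m={m:3d}: SE ~ {approx_se(n, m):.4f}")

        # More units always helps (1/sqrt(n)); extra measurements per unit
        # show diminishing returns once each unit's slope is well determined.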

    Aging concrete structures: a review of mechanics and concepts

    The safe and cost-efficient management of our built infrastructure is a challenging task, considering an expected service life of at least 50 years. Despite time-dependent changes in material properties, deterioration processes, and changing demands from society, the structures need to satisfy many technical requirements related to serviceability, durability, sustainability, and bearing capacity. This review paper summarizes the challenges associated with the safe design and maintenance of aging concrete structures and gives an overview of some concepts and approaches being developed to address these challenges.