2,928 research outputs found

    On A Truncated Accelerated Plan for Two Component Parallel Systems under Ramp-Stress Testing Using Masked Data for Weibull Distribution

    Several studies on the design of accelerated life tests (ALTs) have focused on a subsystem (single system), totally ignoring its internal design. In many cases it is not possible to identify the component that caused the system failure, or the cause can only be narrowed down to a subset of its components, resulting in a masked observation. This paper therefore investigates the development of a ramp-stress accelerated life test for a high-reliability parallel system consisting of two dependent components, using masked failure data. This type of testing may be very useful for a twin-engine plane or jet. A ramp stress results when the stress applied to the system increases linearly with time. A parallel system with two dependent components is considered, with the dependency modeled by the Gumbel-Hougaard copula. The stress-life relationship is modeled using the inverse power law, and the cumulative exposure model is assumed to capture the effect of changing stress. The method of maximum likelihood is then used to estimate the design parameters. The optimal plan consists in finding the optimal stress rate under the D-optimality criterion, i.e., by minimizing the reciprocal of the determinant of the Fisher information matrix. The proposed plan is illustrated with a real-life example, and a sensitivity analysis is carried out. The formulated model can help guide engineers in quickly obtaining reliability estimates for sustainable, high-reliability products.
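
    The abstract's key modeling ingredients can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's estimation code: it shows the Gumbel-Hougaard copula used for the dependency between the two components, an inverse-power-law life-stress relationship, and the D-optimality objective (the reciprocal of the determinant of the Fisher information matrix). The 3x3 matrix is a made-up stand-in for the Fisher information that the paper derives from the masked ramp-stress likelihood.

```python
import numpy as np

def gumbel_hougaard_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)),
    with theta >= 1; theta = 1 gives independence."""
    return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

def inverse_power_law_life(stress, c, p):
    """Inverse power law: characteristic life decreases as stress^(-p)."""
    return c / stress ** p

def d_criterion(fisher_info):
    """D-optimality objective: reciprocal of the determinant of the Fisher
    information matrix (smaller is better)."""
    return 1.0 / np.linalg.det(fisher_info)

# Toy illustration with a made-up 3x3 Fisher information matrix.
F = np.array([[4.0, 0.5, 0.2],
              [0.5, 2.0, 0.1],
              [0.2, 0.1, 1.5]])
print(gumbel_hougaard_copula(0.9, 0.8, theta=1.5))
print(inverse_power_law_life(stress=2.0, c=1000.0, p=2.5))
print(d_criterion(F))
```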

    Data Analysis and Experimental Design for Accelerated Life Testing with Heterogeneous Group Effects

    In accelerated life tests (ALTs), complete randomization is hardly achievable because of economic and engineering constraints. Typical experimental protocols such as subsampling or random blocks in ALTs result in a grouped structure, which leads to correlated lifetime observations. In this dissertation, a generalized linear mixed model (GLMM) approach is proposed to analyze ALT data and to find the optimal ALT design while accounting for heterogeneous group effects. Two types of ALTs are considered for data analysis. First, constant-stress ALT (CSALT) data with a Weibull failure-time distribution are modeled by a GLMM. The marginal likelihood of the observations is approximated by a quadrature rule, and maximum likelihood (ML) estimation is applied iteratively to estimate the unknown parameters, including the variance component of the random effect. Second, step-stress ALT (SSALT) data with random group effects are analyzed in a similar manner, but under the assumption of exponentially distributed failure times within each stress step. Two parameter estimation methods, from the frequentist and Bayesian points of view, are applied and compared with traditional models through a simulation study and a real example of heterogeneous SSALT data. The proposed random-effect model shows superiority in terms of reducing bias and variance in the estimation of the life-stress relationship. The GLMM approach is particularly useful for the optimal experimental design of ALTs while taking random group effects into account. Specifically, planning ALTs under a nested design structure with random test-chamber effects is studied. A greedy two-phased approach shows that different assignments of test chambers to stress conditions substantially impact the estimation of the unknown parameters. A D-optimal test plan with two test chambers is then constructed by applying the quasi-likelihood approach. Lastly, the optimal ALT planning is extended to the case of multiple sources of random effects, so that the crossed design structure is also considered along with the nested structure.
    Doctoral Dissertation, Industrial Engineering
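
    As a rough illustration of the GLMM idea described above, the sketch below approximates the marginal likelihood of one test group's failure times by Gauss-Hermite quadrature, integrating out a normally distributed random group effect. The exponential hazard model with a log-linear life-stress term, the parameter values and the toy data are all assumptions for illustration, not the dissertation's actual model or data.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def group_marginal_loglik(times, stress, beta0, beta1, sigma, n_nodes=20):
    """Marginal log-likelihood of one test group's exponential failure times,
    integrating out a N(0, sigma^2) random group effect with Gauss-Hermite quadrature.
    Assumed hazard model: log(rate) = beta0 + beta1 * stress + b_group."""
    nodes, weights = hermgauss(n_nodes)
    total = 0.0
    for x, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma * x            # change of variables for N(0, sigma^2)
        rate = np.exp(beta0 + beta1 * stress + b)
        cond_lik = np.prod(rate * np.exp(-rate * times))  # conditional exponential likelihood
        total += w * cond_lik
    return np.log(total / np.sqrt(np.pi))

# Toy data: failure times (hours) from one test chamber run at a normalized stress of 1.2.
times = np.array([120.0, 95.0, 180.0, 60.0])
print(group_marginal_loglik(times, stress=1.2, beta0=-5.0, beta1=0.8, sigma=0.3))
```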

    Validation of Ultrahigh Dependability for Software-Based Systems

    Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required of such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration that they are attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. In fact, the dependability requirements often lie near the limit of the current state of the art, or beyond it, not only in terms of the ability to satisfy them but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of these means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet the improvement is not sufficient to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
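
    One of the quantitative limits discussed above, the cost of demonstrating ultra-high reliability by testing with stable reliability, can be made concrete with a standard zero-failure testing bound. The sketch below is an illustrative back-of-the-envelope calculation, not taken from the paper: assuming a constant (Poisson) failure rate, it computes how many failure-free operating hours are needed to support a given rate claim at a given confidence level.

```python
import math

def required_failure_free_hours(target_rate, confidence):
    """Hours of failure-free operation needed to claim, with the given confidence,
    that a constant (Poisson) failure rate does not exceed target_rate.
    From 1 - exp(-rate * t) >= confidence  =>  t >= -ln(1 - confidence) / rate."""
    return -math.log(1.0 - confidence) / target_rate

# Claiming 1e-9 failures/hour at 99% confidence needs roughly 4.6e9 failure-free hours.
print(f"{required_failure_free_hours(1e-9, 0.99):.3e} hours")
```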

    Knowledge Discovery from Complex Event Time Data with Covariates

    In certain engineering applications, such as reliability engineering, complex types of data are encountered that require novel methods of statistical analysis. Handling covariates properly while managing missing values is a challenging task, and these types of issues arise frequently in reliability data analysis. Specifically, accelerated life tests (ALTs) are usually conducted by exposing test units of a product to more-severe-than-normal conditions to expedite the failure process. The resulting lifetime and/or censoring data are often modeled by a probability distribution together with a life-stress relationship. However, if the selected probability distribution and life-stress relationship cannot adequately describe the underlying failure process, the resulting reliability prediction will be misleading. To seek new mathematical and statistical tools that facilitate the modeling of such data, a critical question is: can we find a family of versatile probability distributions, along with a general life-stress relationship, to model complex lifetime data with covariates? In this dissertation, a more general method is proposed for modeling lifetime data with covariates. Reliability estimation based on complete failure-time data, or on failure-time data with certain types of censoring, has been extensively studied in statistics and engineering. However, the actual failure times of individual components are unavailable in many applications; instead, only aggregate failure-time data are collected by actual users for technical and/or economic reasons. When dealing with such data for reliability estimation, practitioners often face the challenge of selecting the underlying failure-time distribution and the corresponding statistical inference method. So far, only the Exponential, Normal, Gamma and Inverse Gaussian (IG) distributions have been used in analyzing aggregate failure-time data, because these distributions have closed-form expressions for such data. However, this limited choice of probability distributions cannot satisfy the extensive needs of a variety of engineering applications. Phase-type (PH) distributions are robust and flexible in modeling failure-time data, as they can mimic a large collection of probability distributions of nonnegative random variables arbitrarily closely by adjusting their model structure. In this paper, PH distributions are utilized, for the first time, in reliability estimation based on aggregate failure-time data. To this end, a maximum likelihood estimation (MLE) method and a Bayesian alternative are developed. For the MLE method, an expectation-maximization (EM) algorithm is developed to estimate the model parameters, and the corresponding Fisher information is used to construct confidence intervals for the quantities of interest. For the Bayesian method, a procedure for point and interval estimation is also introduced. Several numerical examples show that the proposed PH-based reliability estimation methods are quite flexible and alleviate the burden of selecting a probability distribution when the underlying failure-time distribution is general or even unknown.
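
    For readers unfamiliar with phase-type (PH) distributions, the sketch below shows the basic formulas the abstract relies on, the density f(t) = alpha * exp(Tt) * t0 with exit-rate vector t0 = -T*1 and the CDF F(t) = 1 - alpha * exp(Tt) * 1, evaluated with a matrix exponential. The two-phase sub-generator and initial distribution are toy values for illustration; the EM and Bayesian estimation procedures for aggregate data are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

def ph_pdf(t, alpha, T):
    """Density of a phase-type distribution: f(t) = alpha * exp(T t) * t0,
    where t0 = -T @ 1 is the exit-rate vector."""
    t0 = -T @ np.ones(T.shape[0])
    return float(alpha @ expm(T * t) @ t0)

def ph_cdf(t, alpha, T):
    """CDF of a phase-type distribution: F(t) = 1 - alpha * exp(T t) * 1."""
    return float(1.0 - alpha @ expm(T * t) @ np.ones(T.shape[0]))

# Toy 2-phase example: initial distribution alpha and sub-generator matrix T.
alpha = np.array([1.0, 0.0])
T = np.array([[-2.0, 2.0],
              [0.0, -1.0]])
print(ph_pdf(1.0, alpha, T), ph_cdf(1.0, alpha, T))
```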

    Bayesian autoencoders for data-driven discovery of coordinates, governing equations and fundamental constants

    Recent progress in autoencoder-based sparse identification of nonlinear dynamics (SINDy) under ℓ1 constraints allows joint discovery of governing equations and latent coordinate systems from spatio-temporal data, including simulated video frames. However, it is challenging for ℓ1-based sparse inference to perform correct identification on real data due to noisy measurements and often limited sample sizes. To address data-driven discovery of physics in the low-data, high-noise regime, we propose Bayesian SINDy autoencoders, which incorporate a hierarchical Bayesian sparsifying prior: the spike-and-slab Gaussian Lasso. The Bayesian SINDy autoencoder enables the joint discovery of governing equations and coordinate systems with a theoretically guaranteed uncertainty estimate. To resolve the challenging computational tractability of the hierarchical Bayesian setting, we adapt an adaptive empirical Bayesian method with stochastic gradient Langevin dynamics (SGLD), which gives a computationally tractable way of performing Bayesian posterior sampling within our framework. The Bayesian SINDy autoencoder achieves better physics discovery with less data and fewer training epochs, along with valid uncertainty quantification, as suggested by the experimental studies. The Bayesian SINDy autoencoder can be applied to real video data, with accurate physics discovery that correctly identifies the governing equation and provides a close estimate of standard physical constants such as the gravitational acceleration g, for example in videos of a pendulum.
    Comment: 28 pages, 11 figures
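
    The SGLD sampler mentioned above can be summarized in one update rule. The sketch below is a generic SGLD step, not the paper's spike-and-slab SINDy implementation: the parameter vector, the gradient callables and the toy Gaussian target are placeholders used only to show the form of the update (a half-step drift from the scaled stochastic gradient of the log-posterior plus injected Gaussian noise with variance equal to the step size).

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_minibatch, n_total, n_batch, step_size, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update:
    theta <- theta + (eps/2) * (grad log prior + (N/n) * minibatch grad log-likelihood)
                   + N(0, eps) noise."""
    drift = grad_log_prior(theta) + (n_total / n_batch) * grad_log_lik_minibatch(theta)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * drift + noise

# Toy usage: sample from a 1-D standard normal posterior (flat prior, N(0, 1) "likelihood").
rng = np.random.default_rng(0)
theta = np.array([3.0])
for _ in range(1000):
    theta = sgld_step(theta,
                      grad_log_prior=lambda th: np.zeros_like(th),
                      grad_log_lik_minibatch=lambda th: -th,   # full data used, so N/n = 1
                      n_total=1, n_batch=1, step_size=0.01, rng=rng)
print(theta)
```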

    Physics-based prognostic modelling of filter clogging phenomena

    In industry, contaminant filtration is a common process used to achieve a desired level of purification, since contaminants in liquids such as fuel may lead to performance drops and rapid wear propagation. Clogging of the filter is generally the primary failure mode leading to the replacement or cleaning of the filter, and a clogged filter can result in cascading failures and weak system performance. Even though filtration and clogging phenomena, and their effects on several observable parameters, have been studied for quite some time in the literature, the progression of clogging and its use for prognostic purposes have not yet been addressed. In this work, a physics-based clogging-progression model is presented. The proposed model, which is based on a well-known pressure-drop equation, is able to represent the three phases of the clogging phenomenon, the last of which had not previously been modelled in the literature. In addition, the model is integrated with particle filters to predict future clogging levels and to estimate the remaining useful life of fuel filters. The approach has been applied to data collected from an experimental rig in a laboratory environment, where the pressure drop across the filter, the flow rate, and filter-mesh images were recorded throughout accelerated degradation experiments. The remaining useful lives of the filters used in the experimental rig are reported in the paper. The results show that the presented methodology provides significantly accurate and precise prognostic results.
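
    The particle-filter component of the approach can be illustrated with a minimal bootstrap filter. In the sketch below, the exponential growth model for the pressure drop, the noise levels, the clogging threshold and the simulated readings are all invented for illustration; the paper's actual physics-based pressure-drop equation and experimental data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(particles, dt=1.0):
    """Hypothetical degradation model: pressure drop grows exponentially with process noise."""
    growth = rng.normal(0.02, 0.005, size=particles.shape)
    return particles * np.exp(growth * dt)

def update(particles, measurement, meas_std=0.5):
    """Bootstrap particle-filter update: weight by measurement likelihood, then resample."""
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def predict_rul(particles, threshold, max_steps=500):
    """Propagate particles until they cross the clogging threshold; return the RUL distribution."""
    rul = np.full(len(particles), max_steps, dtype=float)
    state = particles.copy()
    for step in range(1, max_steps + 1):
        state = propagate(state)
        crossed = (state >= threshold) & (rul == max_steps)
        rul[crossed] = step
    return rul

particles = rng.normal(10.0, 0.2, size=1000)          # initial pressure-drop estimate (kPa)
for z in [10.3, 10.5, 10.8, 11.2]:                    # simulated pressure-drop readings
    particles = propagate(particles)
    particles = update(particles, z)
rul = predict_rul(particles, threshold=20.0)
print("median RUL (steps):", np.median(rul))
```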

    Cross layer reliability estimation for digital systems

    Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time, but it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost. One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated-circuit performance and energy efficiency is a top concern. Attention should be paid to tailoring techniques that improve the reliability of a system on the basis of its requirements, ending up with cost-effective solutions that favor the success of the product on the market. Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware and software layers) to implement efficient cross-layer fault-mitigation mechanisms. Fault-tolerance mechanisms are carefully implemented at different layers, from the technology level up to the software layer, to optimize the system by exploiting the inherent capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable the exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost and reliability. This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.

    Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems

    In recent years, physics-of-failure (POF) modeling, also referred to as mechanistic failure modeling, has emerged as a powerful approach for the reliability assessment of mechanical components. The POF approach to reliability utilizes scientific knowledge of degradation processes, the load profile, the component architecture, material properties and environmental conditions to identify and model the potential failure mechanisms that lead to failure of the item. POF models are usually used to construct the component's time-to-failure distribution, which is subsequently used in probabilistic reliability prediction. The distribution of time-to-failure is conditioned on the operational and environmental conditions, which can vary significantly in a dynamic system, and POF modeling provides many features for including the dynamic variability of these influential factors. Nevertheless, despite considerable achievements in component reliability assessment, the POF approach lacks a formal structure that would make it applicable at the system level. This issue, however, may be viewed from another perspective: POF models are treated the same as traditional hierarchical reliability models of the system, such as fault/event trees and reliability block diagrams, which are not concerned with capturing the causality of failures. In this research, a framework is proposed to bring POF-based reliability models of components into system-level reliability assessment. Consider a virtual environment in which each component is replaced with a piece of intelligent software that not only contains all the properties of the component but is also able to mimic all its behaviors. This substitute contains all available knowledge about the failure of the component and acts autonomously. The replica is also able to communicate with other components; it not only has memory to keep a history of events, but is also able to share information so as to capture functional dependencies. In this research, POF models are used to build a robust real-time simulation that mimics the failure processes applicable to the components and the system. With this approach, system-level modeling becomes as simple as checking the status of the components at any given time. This research is an attempt to borrow the "agent autonomy" concept from artificial intelligence (AI) and adapt it to system-level reliability modeling. Agent programming is one of the most advanced methods for modeling Multi-Agent Systems (MAS). In this dissertation, the terminology of agent autonomy is recast in the reliability-engineering context using case studies, such that the equivalent terms and conditions are defined and the practical advantages are highlighted.
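
    A toy version of the "component agent" view described above might look like the sketch below: each agent carries its own failure behavior (here a simple Weibull draw standing in for a full physics-of-failure simulation) and can be queried for its status at any time, so the reliability of a two-component parallel system reduces to polling the agents. The class name, parameter values and Monte Carlo setup are illustrative assumptions, not the dissertation's framework.

```python
import numpy as np

rng = np.random.default_rng(2)

class ComponentAgent:
    """Minimal 'component agent': holds its own (illustrative) failure model
    and reports its status at any queried time."""
    def __init__(self, shape, scale):
        # Illustrative Weibull time-to-failure standing in for a POF-driven simulation.
        self.time_to_failure = scale * rng.weibull(shape)

    def is_working(self, t):
        return t < self.time_to_failure

def parallel_system_working(agents, t):
    """A parallel system works as long as at least one component agent is working."""
    return any(a.is_working(t) for a in agents)

# Monte Carlo estimate of two-component parallel-system reliability at t = 1000 hours.
n_runs, t_query = 10000, 1000.0
survivals = sum(
    parallel_system_working([ComponentAgent(2.0, 1500.0), ComponentAgent(2.0, 1500.0)], t_query)
    for _ in range(n_runs)
)
print("estimated system reliability:", survivals / n_runs)
```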