
    A cost model for managing producer and consumer risk in availability demonstration testing.

    Evaluation and demonstration of system performance against specified requirements is an essential element of risk reduction during the design, development, and production phases of a product lifecycle. Typical demonstration testing focuses on reliability and maintainability without consideration for availability. One practical reason is that demonstration testing for availability cannot be performed until very late in the product lifecycle, when production-representative units become available and system integration is completed. At this point, the requirement to field the system often takes priority over demonstration of availability performance. Without proper validation testing, the system can be fielded with reduced mission readiness and increased lifecycle cost. The need exists for availability demonstration testing (ADT) with emphasis on managing risk while minimizing the cost to the user. Risk management must ensure a test strategy that adequately considers producer and consumer risk objectives. This research proposes a methodology for ADT that gives managers and decision makers an improved ability to distinguish between high- and low-availability systems. A new availability demonstration test methodology is defined that provides a useful strategy for the consumer to mitigate significant risk without sacrificing cost or time to field a product or capability. A surface navy electronic system case study supports the practical implementation of this methodology using no more than a simple spreadsheet tool for numerical analysis. Development of this method required three significant components which add to the existing body of knowledge. The first was a comparative performance assessment of existing ADT strategies to determine whether any preferences exist. The next component was the development of an approach for ADT design that effectively considers time constraints on the test duration.
The third component was the development of a procedure for ADT design which provides awareness of risk levels in time-constrained ADT and offers an evaluation of alternatives to select the best sub-optimal test plan. Comparison of the different ADT strategies utilized a simulation model to evaluate runs specified by a five-factor, full-factorial design of experiments. Analysis of variance verified that ADT strategies are significantly different with respect to the output responses: quality of decision and timeliness. Analysis revealed that the fixed-number-of-failures ADT strategy has the lowest deviation from estimated producer and consumer risk, the measure of quality. The sequential ADT strategy had an average error 3.5 times larger, and fixed-test-time strategies displayed error rates 8.5 to 12.7 times larger than the best. The fixed-test-time strategies had superior performance in timeliness, measured by average test duration. The sequential strategy took 24% longer on average, and the fixed-number-of-failures strategy took 2.5 times longer on average than the best. The research evaluated the application of a time constraint on ADT and determined that an increase in producer and consumer risk levels results when test duration is restricted below its optimal value. It also revealed that substitution of a specified time constraint formatted for a specific test strategy produced a pair of dependent relationships between risk levels and the critical test value. These relationships define alternative test plans and can be analyzed in a cost context to compare and select the low-cost alternative test plan. This result led to the specification of a support tool to enable a decision maker to understand changes to α and β resulting from constraint of the test duration, and to make decisions based on the true risk exposure. The output of this process is a time-constrained test plan with known producer and consumer risk levels.
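The producer and consumer risks discussed above can be computed directly for a simple fixed-sample test plan. The sketch below is illustrative only — the plan structure, function names, and parameter values are assumptions, not the dissertation's actual procedure: n independent up/down inspections, with acceptance when at most c inspections find the system down.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def adt_risks(n, c, a0, a1):
    """Producer and consumer risk for a hypothetical fixed-sample
    availability demonstration test: n independent up/down inspections,
    accept the system if at most c inspections find it down.
    a0: acceptable availability (producer's design target)
    a1: rejectable availability (consumer's minimum)
    """
    # Producer risk alpha: a good system (availability a0) is rejected,
    # i.e. more than c "down" observations occur when P(down) = 1 - a0.
    alpha = 1 - binom_cdf(c, n, 1 - a0)
    # Consumer risk beta: a bad system (availability a1) is accepted.
    beta = binom_cdf(c, n, 1 - a1)
    return alpha, beta

alpha, beta = adt_risks(n=100, c=5, a0=0.98, a1=0.90)
```

A calculation of this size fits comfortably in a spreadsheet; shrinking n (the analogue of constraining test duration) and re-evaluating shows how α and β trade off against each other, which is the effect the time-constrained analysis quantifies.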

    Accelerated degradation tests planning with competing failure modes

    Accelerated degradation tests (ADT) have been widely used to assess the reliability of products with long lifetimes. For many products, environmental stress not only accelerates their degradation rate but also elevates the probability of traumatic shocks. When random traumatic shocks occur during an ADT, it is possible that the degradation measurements cannot be taken afterward, which brings challenges to reliability assessment. In this paper, we propose an ADT optimization approach for products suffering from both degradation failures and random shock failures. The degradation path is modeled by a Wiener process. Under various stress levels, the arrival process of random shocks is assumed to follow a nonhomogeneous Poisson process. Parameters of acceleration models for both failure modes need to be estimated from the ADT. Three common optimality criteria based on the Fisher information are considered and compared to optimize the ADT plan under a given number of test units and a predetermined test duration. Optimal two- and three-level ADT plans are obtained by numerical methods. We use the general equivalence theorems to verify the global optimality of ADT plans. A numerical example is presented to illustrate the proposed methods. The result shows that the optimal ADT plans in the presence of random shocks differ significantly from the traditional ADT plans. Sensitivity analysis is carried out to study the robustness of optimal ADT plans with respect to changes in the planning inputs.
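As a rough illustration of the competing-failure setup described above, the following sketch simulates one test unit whose Wiener-process degradation races against randomly arriving traumatic shocks. All names and parameter values are assumptions; for simplicity the shock process uses a constant rate rather than the paper's nonhomogeneous Poisson process.

```python
import random

def simulate_failure_time(drift, sigma, threshold, shock_rate,
                          dt=0.01, t_max=1000.0, rng=random):
    """Simulate one unit under competing risks: Wiener-process degradation
    (failure when the path crosses `threshold`) versus traumatic shocks
    arriving as a Poisson process with constant rate `shock_rate`.
    Returns (time, mode) with mode 'degradation', 'shock', or 'censored'.
    """
    x, t = 0.0, 0.0
    while t < t_max:
        t += dt
        # Euler discretisation of the Wiener process increment:
        # dX = drift*dt + sigma*sqrt(dt)*N(0, 1).
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        if x >= threshold:
            return t, "degradation"
        # Shock arrival in (t, t+dt] with probability ~ shock_rate * dt.
        if rng.random() < shock_rate * dt:
            return t, "shock"
    return t_max, "censored"
```

Repeating such runs at several stress levels (which would scale `drift` and `shock_rate` through acceleration models) gives the mix of failure modes that the Fisher-information-based planning criteria have to account for.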

    Theory and Practice of Supply Chain Synchronization

    In this dissertation, we develop strategies to synchronize component procurement in assemble-to-order (ATO) production and overhaul operations. We focus on the high-tech and mass customization industries, which are not only considered very important for creating or keeping U.S. manufacturing jobs but also suffer most from component inventory burden. In the second chapter, we address the deterministic joint replenishment inventory problem with batch size constraints (JRPB). We characterize system regeneration points, derive a closed-form expression of the average product inventory, and formulate the problem of finding the optimal joint reorder interval to minimize inventory and ordering costs per unit of time. Thereafter, we discuss exact solution approaches and the case of variable reorder intervals. Computational examples demonstrate the power of our methodology. In the third chapter, we incorporate stochastic demand into the JRPB. We propose a joint part replenishment policy that balances inventory and ordering costs while providing a desired service level. A case study and guided computational experiments show the magnitudes of savings that are possible using our methodology. In the fourth chapter, we show how lack of synchronization in assembly systems with long and highly variable component supply lead times can rapidly degrade system performance. We develop a full synchronization strategy through time buffering of component orders, which not only guarantees meeting planned production dates but also drastically reduces inventory holding costs. A case study has been carried out to demonstrate the practical relevance, assess potential risks, and evaluate phased implementation policies. The fifth chapter explores the use of condition information from a large number of distributed working units in the field to improve the management of the inventory of spare parts required to maintain those units.
Synchronization is again paramount here, since spare part inventory needs to adapt to the condition of the engine fleet. All needed parts must be available to complete the overhaul of a unit. We develop a complex simulation environment to assess the performance of different inventory policies and the value of health monitoring. The sixth chapter concludes this dissertation and outlines future research plans as well as opportunities.
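The joint replenishment structure discussed in the second chapter can be illustrated with the classic unconstrained cost model; the dissertation's JRPB adds batch-size constraints, which this sketch omits, and all names and numbers below are assumptions. A major setup cost is shared every base period T, and item i is ordered every m_i·T periods.

```python
def jrp_cost(T, multipliers, S, s, h, d):
    """Average cost per unit time for the classic (unconstrained) joint
    replenishment problem: a major setup S is paid every base period T,
    item i is ordered every m_i * T at minor setup s_i, and carries
    holding cost h_i per unit per period against demand rate d_i.
    """
    major = S / T
    minor = sum(si / (mi * T) for si, mi in zip(s, multipliers))
    holding = sum(hi * di * mi * T / 2 for hi, di, mi in zip(h, d, multipliers))
    return major + minor + holding

def best_base_period(multipliers, S, s, h, d):
    """Closed-form optimal base period T for fixed multipliers
    (EOQ-style square-root formula)."""
    num = 2 * (S + sum(si / mi for si, mi in zip(s, multipliers)))
    den = sum(hi * di * mi for hi, di, mi in zip(h, d, multipliers))
    return (num / den) ** 0.5
```

Searching over the integer multipliers on top of this closed form is the usual solution pattern; the batch-size constraints and regeneration-point analysis in the dissertation refine exactly this trade-off between ordering and holding costs.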

    Addressing Complexity and Intelligence in Systems Dependability Evaluation

    Engineering and computing systems are increasingly complex, intelligent, and open adaptive. When it comes to the dependability evaluation of such systems, there are certain challenges posed by the characteristics of “complexity” and “intelligence”. The first aspect of complexity is the dependability modelling of large systems with many interconnected components and dynamic behaviours such as priority, sequencing, and repairs. To address this, the thesis proposes a novel hierarchical solution to dynamic fault tree analysis using semi-Markov processes. A second aspect of complexity is the environmental conditions that may impact dependability and their modelling. For instance, weather and logistics can influence maintenance actions and hence the dependability of an offshore wind farm. The thesis proposes a semi-Markov-based maintenance model called the “Butterfly Maintenance Model (BMM)” to model this complexity and accommodate it in dependability evaluation. A third aspect of complexity is the open nature of systems of systems, like swarms of drones, which makes complete design-time dependability analysis infeasible. To address this aspect, the thesis proposes a dynamic dependability evaluation method using fault trees and Markov models at runtime. The challenge of “intelligence” arises because Machine Learning (ML) components do not exhibit programmed behaviour; their behaviour is learned from data. However, in traditional dependability analysis, systems are assumed to be programmed or designed. When a system has learned from data, a distributional shift of operational data from training data may cause the ML to behave incorrectly, e.g., misclassify objects. To address this, a new approach called SafeML is developed that uses statistical distance measures for monitoring the performance of ML against such distributional shifts.
The thesis develops the proposed models and evaluates them on case studies, highlighting improvements to the state of the art, limitations, and future work.
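SafeML's core idea, monitoring ML inputs with statistical distance measures, can be sketched with a two-sample Kolmogorov-Smirnov statistic. The specific distance, threshold value, and function names below are illustrative assumptions, not the thesis's implementation.

```python
def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        # Step past all ties on both sides before comparing the ECDFs.
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def drift_alarm(train_sample, live_sample, threshold=0.2):
    """Flag a distributional shift between training-time and operational
    feature samples (threshold is an illustrative, uncalibrated value)."""
    return ks_distance(train_sample, live_sample) > threshold
```

In a SafeML-style deployment, one such monitor per feature compares a buffered window of operational inputs against the training distribution and lowers confidence in the ML output when the distance grows too large.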