
    Bayesian reliability demonstration with multiple independent tasks.

    We consider optimal testing of a system in order to demonstrate reliability with regard to its use in a process after testing, where the system has to function for different types of tasks, which we assume to be independent. We explicitly assume that testing reveals zero failures. The optimal numbers of tasks to be tested are derived by optimisation of a cost criterion, taking into account the costs of testing and the costs of failures in the process after testing, assuming that such failures are not catastrophic to the system. Cost and time constraints on testing are also included in the analysis. We focus on studying the optimal numbers of tests for different types of tasks, depending on the arrival rate of tasks in the process and the costs involved. We briefly compare the results of this study with optimal test numbers in a similar setting, but with an alternative optimality criterion which is more suitable in the case of catastrophic failures, as presented elsewhere. For these two different optimality criteria, the optimal numbers to be tested depend similarly on the costs of testing per type and on the arrival rates of tasks in the process after testing.
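
    As a rough illustration of the kind of cost trade-off described in this abstract, the sketch below assumes a Beta(a, b) prior on the failure probability of a task type, so that after n zero-failure tests the posterior predictive failure probability is a/(a + b + n), and a simple linear cost structure (a test cost per task plus an expected failure cost proportional to the arrival rate). The prior, the cost structure, and all numerical values are illustrative assumptions, not the model of the paper; with independent task types and no shared budget or time constraint, such a criterion separates per type, so a single-type search is shown.

```python
from math import inf

def expected_cost(n, arrival, c_test, c_fail, a=1.0, b=1.0):
    """Expected total cost of n zero-failure tests of one task type,
    assuming a Beta(a, b) prior on the failure probability: the posterior
    predictive failure probability after n successes is a / (a + b + n)."""
    p_fail = a / (a + b + n)
    return c_test * n + c_fail * arrival * p_fail

def optimal_tests(arrival, c_test, c_fail, n_max=10_000, a=1.0, b=1.0):
    """Brute-force search for the number of zero-failure tests minimising
    the illustrative cost criterion for a single task type."""
    best_n, best_cost = 0, inf
    for n in range(n_max + 1):
        cost = expected_cost(n, arrival, c_test, c_fail, a, b)
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n, best_cost

if __name__ == "__main__":
    # Hypothetical numbers: 500 tasks expected in the process after testing,
    # unit test cost 1, cost 200 per failure in the process.
    n_opt, cost = optimal_tests(arrival=500, c_test=1.0, c_fail=200.0)
    print(f"optimal number of tests: {n_opt}, expected cost: {cost:.2f}")
```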

    Maximum group sizes for simultaneous testing in high potential risk scenarios

    When tests are performed in scenarios such as reliability demonstration, two extreme possibilities are to perform all required tests simultaneously or to test all units sequentially. From the perspective of time for testing, the former is typically preferred, but in high potential risk scenarios, for example in the case of possibly disastrous results if a tested unit fails, it is better to have the opportunity to stop testing after a failure occurs. An analogous situation arises in medical testing, with patients being the ‘units’, when new medication is to be tested to confirm its functionality while possibly severe (side) effects are not yet known. There is a wide range of test scenarios in between these two extremes, with groups of units being tested simultaneously. This article discusses such scenarios in a basic setting, assuming that the total number of required tests has been set, for example based on other criteria or legislation. A new criterion for guidance on suitable test group sizes is presented. Throughout, the aim is for high reliability, with testing stopped in case any unit fails, following which the units will not be approved. Any subsequent actions, such as improvement of the units or dismissing them, are not part of the main considerations in this article. While in practice the development of complex models and decision approaches may appear to be required, a straightforward argument is presented, which leads to results that can be widely applied and easily communicated.
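
    To make the trade-off between testing time and simultaneous exposure concrete, the sketch below compares candidate group sizes under the assumption of independent unit failures with a common, hypothetical failure probability: larger groups need fewer rounds, but put more units on test before a failure can stop the process. This is only an illustrative calculation under those assumptions, not the criterion proposed in the article.

```python
from math import ceil

def expected_units_tested(total_units, group_size, p_fail):
    """Expected number of units actually put on test when units are tested in
    consecutive rounds of `group_size`, and testing stops as soon as any unit
    in a round fails. Assumes independent failures with probability `p_fail`
    per unit (a hypothetical value, not taken from the article)."""
    q = 1.0 - p_fail
    expected = 0.0
    units_done = 0
    for _ in range(ceil(total_units / group_size)):
        size = min(group_size, total_units - units_done)
        # This round is run only if all previously tested units passed.
        expected += size * q ** units_done
        units_done += size
    return expected

def compare_group_sizes(total_units, p_fail, sizes):
    """Contrast number of test rounds (time) against expected number of
    units put on test (exposure) for several candidate group sizes."""
    for g in sizes:
        rounds = ceil(total_units / g)
        exposed = expected_units_tested(total_units, g, p_fail)
        print(f"group size {g:3d}: {rounds:3d} rounds, "
              f"{exposed:6.1f} units expected on test")

if __name__ == "__main__":
    # Hypothetical example: 60 required tests, per-unit failure probability 2%.
    compare_group_sizes(total_units=60, p_fail=0.02, sizes=[1, 5, 10, 20, 60])
```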