45 research outputs found

    EOQ model where a portion of the defectives can be used as perfect quality

    No full text
    [[abstract]]The main purpose of this article is to investigate an economic order quantity model for products with imperfect quality, where the defective items are screened out by a 100% inspection process and can then be sold in a single batch at the end of the inspection process. However, differing from previous studies on the topic, we assume in this article that a portion of the defectives (called the acceptable defective part) can be utilised as perfect quality and that the utilisation of the acceptable defective part will reduce the consumption of the remaining perfect quality items after the defectives are sold. In practice, a number of goods (e.g. clothes, sporting shoes, purses, porcelain dishes, fruits, vegetables, etc.) have this characteristic. First, we construct the model in terms of annual profit and find the optimal order quantity with a constant defective percentage. Next, we determine the optimal order quantity for the case where the defective percentage follows a uniform distribution by maximising the expected annual profit. For both cases, two properties of the optimal order quantity and the corresponding annual profit are also given. Finally, two numerical examples are provided to illustrate the proposed models
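
    As a rough illustration of the constant-defective-percentage case, the sketch below maximizes an annual profit function numerically. The profit expression, all parameter values, and the treatment of holding cost are simplified assumptions in the spirit of Salameh–Jaber-type EOQ models with imperfect quality, not the article's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative parameters (all assumptions, not taken from the article)
D = 50_000      # annual demand
K = 100.0       # fixed ordering cost per cycle
c = 25.0        # unit purchase cost
s = 50.0        # selling price of perfect-quality items
v = 20.0        # salvage price of the unusable defectives
d = 0.5         # unit screening cost
h = 5.0         # annual holding cost per unit
x = 175_200     # annual screening rate
p = 0.02        # constant defective percentage
beta = 0.4      # portion of defectives usable as perfect quality

def annual_profit(Q):
    """Sketch of the annual profit for the constant-p case.

    Effective perfect-quality stock per lot: Q*(1 - p) + beta*p*Q,
    since the acceptable defective part substitutes for perfect items.
    The unusable defectives, (1 - beta)*p*Q, are salvaged in one batch.
    """
    good = Q * (1.0 - p) + beta * p * Q          # items consumed by demand
    cycle = good / D                              # cycle length (years)
    revenue = s * good + v * (1.0 - beta) * p * Q
    cost = K + (c + d) * Q
    # crude average-inventory approximation for the holding cost
    holding = h * (Q * (1.0 - p) * cycle / 2.0 + p * Q**2 / x)
    return (revenue - cost - holding) / cycle

res = minimize_scalar(lambda Q: -annual_profit(Q), bounds=(100, 20_000),
                      method="bounded")
print(f"optimal order quantity ~ {res.x:.0f}, annual profit ~ {-res.fun:.0f}")
```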

    Optimal selection of the most reliable product with degradation data

    No full text
    [[abstract]]At the research and development stage of a product, it is a great challenge for the manufacturer to select the most reliable design among several competing product designs which are highly reliable, since few (or even no) failures can be obtained by using traditional life tests or accelerated life tests. In such cases, if there exist product characteristics whose degradation over time can be related to reliability, then collecting degradation data can provide information about product reliability. This paper proposes a systematic approach to the selection problem with degradation data. First, an intuitively appealing selection rule is proposed, and then the optimal test plan is derived by using the criterion of minimizing the total experimental cost. The sample size, inspection frequency, and termination time needed by the selection rule for each of the competing designs are computed by solving a nonlinear integer programming problem subject to a minimum probability of correct selection. Finally, an example is provided to illustrate the proposed method
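
    A minimal simulation sketch of the selection idea: estimate each design's MTTF from its degradation paths and select the design with the largest estimate. The linear degradation model, the lognormal rate parameters, the failure threshold w, and the plan (n, f, L) are all illustrative assumptions, not the paper's actual selection rule or test plan.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumptions): k competing designs, linear degradation
# y(t) = rate * t, failure when y crosses the threshold w, so the lifetime
# is w / rate; rates are lognormal with design-specific parameters.
w = 10.0                     # failure threshold (assumption)
mu = [-4.0, -4.2, -3.9]      # log-rate means for 3 designs (assumptions)
sigma = 0.3
n, f, L = 20, 50.0, 500.0    # sample size, inspection interval, termination

def estimate_mttf(mu_i):
    """Fit per-unit degradation slopes from noisy inspections and
    estimate the design's MTTF as E[w / rate] under a lognormal rate fit."""
    t = np.arange(f, L + f, f)
    rates = rng.lognormal(mu_i, sigma, size=n)
    y = rates[:, None] * t + rng.normal(0, 0.05, (n, t.size))  # noisy paths
    slopes = y @ t / (t @ t)                  # least-squares slope per unit
    m, s2 = np.log(slopes).mean(), np.log(slopes).var(ddof=1)
    return np.exp(np.log(w) - m + s2 / 2.0)   # E[w / rate], lognormal rate

mttf_hat = [estimate_mttf(m_i) for m_i in mu]
print("estimated MTTFs:", np.round(mttf_hat, 1))
print("selected design:", int(np.argmax(mttf_hat)) + 1)
```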

    Designing a Degradation Test by Minimizing the Variance of Estimating a Product's Mean-Time-To-Failure

    No full text
    [[abstract]]Degradation tests are widely used to assess the reliability of highly reliable products. The experimental cost and the precision of the estimate of a product's lifetime in conducting a degradation test heavily depend on some decision variables such as the inspection frequency, sample size and termination time. An inappropriate choice of these decision variables not only wastes experimental resources but also reduces the precision of estimating the product's reliability. This paper deals with the optimal design of a degradation test. Given the constraint that the total cost of the experiment should not exceed a pre-determined budget, the optimal combination of sample size, inspection frequency, and measurement times is determined by minimizing the variance of the estimator of the product's mean-time-to-failure. The optimization problem is formulated as a nonlinear integer programming problem. An illustrative example demonstrates the effectiveness of the proposed method
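
    The optimization can be sketched as a brute-force search over integer decision variables. The cost structure and the variance proxy below are illustrative assumptions; the paper's actual criterion is the variance of the MTTF estimator under its degradation model.

```python
import itertools
import numpy as np

# Planning sketch (all parameters are illustrative assumptions, not the
# paper's): n units, inspections every f hours, m inspections per unit,
# termination time m*f.  Cost: setup Co, per-unit Cs, per-inspection Ci,
# and chamber-operation cost Cop per hour of test time.
Co, Cs, Ci, Cop, budget = 500.0, 50.0, 2.0, 0.5, 4000.0
sigma2 = 1.0  # variance of the unit-level degradation noise (assumption)

def cost(n, f, m):
    return Co + Cs * n + Ci * n * m + Cop * f * m

def var_mttf_proxy(n, f, m):
    """Crude proxy for the variance of the MTTF estimator: the
    least-squares slope variance with inspections at t_j = j*f,
    which shrinks with both n and sum(t_j^2)."""
    t = f * np.arange(1, m + 1)
    return sigma2 / (n * np.sum(t**2))

best = min(
    (plan for plan in itertools.product(range(5, 41),      # n
                                        (10, 20, 50, 100),  # f
                                        range(5, 51))       # m
     if cost(*plan) <= budget),
    key=lambda plan: var_mttf_proxy(*plan),
)
print("optimal (n, f, m):", best, "cost:", cost(*best))
```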

    Designing a screening experiment with a reciprocal Weibull degradation rate

    No full text
    [[abstract]]Degradation tests and design of experiments are powerful techniques to improve the reliability of highly reliable products. With respect to a resolution III experiment with a linearized degradation model where the degradation rate follows a lognormal distribution, Yu and Chiao [Yu, H. F., & Chiao, C. H. (2002). An optimal designed degradation experiment for reliability improvement. IEEE Transactions on Reliability, 51(4), 427–433] addressed the problem of how to determine the optimal settings of decision variables such as the inspection frequency, the sample size, and the termination time for each run, which influence both the precision of identifying significant factors and the experimental cost. In practical applications, the Weibull and lognormal distributions are much alike and may both fit lifetime data well; however, their predictions may differ significantly. In this paper, we deal with the optimal design of a resolution III experiment with a linearized degradation model where the degradation rate follows a reciprocal Weibull distribution. First, an intuitively appealing identification rule is proposed. Next, under the constraints of a minimum probability of correct decision and a maximum probability of incorrect decision of the proposed identification rule, the optimal test plan is derived by using the criterion of minimizing the total cost of the experiment. An example is provided to demonstrate the proposed method. Finally, a simulation study is also provided to discuss the effects of mis-specification between the models of Yu and Chiao (2002) and the present paper on identification efficiency
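
    The practical point that the two rate distributions look alike yet imply different lifetime predictions can be illustrated as follows: fit both a lognormal and a reciprocal Weibull (Fréchet) model to the same sample of degradation rates and compare the implied lifetime percentiles. All numerical settings are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative comparison (assumptions throughout): with linear degradation
# and failure threshold w, lifetime = w / rate.  Fit two rate models to the
# same sample and compare the implied lower lifetime percentile.
w = 10.0
rates = stats.invweibull.rvs(c=4.0, scale=0.02, size=200, random_state=rng)

ln_shape, _, ln_scale = stats.lognorm.fit(rates, floc=0)
iw_shape, _, iw_scale = stats.invweibull.fit(rates, floc=0)

p = 0.10   # lower lifetime percentile of interest
# lifetime percentile t_p = w / (upper (1-p) rate percentile)
for name, dist, args in [("lognormal", stats.lognorm, (ln_shape, 0, ln_scale)),
                         ("recip. Weibull", stats.invweibull,
                          (iw_shape, 0, iw_scale))]:
    t_p = w / dist.ppf(1 - p, *args)
    print(f"{name:>15}: fitted 10th lifetime percentile ~ {t_p:.1f}")
```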

    Designing An Accelerated Degradation Experiment By Optimizing The Interval Estimation Of The Mean-Time-To-Failure

    No full text
    [[abstract]]Reliability is an important attribute of a product's quality. Hence, assessing a product's reliability is essential to continuously improving its quality. Accelerated degradation tests (ADTs) are widely used to assess the reliability of highly reliable products whose quality characteristics degrade very slowly over time. In estimating a product's reliability, interval estimation is preferred by the manufacturer to point estimation. Several decision variables, such as the inspection frequency, the sample size, and the termination time at each stress level, are closely related to the precision of the interval estimation and the experimental cost of an ADT. Clearly, an inappropriate setting of these decision variables wastes experimental resources and reduces the precision of data analysis. The purpose of this paper is to design an ADT such that the interval estimation of the mean-time-to-failure (MTTF) at the use condition is efficient. More specifically, for products whose degradation rates follow a lognormal distribution, under the constraint that the total experimental cost does not exceed a pre-determined budget, a mixed nonlinear programming problem is formulated to determine the optimal combination of these decision variables at each stress level and the optimal combination of the CIs of the parameters in the MTTF's expression such that the expected width of a 100(1 − γ)% CI of the MTTF is minimal
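
    A rough sketch of the quantity being minimized: with lognormal degradation rates, lifetime = w/rate is lognormal, and a delta-method approximation gives the expected CI width for the MTTF as a function of the sample size. The variance approximations and parameter values are assumptions, not the paper's mixed nonlinear program.

```python
import numpy as np
from scipy import stats

# Sketch of the planning criterion (assumptions throughout): with
# lognormal degradation rates, lifetime = w / rate is lognormal, so
# log MTTF = log(w) - mu + sigma^2/2.  A delta-method approximation of
# the expected width of a 100(1-gamma)% CI for the MTTF based on n
# fitted unit slopes:
def expected_ci_width(n, mu, sigma, w=10.0, gamma=0.05):
    z = stats.norm.ppf(1 - gamma / 2)
    mttf = w * np.exp(-mu + sigma**2 / 2)
    # Var(mu_hat) ~ sigma^2/n; Var(sigma^2_hat) ~ 2*sigma^4/(n-1);
    # log MTTF is linear in mu and sigma^2, hence:
    var_log = sigma**2 / n + sigma**4 / (2 * (n - 1))
    half = z * np.sqrt(var_log)
    return mttf * (np.exp(half) - np.exp(-half))  # width on the MTTF scale

for n in (10, 20, 40, 80):
    print(f"n = {n:2d}: expected CI width ~ "
          f"{expected_ci_width(n, mu=-4.0, sigma=0.5):.1f}")
```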

    Designing a screening experiment for highly reliable products

    No full text
    [[abstract]]Within a reasonable life-testing time, how to improve the reliability of highly reliable products is one of the great challenges to today's manufacturers. By using a resolution III experiment together with a degradation test, Tseng, Hamada, and Chiao (1995) presented an interesting case study of improving the reliability of fluorescent lamps. However, in conducting such an experiment, they did not address the problem of how to choose the optimal settings of variables, such as sample size, inspection frequency, and termination time for each run, which influence both the correct identification of significant factors and the experimental cost. Assuming that the product's degradation paths satisfy Wiener processes, this paper proposes a systematic approach to the aforementioned problem. First, an intuitively appealing identification rule is proposed. Next, under the constraints of a minimum probability of correct decision and a maximum probability of incorrect decision of the proposed identification rule, the optimum test plan (including the determinations of inspection frequency, sample size, and termination time for each run) can be obtained by minimizing the total experimental cost. An example is provided to illustrate the proposed method
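
    A minimal sketch of the Wiener-process setting and the identification idea: simulate degradation increments at the low and high levels of a factor and test whether the estimated drifts differ (here via a plain two-sample t-test, an assumed stand-in for the paper's identification rule). All settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Minimal sketch (all settings are assumptions): degradation follows a
# Wiener process Y(t) = eta*t + sigma*B(t); a factor is "significant"
# if the drift eta differs between its low and high levels.
def simulate_run(eta, n=10, f=24.0, m=20, sigma=0.5):
    """n units inspected every f hours, m inspections; return the
    per-unit drift estimates eta_hat = Y(L)/L (the MLE for a Wiener
    path observed up to L = m*f)."""
    increments = rng.normal(eta * f, sigma * np.sqrt(f), size=(n, m))
    L = m * f
    return increments.sum(axis=1) / L

low, high = simulate_run(eta=0.010), simulate_run(eta=0.014)
t, pval = stats.ttest_ind(low, high)
print(f"identification rule: p-value = {pval:.4f} ->",
      "significant factor" if pval < 0.05 else "not significant")
```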

    Designing a Degradation Experiment with a Reciprocal Weibull Degradation Rate

    No full text
    [[abstract]]Degradation experiments are usually used to assess the lifetime distribution of highly reliable products which are not likely to fail under the traditional life tests or accelerated life tests. Several factors, such as the inspection frequency, the sample size and the termination time, are closely related to the experimental cost and the estimation precision. Obviously, an inappropriate setting of these decision variables not only wastes the experimental resources but also reduces the precision of data analysis. Recently, Yu and Tseng [17] addressed the problem of how to determine the optimal setting of these decision variables for a linearized degradation model where the degradation rate follows a lognormal distribution. In practical applications, the Weibull and lognormal distributions may both fit the lifetime data adequately; however, their predictions may differ significantly. To overcome this difficulty, we first deal with the optimal design of a degradation experiment where the degradation rate follows a reciprocal Weibull distribution. Under the constraint that the total experimental cost does not exceed a pre-determined budget, the optimal decision variables are obtained by minimizing the mean squared error of the estimated 100pth percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, we also discuss the effects of using incorrect degradation rates on the optimal test plans
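
    The planning criterion can be approximated by Monte Carlo: for a candidate sample size, repeatedly simulate reciprocal Weibull degradation rates, re-estimate the 100pth lifetime percentile, and average the squared error. The parameter values and the direct-sampling shortcut (skipping the inspection-level fitting) are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Monte Carlo sketch of the planning criterion (assumptions throughout):
# lifetime = w / rate with rates ~ reciprocal Weibull (Frechet), so the
# 100p-th lifetime percentile is t_p = w / F_R^{-1}(1 - p).
w, p = 10.0, 0.10
c_true, scale_true = 4.0, 0.02
t_p_true = w / stats.invweibull.ppf(1 - p, c_true, scale=scale_true)

def mse_of_plan(n, reps=200):
    """Approximate the MSE of t_p-hat for a plan with n test units."""
    err = np.empty(reps)
    for r in range(reps):
        rates = stats.invweibull.rvs(c_true, scale=scale_true, size=n,
                                     random_state=rng)
        c_hat, _, sc_hat = stats.invweibull.fit(rates, floc=0)
        err[r] = w / stats.invweibull.ppf(1 - p, c_hat,
                                          scale=sc_hat) - t_p_true
    return np.mean(err**2)

for n in (10, 20, 40):
    print(f"n = {n:2d}: MSE(t_p-hat) ~ {mse_of_plan(n):.2f}")
```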

    Mis-Specification Analysis Between Normal and Extreme Value Distributions for a Linear Regression Model

    No full text
    [[abstract]]Normal and extreme value distributions are much alike. They may fit the data at hand well in practical applications; however, their predictions may lead to a significant difference. Recently, Kundu and Manglick (2004) considered the discrimination problem between normal and extreme value distributions for complete data. However, the impacts of mis-specification between normal and extreme value distributions on estimation or inference in other statistical models (e.g., in regression analysis) have not been studied. The main purpose of the present article is to address this issue. More specifically, for a linear regression model, this study investigates the impacts of mis-specification between normal and extreme value distributions on the estimation of the 100pth percentile of the response associated with a level of the independent variable. Four mean squared errors and two relative impact indexes corresponding to correct specification and mis-specification are computed. The results indicate that for both distributions, the estimation precision is significantly influenced by mis-specification. Surprisingly, this finding challenges the well-known assertion that, with sufficiently large samples, estimation for a linear regression model is rather robust even if the normality assumption on the error term is violated. Finally, an example is used to illustrate the proposed method
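
    The mis-specification effect can be reproduced in a small simulation: generate regression errors from a smallest-extreme-value (Gumbel) law and compare the MSE of the estimated 100pth percentile of the response under a normal versus an extreme value specification of the error term. The settings below are illustrative, not the article's four-MSE/two-index computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulation sketch (assumptions throughout): errors follow a smallest
# extreme value (Gumbel) law; compare the MSE of the estimated 100p-th
# percentile of the response at x0 when the error law is correctly
# specified (Gumbel fit) versus mis-specified as normal.
b0, b1, scale, x0, p, n, reps = 2.0, 1.5, 1.0, 1.0, 0.01, 50, 500
y_p_true = b0 + b1 * x0 + stats.gumbel_l.ppf(p, scale=scale)

err_norm, err_ev = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 2, n)
    y = b0 + b1 * x + stats.gumbel_l.rvs(scale=scale, size=n,
                                         random_state=rng)
    b1_hat = np.cov(x, y, bias=True)[0, 1] / x.var()   # OLS slope
    b0_hat = y.mean() - b1_hat * x.mean()
    resid = y - b0_hat - b1_hat * x
    # normal specification: percentile from the residual s.d.
    err_norm[r] = (b0_hat + b1_hat * x0
                   + stats.norm.ppf(p, scale=resid.std(ddof=2))) - y_p_true
    # extreme value specification: Gumbel fit to the residuals
    loc, sc = stats.gumbel_l.fit(resid)
    err_ev[r] = (b0_hat + b1_hat * x0
                 + stats.gumbel_l.ppf(p, loc, sc)) - y_p_true

print(f"MSE, normal specification: {np.mean(err_norm**2):.4f}")
print(f"MSE, EV specification:     {np.mean(err_ev**2):.4f}")
```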

    Designing a degradation experiment

    No full text
    [[abstract]]Degradation experiments are widely used to assess the reliability of highly reliable products which are not likely to fail under the traditional life tests. In order to conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors not only affect the experimental cost, but also affect the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are solved by minimizing the variance of the estimated 100pth percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of this proposed method
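
    A compact sketch of the design problem and the closing robustness check: pick the budget-feasible plan minimizing a proxy for the variance of the estimated 100pth percentile, then re-evaluate the chosen plan under perturbed parameters. The cost structure, the variance proxy, and all numbers are assumptions.

```python
import itertools
from scipy import stats

# Planning sketch (assumptions throughout): lognormal lifetimes, so the
# 100p-th percentile is exp(mu + z_p*sigma).  Proxy for Var(log t_p-hat)
# from n units and m inspections per unit: a sampling term in n plus a
# measurement-error term se2/(n*m).  Choose (n, m) under the budget,
# then re-evaluate the chosen plan under perturbed sigma (robustness).
Co, Cs, Ci, budget = 500.0, 60.0, 3.0, 5000.0
p, se2 = 0.10, 4.0
z_p = stats.norm.ppf(p)

def var_log_tp(n, m, sigma):
    return (sigma**2 / n + z_p**2 * sigma**2 / (2 * (n - 1))
            + se2 / (n * m))

def cost(n, m):
    return Co + Cs * n + Ci * n * m

feasible = [(n, m) for n, m in itertools.product(range(5, 61), range(5, 51))
            if cost(n, m) <= budget]
n_star, m_star = min(feasible, key=lambda nm: var_log_tp(*nm, sigma=0.5))
print("plan (n, m):", (n_star, m_star), "cost:", cost(n_star, m_star))

for sig in (0.4, 0.5, 0.6):  # robustness of the chosen plan
    print(f"sigma = {sig}: Var(log t_p-hat) ~ "
          f"{var_log_tp(n_star, m_star, sig):.4f}")
```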