
    Easy estimation by a new parameterization for the three-parameter lognormal distribution

    A new parameterization and algorithm are proposed for seeking the primary relative maximum of the likelihood function of the three-parameter lognormal distribution. The parameterization reduces the three-parameter estimation problem to a two-parameter estimation problem on the basis of an extended lognormal distribution. The algorithm provides a way of tracing the profile of an objective function in the two-parameter problem; it is simple and numerically stable because it is built on the bisection method. The profile clearly and easily shows whether a primary relative maximum exists, and reliably yields that maximum when it does. Comment: 13 pages, 3 figures
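
    The paper's specific parameterization is not reproduced in the abstract; the minimal sketch below illustrates only the underlying profiling idea under stated assumptions: for a fixed threshold gamma, the remaining two lognormal parameters have closed-form MLEs, so one can scan the one-dimensional profile for an interior (primary) relative maximum and refine it by bisection on the slope sign. All function names, the grid size, and the safety margins are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def profile_loglik(gamma, x):
    # For fixed threshold gamma, the MLEs of mu and sigma^2 are the mean and
    # variance of log(x - gamma); substituting them yields this profile.
    z = np.log(x - gamma)                     # requires gamma < min(x)
    n, s2 = len(x), z.var()
    return -z.sum() - 0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)

def seek_primary_maximum(x, n_grid=400):
    # Scan the profile over gamma, then refine an interior relative maximum
    # by bisection on the finite-difference slope. Note: the likelihood
    # diverges as gamma -> min(x); an interior maximum of the scan away from
    # the right edge is what signals that a primary relative maximum exists.
    x = np.sort(np.asarray(x, dtype=float))
    span = x[-1] - x[0]
    lo, hi = x[0] - span, x[0] - 1e-3 * span  # keep gamma safely below min(x)
    grid = np.linspace(lo, hi, n_grid)
    ll = np.array([profile_loglik(g, x) for g in grid])
    k = int(np.argmax(ll[1:-1])) + 1          # interior maximum of the scan
    a, b = grid[k - 1], grid[k + 1]
    eps = 1e-6 * span
    for _ in range(80):                       # bisection toward rising slope
        m = 0.5 * (a + b)
        if profile_loglik(m + eps, x) > profile_loglik(m - eps, x):
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

    With the estimated gamma in hand, mu and sigma follow as the mean and standard deviation of log(x - gamma).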

    Pandemic Simulations by MADE: A Combination of Multi-agent and Differential Equations, with Novel Influenza A(H1N1) Case

    Two pandemic simulation approaches are known: the multi-agent model and the differential equation model. The multi-agent model can handle detailed simulations under a variety of initial and boundary conditions with standard social network models; however, its computing cost is high. The differential equation model can quickly simulate homogeneous populations with simultaneous ordinary differential equations and a few parameters; however, it lacks versatility. We propose a new method, named MADE, that combines the two: the multi-agent model is used in the early stage of a simulation to determine the parameters of the differential equation model, which is then used in the subsequent stage. With this method, we can handle pandemic simulations for real social structures at lower computing cost. Unlike statistical inference methods, which cannot predict the final stage unless abundant information is available, MADE can potentially do so with early-stage information alone. The method is applied to a newly emerged pandemic, the novel influenza A(H1N1) case of 2009.
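
    The abstract does not give the MADE implementation; as a rough illustration of the hybrid idea only, the sketch below runs a toy agent-based early stage with random daily contacts, fits SIR-type rates beta and gamma to the agent output by least squares, and hands them to a deterministic SIR integration for the later stage. All parameter values and function names are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def agent_stage(n=2000, k=8, p_trans=0.03, p_recover=0.10, days=30, seeds=5):
    # Toy early-stage agent model: each infectious agent meets k random
    # contacts per day; returns daily (S, I, R) counts.
    state = np.zeros(n, dtype=int)            # 0 = S, 1 = I, 2 = R
    state[rng.choice(n, seeds, replace=False)] = 1
    history = []
    for _ in range(days):
        infectious = np.flatnonzero(state == 1)
        for i in infectious:
            contacts = rng.integers(0, n, k)
            hits = contacts[(state[contacts] == 0) & (rng.random(k) < p_trans)]
            state[hits] = 1
        recovered = infectious[rng.random(infectious.size) < p_recover]
        state[recovered] = 2
        history.append([(state == s).sum() for s in (0, 1, 2)])
    return np.array(history, dtype=float)

def calibrate_sir(history, n):
    # Least-squares fit of dS = -beta*S*I/n and dR = gamma*I to agent counts.
    S, I, _ = history[:-1].T
    new_inf, new_rec = -np.diff(history[:, 0]), np.diff(history[:, 2])
    x = S * I / n
    return np.dot(new_inf, x) / np.dot(x, x), np.dot(new_rec, I) / np.dot(I, I)

def ode_stage(S, I, R, beta, gamma, n, days, dt=0.1):
    # Deterministic SIR stage (forward Euler) continued from the agent stage.
    out = []
    for _ in range(int(days / dt)):
        dS, dR = -beta * S * I / n, gamma * I
        dI = -dS - dR
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        out.append((S, I, R))
    return np.array(out)

hist = agent_stage()
beta, gamma = calibrate_sir(hist, n=2000)
trajectory = ode_stage(*hist[-1], beta, gamma, n=2000, days=120)
```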

    Lifetime Estimation of Defective Products from the Imaginal Mixture of Defective and Non-defective Products: The Trunsored Data Model

    In solving the lifetime estimation problem of defective products using samples of size N from a mixture of defective and non-defective products, a new method of estimating the parameters of the underlying distribution function is proposed. We suppose that the ratio of defective to non-defective products is unknown. The proposed method uses an imaginal mixture model in which the non-defective products never fail by a prescribed time T. If the non-defective products are dominant in the mixture, the maximum likelihood estimates can be obtained by treating the observed samples as truncated data with a conditional likelihood. If the defective products are dominant, the truncated-data approach can no longer be used because the estimated sample size could exceed N, in which case the set of non-defective products would be empty. The imaginal mixture model, however, can estimate the parameters in either case. In addition, this model can test whether the set of non-defective products is empty, because the likelihood functions for both cases are of the same kind; the likelihood ratio test cannot be applied directly to the likelihoods of the truncated and censored models because they are of different kinds. We therefore call this versatile model, usable for both the truncated and censored data cases, the trunsored data model. If the hypothesis that the non-defective products are empty is not rejected, the sampled data can be regarded as censored, yielding smaller confidence intervals for the parameter estimates than those obtained by the trunsored model. After introducing this new mixture model, we apply it to actual field data and show how the proposed method works.
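
    As a rough numerical illustration of such a mixture likelihood (not the paper's exact formulation), the sketch below assumes defective lifetimes follow a Weibull distribution, a fraction p of units is defective, and the rest never fail by time T; the full-sample likelihood then covers both the defective-dominant and non-defective-dominant cases. The data, the Weibull choice, and all names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(params, t_fail, n_total, T):
    # Mixture likelihood: a fraction p of units is defective with
    # Weibull(c, scale=s) lifetimes; the rest never fail by time T.
    # Failures observed at t_fail <= T; the other n_total - r units survive T.
    logit_p, log_c, log_s = params
    p = 1.0 / (1.0 + np.exp(-logit_p))        # keep 0 < p < 1
    c, s = np.exp(log_c), np.exp(log_s)       # keep shape, scale > 0
    r = len(t_fail)
    ll = r * np.log(p) + weibull_min.logpdf(t_fail, c, scale=s).sum()
    ll += (n_total - r) * np.log(1.0 - p * weibull_min.cdf(T, c, scale=s))
    return -ll

# hypothetical data: 1000 units watched until T = 5.0, with a true defective
# fraction of 0.2 (synthetic values for illustration)
rng = np.random.default_rng(1)
t_obs = weibull_min.rvs(1.5, scale=3.0, size=200, random_state=rng)
t_obs = t_obs[t_obs <= 5.0]
res = minimize(neg_loglik, x0=[0.0, 0.0, 1.0],
               args=(t_obs, 1000, 5.0), method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
```

    The test of whether the non-defective products are empty would then compare this fit against one with p fixed at 1, via a likelihood ratio.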

    The mixed trunsored model with applications to SARS in detail

    The trunsored model, a new incomplete data model regarded as a unification of the censored and truncated models in lifetime analysis, can not only estimate the ratio of the fragile population in a mixture of fragile and durable populations (or of cured and fatal populations), but can also easily test the hypothesis that this ratio equals a prescribed value. Since SARS showed a severe case fatality ratio, our concern is to estimate such a ratio as soon as possible after a similar outbreak begins. The epidemiological determinants of the spread of SARS can be modeled with probabilistic growth curves, and parameter estimation for these growth curve models can be treated much as in lifetime analysis. We therefore estimate parameters for the SARS infected, fatal, and cured cases just as we would in lifetime analysis. Applying the truncated data models to the infected and fatal cases with some censoring time, we can estimate the total (final) numbers of patients and deaths, and the case fatality ratio can be estimated from these two numbers. We can also estimate the case fatality ratio from the numbers of patients and recoveries, but this estimate differs from the one based on patients and deaths, especially when the censoring time falls in the early stages. To circumvent this inconsistency, we propose a mixed trunsored model, an extension of the trunsored model that can use the data for patients, deaths, and recoveries simultaneously. The estimate of the case fatality ratio and its confidence interval are then easily obtained numerically. This paper mainly treats the case of Hong Kong. Among the logistic, lognormal, gamma, and Weibull models fitted to the infected, fatal, and cured cases in Hong Kong, the logistic distribution function fits best. Using the proposed method, the SARS case fatality ratio is roughly estimated to be 17% in Hong Kong, and about 12-18% worldwide if we take the conservative side and exclude the Chinese case. Unlike the questionably small confidence intervals obtained for the case fatality ratio with the truncated models, the proposed model provides a reasonable confidence interval.
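
    The mixed trunsored likelihood itself is not given in the abstract; the sketch below shows only the simpler growth-curve idea, with least squares standing in for the maximum-likelihood fit: logistic growth curves are fitted to cumulative case and death counts truncated at a censoring day, and the case fatality ratio is taken as the ratio of the two fitted final sizes. The counts are synthetic, not the Hong Kong data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_curve(t, K, mu, s):
    # Cumulative logistic growth: final size K times a logistic CDF in time.
    return K / (1.0 + np.exp(-(t - mu) / s))

# synthetic cumulative counts observed only up to day 60 (the censoring time)
rng = np.random.default_rng(2)
days = np.arange(1.0, 61.0)
cum_cases = np.maximum.accumulate(
    rng.poisson(logistic_curve(days, 1750.0, 40.0, 8.0)).astype(float))
cum_deaths = np.maximum.accumulate(
    rng.poisson(logistic_curve(days, 300.0, 48.0, 9.0)).astype(float))

pc, _ = curve_fit(logistic_curve, days, cum_cases,
                  p0=[2 * cum_cases[-1], 30.0, 5.0])
pd_, _ = curve_fit(logistic_curve, days, cum_deaths,
                   p0=[2 * cum_deaths[-1], 30.0, 5.0])
cfr = pd_[0] / pc[0]   # estimated final deaths over estimated final cases
```

    The mixed trunsored model replaces these two separate fits with one simultaneous likelihood over patients, deaths, and recoveries, which is what removes the inconsistency described above.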

    Bump Hunting using the Tree-GA

    Bump hunting is the search for regions where the points of interest are located more densely than elsewhere and are hardly separable from other points. By specifying a pureness rate p for the points, a maximum capture rate c can be obtained, and a trade-off curve between p and c can then be constructed. Finding the bump regions is thus equivalent to constructing the trade-off curve. For convenience, we adopt simple boundary shapes for the bumps, namely box-shaped regions parallel to the variable axes. We use a genetic algorithm specialized to the tree structure, called the tree-GA, to obtain the maximum capture rates, because the conventional binary decision tree does not provide them. Exploiting the tree-GA's tendency to produce many local maxima of the capture rate, we can estimate the return period for the trade-off curve using extreme-value statistics. We have assessed the accuracy of the trade-off curve in typical fundamental cases likely to be observed in real customer data, and found that the proposed tree-GA constructs an effective trade-off curve close to the optimal one.
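
    The tree-GA itself evolves tree-structured rules; the following is only a minimal sketch of the GA idea over a single axis-parallel box under stated assumptions: mutate box boundaries, and score each box by its capture rate, penalized whenever the pureness rate falls below the specified p. All names and GA settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def rates(box, X, y):
    # Pureness and capture of an axis-aligned box; box is a (d, 2) array
    # of [low, high] bounds per variable.
    inside = np.all((X >= box[:, 0]) & (X <= box[:, 1]), axis=1)
    pos = inside & (y == 1)
    pure = pos.sum() / max(inside.sum(), 1)    # fraction of 1s inside the box
    capt = pos.sum() / max((y == 1).sum(), 1)  # fraction of all 1s captured
    return pure, capt

def ga_box(X, y, p_min, pop=60, gens=200, sigma=0.1):
    # Toy GA: maximize capture subject to pureness >= p_min.
    d = X.shape[1]
    lo, hi = X.min(0), X.max(0)
    boxes = rng.uniform(lo[:, None], hi[:, None], size=(pop, d, 2))
    boxes.sort(axis=2)                         # ensure low <= high
    def fit(b):
        pure, capt = rates(b, X, y)
        return capt if pure >= p_min else capt - 1.0  # penalize impure boxes
    for _ in range(gens):
        scores = np.array([fit(b) for b in boxes])
        parents = boxes[np.argsort(scores)[-pop // 2:]]   # keep the best half
        children = parents + rng.normal(0.0, sigma * (hi - lo).mean(),
                                        parents.shape)    # Gaussian mutation
        children.sort(axis=2)
        boxes = np.concatenate([parents, children])
    best = max(boxes, key=fit)
    return best, rates(best, X, y)

# usage on synthetic data: response-1 points cluster inside a hidden box
X = rng.uniform(0.0, 1.0, size=(2000, 3))
y = (np.all((X > 0.2) & (X < 0.5), axis=1) | (rng.random(2000) < 0.05)).astype(int)
box, (pure, capt) = ga_box(X, y, p_min=0.8)
```

    Repeating such runs over a grid of pureness rates p traces out the trade-off curve, and the many local maxima the runs produce feed the extreme-value analysis described above.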

    Bump hunting and its application to customer data

    In difficult problems of classifying z-dimensional points into two groups with 0-1 responses, where the data structure is messy, it is preferable to search for the regions that are denser in points with response 1 than to find boundaries separating the two groups. For such problems, often seen in customer databases, we have developed a bump hunting method using probabilistic and statistical methods. By specifying a pureness rate in advance, a maximum capture rate is obtained, and a trade-off curve between the pureness rate and the capture rate can then be constructed. To find the maximum capture rate, we combine the decision tree method with a genetic algorithm. We first give a brief introduction to our research: what bump hunting is, the trade-off curve between the pureness rate and the capture rate, bump hunting using the tree genetic algorithm, and upper bounds for the trade-off curve using extreme-value statistics. We then tackle the assessment of the accuracy of the trade-off curve from the viewpoint of the genetic algorithm procedure. Using the newly proposed genetic algorithm procedure, we can obtain the accuracy of the upper bound for the trade-off curve, and hence the upper bound actually attainable. The bootstrapped hold-out method, as well as the cross-validation method, is used in assessing the accuracy of the trade-off curve.
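
    As a small sketch of the extreme-value step, with made-up numbers: the capture-rate maxima from many independent GA runs at one pureness rate can be fitted with an extreme-value distribution, whose return level then serves as an estimated upper bound for the trade-off curve at that pureness rate. The Gumbel choice and all values below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gumbel_r

# stand-in capture-rate maxima from 50 independent tree-GA runs at one
# fixed pureness rate (synthetic values for illustration)
rng = np.random.default_rng(4)
caps = np.clip(0.55 + 0.03 * rng.gumbel(size=50), 0.0, 1.0)

loc, scale = gumbel_r.fit(caps)
# return level: the capture rate expected to be exceeded once in 1000 runs,
# used as an estimated upper bound of the attainable trade-off curve
upper = min(gumbel_r.ppf(1.0 - 1.0 / 1000.0, loc=loc, scale=scale), 1.0)
```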

    Maximum likelihood estimation in a mixture regression model using the EM algorithm

    For the extremely difficult problem of finding the maximum likelihood estimates in a specific mixture regression model, a combination of several optimization techniques is found to be useful: the continuation method, the Newton-Raphson method, and the simplex method. The simplex method finds a globally approximate solution; a combination of the continuation method and the Newton-Raphson method then finds a more accurate one. In this paper, this combination method is applied to find the maximum likelihood estimates in a Weibull-power-law type regression model, and it is compared with well-known methods such as the EM algorithm.
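
    A minimal sketch of the two-stage idea follows, on an assumed Weibull regression whose scale depends on a covariate through a power law (the paper's Weibull-power-law model may differ): the simplex (Nelder-Mead) method first finds a globally approximate solution, which a gradient-based refinement then polishes, with BFGS standing in here for the continuation plus Newton-Raphson steps.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, t, x):
    # Illustrative Weibull regression: shape c, and scale depending on the
    # covariate through an assumed power law, lambda(x) = a * x**(-b).
    log_a, b, log_c = theta
    a, c = np.exp(log_a), np.exp(log_c)
    lam = a * x ** (-b)
    z = t / lam
    return -np.sum(np.log(c / lam) + (c - 1.0) * np.log(z) - z ** c)

# synthetic data drawn from the assumed model
rng = np.random.default_rng(5)
x = rng.uniform(1.0, 3.0, 200)
t = (2.0 * x ** (-1.5)) * rng.weibull(1.2, 200)

# stage 1: simplex (Nelder-Mead) for a globally approximate solution
rough = minimize(neg_loglik, x0=[0.0, 1.0, 0.0], args=(t, x),
                 method="Nelder-Mead")
# stage 2: gradient-based polish from the simplex solution (BFGS here,
# standing in for the continuation + Newton-Raphson refinement)
fine = minimize(neg_loglik, rough.x, args=(t, x), method="BFGS")
```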