
    On nonparametric estimation of a mixing density via the predictive recursion algorithm

    Nonparametric estimation of a mixing density based on observations from the corresponding mixture is a challenging statistical problem. This paper surveys the literature on a fast, recursive estimator based on the predictive recursion algorithm. After introducing the algorithm and giving a few examples, I summarize the available asymptotic convergence theory, describe an important semiparametric extension, and highlight two interesting applications. I conclude with a discussion of several recent developments in this area and some open problems.
    Comment: 22 pages, 5 figures. Comments welcome at https://www.researchers.one/article/2018-12-
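    The PR update itself is compact enough to sketch. The following is a minimal grid-based illustration, not the paper's code: the N(theta, 1) kernel, the uniform initial guess, and the weight sequence w_i = (i + 1)^(-0.67) are assumptions made here for concreteness, and since the estimate depends on the order of the data it is often averaged over several random permutations in practice.

```python
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, theta_grid, f0, gamma=0.67):
    """Grid-based sketch of the predictive recursion (PR) estimator.

    Assumes a N(theta, 1) kernel and weights w_i = (i + 1)**(-gamma);
    both are illustrative choices, not prescribed by the survey.
    """
    d_theta = theta_grid[1] - theta_grid[0]
    f = np.asarray(f0, dtype=float).copy()
    for i, xi in enumerate(x, start=1):
        w = (i + 1.0) ** (-gamma)                 # decaying weight sequence
        num = norm.pdf(xi, loc=theta_grid) * f    # k(x_i | theta) * f_{i-1}(theta)
        den = num.sum() * d_theta                 # predictive density m_{i-1}(x_i)
        f = (1 - w) * f + w * num / den           # PR update: a convex combination
    return f

# Toy example: two-point mixing distribution at -2 and +2.
rng = np.random.default_rng(0)
theta_true = rng.choice([-2.0, 2.0], size=500)
x = theta_true + rng.normal(size=500)
grid = np.linspace(-6, 6, 241)
f_hat = predictive_recursion(x, grid, np.full(grid.shape, 1 / 12))
```

    A single pass over n observations costs O(n times the grid size), which is the source of the estimator's speed relative to iterative nonparametric maximum-likelihood schemes.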

    Efficient training algorithms for HMMs using incremental estimation

    Typically, parameter estimation for a hidden Markov model (HMM) is performed using an expectation-maximization (EM) algorithm with the maximum-likelihood (ML) criterion. The EM algorithm is an iterative scheme that is well defined and numerically stable, but convergence may require a large number of iterations. For speech recognition systems utilizing large amounts of training material, this results in long training times. This paper presents an incremental estimation approach to speed up the training of HMMs without any loss of recognition performance. The algorithm selects a subset of data from the training set, updates the model parameters based on the subset, and then iterates the process until the parameters converge. The advantage of this approach is a substantial increase in the number of iterations of the EM algorithm per training token, which leads to faster training. In order to achieve reliable estimation from a small fraction of the complete data set at each iteration, two training criteria are studied: ML and maximum a posteriori (MAP) estimation. Experimental results show that training with the incremental algorithms is substantially faster than the conventional (batch) method and suffers no loss of recognition performance. Furthermore, the incremental MAP-based training algorithm improves performance over the batch version.
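    The structure of the incremental loop is straightforward to illustrate. The sketch below substitutes a one-dimensional Gaussian mixture for the HMM so that the E- and M-steps fit in a few lines; the batch size, number of passes, and function names are hypothetical, and the paper's MAP criterion (a prior folded into the M-step) is omitted for brevity.

```python
import numpy as np

def em_step(x, means, variances, weights):
    """One ML EM update for a 1-D Gaussian mixture (a stand-in for the
    HMM's forward-backward E-step and parameter re-estimation)."""
    x = x[:, None]
    # E-step: posterior responsibility of each component for each point
    dens = weights * np.exp(-0.5 * (x - means) ** 2 / variances) \
           / np.sqrt(2 * np.pi * variances)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from expected sufficient statistics
    nk = resp.sum(axis=0)
    means = (resp * x).sum(axis=0) / nk
    variances = (resp * (x - means) ** 2).sum(axis=0) / nk
    weights = nk / len(x)
    return means, variances, weights

def incremental_em(x, means, variances, weights, batch=200, passes=5, seed=0):
    """Update the model from random subsets rather than the full data set,
    yielding many more EM iterations per training token."""
    rng = np.random.default_rng(seed)
    for _ in range(passes * (len(x) // batch)):
        subset = rng.choice(x, size=batch, replace=False)
        means, variances, weights = em_step(subset, means, variances, weights)
    return means, variances, weights
```

    The same skeleton applies to HMMs, with the E-step replaced by forward-backward recursions over the selected training tokens.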

    Robust Bayesian Analysis of Loss Reserves Data Using the Generalized-t Distribution

    This paper presents a Bayesian approach using Markov chain Monte Carlo methods and the generalized-t (GT) distribution to predict loss reserves for insurance companies. Existing models and methods cannot cope with irregular and extreme claims and hence do not offer an accurate prediction of loss reserves. To develop a more robust model for irregular claims, this paper extends the conventional normal error distribution to the GT distribution, which nests several heavy-tailed distributions including the Student-t and exponential power distributions. It is shown that the GT distribution can be expressed as a scale mixture of uniforms (SMU) distribution, which facilitates model implementation and the detection of outliers via the mixing parameters. Different models for the mean function, including the log-ANOVA, log-ANCOVA, state space and threshold models, are adopted to analyze real loss reserves data. Finally, the best model is selected according to the deviance information criterion (DIC).
    Keywords: Bayesian approach; state space model; threshold model; scale mixture of uniforms distribution; deviance information criterion
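    The SMU idea can be demonstrated in miniature. The paper derives the mixing density for the full GT family; the sketch below instead checks the well-known SMU representation of the normal distribution, a boundary case of the GT family, in which V ~ Gamma(shape 3/2, scale 2) and Y given V is uniform on (mu - sigma*sqrt(V), mu + sigma*sqrt(V)). The parameter names are illustrative.

```python
import numpy as np
from scipy import stats

# SMU representation of N(mu, sigma^2), a boundary case of the GT family:
# draw V ~ Gamma(shape=3/2, scale=2), then Y | V ~ Uniform over
# (mu - sigma*sqrt(V), mu + sigma*sqrt(V)); marginally Y is normal.
rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 1.0, 200_000
v = rng.gamma(shape=1.5, scale=2.0, size=n)
y = rng.uniform(mu - sigma * np.sqrt(v), mu + sigma * np.sqrt(v))

# Kolmogorov-Smirnov check: the p-value should be large if the
# mixture really reproduces the target normal distribution.
print(stats.kstest(y, "norm", args=(mu, sigma)))
```

    In a data-augmented Gibbs sampler each observation carries its own mixing parameter, and observations whose sampled mixing parameters are persistently large are the natural outlier candidates, which is the detection mechanism the abstract refers to.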