
    GMM AND HMM TRAINING BY AGGREGATED EM ALGORITHM WITH INCREASED ENSEMBLE SIZES FOR ROBUST PARAMETER ESTIMATION

    In order to compensate for the weakness of the expectation maximization (EM) algorithm with respect to over-training, and to improve model performance on new data, we have recently proposed the aggregated EM (Ag-EM) algorithm, which introduces a bagging-like approach into the framework of the EM algorithm, and have shown that it gives improvements over conventional EM similar to those of cross-validation EM (CV-EM). However, a limitation of those experiments was that the number of models used in the aggregation operation, i.e. the ensemble size, was fixed to a small value. Here, we investigate the relationship between the ensemble size and the performance, and also give a theoretical discussion of the order of the computational cost. The algorithm is first analyzed using simulated data and then applied to large-vocabulary speech recognition on oral presentations. Both of these experiments show that Ag-EM outperforms CV-EM when larger ensemble sizes are used.

    Index Terms: Expectation maximization algorithm, ensemble training, bagging, sufficient statistics, hidden Markov model
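
    The abstract describes aggregating E-step statistics from multiple bootstrap-style models before the M-step. The sketch below is a minimal, hedged illustration of that idea for a diagonal-covariance GMM, not the paper's exact procedure: the resampling scheme, the averaging of sufficient statistics over the ensemble, and all function names and parameters (ag_em_gmm, ensemble_size, n_iter) are assumptions made here for illustration only.

    import numpy as np

    def gaussian_pdf(x, mean, var):
        # Diagonal-covariance Gaussian density, evaluated per sample.
        d = x.shape[1]
        diff = x - mean
        log_p = -0.5 * (np.sum(diff ** 2 / var, axis=1)
                        + np.sum(np.log(var)) + d * np.log(2 * np.pi))
        return np.exp(log_p)

    def ag_em_gmm(x, n_components, ensemble_size=8, n_iter=20, seed=0):
        """Bagging-like aggregated EM for a diagonal-covariance GMM (illustrative sketch).

        Per iteration, E-step sufficient statistics are computed on
        `ensemble_size` bootstrap resamples of the data and averaged before
        a single M-step; this follows the abstract's description only loosely.
        """
        rng = np.random.default_rng(seed)
        n, d = x.shape
        # Crude initialisation: random means, global variance, uniform weights.
        means = x[rng.choice(n, n_components, replace=False)]
        variances = np.tile(x.var(axis=0), (n_components, 1))
        weights = np.full(n_components, 1.0 / n_components)

        for _ in range(n_iter):
            # Accumulators for aggregated zeroth-, first-, and second-order statistics.
            acc0 = np.zeros(n_components)
            acc1 = np.zeros((n_components, d))
            acc2 = np.zeros((n_components, d))
            for _ in range(ensemble_size):
                xb = x[rng.integers(0, n, n)]  # bootstrap resample of the training data
                # E-step on the resample: posterior responsibilities per component.
                like = np.stack([weights[k] * gaussian_pdf(xb, means[k], variances[k])
                                 for k in range(n_components)], axis=1)
                resp = like / np.maximum(like.sum(axis=1, keepdims=True), 1e-300)
                acc0 += resp.sum(axis=0)
                acc1 += resp.T @ xb
                acc2 += resp.T @ (xb ** 2)
            # Average the statistics over the ensemble, then perform one M-step.
            acc0 /= ensemble_size
            acc1 /= ensemble_size
            acc2 /= ensemble_size
            weights = acc0 / acc0.sum()
            means = acc1 / acc0[:, None]
            variances = np.maximum(acc2 / acc0[:, None] - means ** 2, 1e-6)
        return weights, means, variances

    if __name__ == "__main__":
        # Toy two-cluster data; a larger ensemble_size trades compute for smoother statistics.
        rng = np.random.default_rng(1)
        data = np.vstack([rng.normal(-2, 0.5, (300, 2)), rng.normal(2, 1.0, (300, 2))])
        w, mu, var = ag_em_gmm(data, n_components=2, ensemble_size=8)
        print("weights:", np.round(w, 3))
        print("means:\n", np.round(mu, 2))

    The per-iteration cost grows roughly linearly with the ensemble size, since each iteration runs one E-step per resample but still only one M-step, which is consistent with the abstract's concern about the order of the computational cost.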