2,203 research outputs found

    Inference in Hidden Markov Models with Explicit State Duration Distributions

    In this letter we borrow from the inference techniques developed for unbounded-state-cardinality (nonparametric) variants of the HMM and use them to develop a tuning-parameter-free, black-box inference procedure for explicit-state-duration hidden Markov models (EDHMMs). EDHMMs are HMMs whose latent states consist of both a discrete state-indicator and a discrete state-duration random variable. In contrast to the implicit geometric state-duration distribution possessed by the standard HMM, EDHMMs allow the direct parameterisation and estimation of per-state duration distributions. As most duration distributions are defined over the positive integers, truncation or other approximations are usually required to perform EDHMM inference.
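The contrast between explicit and geometric durations can be illustrated with a short generative sketch. This is a minimal, illustrative example assuming a hypothetical two-state EDHMM with Poisson dwell times and Gaussian emissions; the names and parameter values are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state EDHMM: each state k has its own Poisson duration
# mean, instead of the geometric duration implied by HMM self-transitions.
duration_means = np.array([3.0, 8.0])   # per-state mean dwell time
trans = np.array([[0.0, 1.0],           # state-switch matrix
                  [1.0, 0.0]])          # (no self-transitions needed)
emission_means = np.array([-1.0, 1.0])  # Gaussian emission means

def sample_edhmm(T):
    """Generate T observations from the explicit-duration model."""
    states, obs = [], []
    state = 0
    while len(obs) < T:
        # Draw the dwell time directly from the state's duration law
        # (shifted by 1 so durations are positive integers).
        d = 1 + rng.poisson(duration_means[state])
        for _ in range(d):
            if len(obs) >= T:
                break
            states.append(state)
            obs.append(rng.normal(emission_means[state], 0.5))
        state = rng.choice(2, p=trans[state])
    return np.array(states), np.array(obs)

states, obs = sample_edhmm(200)
```

Because the dwell time is drawn directly from a per-state distribution, self-transitions disappear from the transition matrix; a standard HMM would instead encode the expected duration implicitly through its self-transition probability.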

    Bayesian adaptive learning of the parameters of hidden Markov model for speech recognition

    A theoretical framework for Bayesian adaptive training of the parameters of a discrete hidden Markov model (DHMM) and of a semi-continuous HMM (SCHMM) with Gaussian mixture state observation densities is presented. In addition to formulating the forward-backward MAP (maximum a posteriori) and the segmental MAP algorithms for estimating the above HMM parameters, a computationally efficient segmental quasi-Bayes algorithm for estimating the state-specific mixture coefficients in the SCHMM is developed. For estimating the parameters of the prior densities, a new empirical Bayes method based on moment estimates is also proposed. The MAP algorithms and the prior parameter specification are directly applicable to training speaker-adaptive HMMs. Practical issues related to the use of the proposed techniques for HMM-based speaker adaptation are studied. The proposed MAP algorithms are shown to be effective, especially in cases in which the training or adaptation data are limited.
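The flavour of a MAP update can be seen in the simplest quantity the abstract mentions: state-specific mixture coefficients. The sketch below assumes a Dirichlet prior over the coefficients and expected component occupancies from an E-step; the function name and the numbers are illustrative, not the paper's algorithm:

```python
import numpy as np

# Hypothetical MAP update for the mixture coefficients of one HMM state:
# with a Dirichlet(alpha) prior, the MAP estimate adds the prior
# pseudo-counts (minus one) to the expected component occupancies
# gathered in the E-step. Assumes every alpha_k >= 1.
def map_mixture_weights(expected_counts, alpha):
    counts = np.asarray(expected_counts, dtype=float)
    a = np.asarray(alpha, dtype=float)
    num = counts + a - 1.0
    return num / num.sum()

# Usage: three mixture components, a mildly informative prior.
w = map_mixture_weights([10.0, 4.0, 1.0], alpha=[2.0, 2.0, 2.0])
```

With more data the occupancy counts dominate and the estimate approaches the maximum-likelihood weights, which is the usual appeal of MAP adaptation when data are limited.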

    Optimal Number of States in Hidden Markov Models and its Application to the Detection of Human Movement

    In this paper, a hidden Markov model is applied to model human movements so as to facilitate their automatic detection. A number of activities were simulated with the help of two persons. The four movements considered are walking, sitting down-getting up, falling while walking, and falling while standing. The data are acquired using a biaxial accelerometer attached to the person's body. Data for the four body gestures were then used to train several hidden Markov models for the two people. The problem is to obtain a good representation of the data in terms of the number of states of the HMM. Standard general methods used for training pose some drawbacks, i.e. the computational burden and the initialisation process for the model estimate. For this reason, a sequential pruning strategy is implemented to address these problems. Keywords: Hidden Markov Models, sequential pruning strategy, Bayesian Information Criterion
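The model-selection step behind "optimal number of states" can be sketched with the Bayesian information criterion, which trades fit (log-likelihood) against complexity (parameter count). The parameter-count formula below is a hypothetical one for a diagonal-covariance Gaussian HMM, and the log-likelihoods are illustrative stand-ins, not results from the paper:

```python
import numpy as np

# BIC: lower is better; penalises parameters by log(sample size).
def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * np.log(n_obs)

def n_hmm_params(n_states, n_features):
    # Free parameters: transition-matrix rows + initial distribution +
    # Gaussian emissions (mean and variance per feature per state);
    # a hypothetical count for a diagonal-covariance Gaussian HMM.
    return n_states * (n_states - 1) + (n_states - 1) + 2 * n_states * n_features

# Usage: pretend we trained 2-, 3-, and 4-state HMMs on 500 biaxial
# accelerometer frames and recorded their training log-likelihoods.
loglik_by_states = {2: -1450.0, 3: -1380.0, 4: -1372.0}  # illustrative numbers
scores = {k: bic(ll, n_hmm_params(k, n_features=2), 500)
          for k, ll in loglik_by_states.items()}
best = min(scores, key=scores.get)
```

Here the 4-state model fits slightly better but pays for its extra parameters, so the criterion selects 3 states; a sequential pruning strategy reaches a similar decision without training every candidate from scratch.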

    Segmental K-Means Learning with Mixture Distribution for HMM Based Handwriting Recognition

    This paper investigates the performance of hidden Markov models (HMMs) for handwriting recognition. The segmental K-means algorithm is used for updating the transition and observation probabilities, instead of the Baum-Welch algorithm. Observation probabilities are modelled as multivariate Gaussian mixture distributions. A deterministic clustering technique is used to estimate the initial parameters of an HMM. The Bayesian information criterion (BIC) is used to select the topology of the model. The wavelet transform is used to extract features from a grey-scale image, avoiding binarization of the image.
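Segmental K-means (Viterbi training) replaces the Baum-Welch soft expectations with a hard state assignment. The sketch below is a simplified 1-D version with single-Gaussian emissions and fixed unit variance, not the paper's mixture-of-Gaussians setup; it alternates Viterbi decoding with re-estimation of means and transitions from the decoded path:

```python
import numpy as np

# Viterbi decoding for a 1-D Gaussian-emission HMM (uniform initial
# distribution, fixed variance) -- the "segmentation" step.
def viterbi(obs, log_trans, means, var=1.0):
    T, K = len(obs), len(means)
    loglik = -0.5 * (obs[:, None] - means[None, :]) ** 2 / var
    delta = loglik[0].copy()
    psi = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # best predecessor per state
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def segmental_kmeans(obs, means, n_iter=10):
    K = len(means)
    trans = np.full((K, K), 1.0 / K)
    for _ in range(n_iter):
        path = viterbi(obs, np.log(trans), means)
        for k in range(K):                    # hard re-estimation of means
            if np.any(path == k):
                means[k] = obs[path == k].mean()
        counts = np.ones((K, K))              # add-one smoothing
        for a, b in zip(path[:-1], path[1:]):
            counts[a, b] += 1
        trans = counts / counts.sum(axis=1, keepdims=True)
    return means, trans

# Usage on toy 1-D data with two well-separated regimes.
obs = np.concatenate([np.random.default_rng(1).normal(-3, 0.3, 50),
                      np.random.default_rng(2).normal(3, 0.3, 50)])
means, trans = segmental_kmeans(obs, means=np.array([-1.0, 1.0]))
```

Each iteration only needs one Viterbi pass and per-segment averages, which is why segmental K-means is often preferred to Baum-Welch when training speed matters.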

    Fitting Jump Models

    We describe a new framework for fitting jump models to a sequence of data. The key idea is to alternate between minimizing a loss function to fit multiple model parameters, and minimizing a discrete loss function to determine which set of model parameters is active at each data point. The framework is quite general and encompasses popular classes of models, such as hidden Markov models and piecewise affine models. The shape of the chosen loss functions to minimize determines the shape of the resulting jump model. Comment: Accepted for publication in Automatica
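The alternating scheme can be made concrete on a toy instance: the fit step solves for each regime's parameter (here, a scalar mean), and the assignment step is a dynamic program minimising squared loss plus a per-jump penalty (the discrete loss). This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def fit_jump_model(x, K=2, jump_penalty=1.0, n_iter=10):
    theta = np.linspace(x.min(), x.max(), K)    # initial regime parameters
    s = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        # Assignment step: DP over state sequences,
        # cost = squared fit loss + penalty for each regime switch.
        cost = (x[:, None] - theta[None, :]) ** 2
        dp = cost[0].copy()
        back = np.zeros((len(x), K), dtype=int)
        for t in range(1, len(x)):
            switch = dp[:, None] + jump_penalty * (1 - np.eye(K))
            back[t] = switch.argmin(axis=0)
            dp = switch.min(axis=0) + cost[t]
        s[-1] = dp.argmin()
        for t in range(len(x) - 2, -1, -1):
            s[t] = back[t + 1, s[t + 1]]
        # Fit step: each regime parameter is the mean of its points.
        for k in range(K):
            if np.any(s == k):
                theta[k] = x[s == k].mean()
    return theta, s

# Usage: a sequence with one clean jump between two levels.
x = np.concatenate([np.full(30, 0.0), np.full(30, 5.0)]) \
    + np.random.default_rng(3).normal(0, 0.1, 60)
theta, s = fit_jump_model(x)
```

Raising the jump penalty yields fewer, longer regimes; setting it from transition log-probabilities recovers Viterbi training of an HMM as a special case of the same alternation.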