28 research outputs found

    Segmental K-Means Learning with Mixture Distribution for HMM Based Handwriting Recognition

    This paper investigates the performance of hidden Markov models (HMMs) for handwriting recognition. The Segmental K-Means algorithm is used to update the transition and observation probabilities, instead of the Baum-Welch algorithm. Observation probabilities are modelled as multivariate Gaussian mixture distributions. A deterministic clustering technique is used to estimate the initial parameters of an HMM, and the Bayesian information criterion (BIC) is used to select the topology of the model. The wavelet transform is used to extract features from the grey-scale image, avoiding binarization of the image.
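The BIC-based topology selection described above can be sketched as follows. All numbers here (candidate log-likelihoods, mixture count, feature dimension, data size) are hypothetical illustration values, not figures from the paper:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def n_params(states, mixtures=2, dim=4):
    # Rough count: transition matrix + mixture weights
    # + diagonal-Gaussian means and variances per mixture component.
    return states * states + states * mixtures * (1 + 2 * dim)

# Hypothetical training log-likelihoods for HMMs with 3, 5 and 8 states:
candidates = {3: -1210.5, 5: -1180.2, 8: -1175.9}
n_obs = 400

# Larger models fit slightly better but pay a complexity penalty;
# BIC picks the topology with the best trade-off.
best = min(candidates, key=lambda s: bic(candidates[s], n_params(s), n_obs))
```

Here the modest likelihood gains of the 5- and 8-state models do not offset their extra parameters, so the 3-state topology wins.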

    Hmm-based monitoring of packet channels

    Abstract. The performance of real-time applications on network communication channels is strongly related to losses and temporal delays. Several studies have shown that these network features may be correlated and exhibit a certain degree of memory, such as bursty losses and delays. The memory and the statistical dependence between losses and temporal delays suggest that the channel may be well modelled by a Hidden Markov Model (HMM) with appropriate hidden variables that capture the current state of the network. In this paper we discuss the effectiveness of using an HMM to jointly model the loss and delay behavior of real communication channels. Excellent performance in modelling typical channel behavior on a set of real packet links is observed. The system parameters are found via a modified version of the EM algorithm. Hidden-state analysis shows how the state variables characterize channel dynamics. State-sequence estimation is obtained with the Viterbi algorithm. Real-time modelling of the channel is the first step towards implementing adaptive communication strategies.
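The Viterbi state-sequence estimation mentioned above can be sketched for a toy two-state (good/bad) packet channel observed through a loss indicator. The transition, emission, and start probabilities below are made-up illustrative values, not the fitted parameters from the paper:

```python
import math

# Hypothetical 2-state channel: state 0 = "good", state 1 = "bad".
trans = [[0.95, 0.05], [0.30, 0.70]]   # P(next state | current state)
emit  = [[0.99, 0.01], [0.60, 0.40]]   # P(observation | state), obs 1 = loss
start = [0.8, 0.2]

def viterbi(obs):
    """Most likely hidden state sequence (log domain for stability)."""
    n = len(trans)
    V = [[math.log(start[s]) + math.log(emit[s][obs[0]]) for s in range(n)]]
    back = []
    for o in obs[1:]:
        col, ptr = [], []
        for s in range(n):
            best = max(range(n), key=lambda p: V[-1][p] + math.log(trans[p][s]))
            col.append(V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o]))
            ptr.append(best)
        V.append(col)
        back.append(ptr)
    path = [max(range(n), key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

losses = [0, 0, 1, 1, 1, 0, 0]   # 1 = lost packet
states = viterbi(losses)         # burst of losses maps to the "bad" state
```

The burst of three consecutive losses is explained by a visit to the bad state, illustrating how the hidden state tracks channel dynamics.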

    Evaluation of relevance of stochastic parameters on Hidden Markov Models

    Prediction of a particular physical phenomenon is based on knowledge of that phenomenon. This knowledge helps us conceptualize the phenomenon through different models. Hidden Markov Models (HMMs) can be used for modeling complex processes, and this kind of model is used as a tool in fault-diagnosis systems. Nowadays, industrial robots operating in stochastic environments need fault detection to prevent any breakdown. In this paper, we evaluate the relevance of Hidden Markov Model parameters without a priori knowledge. After a brief introduction to Hidden Markov Models, we present the model selection criteria most used in the current literature and some methods for evaluating the relevance of stochastic events resulting from Hidden Markov Models. We support our study with an example of a simulated industrial process using the synthetic model of Vrignat's study (Vrignat 2010). We then evaluate the output parameters of the various models tested on this process, to finally identify the most relevant model.

    Dynamical complexity of short and noisy time series: Compression-Complexity vs. Shannon entropy

    Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures which are based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such compression-complexity measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady-state value faster than LZ. Compression-complexity measures are promising for applications which involve short and noisy time series.
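The contrast between the two kinds of measure can be sketched with empirical Shannon entropy and the classic Lempel-Ziv (1976) phrase-counting complexity on short binary strings. This is a generic illustration (it does not implement the paper's ETC measure): a strictly periodic sequence attains the maximal entropy of 1 bit/symbol, yet its LZ complexity stays low, showing that the compression-based measure captures structure that symbol frequencies miss.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Empirical Shannon entropy in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def lz76_complexity(seq):
    """Number of phrases in the Lempel-Ziv (1976) parsing of seq."""
    s = ''.join(map(str, seq))
    i, c = 0, 0
    while i < len(s):
        length = 1
        # Extend the current phrase while it already occurs earlier
        # (overlap with all but the phrase's final symbol is allowed).
        while i + length <= len(s) and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

periodic = "0101010101"
h = shannon_entropy(periodic)     # 1.0 bit/symbol: maximal for binary data
lz = lz76_complexity(periodic)    # only 3 phrases: 0 | 1 | 01010101
```

On the classic example string "0001101001000101" this parsing yields 6 phrases, matching the textbook LZ76 count.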

    The Influence the Training Set Size Has on the Performance of a Digit Speech Recognition System in Macedonian

    No full text

    From Information to Intelligence: The Role of Relative Significance in Decision Making and Inference

    No full text

    Cyclic Viterbi Score for Linear Hidden Markov Models

    No full text

    A Simple But Effective Approach to Speaker Tracking in Broadcast News

    No full text

    Hybrid Support Vector Machine and General Model Approach for Audio Classification

    No full text