Learning Tree Distributions by Hidden Markov Models
Hidden tree Markov models allow learning distributions for tree-structured data while being interpretable as nondeterministic automata. We provide a concise summary of the main approaches in the literature, focusing in particular on the causality assumptions introduced by the choice of a specific tree visit direction. We then sketch a novel non-parametric generalization of the bottom-up hidden tree Markov model with its interpretation as a nondeterministic tree automaton with infinite states.
Comment: Accepted at the LearnAut2018 workshop
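The bottom-up recursion at the heart of such models can be sketched as a small likelihood computation over binary trees. Everything below (state and alphabet sizes, the randomly drawn parameters, the example tree) is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
S, O = 3, 2  # hidden states, observation symbols (illustrative sizes)

# Hypothetical parameters of a bottom-up binary hidden tree Markov model:
prior = rng.dirichlet(np.ones(S))                # leaf state prior
trans = rng.dirichlet(np.ones(S), size=(S, S))   # trans[i, j, k] = P(parent=k | left=i, right=j)
emit = rng.dirichlet(np.ones(O), size=S)         # emit[k, o] = P(obs=o | state=k)

def upward(tree):
    """Return beta[k] = P(observed subtree, subtree root state = k), bottom-up."""
    if isinstance(tree, int):                    # a leaf carries an observation symbol
        return prior * emit[:, tree]
    obs, left, right = tree
    bl, br = upward(left), upward(right)
    # Sum over child states: beta[k] = emit[k, obs] * sum_ij bl[i] br[j] trans[i, j, k]
    return emit[:, obs] * np.einsum('i,j,ijk->k', bl, br, trans)

tree = (0, (1, 1, 0), 1)   # (root obs, left subtree, right subtree); leaves are ints
likelihood = upward(tree).sum()  # marginalize the root state
```

The non-parametric generalization the abstract mentions would, informally, let the state index k range over an unbounded set instead of a fixed S.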
Particle Gibbs for Infinite Hidden Markov Models
This is the final version of the article; it first appeared from Curran Associates via http://papers.nips.cc/paper/5968-particle-gibbs-for-infinite-hidden-markov-models
Infinite Hidden Markov Models (iHMMs) are an attractive, nonparametric generalization of the classical Hidden Markov Model that can automatically infer the number of hidden states in the system. However, due to the infinite-dimensional nature of the transition dynamics, performing inference in the iHMM is difficult. In this paper, we present an infinite-state Particle Gibbs (PG) algorithm to resample state trajectories for the iHMM. The proposed algorithm uses an efficient proposal optimized for iHMMs and leverages ancestor sampling to improve the mixing of the standard PG algorithm. Our algorithm demonstrates significant convergence improvements on synthetic and real-world data sets.
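The "infinite-dimensional transition dynamics" come from the hierarchical Dirichlet process prior underlying the iHMM: a shared stick-breaking base measure over states, with each state's transition row drawn from a Dirichlet process around it. A minimal truncated sketch (the concentration values and truncation level are hypothetical, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma, K = 4.0, 3.0, 50   # hypothetical concentrations; K-term truncation

# GEM stick-breaking for the top-level DP: a shared base distribution over states.
v = rng.beta(1.0, gamma, size=K)
beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
# beta sums to slightly less than 1: the truncation drops the remaining stick mass.

# Each state's transition row is a Dirichlet draw centered on beta.
pi = rng.dirichlet(alpha * beta + 1e-9, size=K)  # tiny jitter keeps all alphas positive
```

The PG sampler in the paper resamples state trajectories conditionally on such rows, instantiating new states on demand rather than fixing K in advance.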
Informational and Causal Architecture of Discrete-Time Renewal Processes
Renewal processes are broadly used to model stochastic behavior consisting of
isolated events separated by periods of quiescence, whose durations are
specified by a given probability law. Here, we identify the minimal sufficient
statistic for their prediction (the set of causal states), calculate the
historical memory capacity required to store those states (statistical
complexity), delineate what information is predictable (excess entropy), and
decompose the entropy of a single measurement into that shared with the past,
future, or both. The causal state equivalence relation defines a new subclass
of renewal processes with a finite number of causal states despite having an
unbounded interevent count distribution. We use these formulae to analyze the
output of the parametrized Simple Nonunifilar Source, generated by a simple
two-state hidden Markov model, but with an infinite-state epsilon-machine
presentation. All in all, the results lay the groundwork for analyzing
processes with infinite statistical complexity and infinite excess entropy.
Comment: 18 pages, 9 figures, 1 table; http://csc.ucdavis.edu/~cmg/compmech/pubs/dtrp.ht
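The kind of object being analyzed can be sketched empirically: simulate a generic two-state binary hidden Markov model (illustrative parameters, not the paper's exact Simple Nonunifilar Source) and estimate the interevent count distribution, i.e. the law of the quiescent gaps between events:

```python
import numpy as np

rng = np.random.default_rng(2)
# A generic two-state binary HMM (illustrative parameters only):
T = np.array([[0.7, 0.3], [0.4, 0.6]])      # hidden-state transition matrix
E = np.array([[0.9, 0.1], [0.2, 0.8]])      # E[s, o] = P(obs=o | state=s)

def sample(n, s=0):
    """Draw n binary observations from the HMM starting in state s."""
    out = np.empty(n, dtype=int)
    for t in range(n):
        out[t] = rng.choice(2, p=E[s])
        s = rng.choice(2, p=T[s])
    return out

x = sample(50_000)
# Interevent counts: lengths of the runs of 0s between consecutive 1s ("events").
ones = np.flatnonzero(x == 1)
gaps = np.diff(ones) - 1
counts = np.bincount(gaps)
p_gap = counts / counts.sum()               # empirical interevent count distribution
```

The paper's point is that even such a two-state generator can require an infinite-state epsilon-machine presentation when viewed through this renewal-process lens.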
Modeling U.S. Inflation Dynamics: A Bayesian Nonparametric Approach
This paper uses an infinite hidden Markov model (IHMM) to analyze U.S. inflation dynamics with a particular focus on the persistence of inflation. The IHMM is a Bayesian nonparametric approach to modeling structural breaks. It allows for an unknown number of breakpoints and is a flexible and attractive alternative to existing methods. We found a clear structural break during the recent financial crisis. Prior to that, inflation persistence was high and fairly constant.
Keywords: inflation dynamics, hierarchical Dirichlet process, IHMM, structural breaks, Bayesian nonparametrics
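As a toy illustration of the kind of persistence break an IHMM is asked to detect (this is not the paper's model or data), one can simulate an AR(1) series whose persistence drops at a known point and recover the change with a simple rolling estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative only: an AR(1) "inflation" series whose persistence phi drops at a
# break point, mimicking the regime change a structural-break model would find.
n, brk = 400, 200
phi = np.where(np.arange(n) < brk, 0.95, 0.2)   # persistence before / after the break
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi[t] * y[t - 1] + rng.normal(scale=0.5)

def phi_hat(seg):
    """OLS estimate of AR(1) persistence: sum y_t y_{t-1} / sum y_{t-1}^2."""
    return seg[1:] @ seg[:-1] / (seg[:-1] @ seg[:-1])

early, late = phi_hat(y[:brk]), phi_hat(y[brk:])  # high before the break, low after
```

The IHMM does this nonparametrically: rather than fixing the break point or the number of regimes, it lets a hierarchical Dirichlet process prior infer both from the data.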
HMM-MIO: An enhanced hidden Markov model for action recognition
Generative models can be flexibly employed in a variety of tasks such as classification, detection and segmentation thanks to their explicit modelling of likelihood functions. However, likelihood functions are hard to model accurately in many real cases. In this paper, we present an enhanced hidden Markov model capable of dealing with the noisy, high-dimensional and sparse measurements typical of action feature sets. The modified model, named hidden Markov model with multiple, independent observations (HMM-MIO), combines: a) robustness to observation outliers, b) dimensionality reduction, and c) processing of sparse observations. A set of experimental results over the Weizmann and KTH datasets shows that this model can be tuned to achieve classification accuracy comparable to that of discriminative classifiers. While discriminative approaches remain the natural choice for classification tasks, our results show that likelihoods, too, can be modelled to a high level of accuracy. In the near future, we plan to extend HMM-MIO along the lines of infinite Markov models and to integrate it into a switching model for continuous human action recognition. © 2011 IEEE
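One standard way to obtain the outlier robustness the abstract mentions is to use heavy-tailed emission densities. The sketch below plugs Student-t emissions into an ordinary scaled forward recursion; the parameters are hypothetical and the actual HMM-MIO observation model is more elaborate:

```python
import numpy as np
from math import lgamma, log, pi as PI

def log_t(x, mu, sigma, nu):
    """Log-density of a Student-t; heavy tails down-weight observation outliers."""
    z = (x - mu) / sigma
    return (lgamma((nu + 1) / 2) - lgamma(nu / 2)
            - 0.5 * log(nu * PI) - log(sigma)
            - (nu + 1) / 2 * np.log1p(z * z / nu))

# A two-state HMM with Student-t emissions (illustrative parameters only):
T = np.array([[0.9, 0.1], [0.1, 0.9]])
mu, sigma, nu = np.array([0.0, 3.0]), np.array([1.0, 1.0]), 4.0

def loglik(x):
    """Scaled forward algorithm: log P(x) under the two-state t-emission HMM."""
    alpha = 0.5 * np.array([np.exp(log_t(x[0], mu[s], sigma[s], nu)) for s in range(2)])
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for xt in x[1:]:
        alpha = (alpha @ T) * np.array([np.exp(log_t(xt, mu[s], sigma[s], nu)) for s in range(2)])
        c = alpha.sum(); ll += np.log(c); alpha /= c
    return ll

x = np.array([0.2, -0.1, 3.1, 2.8, 50.0])   # last point is a gross outlier
ll = loglik(x)                               # stays finite despite the outlier
```

With Gaussian emissions the outlier would contribute an enormous negative log-likelihood; the t-density's polynomial tails keep its influence bounded, which is the robustness property a) above refers to.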
The infinite Viterbi alignment and decay-convexity
The infinite Viterbi alignment is the limiting maximum a posteriori estimate of the unobserved path in a hidden Markov model as the length of the time horizon grows. For models on state space R^d satisfying a new “decay-convexity” condition, we develop an approach to existence of the infinite Viterbi alignment in an infinite-dimensional Hilbert space. Quantitative bounds on the distance to the Viterbi process, which are the first of their kind, are derived and used to illustrate how approximate estimation via parallelization can be accurate and scalable to high-dimensional problems, because the rate of convergence to the infinite Viterbi alignment does not necessarily depend on d. The results are applied to approximate estimation via parallelization and to a model of neural population activity.
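For context, the finite-horizon Viterbi (MAP) path whose long-horizon limit the paper studies is computed by standard dynamic programming. A minimal discrete-state sketch (parameters are illustrative; the paper works with continuous state space R^d):

```python
import numpy as np

def viterbi(obs, T, E, pi):
    """Finite-horizon MAP state path for a discrete HMM, in log space."""
    n, S = len(obs), len(pi)
    delta = np.log(pi) + np.log(E[:, obs[0]])    # best log-score ending in each state
    back = np.zeros((n, S), dtype=int)           # backpointers
    for t in range(1, n):
        scores = delta[:, None] + np.log(T)      # scores[i, j]: best path via i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(E[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):                # trace backpointers to the start
        path.append(int(back[t][path[-1]]))
    return path[::-1]

T = np.array([[0.8, 0.2], [0.3, 0.7]])           # state transitions
E = np.array([[0.9, 0.1], [0.2, 0.8]])           # E[s, o] = P(obs=o | state=s)
path = viterbi([0, 0, 1, 1, 1], T, E, np.array([0.5, 0.5]))
# path follows the observations: state 0 while observing 0s, state 1 afterwards
```

The paper's question is, roughly, whether and how fast the initial segment of this path stabilizes as n grows, which underpins the parallelization guarantees mentioned above.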