Localizing the Latent Structure Canonical Uncertainty: Entropy Profiles for Hidden Markov Models
This report addresses state inference for hidden Markov models. These models
rely on unobserved states, which often have a meaningful interpretation,
making it necessary to develop diagnostic tools that quantify state
uncertainty. The entropy of the state sequence that explains an observed
sequence for a given hidden Markov chain model can be considered the
canonical measure of state sequence uncertainty. This canonical measure is
not reflected by the classic multivariate state profiles computed by the
smoothing algorithm, which summarize the possible state sequences. Here, we
introduce a new type of profile with the following properties: (i) these
profiles of conditional entropies decompose the canonical measure of state
sequence uncertainty along the sequence, making it possible to localize this
uncertainty; (ii) these profiles are univariate and thus remain easily
interpretable on tree structures. We show how to extend the smoothing
algorithms for hidden Markov chain and tree models to compute these entropy
profiles efficiently.
Comment: Submitted to Journal of Machine Learning Research; No RR-7896 (2012)
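As an illustration only (not the authors' implementation), the chain case can be sketched in Python/NumPy. It uses the fact that, conditionally on the observations, the state sequence of an HMM is itself a Markov chain, so the total entropy H(S_{1:T} | X) decomposes as H(S_1 | X) plus a sum of conditional entropies H(S_t | S_{t-1}, X) obtained from the forward-backward quantities; all parameter values below are hypothetical.

```python
import numpy as np

def entropy_profile(A, B, pi, obs):
    """Conditional entropy profile for a hidden Markov chain.

    A  : (K, K) transition matrix, B : (K, M) emission matrix,
    pi : (K,) initial distribution, obs : observed symbol indices.
    Returns h with h[0] = H(S_1 | X) and h[t] = H(S_t | S_{t-1}, X);
    h.sum() equals the state-sequence entropy H(S_{1:T} | X).
    (Unscaled forward-backward: fine for short sequences.)
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # smoothed marginals

    def H(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    h = np.zeros(T)
    h[0] = H(gamma[0])
    for t in range(1, T):
        for i in range(K):
            # posterior transition p(S_t = . | S_{t-1} = i, X)
            q = A[i] * B[:, obs[t]] * beta[t] / beta[t - 1, i]
            h[t] += gamma[t - 1, i] * H(q)
    return h
```

Peaks of `h` localize where the state sequence is genuinely ambiguous, which is the diagnostic use case the abstract describes; the tree-structured extension follows the same decomposition along the tree.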
A generalized risk approach to path inference based on hidden Markov models
Motivated by the unceasing interest in hidden Markov models (HMMs), this
paper re-examines hidden path inference in these models, using primarily a
risk-based framework. While the most common maximum a posteriori (MAP), or
Viterbi, path estimator and the minimum error, or Posterior Decoder (PD), have
long been around, other path estimators, or decoders, have been either only
hinted at or applied more recently and in dedicated applications generally
unfamiliar to the statistical learning community. Over a decade ago, however, a
family of algorithmically defined decoders aiming to hybridize the two standard
ones was proposed (Brushe et al., 1998). The present paper gives a careful
analysis of this hybridization approach, identifies several problems with it
and with other previously proposed approaches, and proposes practical
resolutions. Furthermore, simple modifications of the classical
criteria for hidden path recognition are shown to lead to a new class of
decoders. Dynamic programming algorithms to compute these decoders in the usual
forward-backward manner are presented. A particularly interesting subclass of
such estimators can also be viewed as hybrids of the MAP and PD estimators.
Similar to previously proposed MAP-PD hybrids, the new class is parameterized
by a small number of tunable parameters. Unlike their algorithmic predecessors,
the new risk-based decoders are more clearly interpretable, and, most
importantly, work "out of the box" in practice, which is demonstrated on some
real bioinformatics tasks and data. Some further generalizations and
applications are discussed in conclusion.
Comment: Section 5: corrected denominators of the scaled beta variables (pp.
27-30), leading to corrections in Claims 1 and 3, Prop. 12, and the bottom of
Table 1. Decoder (49) and Corol. 14 are generalized to handle zero
probabilities. Notation is more closely aligned with (Bishop, 2006). Details
are inserted in eqn-s (43); the positivity assumption in Prop. 11 is made
explicit. Fixed typographical errors in equation (41), Example
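For reference, the two standard decoders the paper takes as its starting points can be sketched as follows. This is a generic Python/NumPy illustration of the MAP (Viterbi) and posterior (pointwise-MAP) decoders, not the paper's generalized risk-based decoders; parameter names and shapes are assumptions.

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """MAP path decoder: the single most probable state sequence
    argmax_s p(s, x), computed by dynamic programming in the log domain."""
    T, K = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: via state i
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(A, B, pi, obs):
    """Posterior decoder: minimizes the expected number of state errors
    by taking the argmax of each smoothed marginal p(s_t | x)."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return list((alpha * beta).argmax(axis=1))
```

The two decoders optimize different risks and can disagree: the posterior-decoded sequence may even have zero joint probability (e.g. traverse a forbidden transition), which is one motivation for the hybrid and risk-based decoders the abstract discusses.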