
    Computational Methods for Hidden Markov Tree Models—An Application to Wavelet Trees

    Get PDF: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1323262
    Hidden Markov tree models were introduced by Crouse et al. in 1998 for modeling non-independent, non-Gaussian wavelet transform coefficients. In their paper, they developed the equivalent of the forward-backward algorithm for hidden Markov tree models and called it the 'upward-downward algorithm'. This algorithm is subject to the same numerical limitations as the forward-backward algorithm for hidden Markov chains (HMCs). In this paper, adapting the ideas of Devijver from 1985, we propose a new 'upward-downward' algorithm, which is a true smoothing algorithm and is immune to numerical underflow. Furthermore, we propose a Viterbi-like algorithm for global restoration of the hidden state tree. The contribution of these algorithms as diagnostic tools is illustrated through the modeling of statistical dependencies between wavelet coefficients, with a special emphasis on local regularity changes.
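
    To make the underflow issue concrete: any upward (likelihood) recursion on a tree multiplies many probabilities together, so a practical implementation renormalizes at every node, exactly as the scaled forward-backward pass does for chains. The sketch below is a generic per-node-scaled upward-downward smoother for a hidden Markov tree; it is not the exact recursion of the paper, and the tree encoding and names (children, emission, trans, prior) are illustrative.

        import numpy as np

        def hmt_smoother(children, root, emission, trans, prior):
            # children : dict, node -> list of child nodes (rooted tree)
            # emission : dict, node -> length-K array of P(x_node | S_node = j)
            # trans    : (K, K) array, trans[i, j] = P(S_child = j | S_parent = i)
            # prior    : length-K array of P(S_root = i)
            # Returns (posterior, loglik), posterior[u][j] = P(S_u = j | all x).
            beta, logscale = {}, 0.0

            def up(u):
                # beta[u][j] proportional to P(obs in subtree of u | S_u = j),
                # renormalized at every node so the recursion never underflows.
                nonlocal logscale
                b = np.asarray(emission[u], dtype=float).copy()
                for c in children.get(u, []):
                    up(c)
                    b *= trans @ beta[c]        # marginalize the child's state
                s = b.sum()
                logscale += np.log(s)           # scales accumulate the log-likelihood
                beta[u] = b / s

            up(root)
            loglik = logscale + np.log(prior @ beta[root])
            posterior = {root: prior * beta[root] / (prior @ beta[root])}

            def down(u):
                # Smoothing: propagate the parent's posterior to each child.
                for c in children.get(u, []):
                    msg = trans * beta[c]                   # msg[i, j] ~ P(S_c = j, subtree(c) | S_u = i)
                    msg /= msg.sum(axis=1, keepdims=True)   # -> P(S_c = j | S_u = i, all x)
                    posterior[c] = posterior[u] @ msg
                    down(c)

            down(root)
            return posterior, loglik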

    Learning Tree Distributions by Hidden Markov Models

    Full text link
    Hidden tree Markov models allow learning distributions for tree-structured data while being interpretable as nondeterministic automata. We provide a concise summary of the main approaches in the literature, focusing in particular on the causality assumptions introduced by the choice of a specific tree visit direction. We then sketch a novel non-parametric generalization of the bottom-up hidden tree Markov model, with its interpretation as a nondeterministic tree automaton with infinite states. Comment: Accepted at the LearnAut 2018 workshop.

    Latent tree models

    Full text link
    Latent tree models are graphical models defined on trees, in which only a subset of variables is observed. They were first discussed by Judea Pearl as tree-decomposable distributions that generalise star-decomposable distributions such as the latent class model. Latent tree models, or their submodels, are widely used in phylogenetic analysis, network tomography, computer vision, causal modeling, and data clustering. They also contain other well-known classes of models such as hidden Markov models, the Brownian motion tree model, the Ising model on a tree, and many popular models used in phylogenetics. This article offers a concise introduction to the theory of latent tree models. We emphasise the role of tree metrics in the structural description of this model class, in designing learning algorithms, and in understanding fundamental limits of what can be learned and when.
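
    As a small aside (not from the article itself): the tree metrics mentioned above are exactly the distances satisfying Buneman's four-point condition, which is easy to check directly. A minimal sketch:

        from itertools import combinations

        def is_tree_metric(d, points, tol=1e-9):
            # Four-point condition: for every quadruple, among the three sums
            #   d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k)
            # the two largest must be equal. A metric satisfies this for all
            # quadruples iff it is realized by edge lengths on some tree.
            for i, j, k, l in combinations(points, 4):
                s = sorted([d[i][j] + d[k][l], d[i][k] + d[j][l], d[i][l] + d[j][k]])
                if abs(s[2] - s[1]) > tol:
                    return False
            return True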

    WARP: Wavelets with adaptive recursive partitioning for multi-dimensional data

    Full text link
    Effective identification of asymmetric and local features in images and other data observed on multi-dimensional grids plays a critical role in a wide range of applications, including biomedical and natural image processing. Moreover, the ever-increasing amount of image data, in terms of both the resolution per image and the number of images processed per application, requires algorithms and methods for such applications to be computationally efficient. We develop a new probabilistic framework for multi-dimensional data that overcomes these challenges by incorporating data adaptivity into discrete wavelet transforms, thereby allowing them to adapt to the geometric structure of the data while maintaining linear computational scalability. By exploiting a connection between the local directionality of wavelet transforms and recursive dyadic partitioning on the grid points of the observation, we obtain the desired adaptivity by adding to the traditional Bayesian wavelet regression framework an additional layer of Bayesian modeling on the space of recursive partitions over the grid points. We derive the corresponding inference recipe in the form of a recursive representation of the exact posterior, and develop a class of efficient recursive message passing algorithms for achieving exact Bayesian inference with a computational complexity linear in the resolution and sample size of the images. While our framework is applicable to a range of problems, including multi-dimensional signal processing, compression, and structural learning, we illustrate its use and evaluate its performance in the context of 2D and 3D image reconstruction using real images from the ImageNet database. We also apply the framework to analyze a data set from retinal optical coherence tomography.
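
    To illustrate the connection the abstract draws between wavelets and recursive dyadic partitioning, the sketch below computes (unnormalized) Haar-style coefficients by recursively halving a 1D signal; each node of the dyadic tree stores the difference between the means of its two halves. WARP's contribution, the Bayesian layer that adapts the partition (and, in 2D/3D, the split direction) to the data, is not shown; this is only the fixed-partition building block, and all names are illustrative.

        import numpy as np

        def haar_rdp(x):
            # One (unnormalized) Haar-style coefficient per node of the dyadic
            # tree: the difference between the left and right halves' means.
            coeffs = []

            def split(lo, hi):
                if hi - lo < 2:
                    return
                mid = (lo + hi) // 2
                coeffs.append(x[lo:mid].mean() - x[mid:hi].mean())
                split(lo, mid)
                split(mid, hi)

            split(0, len(x))
            return np.array(coeffs)

        # A step edge concentrates energy in very few coefficients:
        # haar_rdp(np.r_[np.zeros(8), np.ones(8)])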

    Localizing the Latent Structure Canonical Uncertainty: Entropy Profiles for Hidden Markov Models

    Get PDF
    This report addresses state inference for hidden Markov models. These models rely on unobserved states, which often have a meaningful interpretation. This makes it necessary to develop diagnostic tools for quantifying state uncertainty. The entropy of the state sequence that explains an observed sequence for a given hidden Markov chain model can be considered the canonical measure of state sequence uncertainty. This canonical measure is not reflected by the classic multivariate state profiles computed by the smoothing algorithm, which summarize the possible state sequences. Here, we introduce a new type of profile with the following properties: (i) these profiles of conditional entropies decompose the canonical measure of state sequence uncertainty along the sequence, making it possible to localize this uncertainty; (ii) these profiles are univariate and thus remain easily interpretable on tree structures. We show how to extend the smoothing algorithms for hidden Markov chain and tree models to compute these entropy profiles efficiently. Comment: Submitted to the Journal of Machine Learning Research; Research Report No. RR-7896 (2012).
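
    The decomposition in (i) follows from the chain rule: conditionally on the observations, the states of a hidden Markov chain still form a Markov chain, so H(S_0, ..., S_{T-1} | X) = H(S_0 | X) + sum over t of H(S_t | S_{t-1}, X), and each term can be read off the usual smoothed quantities. A minimal sketch under that reading (the report's own recursions may differ in form):

        import numpy as np

        def entropy_profile(gamma, xi):
            # gamma : (T, K) smoothed marginals, gamma[t, j] = P(S_t = j | X)
            # xi    : (T-1, K, K) pairwise posteriors,
            #         xi[t, i, j] = P(S_t = i, S_{t+1} = j | X)
            # Returns h with h[0] = H(S_0 | X) and h[t] = H(S_t | S_{t-1}, X).
            # Because the states are Markov given X, h.sum() is the global
            # state-sequence entropy H(S_0, ..., S_{T-1} | X).
            T, K = gamma.shape
            h = np.zeros(T)
            p0 = gamma[0][gamma[0] > 0]
            h[0] = -np.sum(p0 * np.log(p0))
            for t in range(1, T):
                denom = np.where(gamma[t - 1] > 0, gamma[t - 1], 1.0)[:, None]
                cond = xi[t - 1] / denom             # P(S_t = j | S_{t-1} = i, X)
                mask = xi[t - 1] > 0
                h[t] = -np.sum(xi[t - 1][mask] * np.log(cond[mask]))
            return h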

    Improving fusion of surveillance images in sensor networks using independent component analysis

    Get PDF

    A Max-Product EM Algorithm for Reconstructing Markov-tree Sparse Signals from Compressive Samples

    Full text link
    We propose a Bayesian expectation-maximization (EM) algorithm for reconstructing Markov-tree sparse signals via belief propagation. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown approximately sparse signal and zero-mean white Gaussian noise with an unknown variance. The signal is composed of large- and small-magnitude components identified by binary state variables whose probabilistic dependence structure is described by a Markov tree. Gaussian priors are assigned to the signal coefficients given their state variables, and Jeffreys' noninformative prior is assigned to the noise variance. Our signal reconstruction scheme is based on an EM iteration that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance. We construct the missing data for the EM iteration so that the complete-data posterior distribution corresponds to a hidden Markov tree (HMT) probabilistic graphical model that contains no loops, and we implement its maximization (M) step via a max-product algorithm. This EM algorithm estimates the vector of state variables and iteratively solves a linear system of equations to obtain the corresponding signal estimate. We select the noise variance so that the corresponding estimated signal and state variables obtained upon convergence of the EM iteration have the largest marginal posterior distribution. We compare the proposed and existing state-of-the-art reconstruction methods via signal and image reconstruction experiments. Comment: To appear in IEEE Transactions on Signal Processing.
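
    Because the HMT graph is a tree, a max-product M step like the one mentioned above reduces to an exact Viterbi-style pass: an upward max-marginalization followed by downward backtracking. The sketch below shows that generic pass in log space; the paper's full M step additionally re-estimates the signal by solving a linear system, which is omitted here, and the tree encoding and names are illustrative.

        import numpy as np

        def tree_map_states(children, root, emission, trans, prior):
            # Max-product (Viterbi-like) MAP state estimation on a Markov tree.
            # children : dict, node -> list of child nodes; emission : dict,
            # node -> length-K likelihood array; trans : (K, K); prior : length-K.
            # Works in log space (assumes strictly positive probabilities).
            delta, back = {}, {}

            def up(u):
                # delta[u][i] = max over subtree state configurations of the
                # joint log-probability of the subtree, given S_u = i.
                d = np.log(emission[u])
                for c in children.get(u, []):
                    up(c)
                    scores = np.log(trans) + delta[c][None, :]  # scores[i, j]
                    back[c] = scores.argmax(axis=1)             # best child state per parent state
                    d += scores.max(axis=1)
                delta[u] = d

            up(root)
            states = {root: int(np.argmax(np.log(prior) + delta[root]))}

            def down(u):
                # Backtrack the stored argmax pointers from the root.
                for c in children.get(u, []):
                    states[c] = int(back[c][states[u]])
                    down(c)

            down(root)
            return states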