
    Accuracy of MAP segmentation with hidden Potts and Markov mesh prior models via Path Constrained Viterbi Training, Iterated Conditional Modes and Graph Cut based algorithms

    In this paper, we study the statistical classification accuracy of two different Markov random field environments for pixelwise image segmentation, treating the labels of the image as hidden states and estimating those labels as the solution of the MAP equation. The emission distribution is assumed to be the same in all models; the difference lies in the Markovian prior hypothesis made over the labeling random field. The a priori labeling knowledge is modeled with (a) a second-order anisotropic Markov mesh and (b) a classical isotropic Potts model. Under these models, we consider three different segmentation procedures: 2D Path Constrained Viterbi training for the hidden Markov mesh, a Graph Cut based segmentation for the first-order isotropic Potts model, and ICM (Iterated Conditional Modes) for the second-order isotropic Potts model. We provide a unified view of all three methods and investigate their goodness of fit for classification, studying the influence of parameter estimation, computational gain, and extent of automation on the statistical measures Overall Accuracy, Relative Improvement, and Kappa coefficient; this allows robust and accurate statistical analysis of synthetic and real-life experimental data from the field of Dental Diagnostic Radiography. All algorithms, using the learned parameters, generate good segmentations with little interaction when the images have a clear multimodal histogram. Suboptimal learning proves to be frail in the case of non-distinctive modes, which limits the complexity of usable models and hence the achievable error rate. All Matlab code written is provided in a toolbox available for download from our website, following the Reproducible Research Paradigm.
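    As a concrete illustration of the ICM procedure named above, the following is a minimal sketch of MAP labeling under an isotropic Potts prior. It is our illustration, not the paper's toolbox: the Gaussian emission model, 4-neighborhood, synchronous updates (classical ICM visits pixels sequentially), and all parameter values are simplifying assumptions.

```python
# ICM sketch: greedy coordinate ascent on the MAP posterior of a
# Potts-prior segmentation with Gaussian emissions (illustrative).
import numpy as np

def icm_potts(image, means, variances, beta=1.0, n_iters=10):
    """Return an (H, W) label map for a K-class Potts model."""
    K = len(means)
    H, W = image.shape
    # Emission term: per-pixel Gaussian log-likelihood for each class.
    loglik = np.stack([
        -0.5 * np.log(2 * np.pi * variances[k])
        - (image - means[k]) ** 2 / (2 * variances[k])
        for k in range(K)
    ])                                   # shape (K, H, W)
    labels = loglik.argmax(axis=0)       # initialize at the ML labeling
    for _ in range(n_iters):
        score = loglik.copy()
        for k in range(K):
            # Potts prior: reward 4-neighbors already carrying label k.
            agree = np.zeros((H, W))
            agree[1:, :]  += labels[:-1, :] == k
            agree[:-1, :] += labels[1:, :] == k
            agree[:, 1:]  += labels[:, :-1] == k
            agree[:, :-1] += labels[:, 1:] == k
            score[k] += beta * agree
        new_labels = score.argmax(axis=0)
        if np.array_equal(new_labels, labels):
            break                        # reached a local posterior maximum
        labels = new_labels
    return labels
```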

    On segmentation with Markovian models

    This paper addresses the image modeling problem under the assumption that images can be represented by 2nd-order hidden Markov random field models. The modeling applications we have in mind comprise pixelwise segmentation of gray-level images coming from the field of Oral Radiographic Differential Diagnosis. Segmentation is achieved by approximations to the solution of the maximum a posteriori (MAP) equation when the emission distribution is assumed to be the same in all models and the difference lies in the neighborhood Markovian hypothesis made over the labeling random field. For two algorithms, 2D path-constrained Viterbi training and Potts-ICM, we investigate goodness of fit by studying statistical complexity, computational gain, extent of automation, and rate of classification measured with the kappa statistic. All code written is provided in a Matlab toolbox available for download from our website, following the Reproducible Research Paradigm. (Sociedad Argentina de Informática e Investigación Operativa)
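    For reference, the MAP equation that both segmentation abstracts above approximate can be written as follows; the notation is ours (a standard formulation), with $y$ the observed image and $x$ the hidden label field:

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_{x} \, p(x \mid y)
  = \arg\max_{x} \, \underbrace{p(y \mid x)}_{\text{emission}} \;
                    \underbrace{p(x)}_{\text{Potts or Markov mesh prior}}
```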

    Recovering the state sequence of hidden Markov models using mean-field approximations

    Inferring the sequence of states from observations is one of the most fundamental problems in hidden Markov models. In statistical physics language, this problem is equivalent to computing the marginals of a one-dimensional model with a random external field. While this task can be accomplished through transfer matrix methods, it quickly becomes intractable when the underlying state space is large. This paper develops several low-complexity approximate algorithms to address this inference problem for large state spaces. The new algorithms are based on various mean-field approximations of the transfer matrix. Their performance is studied in detail on a simple, realistic model for DNA pyrosequencing. (Comment: 43 pages, 41 figures)
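    To make the transfer matrix baseline concrete, here is a minimal sketch of exact HMM state marginals via the forward-backward recursion; each step costs O(k²) in the number of states k, which is the cost that mean-field approximations of the transfer matrix aim to reduce. The interface and normalization scheme are our illustrative choices.

```python
# Exact posterior state marginals p(x_t | y_1..T) for a discrete HMM,
# computed with the forward-backward (transfer matrix) recursions.
import numpy as np

def state_marginals(pi, A, lik):
    """pi: (k,) initial dist.; A: (k, k) transitions; lik: (T, k) p(y_t | x_t)."""
    T, k = lik.shape
    alpha = np.zeros((T, k))
    beta = np.ones((T, k))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()                    # normalize for stability
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * lik[t]    # forward transfer-matrix step
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (lik[t + 1] * beta[t + 1])  # backward step
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```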

    On the Performance of Short Block Codes over Finite-State Channels in the Rare-Transition Regime

    As the mobile application landscape expands, wireless networks are tasked with supporting different connection profiles, including real-time traffic and delay-sensitive communications. Among the many ensuing engineering challenges is the need to better understand the fundamental limits of forward error correction in non-asymptotic regimes. This article characterizes the performance of random block codes over finite-state channels and evaluates their queueing performance under maximum-likelihood decoding. In particular, classical results from information theory are revisited in the context of channels with rare transitions, and bounds on the probabilities of decoding failure are derived for random codes. This creates an analysis framework where channel dependencies within and across codewords are preserved. Such results are subsequently integrated into a queueing problem formulation. For instance, it is shown that, for random coding on the Gilbert-Elliott channel, the performance analysis based on upper bounds on error probability provides very good estimates of system performance and optimum code parameters. Overall, this study offers new insights about the impact of channel correlation on the performance of delay-aware, point-to-point communication links. It also provides novel guidelines on how to select code rates and block lengths for real-time traffic over wireless communication infrastructures.
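    The Gilbert-Elliott channel referred to above is the standard two-state (good/bad) Markov model of a bursty channel; the sketch below simulates it in the rare-transition regime, where the state persists across many channel uses so that errors are correlated within and across codewords. All numerical values are illustrative placeholders, not the article's parameters.

```python
# Gilbert-Elliott channel sketch: a two-state Markov chain modulates
# the bit-flip probability; rare transitions make errors bursty.
import numpy as np

def gilbert_elliott(n_bits, p_gb=1e-3, p_bg=1e-2,
                    eps_good=1e-4, eps_bad=0.1, rng=None):
    """Return a 0/1 error pattern; state 0 = good, 1 = bad."""
    rng = rng or np.random.default_rng()
    errors = np.zeros(n_bits, dtype=int)
    state = 0
    for i in range(n_bits):
        eps = eps_good if state == 0 else eps_bad
        errors[i] = rng.random() < eps
        # Rare-transition regime: p_gb, p_bg << 1, so the channel state
        # is nearly constant over a short block code's span.
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
    return errors
```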

    Deep Markov Random Field for Image Modeling

    Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and from it derive an approximate feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows the model to be learned efficiently from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art. (Comment: Accepted at ECCV 2016)
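    The coupling of recurrent passes along opposite directions can be pictured on a single row of pixels, as in this minimal sketch; the sizes, tanh nonlinearity, and additive fusion of the two passes are our illustrative assumptions, not the paper's architecture.

```python
# Two RNN passes over one pixel row, run in opposite directions and
# fused so every position is informed by context on both sides.
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 8, 3, 4                    # row length, input/hidden dims
x = rng.normal(size=(T, d_in))            # features for one pixel row
Wf, Uf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wb, Ub = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))

hf = np.zeros((T, d_h))                   # left-to-right pass
for t in range(T):
    prev = hf[t - 1] if t > 0 else np.zeros(d_h)
    hf[t] = np.tanh(Wf @ x[t] + Uf @ prev)

hb = np.zeros((T, d_h))                   # right-to-left pass
for t in range(T - 1, -1, -1):
    nxt = hb[t + 1] if t < T - 1 else np.zeros(d_h)
    hb[t] = np.tanh(Wb @ x[t] + Ub @ nxt)

h = hf + hb                               # each position sees both sides
```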

    A Unifying Review of Linear Gaussian Models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and by introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
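    The single basic generative model underlying the review is the linear Gaussian state-space model; in standard notation (ours, consistent with the review's framing):

```latex
\begin{aligned}
  x_{t+1} &= A\,x_t + w_t, \qquad & w_t &\sim \mathcal{N}(0, Q),\\
  y_t     &= C\,x_t + v_t, \qquad & v_t &\sim \mathcal{N}(0, R).
\end{aligned}
```

    Roughly, a static state ($A = 0$) with diagonal $R$ gives factor analysis, the limit of vanishing isotropic noise ($R = \epsilon I$, $\epsilon \to 0$) gives PCA, and pushing the state through a winner-take-all nonlinearity yields mixtures of Gaussians and hidden Markov models.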

    Nonuniform Markov models

    A statistical language model assigns probability to strings of arbitrary length. Unfortunately, it is not possible to gather reliable statistics on strings of arbitrary length from a finite corpus. Therefore, a statistical language model must decide that each symbol in a string depends on at most a small, finite number of other symbols in the string. In this report we propose a new way to model conditional independence in Markov models. The central feature of our nonuniform Markov model is that it makes predictions of varying lengths using contexts of varying lengths. Experiments on the Wall Street Journal corpus reveal that the nonuniform model performs slightly better than the classic interpolated Markov model. This result is somewhat remarkable because both models contain identical numbers of parameters whose values are estimated in a similar manner; the only difference between the two models is how they combine the statistics of longer and shorter strings. Keywords: nonuniform Markov model, interpolated Markov model, conditional independence, statistical language model, discrete time series. (Comment: 17 pages)
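    For background on the comparison, here is a minimal sketch of the classic interpolated Markov model the report measures against; it is our illustration, and the fixed interpolation weights and small maximum order are assumptions (in practice the weights are typically estimated, e.g. by EM).

```python
# Interpolated Markov model sketch: the next-symbol probability is a
# fixed mixture of relative frequencies over context lengths 0..N.
from collections import Counter

def train(corpus, max_order=2):
    """Count (context, symbol) pairs for every context length <= max_order."""
    ngram = {n: Counter() for n in range(max_order + 1)}
    ctx = {n: Counter() for n in range(max_order + 1)}
    for i, sym in enumerate(corpus):
        for n in range(min(max_order, i) + 1):
            c = tuple(corpus[i - n:i])       # the n symbols before position i
            ngram[n][(c, sym)] += 1
            ctx[n][c] += 1
    return ngram, ctx

def prob(sym, history, ngram, ctx, lambdas=(0.2, 0.3, 0.5)):
    """Mix the order-n estimates with fixed weights lambdas[n]."""
    p = 0.0
    for n, lam in enumerate(lambdas):
        if n > len(history):
            continue
        c = tuple(history[len(history) - n:])
        if ctx[n][c]:
            p += lam * ngram[n][(c, sym)] / ctx[n][c]
    return p

corpus = "the cat sat on the mat the cat ran".split()
ngram, ctx = train(corpus)
print(prob("sat", ["the", "cat"], ngram, ctx))
```

    The nonuniform model described above differs in that it varies the prediction length together with the context length, rather than always predicting a single next symbol.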