
    Minimum Conditional Description Length Estimation for Markov Random Fields

    In this paper we discuss a method, which we call Minimum Conditional Description Length (MCDL), for estimating the parameters of a subset of sites within a Markov random field. We assume that the edges are known for the entire graph G = (V, E). Then, for a subset U ⊂ V, we estimate the parameters for nodes and edges in U, as well as for edges incident to a node in U, by finding the exponential parameter for that subset that yields the best compression conditioned on the values on the boundary ∂U. Our estimate is derived from a temporally stationary sequence of observations on the set U. We discuss how this method can also be applied to estimate a spatially invariant parameter from a single configuration, and in so doing, derive the Maximum Pseudo-Likelihood (MPL) estimate. Comment: Information Theory and Applications (ITA) workshop, February 201
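A minimal sketch of the estimation principle described in this abstract, assuming a toy one-dimensional Ising chain as the subset U with two fixed boundary spins; the model, the grid search, and all variable names are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch of the MCDL idea: for a small subset U of an Ising-type MRF
# with known boundary values, pick the coupling parameter theta whose
# conditional distribution compresses a sequence of observations on U best.
# The 1-D toy model below is an illustrative assumption.
import itertools
import numpy as np

def cond_neg_log_lik(theta, samples, boundary):
    """-log p_theta(x_U | x_boundary) summed over samples (in nats).

    Toy model: U is a short chain of +/-1 spins with coupling theta on
    edges inside U and on the two edges to the boundary spins.
    """
    k = samples.shape[1]
    def energy(x):
        e = theta * np.sum(x[:-1] * x[1:])                        # interior edges
        e += theta * (boundary[0] * x[0] + boundary[1] * x[-1])   # boundary edges
        return e
    # Normalizing constant of the conditional distribution (U is small).
    Z = sum(np.exp(energy(np.array(cfg)))
            for cfg in itertools.product([-1, 1], repeat=k))
    return sum(np.log(Z) - energy(x) for x in samples)

def mcdl_estimate(samples, boundary, grid=np.linspace(-2, 2, 81)):
    """Grid-search MCDL estimate: the theta minimizing the conditional code length."""
    costs = [cond_neg_log_lik(t, samples, boundary) for t in grid]
    return grid[int(np.argmin(costs))]
```

Since the ideal codelength of a sample is its negative log-probability, minimizing the conditional codelength over theta in this sketch coincides with conditional maximum likelihood, which is the sense in which "best conditional compression" drives the estimate.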

    Row-Centric Lossless Compression of Markov Images

    Motivated by the question of whether the recently introduced Reduced Cutset Coding (RCC) offers rate-complexity performance benefits over conventional context-based conditional coding for sources with two-dimensional Markov structure, this paper compares several row-centric coding strategies that vary in the amount of conditioning as well as whether a model or an empirical table is used in the encoding of blocks of rows. The conclusion is that, at least for sources exhibiting low-order correlations, 1-sided model-based conditional coding is superior to the method of RCC for a given constraint on complexity, and conventional context-based conditional coding is nearly as good as the 1-sided model-based coding. Comment: submitted to ISIT 201
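A rough illustration of the "amount of conditioning" trade-off compared in this paper: the sketch below measures the ideal code length of a binary (0/1) image when each pixel is coded from an adaptively learned context distribution, with or without conditioning on the row above. The context definition and smoothing are assumptions for illustration, not the paper's exact coding strategies.

```python
# Hedged sketch: ideal bits/pixel for coding a binary image with empirical
# context counts, using either an "above + left" context (one-sided
# conditioning on the previous row) or a "left only" context.
import numpy as np

def ideal_rate(img, one_sided=True, alpha=0.5):
    """Average ideal code length in bits/pixel with additive (alpha) smoothing."""
    H, W = img.shape
    counts = {}
    total_bits = 0.0
    n_coded = 0
    for r in range(1, H):
        for c in range(1, W):
            ctx = (img[r - 1, c], img[r, c - 1]) if one_sided else (img[r, c - 1],)
            n0, n1 = counts.get(ctx, (alpha, alpha))
            p1 = n1 / (n0 + n1)
            x = img[r, c]
            total_bits += -np.log2(p1 if x == 1 else 1.0 - p1)
            counts[ctx] = (n0 + (x == 0), n1 + (x == 1))   # update context counts
            n_coded += 1
    return total_bits / n_coded
```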

    Cutset Width and Spacing for Reduced Cutset Coding of Markov Random Fields

    In this paper we explore tradeoffs, in terms of coding performance, between the thickness and the spacing of the cutset used in Reduced Cutset Coding (RCC) of a Markov random field image model [reyes2010]. Considering MRF models on a square lattice of sites, we show that, under a stationarity condition, increasing the thickness of the cutset reduces the coding rate for the cutset, while increasing the spacing between components of the cutset increases the coding rate of the non-cutset pixels, though the coding rate of the latter is always strictly less than that of the former. We show that the redundancy of RCC can be decomposed into two terms: a correlation redundancy due to coding the components of the cutset independently, and a distribution redundancy due to coding the cutset as a reduced MRF. We provide analysis of these two sources of redundancy and present results from numerical simulations with a homogeneous Ising model that bear out the analytical results. We also present a consistent estimation algorithm for the moment-matching reduced MRF for the cutset. http://deepblue.lib.umich.edu/bitstream/2027.42/117382/1/mgreyes_dlneuhoff_ISIT_2016_fullpaper_deepblue.pdf
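As a rough illustration of the two redundancy terms named in this abstract, one per-site information-theoretic form is sketched below, where the C_i are the components of the cutset C, P denotes marginals of the true MRF, Q denotes the reduced-MRF coding distribution, and N is the number of cutset sites. The exact definitions, normalization, and conditions are those of the paper and may differ from this sketch.

```latex
% Illustrative per-site decomposition of the RCC cutset redundancy
% (an assumption-laden sketch; the paper's precise definitions govern).
\[
  R_{\mathrm{corr}} \;=\; \frac{1}{N}\Bigl(\sum_i H\bigl(X_{C_i}\bigr) - H\bigl(X_C\bigr)\Bigr),
  \qquad
  R_{\mathrm{dist}} \;=\; \frac{1}{N}\sum_i D\bigl(P_{X_{C_i}} \,\big\|\, Q_{X_{C_i}}\bigr),
\]
\[
  \text{with total cutset redundancy } R_{\mathrm{corr}} + R_{\mathrm{dist}}.
\]
```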

    Lecture Notes on Network Information Theory

    These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/

    Manhattan Cutset Sampling and Sensor Networks.

    Cutset sampling is a new approach to acquiring two-dimensional data, i.e., images, in which values are recorded densely along straight lines. This type of sampling is motivated by physical scenarios where data must be taken along straight paths, such as a boat taking water samples. Additionally, it may be possible to better reconstruct image edges using the dense data collected on lines. Finally, cutset sampling offers an advantage in the design of wireless sensor networks: if battery-powered sensors are placed densely along straight lines, then the transmission energy required for communication between sensors can be reduced, thereby extending the network lifetime. A special case of cutset sampling is Manhattan sampling, where data is recorded along evenly spaced rows and columns. This thesis examines Manhattan sampling in three contexts. First, we prove a sampling theorem demonstrating that an image can be perfectly reconstructed from Manhattan samples when its spectrum is bandlimited to the union of two Nyquist regions corresponding to the two lattices forming the Manhattan grid. An efficient "onion peeling" reconstruction method is provided, and we show that the Landau bound is achieved. This theorem is generalized to dimensions higher than two, where again signals are reconstructable from a Manhattan set if they are bandlimited to a union of Nyquist regions. Second, for non-bandlimited images, we present several algorithms for reconstructing natural images from Manhattan samples. The Locally Orthogonal Orientation Penalization (LOOP) algorithm is the best of the proposed algorithms in both subjective quality and mean-squared error; it reconstructs images well in general and outperforms competing algorithms for reconstruction from non-lattice samples. Third, we study cutset networks, which are new placement topologies for wireless sensor networks. Assuming a power-law model for communication energy, we show that cutset networks offer reduced communication energy costs over lattice and random topologies. Additionally, when solving centralized and decentralized source localization problems, cutset networks offer reduced energy costs over other topologies for fixed sensor densities and localization accuracies. Finally, with the eventual goal of analyzing different cutset topologies, we analyze the energy per distance required for efficient long-distance communication in lattice networks. PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120876/1/mprelee_1.pd
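A minimal sketch of the Manhattan sampling pattern described in this abstract: keep every pixel on evenly spaced rows and columns of an image grid. The spacing parameters and the density estimate are illustrative assumptions; the thesis's lattices may be defined differently.

```python
# Hedged sketch: build a Manhattan sampling mask (dense samples on evenly
# spaced rows and columns) and report the fraction of sites sampled.
import numpy as np

def manhattan_mask(height, width, row_spacing, col_spacing):
    """Boolean mask that is True on sampled (row or column) sites."""
    mask = np.zeros((height, width), dtype=bool)
    mask[::row_spacing, :] = True   # densely sampled rows
    mask[:, ::col_spacing] = True   # densely sampled columns
    return mask

mask = manhattan_mask(256, 256, row_spacing=8, col_spacing=8)
density = mask.mean()   # roughly 1/8 + 1/8 - 1/64 for this spacing
print(f"sampling density: {density:.3f}")
```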

    Information-theoretic limitations of distributed information processing

    In a generic distributed information processing system, a number of agents connected by communication channels aim to accomplish a task collectively through local communications. The fundamental limits of distributed information processing problems depend not only on the intrinsic difficulty of the task, but also on the communication constraints imposed by the distributed architecture. In this thesis, we reveal these dependencies quantitatively under information-theoretic frameworks. We consider three typical distributed information processing problems: decentralized parameter estimation, distributed function computation, and statistical learning under adaptive composition. For the first two problems, we derive converse results on the Bayes risk and the computation time, respectively. For the last problem, we first study the relationship between the generalization capability of a learning algorithm and its stability property, measured by the mutual information between its input and output, and then derive achievability results on the generalization error of adaptively composed learning algorithms. In all cases, we obtain general results on the fundamental limits with respect to a general model of the problem, so that the results can be applied to various specific scenarios. Our information-theoretic analyses also provide general approaches to inferring global properties of a distributed information processing system from local properties of its components.
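For the learning part, the link between generalization and the input-output mutual information mentioned above is commonly expressed through bounds of the following standard form; this is shown only to illustrate the flavor of such results, and the thesis's exact statement, assumptions, and constants may differ.

```latex
% A standard mutual-information generalization bound (illustrative form).
% Here S = (Z_1, ..., Z_n) is the training sample, W the algorithm's output,
% and the loss \ell(w, Z) is assumed \sigma-sub-Gaussian under Z ~ \mu.
\[
  \bigl|\, \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \,\bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]
\[
  \text{where } L_\mu(w) = \mathbb{E}_{Z\sim\mu}[\ell(w,Z)]
  \quad\text{and}\quad
  L_S(w) = \frac{1}{n}\sum_{i=1}^n \ell(w, Z_i).
\]
```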

    Perceptual Image Similarity Metrics and Applications.

    This dissertation presents research in perceptual image similarity metrics and their applications, e.g., content-based image retrieval, perceptual image compression, image similarity assessment, and texture analysis. The first part aims to design texture similarity metrics consistent with human perception. A new family of statistical texture similarity features, called the Local Radius Index (LRI), and corresponding similarity metrics are proposed. Compared to state-of-the-art metrics in the STSIM family, LRI-based metrics achieve better texture retrieval performance with much less computation. When applied to the recently developed perceptual image coder, Matched Texture Coding (MTC), they enable similar performance while significantly accelerating encoding. Additionally, in photographic paper classification, LRI-based metrics also outperform pre-existing metrics. To meet the needs of texture classification and other applications, a rotation-invariant version of LRI, called the Rotation-Invariant Local Radius Index (RI-LRI), is proposed. RI-LRI is also insensitive to grayscale and illuminance changes. The corresponding similarity metric achieves texture classification accuracy comparable to state-of-the-art metrics, while its much lower-dimensional feature vector requires substantially less computation and storage than other state-of-the-art texture features. The second part of the dissertation focuses on bilevel images, which are images whose pixels are either black or white. The contributions include new objective similarity metrics intended to quantify similarity consistent with human perception, and a subjective experiment to obtain ground truth for judging the performance of objective metrics. Several similarity metrics are proposed that outperform existing ones in the sense of attaining significantly higher Pearson and Spearman-rank correlations with the ground truth. The new metrics include Adjusted Percentage Error, Bilevel Gradient Histogram, Connected Components Comparison, and combinations thereof. Another portion of the dissertation focuses on the aforementioned MTC, which is a block-based image coder that uses texture similarity metrics to decide whether blocks of the image can be encoded by pointing to perceptually similar ones in the already coded region. The key to its success is an effective texture similarity metric, such as an LRI-based metric, and an effective search strategy. Compared to traditional image compression algorithms, e.g., JPEG, MTC achieves a similar coding rate with higher reconstruction quality, and the advantage of MTC grows as the coding rate decreases. PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113586/1/yhzhai_1.pd
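A minimal sketch of the evaluation methodology mentioned in this abstract: scoring an objective similarity metric by its Pearson and Spearman-rank correlations with subjective ground-truth scores. The toy pixel-agreement metric below is only an illustrative stand-in, not the dissertation's Adjusted Percentage Error or LRI-based metrics.

```python
# Hedged sketch: correlate a candidate bilevel similarity metric with
# subjective ground truth, as in the metric comparisons described above.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def toy_bilevel_similarity(a, b):
    """Fraction of matching pixels between two bilevel (0/1) images."""
    return float(np.mean(a == b))

def evaluate_metric(metric, image_pairs, ground_truth):
    """Return (Pearson, Spearman-rank) correlation of metric scores with ground truth."""
    scores = np.array([metric(a, b) for a, b in image_pairs])
    r_pearson, _ = pearsonr(scores, ground_truth)
    r_spearman, _ = spearmanr(scores, ground_truth)
    return r_pearson, r_spearman
```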

    Generalized asset integrity games

    Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries. This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures which admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset is monotonically proportional to this integrity score, which represents the proximity to ideal conditions. The present work considers the situation between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score. Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step toward domain-oblivious architectures for complex asset governance and anytime planning.
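A minimal sketch of the normalized compression distance (NCD) integrity score described in this abstract: compare a serialized live asset model against a reference configuration with an off-the-shelf compressor. Using zlib and this particular integrity mapping are illustrative assumptions, not the GAIGE implementation.

```python
# Hedged sketch: NCD-based integrity score between a live asset model and a
# reference configuration, both given as serialized byte strings.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def integrity_score(live_model: bytes, reference_model: bytes) -> float:
    """Higher when the live asset model is close to its reference configuration."""
    return 1.0 - ncd(live_model, reference_model)
```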

    An extensive English language bibliography on graph theory and its applications

    Bibliography on graph theory and its applications.