    Learning in Gaussian Markov random fields

    This paper addresses the problem of state estimation in the case where the prior distribution of the states is not perfectly known but is instead parameterized by some unknown parameter. Thus, in order to support the state estimator with prior information on the states and improve the quality of the state estimates, this unknown parameter must be learned first. Here we assume a parameterized Gaussian Markov random field as the model of the prior distribution of the states and propose an algorithm that learns its parameters from given observations of these states. The effectiveness of this approach is demonstrated experimentally through simulations.
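    As a rough illustration of this kind of parameter learning (a minimal sketch, not the paper's algorithm), the code below fits a scalar parameter of a GMRF prior by maximum likelihood. The chain-graph precision Q(theta) = theta * L + eps * I and all names are hypothetical stand-ins.

```python
# Minimal sketch: ML estimation of a scalar GMRF parameter theta,
# assuming the (hypothetical) parameterization Q(theta) = theta*L + eps*I
# with L the Laplacian of a chain graph. Not the paper's parameterization.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 50     # number of states (chain-graph nodes)
eps = 0.1  # small ridge so Q(theta) is positive definite

# Laplacian of a chain graph on n nodes
L = 2.0 * np.eye(n)
L[0, 0] = L[-1, -1] = 1.0
L -= np.eye(n, k=1) + np.eye(n, k=-1)

def precision(theta):
    return theta * L + eps * np.eye(n)

# Draw "observations" of the states from the true prior
theta_true = 2.5
cov_true = np.linalg.inv(precision(theta_true))
X = rng.multivariate_normal(np.zeros(n), cov_true, size=500)

def neg_log_likelihood(theta):
    Q = precision(theta)
    _, logdet = np.linalg.slogdet(Q)
    # Zero-mean Gaussian log-likelihood, additive constants dropped:
    # sum_i x_i^T Q x_i computed in one einsum
    quad = np.einsum('ij,jk,ik->', X, Q, X)
    return -(X.shape[0] * logdet - quad) / 2.0

res = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 10.0), method='bounded')
print(f"estimated theta = {res.x:.3f} (true = {theta_true})")
```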

    Lower Bounds for Two-Sample Structural Change Detection in Ising and Gaussian Models

    The change detection problem is to determine whether the Markov network structures of two Markov random fields differ from one another, given two sets of samples drawn from the respective underlying distributions. We study the trade-off between the sample sizes and the reliability of change detection, measured as a minimax risk, for the important cases of Ising models and Gaussian Markov random fields restricted to models whose network structures have $p$ nodes and degree at most $d$, and obtain information-theoretic lower bounds for reliable change detection over these models. We show that for the Ising model, $\Omega\left(\frac{d^2}{(\log d)^2}\log p\right)$ samples are required from each dataset to detect even the sparsest possible changes, and that for the Gaussian, $\Omega\left(\gamma^{-2}\log(p)\right)$ samples are required from each dataset to detect change, where $\gamma$ is the smallest ratio of off-diagonal to diagonal terms in the precision matrices of the distributions. These bounds are compared to the corresponding results in structure learning, and closely match them under mild conditions on the model parameters. Thus, our change detection bounds inherit partial tightness from the structure learning schemes in the previous literature, demonstrating that in certain parameter regimes the naive structure-learning-based approach to change detection is minimax optimal up to constant factors.
    Comment: Presented at the 55th Annual Allerton Conference on Communication, Control, and Computing, Oct. 2017
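    For concreteness, here is a minimal sketch of the naive structure-learning baseline mentioned above for the Gaussian case: estimate each network with the graphical lasso and declare a change if the edge supports differ. The data generation, regularization strength alpha, and support threshold tol are illustrative assumptions, not choices from the paper.

```python
# Naive change detection sketch for Gaussian MRFs: learn both graph
# structures with the graphical lasso, then compare edge supports.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
p = 20

def gmrf_samples(n, extra_edge=None):
    # Sparse precision matrix: a chain, plus an optional extra edge
    Q = 2.0 * np.eye(p)
    for i in range(p - 1):
        Q[i, i + 1] = Q[i + 1, i] = 0.4
    if extra_edge is not None:
        i, j = extra_edge
        Q[i, j] = Q[j, i] = 0.4
    return rng.multivariate_normal(np.zeros(p), np.linalg.inv(Q), size=n)

def edge_support(X, alpha=0.05, tol=1e-4):
    prec = GraphicalLasso(alpha=alpha).fit(X).precision_
    support = np.abs(prec) > tol
    np.fill_diagonal(support, False)
    return support

X1 = gmrf_samples(2000)                        # original network
X2 = gmrf_samples(2000, extra_edge=(0, 10))    # one edge changed

changed = np.any(edge_support(X1) != edge_support(X2))
print("change detected:", changed)
```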

    Markov Network Structure Learning via Ensemble-of-Forests Models

    Real-world systems typically feature a variety of dependency types and topologies that complicate model selection for probabilistic graphical models. We introduce the ensemble-of-forests model, a generalization of the ensemble-of-trees model. Our model enables structure learning of Markov random fields (MRFs) with multiple connected components and arbitrary potentials. We present two approximate inference techniques for this model and demonstrate their performance on synthetic data. Our results suggest that the ensemble-of-forests approach can accurately recover sparse, possibly disconnected MRF topologies, even in the presence of non-Gaussian dependencies and/or low sample size. We applied the ensemble-of-forests model to learn the structure of perturbed signaling networks of immune cells and found that these frequently exhibit non-Gaussian dependencies with disconnected MRF topologies. In summary, we expect the ensemble-of-forests model to enable MRF structure learning in other high-dimensional real-world settings governed by non-trivial dependencies.
    Comment: 13 pages, 6 figures
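    The sketch below is a deliberately crude, single-forest stand-in for the ensemble-of-forests idea, not the paper's inference techniques: it scores edges by pairwise mutual information under a Gaussian assumption, drops weak edges, and extracts a maximum-weight forest, which can naturally come out disconnected. The MI proxy and mi_threshold are assumptions for illustration.

```python
# Crude forest-structure baseline: maximum-weight spanning forest over
# pairwise Gaussian mutual information, with weak edges removed so the
# result may have several connected components.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def forest_structure(X, mi_threshold=0.05):
    # Gaussian mutual information: I(i, j) = -0.5 * log(1 - rho_ij^2)
    rho = np.corrcoef(X, rowvar=False)
    mi = -0.5 * np.log(np.clip(1.0 - rho ** 2, 1e-12, None))
    np.fill_diagonal(mi, 0.0)
    mi[mi < mi_threshold] = 0.0   # prune weak edges -> forest may split
    # Maximum-weight forest = minimum spanning forest on negated weights
    forest = minimum_spanning_tree(csr_matrix(-mi))
    rows, cols = forest.nonzero()
    return sorted(zip(rows.tolist(), cols.tolist()))

# Two independent chains plus an isolated node: the learned forest
# should come back disconnected.
rng = np.random.default_rng(2)
Z = rng.standard_normal((1000, 6))
for k in (1, 2):                 # chain 0-1-2
    Z[:, k] += 0.9 * Z[:, k - 1]
Z[:, 4] += 0.9 * Z[:, 3]         # chain 3-4; node 5 stays isolated
print(forest_structure(Z))
```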

    Bayesian Inference of Log Determinants

    The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields through to the training of Gaussian processes. Exact calculation of this term is often intractable when the size of the kernel matrix exceeds a few thousand. In the spirit of probabilistic numerics, we reinterpret the problem of computing the log-determinant as a Bayesian inference problem. In particular, we combine prior knowledge in the form of bounds from matrix theory with evidence derived from stochastic trace estimation to obtain probabilistic estimates of the log-determinant and its associated uncertainty within a given computational budget. Beyond its novelty and theoretical appeal, the performance of our proposal is competitive with state-of-the-art approaches to approximating the log-determinant, while also quantifying the uncertainty due to budget-constrained evidence.
    Comment: 12 pages, 3 figures
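    One building block the abstract alludes to, stochastic trace estimation, can be sketched as follows: a Hutchinson-style estimate of log det(K) = tr(log K) via the Taylor series of log(I - X) with X = I - K/alpha and alpha an upper bound on K's eigenvalues. The Bayesian combination with matrix-theoretic bounds is not reproduced here; probe counts, the series length, and the bound alpha are illustrative assumptions.

```python
# Hutchinson-style stochastic estimate of log det(K) for SPD K:
# log det K = tr(log K) = n*log(alpha) - sum_k tr(X^k)/k,
# where X = I - K/alpha and alpha >= lambda_max(K).
import numpy as np

def stochastic_logdet(K, n_probes=50, n_terms=200, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n = K.shape[0]
    alpha = 1.1 * np.linalg.norm(K, 2)   # loose eigenvalue upper bound
    Z = rng.choice([-1.0, 1.0], size=(n, n_probes))  # Rademacher probes
    V = Z.copy()
    trace_log = n * np.log(alpha)
    for k in range(1, n_terms + 1):
        V = V - (K @ V) / alpha           # V <- X @ V, so V = X^k Z
        # Monte Carlo estimate of tr(X^k), averaged over the probes
        trace_log -= np.mean(np.sum(Z * V, axis=0)) / k
    return trace_log

# Compare against the exact log-determinant on a small SPD kernel matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((300, 300))
K = A @ A.T / 300 + 0.5 * np.eye(300)
print("stochastic:", stochastic_logdet(K, rng=rng))
print("exact:     ", np.linalg.slogdet(K)[1])
```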