61 research outputs found

    Two New Notions of Abduction in Bayesian Networks

    Preprint (Open Access): 83743.pdf. BNAIC 2010, 25 October 201

    The parameterized complexity of approximate inference in Bayesian networks

    Publisher's version (Open Access): 160422.pdf

    Computing posterior and marginal probabilities constitutes the backbone of almost all inferences in Bayesian networks. These computations are known to be intractable in general, both to compute exactly and to approximate by sampling algorithms. While it is well known under what constraints exact computation can be rendered tractable (viz., bounding the tree-width of the moralized network and bounding the cardinality of the variables), it is less well known under what constraints approximate Bayesian inference can be tractable. Here, we use the formal framework of fixed-error randomized tractability (a randomized analogue of fixed-parameter tractability) to address this problem, both by re-interpreting known results from the literature and by providing some additional new results, including results on fixed-parameter tractable de-randomization of approximate inference.
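The sampling-based approximation of marginals that the abstract refers to can be illustrated with a minimal forward-sampling sketch on a toy two-variable network. The network, its names, and its probabilities are illustrative inventions, not taken from the paper:

```python
import random

# Toy network: Rain -> WetGrass, with illustrative (hypothetical) CPTs.
P_RAIN = 0.2                        # P(Rain = true)
P_WET_GIVEN = {True: 0.9,           # P(WetGrass = true | Rain = true)
               False: 0.1}          # P(WetGrass = true | Rain = false)

def forward_sample():
    """Draw one joint sample (rain, wet), sampling parents before children."""
    rain = random.random() < P_RAIN
    wet = random.random() < P_WET_GIVEN[rain]
    return rain, wet

def approx_marginal_wet(n_samples=100_000):
    """Estimate the marginal P(WetGrass = true) by forward sampling."""
    hits = sum(1 for _ in range(n_samples) if forward_sample()[1])
    return hits / n_samples

# Exact marginal for comparison: 0.2 * 0.9 + 0.8 * 0.1 = 0.26
estimate = approx_marginal_wet()
```

The estimate converges to the exact marginal 0.26 as the sample count grows; the "fixed-error" perspective in the abstract asks when such a guaranteed error bound can be achieved with a feasible number of samples.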

    Most Probable Explanations in Bayesian Networks: complexity and tractability

    Publisher's version (Open Access): 83894.pdf

    The Computational Complexity of Probabilistic Networks

    No full text
    In this thesis, the computational complexity of a number of problems related to probabilistic networks is studied, combining probabilistic inference with finding, verifying, and enumerating solutions. In particular, parameter tuning, sensitivity analysis, monotonicity, enumerating solutions, and problems related to qualitative abstractions of probabilistic networks are examined. These problems are not ‘merely’ NP-hard, but are complete for a variety of complexity classes in the Counting Hierarchy (CH). It is shown that these problems often remain hard under a number of constraints on the problem structure, e.g., when the treewidth of the network is bounded. This suggests that practical applications must restrict themselves to limited degrees of freedom (e.g., a restricted number of parameters to tune or variables to determine monotonicity constraints on) in order to be tractable. Some of the problems are complete for complexity classes that have no other ‘real world’ complete problems and may therefore also be of interest from a complexity-theoretic point of view.

    Tree-Width and the Computational Complexity of MAP Approximations in Bayesian Networks

    No full text
    The problem of finding the most probable explanation to a designated set of variables given partial evidence (the MAP problem) is a notoriously intractable problem in Bayesian networks, both to compute exactly and to approximate. It is known, both from theoretical considerations and from practical experience, that low tree-width is typically an essential prerequisite to efficient exact computations in Bayesian networks. In this paper we investigate whether the same holds for approximating MAP. We define four notions of approximating MAP (by value, structure, rank, and expectation) and argue that all of them are intractable in general. We prove that efficient value-approximations, structure-approximations, and rank-approximations of MAP instances with high tree-width would violate the Exponential Time Hypothesis. In contrast, we show that MAP can sometimes be efficiently expectation-approximated, even in instances with high tree-width, if the most probable explanation has a high probability. We introduce the complexity class FERT, analogous to the class FPT, to capture this notion of fixed-parameter expectation-approximability, and we suggest a road-map for future research towards fixed-parameter tractable results for expectation-approximate MAP, even in graphs with high tree-width.
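The MAP problem described above can be made concrete with a brute-force sketch on a toy three-variable chain: choose the value of a designated variable that maximizes the probability of the evidence, marginalizing over the remaining variable. All CPT numbers here are hypothetical and purely illustrative:

```python
# Toy chain A -> B -> C with illustrative (hypothetical) CPTs.
def joint(a, b, c):
    """Joint probability P(A=a, B=b, C=c) via the chain factorization."""
    p_a = 0.6 if a else 0.4
    p_b_given_a = (0.7 if b else 0.3) if a else (0.2 if b else 0.8)
    p_c_given_b = (0.9 if c else 0.1) if b else (0.2 if c else 0.8)
    return p_a * p_b_given_a * p_c_given_b

def map_over_a(evidence_c):
    """MAP for A given evidence C: maximize P(A, C) = sum over B of the joint."""
    score = lambda a: sum(joint(a, b, evidence_c) for b in (True, False))
    return max((True, False), key=score)
```

Brute force like this is exponential in the number of hypothesis and marginalized variables; the intractability results in the paper explain why, in general, no approach does fundamentally better on high tree-width instances.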

    Structure Approximation of Most Probable Explanations in Bayesian Networks

    No full text
    Typically, when one discusses approximation algorithms for (NP-hard) problems (like Traveling Salesperson, Vertex Cover, Knapsack), one refers to algorithms that return a solution whose value is (at least ideally) close to optimal; e.g., a tour with almost minimal length, a vertex cover of size just above minimal, or a collection of objects that has close to maximal value. In contrast, one might also be interested in approximation algorithms that return solutions that resemble the optimal solutions, i.e., whose structure is akin to the optimal solution, like a tour that closely resembles the optimal tour, a vertex cover that differs in only a few vertices from the optimal cover, or a collection that is similar to the optimal collection. In this paper, we discuss structure-approximation of the problem of finding the most probable explanation of observations in Bayesian networks, i.e., finding a joint value assignment that looks like the most probable one, rather than one whose value is almost as high. We show that it is NP-hard to obtain the value of even a single variable of the most probable explanation. However, when partial orders on the values of the variables are available, we can improve on these results.
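The distinction the abstract draws between value- and structure-approximation can be sketched on a toy network: measure structural closeness to the most probable explanation (MPE) by Hamming distance over the variable assignments. The network and its CPTs are hypothetical, chosen only to make the contrast visible:

```python
from itertools import product

# Toy chain A -> B -> C with illustrative (hypothetical) CPTs.
def joint(a, b, c):
    p_a = 0.6 if a else 0.4
    p_b = (0.7 if b else 0.3) if a else (0.2 if b else 0.8)
    p_c = (0.9 if c else 0.1) if b else (0.2 if c else 0.8)
    return p_a * p_b * p_c

assignments = list(product((True, False), repeat=3))
mpe = max(assignments, key=lambda v: joint(*v))   # most probable explanation

def hamming(u, v):
    """Structural distance: number of variables on which u and v disagree."""
    return sum(x != y for x, y in zip(u, v))

# Rank all assignments by probability to compare value vs. structure quality.
ranked = sorted(assignments, key=lambda v: joint(*v), reverse=True)
```

In this toy instance the MPE is (True, True, True), yet the second-most-probable assignment is (False, False, False), disagreeing with the MPE on every variable: a good value-approximation can be a maximally poor structure-approximation, which is the phenomenon the paper's notion of structure-approximation is designed to capture.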

    The Computational Complexity of Probabilistic Inference

    No full text

    Approximate inference in Bayesian networks: Parameterized complexity results

    Publisher's version (Closed Access): 182072.pdf

    Computing posterior and marginal probabilities constitutes the backbone of almost all inferences in Bayesian networks. These computations are known to be intractable in general, both to compute exactly and to approximate (e.g., by sampling algorithms). While it is well known under what constraints exact computation can be rendered tractable (viz., bounding the tree-width of the moralized network and bounding the cardinality of the variables), it is less well known under what constraints approximate Bayesian inference can be tractable. Here, we extend the existing formal framework of fixed-error randomized tractability (a randomized analogue of fixed-parameter tractability) and use it to address this problem, both by re-interpreting known results from the literature and by providing some additional new results, including results on fixed-parameter tractable de-randomization of approximate inference.

    A Formal Theory of Relevancy in Problem Solving

    No full text

    Two new notions of abduction in Bayesian networks

    No full text