8 research outputs found

    Treewidth in Non-Ground Answer Set Solving and Alliance Problems in Graphs

    To solve hard problems efficiently via answer set programming (ASP), a promising approach is to exploit the fact that real-world instances of many hard problems exhibit small treewidth. Algorithms that do so have already been proposed; however, they suffer from enormous overhead. In this thesis, we present improvements in the algorithmic methodology for leveraging bounded treewidth, targeted especially toward problems involving subset minimization. This is useful for many problems at the second level of the polynomial hierarchy, such as solving disjunctive ground ASP. Moreover, we define classes of non-ground ASP programs such that grounding such a program together with input facts does not lead to an excessive increase in the treewidth of the resulting ground program compared to that of the input. This allows ASP users to take advantage of the fact that state-of-the-art ASP solvers perform better on ground programs of small treewidth. Finally, we resolve several open questions on the complexity of alliance problems in graphs. In particular, we settle the long-standing questions of the complexity of the Secure Set problem and whether the Defensive Alliance problem is fixed-parameter tractable when parameterized by treewidth.
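
    The algorithmic idea underlying such methods is dynamic programming over a tree decomposition of the instance. As a minimal, hedged illustration of the "small treewidth" premise (not code from the thesis), the sketch below uses networkx's min-degree heuristic on a stand-in graph; for ASP one would apply this to the primal graph of the ground program.

```python
# Illustrative sketch (not from the thesis): estimating treewidth with
# networkx's min-degree elimination heuristic, which returns an upper bound
# on the treewidth together with a tree decomposition.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Stand-in instance graph; for ASP one would build the primal graph of the
# ground program (vertices = atoms, edges = co-occurrence in a rule).
G = nx.grid_2d_graph(3, 4)

width, decomposition = treewidth_min_degree(G)
print("treewidth upper bound:", width)   # upper bound; a 3x4 grid has exact treewidth 3
print("number of bags:", len(decomposition))
```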

    Re-implementing and Extending a Hybrid SAT-IP Approach to Maximum Satisfiability

    Real-world optimization problems, such as those found in logistics and bioinformatics, are often NP-hard. Maximum satisfiability (MaxSAT) provides a framework within which many such problems can be efficiently represented. MaxHS is a recent exact algorithm for MaxSAT. It is a hybrid approach that uses a SAT solver to compute unsatisfiable cores and an integer programming (IP) solver to compute minimum-cost hitting sets for the found cores. This thesis analyzes and extends the MaxHS algorithm. To enable this, the algorithm is re-implemented from scratch in the C++ programming language. The resulting MaxSAT solver, LMHS, recently gained top positions at an international evaluation of MaxSAT solvers. This work examines various aspects of the MaxHS algorithm and its applications. The impact of different IP solvers on the MaxHS algorithm is analyzed, as is the behavior induced by different strategies for postponing IP solver calls. New methods of enhancing the computation of unsatisfiable cores in MaxHS are examined: fast core extraction through parallelization by partitioning soft clauses is explored, and a modification of the final conflict analysis procedure of a SAT solver is used to generate additional cores without additional SAT solver invocations. The use of additional constraint propagation procedures in the SAT solver used by MaxHS is investigated; as a case study, acyclicity constraint propagation is implemented and its effectiveness for bounded-treewidth Bayesian network structure learning with MaxSAT is evaluated. The extension of MaxHS to the labeled MaxSAT framework, which allows for more efficient use of preprocessing techniques and group MaxSAT encodings in MaxHS, is discussed. The re-implementation of the MaxHS algorithm, LMHS, also enables incrementality: constraints can be added efficiently to a MaxSAT instance during the solving process. As a case study, this incrementality is used to solve subproblems with MaxSAT within GOBNILP, a tool for finding optimal Bayesian network structures.
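
    To make the core-guided hitting-set interplay concrete, here is a minimal, self-contained sketch of a MaxHS-style loop. All names are ours, and both oracles are brute-force stand-ins for exposition only: the real algorithm uses a CDCL SAT solver for core extraction and an IP solver for minimum-cost hitting sets.

```python
# Illustrative MaxHS-style implicit hitting set loop. Clauses are tuples of
# signed integers (DIMACS-style literals); soft clauses carry a weight.
# Both "solvers" below are brute-force stand-ins, not the real components.
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check (stand-in for a CDCL SAT solver)."""
    for a in product([False, True], repeat=n_vars):
        if all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def extract_core(hard, active_soft, n_vars):
    """Smallest set of soft clauses that is unsatisfiable together with the
    hard clauses (stand-in for assumption-based core extraction)."""
    for k in range(1, len(active_soft) + 1):
        for core in combinations(active_soft, k):
            if not satisfiable(hard + [c for c, _ in core], n_vars):
                return core
    return None  # unreachable when hard + active_soft is unsatisfiable

def min_cost_hitting_set(cores, soft):
    """Brute-force minimum-cost hitting set (stand-in for the IP solver)."""
    if not cores:
        return set(), 0
    best, best_cost = None, float("inf")
    for k in range(1, len(soft) + 1):
        for hs in combinations(soft, k):
            cost = sum(w for _, w in hs)
            if cost < best_cost and all(any(s in core for s in hs)
                                        for core in cores):
                best, best_cost = set(hs), cost
    return best, best_cost

def maxhs(hard, soft, n_vars):
    """Minimum total weight of soft clauses that must be falsified."""
    cores = []
    while True:
        hs, cost = min_cost_hitting_set(cores, soft)
        active = [s for s in soft if s not in hs]
        if satisfiable(hard + [c for c, _ in active], n_vars):
            return cost  # dropping the hitting set restores satisfiability
        cores.append(extract_core(hard, active, n_vars))

# (x1) with weight 1 conflicts with (not x1) with weight 2; the optimum
# falsifies the cheaper soft clause, so the answer is 1.
print(maxhs(hard=[], soft=[((1,), 1), ((-1,), 2)], n_vars=1))  # -> 1
```

    Each extracted core is, by construction, not hit by the current hitting set, so every iteration adds a genuinely new constraint and the loop terminates once the hitting set covers all sources of unsatisfiability.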

    Generalising weighted model counting

    Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics.

    In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference.

    Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm.

    The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm’s superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (which typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth, the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
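
    As a concrete baseline for what WMC computes, here is a minimal sketch (ours, not from the thesis) of weighted model counting by brute-force enumeration: it sums, over all satisfying assignments, the product of the weights of the literals each assignment makes true. The example weights encode two independent Bernoulli variables, so the count recovers Pr(x1 or x2).

```python
# Illustrative sketch: weighted model counting by exhaustive enumeration.
# Practical WMC solvers instead compile the formula or use dynamic
# programming; this is only the defining computation, made executable.
from itertools import product

def wmc(clauses, weights, n_vars):
    """weights maps each literal (1, -1, 2, -2, ...) to a non-negative weight."""
    total = 0.0
    for a in product([False, True], repeat=n_vars):
        # Keep only models of the formula.
        if all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v in range(1, n_vars + 1):
                w *= weights[v if a[v - 1] else -v]
            total += w
    return total

# Weights encoding Pr(x1) = 0.3 and Pr(x2) = 0.6, independently.
weights = {1: 0.3, -1: 0.7, 2: 0.6, -2: 0.4}
print(wmc([(1, 2)], weights, 2))  # 1 - 0.7 * 0.4 = 0.72
```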

    Reasoning over Strings and Other Unbounded Data Structures

    Ph.D. (Doctor of Philosophy)

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference
