
    Disjunctive Answer Set Solvers via Templates

    Answer set programming is a declarative programming paradigm oriented towards difficult combinatorial search problems. A fundamental task in answer set programming is to compute stable models, i.e., solutions of logic programs. Answer set solvers are the programs that perform this task. The problem of deciding whether a disjunctive program has a stable model is $\Sigma^P_2$-complete. This high complexity of reasoning in disjunctive logic programming explains why only a few solvers can handle such programs, namely DLV, GnT, Cmodels, CLASP and WASP. In this paper we show that the transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted to disjunctive answer set solvers. Transition systems give a unifying perspective and bring clarity to the description and comparison of solvers. They can be used effectively for analyzing, comparing and proving the correctness of search algorithms, as well as for inspiring new ideas in the design of disjunctive answer set solvers. In this light, we introduce a general template that accounts for the major techniques implemented in disjunctive solvers. We then illustrate how this general template captures the solvers DLV, GnT and Cmodels. We also show how this framework provides a convenient tool for designing new solving algorithms by combining techniques employed in different solvers. Comment: To appear in Theory and Practice of Logic Programming (TPLP).
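
    The following is a minimal, illustrative sketch (not part of the paper, and not how DLV, GnT or Cmodels work internally) of the semantics involved: a brute-force stable-model enumerator for a tiny disjunctive program, using the Gelfond-Lifschitz reduct. The rule encoding and atom names are made up for the example.

```python
from itertools import chain, combinations

# Each rule is (head, positive_body, negative_body), all sets of atom names.
# Example program:   a ; b.      c :- a.
rules = [
    ({"a", "b"}, set(), set()),
    ({"c"}, {"a"}, set()),
]
atoms = {"a", "b", "c"}

def powerset(s):
    s = sorted(s)
    return (set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def reduct(rules, m):
    """Gelfond-Lifschitz reduct: drop rules blocked by M, strip negative bodies."""
    return [(head, pos, set()) for (head, pos, neg) in rules if not (neg & m)]

def is_model(rules, m):
    """M is a model if, whenever a rule's positive body holds, some head atom is in M."""
    return all(not pos <= m or bool(head & m) for (head, pos, _) in rules)

def stable_models(rules, atoms):
    """M is stable iff it is a subset-minimal model of the reduct with respect to M."""
    for m in powerset(atoms):
        red = reduct(rules, m)
        if is_model(red, m) and not any(is_model(red, sub) for sub in powerset(m) if sub < m):
            yield m

print(list(stable_models(rules, atoms)))   # [{'b'}, {'a', 'c'}] (set printing order may vary)
```

    Actual solvers of course avoid this doubly exponential enumeration; the transition systems discussed in the paper describe how they explore the same search space with propagation, backtracking and learning.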

    Deterministic polynomial-time approximation algorithms for partition functions and graph polynomials

    In this paper we show a new way of constructing deterministic polynomial-time approximation algorithms for computing complex-valued evaluations of a large class of graph polynomials on bounded degree graphs. In particular, our approach works for the Tutte polynomial and the independence polynomial, as well as for partition functions of complex-valued spin and edge-coloring models. More specifically, we define a large class of graph polynomials $\mathcal{C}$ and show that if $p \in \mathcal{C}$ and there is a disk $D$ centered at zero in the complex plane such that $p(G)$ does not vanish on $D$ for all bounded degree graphs $G$, then for each $z$ in the interior of $D$ there exists a deterministic polynomial-time approximation algorithm for evaluating $p(G)$ at $z$. This gives an explicit connection between the absence of zeros of graph polynomials and the existence of efficient approximation algorithms, allowing us to show new relationships between well-known conjectures. Our work builds on a recent line of work initiated by Barvinok, which provides a new algorithmic approach besides the existing Markov chain Monte Carlo method and the correlation decay method for these types of problems. Comment: 27 pages; some changes have been made based on referee comments. In particular a tiny error in Proposition 4.4 has been fixed. The introduction and concluding remarks have also been rewritten to incorporate the most recent developments. Accepted for publication in SIAM Journal on Computing.
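
    As a toy numerical illustration of the zero-free-disk idea (not the paper's algorithm), the sketch below evaluates the independence polynomial of a 4-cycle by truncating the Taylor series of its logarithm around zero. The graph, the evaluation point and the brute-force coefficient computation are illustrative choices; the algorithmic content of this line of work is computing the low-order coefficients efficiently on bounded degree graphs.

```python
import cmath
from itertools import combinations

def independence_poly_coeffs(n, edges):
    """Coefficients i_0, ..., i_n of I(G; z) = sum_k i_k z^k (brute force)."""
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if all(not (u in s and v in s) for u, v in edges):
                coeffs[k] += 1
    return coeffs

def log_taylor(coeffs, order):
    """Taylor coefficients b_1..b_order of log p(z), assuming p(0) = 1."""
    b = [0.0] * (order + 1)
    for k in range(1, order + 1):
        a_k = coeffs[k] if k < len(coeffs) else 0
        b[k] = a_k - sum(m * b[m] * coeffs[k - m] for m in range(1, k)) / k
    return b

# 4-cycle: a small bounded-degree example.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]
coeffs = independence_poly_coeffs(n, edges)      # [1, 4, 2, 0, 0]
z = 0.1 + 0.05j                                  # a point close to zero
exact = sum(c * z**k for k, c in enumerate(coeffs))
for order in (1, 2, 3):
    b = log_taylor(coeffs, order)
    approx = cmath.exp(sum(b[k] * z**k for k in range(1, order + 1)))
    print(order, abs(approx - exact))            # error shrinks as the truncation order grows
```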

    On the equivalence between graph isomorphism testing and function approximation with GNNs

    Graph neural networks (GNNs) have achieved considerable success on graph-structured data. In light of this, there has been increasing interest in studying their representation power. One line of work focuses on the universal approximation of permutation-invariant functions by certain classes of GNNs, and another demonstrates the limitations of GNNs via graph isomorphism tests. Our work connects these two perspectives and proves their equivalence. We further develop a framework for the representation power of GNNs in the language of sigma-algebras, which incorporates both viewpoints. Using this framework, we compare the expressive power of different classes of GNNs as well as other methods on graphs. In particular, we prove that order-2 Graph G-invariant networks fail to distinguish non-isomorphic regular graphs with the same degree. We then extend them to a new architecture, Ring-GNNs, which succeeds in distinguishing these graphs and provides improvements on real-world social network datasets.
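
    The sketch below illustrates the closely related limitation of the 1-dimensional Weisfeiler-Leman (color refinement) test, which assigns identical color histograms to any two regular graphs of the same degree and size, here K_{3,3} and the triangular prism. It is not the paper's order-2 G-invariant network or Ring-GNN construction, and the example graphs are illustrative choices.

```python
from collections import Counter

def wl_colors(n, edges, rounds=3):
    """1-dimensional Weisfeiler-Leman (color refinement) color histogram."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colors = {v: 0 for v in range(n)}
    for _ in range(rounds):
        # New color = old color plus multiset of neighbor colors.
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in range(n)}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: relabel[signatures[v]] for v in range(n)}
    return Counter(colors.values())

# Two non-isomorphic 3-regular graphs on 6 vertices.
k33 = [(0, 3), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]    # K_{3,3}, triangle-free
prism = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]  # triangular prism

print(wl_colors(6, k33), wl_colors(6, prism))   # identical histograms: 1-WL cannot separate them
```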

    Superexpanders from group actions on compact manifolds

    It is known that the expanders arising as increasing sequences of level sets of warped cones, as introduced by the second-named author, do not coarsely embed into a Banach space as soon as the corresponding warped cone does not coarsely embed into this Banach space. Combining this with non-embeddability results for warped cones by Nowak and Sawicki, which relate the non-embeddability of a warped cone to a spectral gap property of the underlying action, we provide new examples of expanders that do not coarsely embed into any Banach space with nontrivial type. Moreover, we prove that these expanders are not coarsely equivalent to a Lafforgue expander. In particular, we provide infinitely many coarsely distinct superexpanders that are not Lafforgue expanders. In addition, we prove a quasi-isometric rigidity result for warped cones. Comment: 16 pages, to appear in Geometriae Dedicata.

    Representation Learning on Graphs: A Reinforcement Learning Application

    In this work, we study value function approximation in reinforcement learning (RL) problems with high-dimensional state or action spaces via a generalized version of representation policy iteration (RPI). We consider the limitations of proto-value functions (PVFs) at accurately approximating the value function in low dimensions, and we highlight the importance of feature learning for an improved low-dimensional value function approximation. We then adopt different representation learning algorithms on graphs to learn the basis functions that best represent the value function. We empirically show that node2vec, an algorithm for scalable feature learning in networks, and the Variational Graph Auto-Encoder consistently outperform the commonly used smooth proto-value functions in low-dimensional feature spaces.
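
    Below is a minimal sketch of the kind of low-dimensional value function approximation discussed above, using Laplacian eigenvectors (proto-value functions) as the basis on a simple chain MDP. The MDP, reward and basis sizes are illustrative assumptions, not the paper's setup; node2vec or a Variational Graph Auto-Encoder would simply supply a different feature matrix in place of the eigenvectors.

```python
import numpy as np

n, gamma = 20, 0.95

# Random-walk transition matrix on a chain of n states.
P = np.zeros((n, n))
for s in range(n):
    for t in (max(s - 1, 0), min(s + 1, n - 1)):
        P[s, t] += 0.5
R = np.zeros(n)
R[-1] = 1.0                                   # reward only in the last state

# Exact value function of the random-walk policy: V = (I - gamma P)^{-1} R.
V = np.linalg.solve(np.eye(n) - gamma * P, R)

# Proto-value functions: low-frequency eigenvectors of the graph Laplacian.
A = (P > 0).astype(float)
np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A
_, eigvecs = np.linalg.eigh(L)                # eigenvectors sorted by eigenvalue

for k in (2, 5, 10):
    Phi = eigvecs[:, :k]                      # the k smoothest basis functions
    w, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    err = np.linalg.norm(Phi @ w - V) / np.linalg.norm(V)
    print(k, round(err, 4))                   # relative error drops as the basis grows
```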