
    Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

    Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case where approximations with probabilistic guarantees can be obtained in O(nm) time, where n denotes the number of variables and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when the clause width is bounded. We conduct experiments to validate our method, and show that our model learns and generalizes very well to large-scale #DNF instances.
    Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). Code and data available at: https://github.com/ralphabb/NeuralDNF
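    For context on the O(nm) baseline: the classical probabilistic guarantee for #DNF comes from Karp-Luby-style Monte Carlo estimation. Below is a minimal Python sketch of that classical estimator for the unweighted case — not the paper's neural method — where the encoding of clauses as dicts of literals is an illustrative assumption:

        import random

        def karp_luby_dnf_count(clauses, n, trials=100000):
            """Estimate the model count of a DNF over variables 0..n-1.
            Each clause is a dict {var: bool}; a clause with w literals
            has exactly 2**(n - w) satisfying assignments."""
            weights = [2 ** (n - len(c)) for c in clauses]
            total = sum(weights)  # size of the union counted with multiplicity
            hits = 0
            for _ in range(trials):
                # Sample a clause proportionally to its model count, then a
                # uniform assignment satisfying that clause (O(n*m) per trial).
                i = random.choices(range(len(clauses)), weights=weights)[0]
                a = {v: random.random() < 0.5 for v in range(n)}
                a.update(clauses[i])
                # Credit the trial only if i is the first satisfied clause,
                # so every model of the DNF is counted exactly once.
                first = next(j for j, c in enumerate(clauses)
                             if all(a[v] == lit for v, lit in c.items()))
                hits += (first == i)
            return total * hits / trials

        # (x0 AND x1) OR (NOT x0 AND x2) has 4 models over 3 variables:
        print(karp_luby_dnf_count([{0: True, 1: True}, {0: False, 2: True}], 3))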

    Improving Model Finding for Integrated Quantitative-qualitative Spatial Reasoning With First-order Logic Ontologies

    Many spatial standards are developed to harmonize the semantics and specifications of GIS data and to enable sophisticated reasoning. All these standards include some types of simple and complex geometric features, and some of them incorporate simple mereotopological relations. But the relations as used in these standards only allow the extraction of qualitative information from geometric data and lack formal semantics that link geometric representations with mereotopological or other qualitative relations. This impedes integrated reasoning over qualitative data obtained from geometric sources and “native” topological information – for example, as provided by textual sources where precise locations or spatial extents are unknown or unknowable. To address this issue, the first contribution of this dissertation is a first-order logical ontology that treats geometric features (e.g. polylines, polygons) and relations between them as specializations of more general types of features (e.g. any kind of 2D or 1D features) and mereotopological relations between them. Key to this endeavor is the use of a multidimensional theory of space wherein, unlike in traditional logical theories of mereotopology (such as RCC), spatial entities of different dimensions can co-exist and be related. However, terminating or tractable reasoning with such an expressive ontology and potentially large amounts of data is a challenging AI problem. Model finding tools used to verify FOL ontologies with data usually employ a SAT solver to determine the satisfiability of the propositional instantiations (SAT problems) of the ontology. These solvers often experience scalability issues as the number of objects and the size and complexity of the ontology increase, limiting their use to ontologies with small signatures and to building small models with fewer than 20 objects. To investigate how an ontology influences the size of its SAT translation and, consequently, the model finder’s performance, we develop a formalization of FOL ontologies with data. We theoretically identify parameters of an ontology that contribute significantly to the dramatic growth in size of the SAT problem. The search space of the SAT problem is exponential in the signature of the ontology (the number of predicates in the axiomatization plus any additional predicates from skolemization) and the number of distinct objects in the model. Axiomatizations that contain many definitions lead to a large number of propositional SAT clauses, owing to the conversion of biconditionals to clausal form. We therefore postulate that optional definitions are ideal sentences to eliminate from an ontology to boost the model finder’s performance. We then formalize optional definition elimination (ODE) as an FOL ontology preprocessing step and test the simplification on a set of spatial benchmark problems, generating smaller SAT problems (with fewer clauses and variables) without changing the satisfiability or semantic meaning of the problem. We experimentally demonstrate that the reduction in SAT problem size also leads to improved model finding with state-of-the-art model finders, with speedups of 10–99%. Altogether, this dissertation improves spatial reasoning capabilities using FOL ontologies – in terms of a formal framework for integrated qualitative-geometric reasoning, and specific ontology preprocessing steps that can be built into automated reasoners to achieve better speedups in model finding times and scalability with moderately-sized datasets.
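    The exponential-growth claim above can be made concrete with a back-of-the-envelope calculation: grounding a predicate of arity a over a domain of d objects yields d^a ground atoms, each of which becomes one propositional variable for the SAT solver. A small Python sketch; the signature and domain size are made-up illustrations, not the benchmark ontologies:

        def sat_search_space(predicates, num_objects):
            """predicates: dict mapping predicate name -> arity.
            Returns (#ground atoms, #candidate propositional models)."""
            ground_atoms = sum(num_objects ** arity
                               for arity in predicates.values())
            return ground_atoms, 2 ** ground_atoms

        # e.g. a toy mereotopological signature over a 20-object domain:
        atoms, space = sat_search_space(
            {"PartOf": 2, "Connected": 2, "ZeroD": 1, "OneD": 1, "TwoD": 1}, 20)
        # 2*20**2 + 3*20 = 860 ground atoms -> 2**860 candidate interpretations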

    A Quantum Lovász Local Lemma

    The Lovász Local Lemma (LLL) is a powerful tool in probability theory for showing the existence of combinatorial objects meeting a prescribed collection of "weakly dependent" criteria. We show that the LLL extends to a much more general geometric setting, where events are replaced with subspaces and probability is replaced with relative dimension, which allows one to lower-bound the dimension of the intersection of vector spaces under certain independence conditions. Our result immediately applies to the k-QSAT problem: for instance, we show that any collection of rank-1 projectors with the property that each qubit appears in at most 2^k/(e·k) of them has a joint satisfying state. We then apply our results to the recently studied model of random k-QSAT. Recent works have shown that the satisfiable region extends up to a density of 1 in the large-k limit, where the density is the ratio of projectors to qubits. Using a hybrid approach building on work by Laumann et al., we greatly extend the known satisfiable region for random k-QSAT to a density of Ω(2^k/k^2). Since our tool allows us to show the existence of joint satisfying states without the need to construct them, we are able to penetrate into regions where the satisfying states are conjectured to be entangled, a regime that has limited previous approaches, which relied on constructing product states.
    Comment: 19 pages
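    As a rough sanity check of the stated qubit-degree bound, one can recall the symmetric form of the classical lemma and plug in the quantum analogue (probability ↦ relative dimension of a subspace); the derivation below is only a back-of-the-envelope verification, not the paper's formal statement:

        Pr[A_i] ≤ p, each A_i independent of all but ≤ d others, e·p·(d+1) ≤ 1
            ⟹  Pr[⋂_i ¬A_i] > 0                        (classical symmetric LLL)

        For k-QSAT: a rank-1 projector on k qubits excludes relative dimension
            p = 2^(−k).
        If each qubit appears in at most 2^k/(e·k) projectors, then a projector
        shares a qubit with at most
            d + 1 ≤ k · 2^k/(e·k) = 2^k/e
        projectors (itself included), so
            e · p · (d+1) ≤ e · 2^(−k) · 2^k/e = 1,
        and a joint satisfying state exists.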

    Parameterized aspects of team-based formalisms and logical inference

    Parameterized complexity is an interesting subfield of complexity theory that has received a lot of attention in recent years. Such an analysis characterizes the complexity of (classically) intractable problems by pinpointing the computational hardness to some structural aspects of the input. In this thesis, we study the parameterized complexity of various problems from the area of team-based formalisms as well as logical inference. In the context of team-based formalisms, we consider propositional dependence logic (PDL). The problems of interest are model checking (MC) and satisfiability (SAT). Peter Lohmann studied the classical complexity of these problems as part of his Ph.D. thesis, proving that both MC and SAT are NP-complete for PDL. This thesis addresses the parameterized complexity of these problems with respect to a wealth of different parameterizations. Interestingly, SAT for PDL boils down to the satisfiability of propositional logic, as implied by the downwards closure of PDL-formulas. We propose an interesting satisfiability variant (mSAT) asking for a satisfiable team of size m. The problem mSAT restores the ‘team-semantic’ nature of satisfiability for PDL-formulas. We propose another problem (MaxSubTeam) asking for a maximal satisfiable team if a given team does not satisfy the input formula. From the area of logical inference, we consider (logic-based) abduction and argumentation. The problem of interest in abduction (ABD) is to determine whether there is an explanation for a manifestation in a knowledge base (KB). Following Pfandler et al., we also consider two of its variants that impose additional restrictions on the size of an explanation (ABD≤ and ABD=). In argumentation, our focus is on the argument existence (ARG), relevance (ARG-Rel) and verification (ARG-Check) problems. The complexity of these problems has already been explored in the classical setting, and each of them is known to be complete for the second level of the polynomial hierarchy for propositional logic (except for ARG-Check, which is DP-complete). Moreover, the work by Nordh and Zanuttini (resp., Creignou et al.) explores the complexity of these problems with respect to various restrictions on the allowed KBs for ABD (resp., ARG). In this thesis, we explore a two-dimensional complexity analysis for these problems. The first dimension is the restriction over KBs in Schaefer’s framework (the same direction as Nordh and Zanuttini and Creignou et al.). What differentiates the work in this thesis from existing research on these problems is that we add another dimension, the parameterization. The results obtained in this thesis are interesting for two reasons. First (from a theoretical point of view), the ideas used in our reductions can help in developing further reductions and proving (in)tractability results for related problems. Second (from a practical point of view), the obtained tractability results might help an agent designing problem instances to formulate them so that they fall into tractable fragments.
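    For readers unfamiliar with team semantics: the central construct of PDL is the dependence atom =(x̄; y), which a team (a set of assignments) satisfies iff the value of y is functionally determined by the values of x̄. A minimal Python sketch of this check, where the encoding of teams as dicts is an illustrative assumption:

        def satisfies_dependence(team, xs, y):
            """Team semantics of the dependence atom =(xs; y): any two
            assignments in the team agreeing on xs must agree on y."""
            seen = {}
            for s in team:
                key = tuple(s[x] for x in xs)
                if key in seen and seen[key] != s[y]:
                    return False
                seen[key] = s[y]
            return True

        # The team {x=0, y=0}, {x=0, y=1} violates =(x; y):
        team = [{"x": False, "y": False}, {"x": False, "y": True}]
        assert not satisfies_dependence(team, ["x"], "y")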

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes