
    A Knowledge Compilation Map

    We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such an analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.
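
    As a concrete illustration of the "polytime queries" dimension, here is a minimal sketch (ours, not the paper's) of a DAG-based language in Python: an OBDD-style diagram with a model-count query answered in time linear in the size of the diagram.

        from functools import lru_cache

        class Node:
            def __init__(self, var, lo, hi):
                self.var, self.lo, self.hi = var, lo, hi   # decision variable, children

        TRUE, FALSE = "T", "F"   # terminal sinks

        @lru_cache(maxsize=None)
        def model_count(node, n_vars, level=1):
            """Count models over x_level..x_{n_vars}; linear in the DAG size."""
            if node is FALSE:
                return 0
            if node is TRUE:
                return 2 ** (n_vars - level + 1)
            skipped = node.var - level   # variables skipped by the ordering are free
            below = (model_count(node.lo, n_vars, node.var + 1) +
                     model_count(node.hi, n_vars, node.var + 1))
            return (2 ** skipped) * below

        f = Node(1, FALSE, Node(2, FALSE, TRUE))   # OBDD for x1 AND x2
        assert model_count(f, 2) == 1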

    On the Complexity of Optimization Problems based on Compiled NNF Representations

    Optimization is a key task in a number of applications. When the set of feasible solutions under consideration is of a combinatorial nature and described implicitly as a set of constraints, optimization is typically NP-hard. Fortunately, in many problems the set of feasible solutions does not change often and is independent of the user's request. In such cases, it makes sense to compile the set of constraints describing the feasible solutions during an off-line phase, provided this compilation step makes it computationally easier to generate a non-dominated, feasible solution matching the user's requirements and preferences (which become known only at the on-line step). In this article, we focus on propositional constraints. The subsets L of the NNF language analyzed in Darwiche and Marquis' knowledge compilation map are considered. A number of families F of representations of objective functions over propositional variables, including linear pseudo-Boolean functions and more sophisticated ones, are considered. For each language L and each family F, we identify the complexity of generating an optimal solution when the constraints are compiled into L and optimality is considered w.r.t. a function from F.
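
    To make the setting concrete, the following sketch (our illustration, not the article's algorithm) shows the kind of optimization that compilation enables: minimizing a linear pseudo-Boolean objective over the models of an OBDD by dynamic programming over its DAG.

        import math

        TRUE, FALSE = "T", "F"

        class Node:
            def __init__(self, var, lo, hi):
                self.var, self.lo, self.hi = var, lo, hi

        def min_cost(node, weights, level=1):
            """Minimum of sum_i weights[i-1]*x_i over the OBDD's models."""
            n = len(weights)
            if node is FALSE:
                return math.inf          # no model below this node
            if node is TRUE:
                # free variables x_level..x_n: set x_i=1 only if its weight helps
                return sum(min(0.0, weights[i - 1]) for i in range(level, n + 1))
            free = sum(min(0.0, weights[i - 1]) for i in range(level, node.var))
            lo = min_cost(node.lo, weights, node.var + 1)
            hi = weights[node.var - 1] + min_cost(node.hi, weights, node.var + 1)
            return free + min(lo, hi)

        f = Node(1, Node(2, FALSE, TRUE), TRUE)    # OBDD for x1 OR x2
        assert min_cost(f, [3.0, 1.0]) == 1.0      # best model: x1=0, x2=1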

    Reduced Ordered Binary Decision Diagram with Implied Literals: A New Knowledge Compilation Approach

    Knowledge compilation is an approach to tackling the computational intractability of general reasoning problems. Under this approach, knowledge bases are converted off-line into a target compilation language that is tractable for on-line querying. The reduced ordered binary decision diagram (ROBDD) is one of the most influential target languages. We generalize ROBDD by associating implied literals with each node; the new language is called reduced ordered binary decision diagram with implied literals (ROBDD-L). We then discuss a family of subsets of ROBDD-L, called ROBDD-i, with precisely i implied literals (0 ≤ i ≤ ∞). In particular, ROBDD-0 is isomorphic to ROBDD, while ROBDD-∞ requires that each node be associated with as many implied literals as possible. We show that ROBDD-i is unique for a given variable order, and that ROBDD-∞ is the most succinct subset of ROBDD-L and can meet most of the querying requirements in the knowledge compilation map. Finally, we propose an ROBDD-i compilation algorithm for each i and an ROBDD-∞ compilation algorithm. Based on them, we implement an ROBDD-L package called BDDjLu and draw some conclusions from preliminary experimental results: ROBDD-∞ is clearly smaller than ROBDD for all benchmarks; ROBDD-∞ is smaller than d-DNNF on the benchmarks whose compilation results are relatively small; and it seems better to transform ROBDD-∞ into FBDDs and ROBDDs than to compile the benchmarks directly.
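
    A minimal sketch of the underlying idea (our reading of the abstract; the names and semantics below are not the BDDjLu API): each node carries a set of implied literals on top of the usual decision triple, so forced assignments need not consume decision nodes.

        TRUE, FALSE = "T", "F"   # terminals

        class Node:
            def __init__(self, var, lo, hi, implied=frozenset()):
                self.var, self.lo, self.hi = var, lo, hi
                self.implied = implied   # implied literals as signed ints, e.g. {+1, -5}

        def evaluate(node, assignment):
            """A node denotes (AND of its implied literals) AND ite(var, hi, lo)."""
            if node is FALSE:
                return False
            if node is TRUE:
                return True
            if any(assignment[abs(l)] != (l > 0) for l in node.implied):
                return False
            child = node.hi if assignment[node.var] else node.lo
            return evaluate(child, assignment)

        # f = x1 AND (x2 OR x3): the root records the forced literal x1
        # instead of spending a decision node on it.
        f = Node(2, Node(3, FALSE, TRUE), TRUE, implied=frozenset({+1}))
        assert evaluate(f, {1: True, 2: False, 3: True}) is True
        assert evaluate(f, {1: False, 2: True, 3: True}) is False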

    Improvement and evaluation of simulated global biogenic soil NO emissions in an AC-GCM

    Biogenic NO emissions from soils (SNOx) play important direct and indirect roles in tropospheric chemistry. The most widely applied algorithm for calculating SNOx in global models was published 15 years ago by Yienger and Levy (1995) and was based on very few measurements. Since then, numerous new measurements have been published, which we used to build a compilation of worldwide field measurements covering the period from 1978 to 2010. Recently, several satellite-based top-down approaches, which recalculated the different sources of NOx (fossil fuel, biomass burning, soil and lightning), have shown an underestimation of SNOx by the algorithm of Yienger and Levy (1995). Nevertheless, to our knowledge no general improvements of this algorithm, beyond suggested scalings of the total source magnitude, have yet been published. Here we present major improvements to the algorithm, which should help to optimize the representation of SNOx in atmospheric-chemistry global climate models without modifying the underlying principles or mathematical equations. The changes include: (1) using a new landcover map, with twice the number of landcover classes, and using annually varying fertilizer application rates; (2) adopting a fraction of 1.0% of the applied fertilizer being lost as NO, based on our compilation of measurements; (3) using the volumetric soil moisture to distinguish between the wet and dry states; and (4) adjusting the emission factors to reproduce the measured emissions in our compilation (based on either their geometric or arithmetic mean values). These steps increase the global annual SNOx, and our total above-canopy SNOx source of 8.6 Tg yr−1 (using the geometric mean) ends up close to one of the satellite-based top-down estimates (8.9 Tg yr−1). The above-canopy SNOx source using the arithmetic mean is 27.6 Tg yr−1, which is higher than all previous estimates but compares better with a regional top-down study in eastern China. This suggests that both top-down and bottom-up approaches will be needed in future attempts to provide a better calculation of SNOx.
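
    A schematic sketch of how changes (2)-(4) could fit together (our illustration, not the model code; apart from the 1.0% loss fraction, all numbers below are placeholders rather than the paper's fitted values):

        # Placeholder constants: only the 1.0% fertilizer loss fraction comes
        # from the abstract; the cutoff and factors are invented for illustration.
        FERTILIZER_NO_LOSS = 0.01   # change (2): 1.0% of applied fertilizer N lost as NO
        VSM_WET_THRESHOLD = 0.30    # change (3): wet/dry cutoff in m3 m-3 (placeholder)

        # change (4): per-landcover-class emission factors (ng N m-2 s-1), placeholders
        EMISSION_FACTOR = {
            "grassland": {"wet": 0.6, "dry": 1.8},
            "cropland":  {"wet": 1.0, "dry": 2.5},
        }

        def soil_no_flux(landcover, vol_soil_moisture, fertilizer_n_rate):
            """Soil NO flux for one grid cell; fertilizer_n_rate in ng N m-2 s-1."""
            state = "wet" if vol_soil_moisture >= VSM_WET_THRESHOLD else "dry"
            biome_flux = EMISSION_FACTOR[landcover][state]   # fitted to measurements
            return biome_flux + FERTILIZER_NO_LOSS * fertilizer_n_rate

        print(soil_no_flux("cropland", 0.35, 50.0))   # fertilized wet cropland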

    Knowledge Compilation of Logic Programs Using Approximation Fixpoint Theory

    Recent advances in knowledge compilation introduced techniques to compile positive logic programs into propositional logic, essentially exploiting the constructive nature of the least fixpoint computation. This approach has several advantages over existing approaches: it maintains logical equivalence, does not require (expensive) loop-breaking preprocessing or the introduction of auxiliary variables, and significantly outperforms existing algorithms. Unfortunately, this technique is limited to negation-free programs. In this paper, we show how to extend it to general logic programs under the well-founded semantics. We develop our work in approximation fixpoint theory, an algebraic framework that unifies the semantics of different logics. As such, our algebraic results are also applicable to autoepistemic logic, default logic and abstract dialectical frameworks.
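
    The constructive least-fixpoint computation the compilation exploits is standard logic-programming machinery: for a negation-free program, iterate the immediate-consequence operator until nothing changes. A minimal sketch (textbook material, not the paper's AFT-based algorithm):

        def least_fixpoint(rules):
            """rules: list of (head, body_atoms); returns the least model."""
            model = set()
            changed = True
            while changed:
                changed = False
                for head, body in rules:
                    if head not in model and all(b in model for b in body):
                        model.add(head)   # rule fires: derive the head
                        changed = True
            return model

        # p.   q :- p.   r :- p, q.
        program = [("p", []), ("q", ["p"]), ("r", ["p", "q"])]
        assert least_fixpoint(program) == {"p", "q", "r"}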

    Building Efficient Query Engines in a High-Level Language

    Abstraction without regret refers to the vision of using high-level programming languages for systems development without experiencing a negative impact on performance. A database system designed according to this vision offers both increased productivity and high performance, instead of sacrificing the former for the latter as is the case with existing, monolithic implementations that are hard to maintain and extend. In this article, we realize this vision in the domain of analytical query processing. We present LegoBase, a query engine written in the high-level language Scala. The key technique to regain efficiency is to apply generative programming: LegoBase performs source-to-source compilation and optimizes the entire query engine by converting the high-level Scala code to specialized, low-level C code. We show how generative programming makes it easy to implement a wide spectrum of optimizations, such as introducing data partitioning or switching from a row to a column data layout, which are difficult to achieve with existing low-level query compilers that handle only queries. We demonstrate that sufficiently powerful abstractions are essential for dealing with the complexity of the optimization effort, shielding developers from compiler internals and decoupling individual optimizations from each other. We evaluate our approach with the TPC-H benchmark and show that: (a) with all optimizations enabled, LegoBase significantly outperforms a commercial database and an existing query compiler; (b) programmers need to provide just a few hundred lines of high-level code to implement the optimizations, instead of the complicated low-level code required by existing query compilation approaches; and (c) the compilation overhead is low compared to the overall execution time, making our approach usable in practice for compiling query engines.
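
    A toy sketch of the generative-programming idea, in Python rather than LegoBase's Scala-to-C pipeline (the function below is our illustration, not LegoBase code): instead of interpreting a query plan, emit code specialized to the query once, off-line, and run the compiled result.

        def compile_filter(column, op, value):
            """Emit and compile a table scan specialized to one predicate."""
            src = (f"def scan(table):\n"
                   f"    out = []\n"
                   f"    for row in table:\n"
                   f"        if row[{column!r}] {op} {value!r}:\n"
                   f"            out.append(row)\n"
                   f"    return out\n")
            namespace = {}
            exec(src, namespace)          # the off-line 'compilation' step
            return namespace["scan"]

        scan = compile_filter("price", "<", 100)   # predicate inlined, no interpreter
        table = [{"item": "a", "price": 50}, {"item": "b", "price": 150}]
        assert scan(table) == [{"item": "a", "price": 50}]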

    Trading inference effort versus size in CNF Knowledge Compilation

    Knowledge Compilation (KC) studies the compilation of boolean functions f into some formalism F which allows all queries of a certain kind to be answered in polynomial time. Due to its relevance for SAT solving, we concentrate on the query type "clausal entailment" (CE), i.e., whether a clause C follows from f or not, and we consider subclasses of CNF, i.e., clause-sets F with special properties. In this report we do not allow auxiliary variables (except in the Outlook), and thus F needs to be equivalent to f. We consider the hierarchies UC_k <= WC_k, which were introduced by the authors in 2012. Each level allows CE queries. The first two levels are well-known classes for KC: UC_0 = WC_0 is the same as PI as studied in KC, that is, f is represented by the set of all its prime implicates, while UC_1 = WC_1 is the same as UC, the class of unit-refutation complete clause-sets introduced by del Val in 1994. We show that for each k there are (sequences of) boolean functions with polysize representations in UC_{k+1}, but with an exponential lower bound on representations in WC_k. Such a separation was previously only known for k=0. We also consider PC < UC, the class of propagation-complete clause-sets. We show that there are (sequences of) boolean functions with polysize representations in UC, while there is an exponential lower bound for representations in PC. These separations are steps towards a general conjecture determining the representation power of the hierarchies PC_k < UC_k <= WC_k; the strong form of this conjecture also allows auxiliary variables, as discussed in depth in the Outlook.
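
    For intuition, the CE query behind UC_1 = UC works by unit propagation: F entails a clause C iff unit propagation on F together with the negated literals of C yields a contradiction, and unit-refutation completeness guarantees this test never misses an entailment. A small sketch (ours, not the authors' tooling):

        def unit_propagate_unsat(clauses):
            """True iff unit propagation derives the empty clause."""
            clauses = [set(c) for c in clauses]
            assigned = set()
            while True:
                units = [next(iter(c)) for c in clauses if len(c) == 1]
                new = [u for u in units if u not in assigned]
                if not new:
                    return False         # propagation stabilized without conflict
                for u in new:
                    if -u in assigned:
                        return True      # complementary unit literals: conflict
                    assigned.add(u)
                    reduced = []
                    for c in clauses:
                        if u in c:
                            continue     # clause satisfied, drop it
                        c = c - {-u}     # remove the falsified literal
                        if not c:
                            return True  # empty clause derived
                        reduced.append(c)
                    clauses = reduced

        def entails(F, clause):
            """F |= clause, decided by unit propagation (complete for F in UC_1)."""
            return unit_propagate_unsat(list(F) + [{-l} for l in clause])

        F = [{1, 2}, {-1, 3}, {-2, 3}]   # (x1 or x2), (x1 -> x3), (x2 -> x3)
        assert entails(F, {3})           # F |= x3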