The DLV System for Knowledge Representation and Reasoning
This paper presents the DLV system, which is widely considered the
state-of-the-art implementation of disjunctive logic programming, and addresses
several aspects. As for problem solving, we provide a formal definition of its
kernel language, function-free disjunctive logic programs (also known as
disjunctive datalog), extended by weak constraints, which are a powerful tool
to express optimization problems. We then illustrate the usage of DLV as a tool
for knowledge representation and reasoning, describing a new declarative
programming methodology which allows one to encode complex problems (up to
$\Delta^P_3$-complete problems) in a declarative fashion. On the foundational
side, we provide a detailed analysis of the computational complexity of the
language of DLV, and by deriving new complexity results we chart a complete
picture of the complexity of this language and important fragments thereof.
Furthermore, we illustrate the general architecture of the DLV system which
has been influenced by these results. As for applications, we overview
application front-ends which have been developed on top of DLV to solve
specific knowledge representation tasks, and we briefly describe the main
international projects investigating the potential of the system for industrial
exploitation. Finally, we report on thorough experimentation and
benchmarking, which has been carried out to assess the efficiency of the
system. The experimental results confirm the solidity of DLV and highlight its
potential for emerging application areas like knowledge management and
information integration.
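To make the flavor of the kernel language concrete, here is a small, hypothetical sketch (in Python rather than DLV's own syntax) of what a disjunctive guess combined with a weak constraint expresses: candidate solutions are filtered by hard constraints, and weak constraints rank the admissible ones by total penalty.

```python
# A minimal sketch (not the DLV system itself): brute-force "guess, check,
# optimize" on a toy 3-coloring instance, mimicking what a disjunctive rule
# plus a weak constraint express declaratively. Instance data is made up.
from itertools import product

nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c")]
colors = ["red", "green", "blue"]

def satisfies_hard_constraints(assignment):
    # Strong constraint: adjacent nodes must get different colors.
    return all(assignment[x] != assignment[y] for x, y in edges)

def weak_constraint_cost(assignment):
    # Weak constraint: each node colored "red" incurs a penalty of 1.
    return sum(1 for n in nodes if assignment[n] == "red")

# "Guess" every candidate coloring, keep the admissible ones, and prefer
# those minimizing the total weak-constraint penalty -- the optimization
# semantics that weak constraints add to disjunctive datalog.
candidates = (dict(zip(nodes, cs)) for cs in product(colors, repeat=len(nodes)))
admissible = [a for a in candidates if satisfies_hard_constraints(a)]
best = min(admissible, key=weak_constraint_cost)
print(best, "penalty:", weak_constraint_cost(best))
```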
Parameterized bounded-depth Frege is not optimal
A general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider [9]. There the authors concentrate on tree-like Parameterized Resolution, a parameterized version of classical Resolution, and their gap complexity theorem implies lower bounds for that system. The main result of the present paper significantly improves upon this by showing optimal lower bounds for a parameterized version of bounded-depth Frege. More precisely, we prove that the pigeonhole principle requires proofs of size $n^{\Omega(k)}$ in parameterized bounded-depth Frege, and, as a special case, in dag-like Parameterized Resolution. This answers an open question posed in [9]. In the opposite direction, we interpret a well-known technique for FPT algorithms as a DPLL procedure for Parameterized Resolution. Its generalization leads to a proof search algorithm for Parameterized Resolution that in particular shows that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNFs.
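For reference, the pigeonhole principle referred to above is standardly stated as the following unsatisfiable CNF, with variables $p_{i,j}$ meaning that pigeon $i$ sits in hole $j$: every one of the $n+1$ pigeons gets a hole, yet no hole receives two pigeons.

```latex
% Standard CNF form of (the negation of) the pigeonhole principle PHP^{n+1}_n.
\bigwedge_{i=1}^{n+1} \Big( \bigvee_{j=1}^{n} p_{i,j} \Big)
\;\wedge\;
\bigwedge_{j=1}^{n} \; \bigwedge_{1 \le i < i' \le n+1} \big( \neg p_{i,j} \vee \neg p_{i',j} \big)
```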
CASP Solutions for Planning in Hybrid Domains
CASP is an extension of ASP that allows numerical constraints to be added to
rules. PDDL+ is an extension of PDDL, the standard language of automated
planning, for modeling mixed discrete-continuous dynamics.
In this paper, we present CASP solutions for dealing with PDDL+ problems,
i.e., an encoding from PDDL+ to CASP and extensions to the algorithm of the
EZCSP CASP solver for solving the CASP programs arising from PDDL+ domains. An
experimental analysis, performed on well-known linear and non-linear variants
of PDDL+ domains, involving various configurations of the EZCSP solver, other
CASP solvers, and PDDL+ planners, shows the viability of our solution.
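As a purely illustrative, hypothetical sketch of the mixed discrete-continuous dynamics that PDDL+ captures (not the paper's CASP encoding and not EZCSP syntax): a process changes a numeric fluent continuously over time, and an event fires once a numeric condition becomes true.

```python
# Toy illustration of PDDL+-style dynamics: a process continuously increases
# a water level at a fixed rate; an event triggers when the level reaches a
# threshold. All names and numbers below are hypothetical.
def simulate(rate=2.0, overflow_at=10.0, horizon=6.0, step=0.5):
    level, t = 0.0, 0.0
    while t < horizon:
        level += rate * step          # continuous effect of the process
        t += step
        if level >= overflow_at:      # numeric condition triggering the event
            return ("overflow", t, level)
    return ("no_overflow", t, level)

print(simulate())   # ('overflow', 5.0, 10.0)
```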
Generalising weighted model counting
Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics.
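As a reminder of the definition, the sketch below computes the weighted model count of a tiny, made-up CNF by brute force, using the common convention that the weight of a model is the product of the weights of the literals it satisfies.

```python
# Brute-force weighted model counting over a small CNF. The clause set and
# the literal weights are hypothetical examples.
from itertools import product

variables = ["a", "b"]
cnf = [[("a", True), ("b", True)]]          # the single clause (a or b)
weight = {("a", True): 0.3, ("a", False): 0.7,
          ("b", True): 0.6, ("b", False): 0.4}

def wmc():
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        # A model satisfies the CNF if every clause has a satisfied literal.
        if all(any(model[v] == sign for v, sign in clause) for clause in cnf):
            w = 1.0
            for v in variables:
                w *= weight[(v, model[v])]
            total += w
    return total

print(wmc())   # 1 - 0.7 * 0.4 = 0.72
```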
In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference.
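As a concrete but hypothetical example in the spirit of existing Bayesian-network-to-WMC encodings (not the new format proposed here), the sketch below reduces a two-node network A -> B to a weighted model count: parameter variables carry the conditional probability table entries, and the count under the evidence B = true equals P(B = true).

```python
# Two-node Bayesian network A -> B reduced to weighted model counting.
# t1 and t2 are parameter variables for P(B|A) and P(B|not A); the formula
# forces B to agree with whichever parameter is active. CPTs are made up.
from itertools import product

p_a, p_b_given_a, p_b_given_not_a = 0.2, 0.9, 0.3

weight = {
    ("a", True): p_a, ("a", False): 1 - p_a,
    ("t1", True): p_b_given_a, ("t1", False): 1 - p_b_given_a,
    ("t2", True): p_b_given_not_a, ("t2", False): 1 - p_b_given_not_a,
    ("b", True): 1.0, ("b", False): 1.0,     # indicator variable: weight 1
}
variables = ["a", "b", "t1", "t2"]

def formula(m):
    # b <-> ((a and t1) or (not a and t2)), conjoined with the evidence b.
    return m["b"] == ((m["a"] and m["t1"]) or (not m["a"] and m["t2"])) and m["b"]

count = 0.0
for values in product([True, False], repeat=len(variables)):
    m = dict(zip(variables, values))
    if formula(m):
        w = 1.0
        for v in variables:
            w *= weight[(v, m[v])]
        count += w

print(count)   # P(B = true) = 0.2*0.9 + 0.8*0.3 = 0.42
```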
Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm.
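For reference, the structures mentioned above have standard closed-form counts over finite sets:

```latex
% Number of injections from an m-element set into an n-element set (m <= n),
% and number of bijections on an n-element set.
\#\,\mathrm{injections}(m \to n) \;=\; \frac{n!}{(n-m)!},
\qquad
\#\,\mathrm{bijections}(n \to n) \;=\; n!
```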
The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm's superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (that typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth, the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
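The exact random model is defined in the thesis; as an illustration of the general idea of controlling primal treewidth through a generator parameter, the hypothetical sketch below draws each clause's variables from a window of w consecutive variables, which bounds the primal treewidth of the instance by w - 1 (the primal graph is then a subgraph of a band graph).

```python
# One simple way (not the thesis's model) to generate random weighted CNF
# instances with a knob bounding primal treewidth: every clause's variables
# come from a window of `window` consecutive variables.
import random

def random_wmc_instance(num_vars=20, num_clauses=60, clause_len=3, window=5,
                        seed=0):
    rng = random.Random(seed)
    clauses, weights = [], {}
    for _ in range(num_clauses):
        start = rng.randrange(1, num_vars - window + 2)
        vs = rng.sample(range(start, start + window), clause_len)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    for v in range(1, num_vars + 1):
        p = rng.random()                     # weight of the positive literal
        weights[v], weights[-v] = p, 1.0 - p
    return clauses, weights

clauses, weights = random_wmc_instance()
print(clauses[0], weights[1])
```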
Boosting Answer Set Optimization with Weighted Comparator Networks
Answer set programming (ASP) is a paradigm for modeling knowledge-intensive
domains and solving challenging reasoning problems. In ASP solving, a typical
strategy is to preprocess problem instances by rewriting complex rules into
simpler ones. Normalization is a rewriting process that removes extended rule
types altogether in favor of normal rules. Recently, such techniques led to
optimization rewriting in ASP, where the goal is to boost answer set
optimization by refactoring the optimization criteria of interest. In this
paper, we present a novel, general, and effective technique for optimization
rewriting based on comparator networks, which are specific kinds of circuits
for reordering the elements of vectors. The idea is to connect an ASP encoding
of a comparator network to the literals being optimized and to redistribute the
weights of these literals over the structure of the network. The encoding
captures information about the weight of an answer set in auxiliary atoms in a
structured way that is proven to yield exponential improvements during
branch-and-bound optimization on an infinite family of example programs. The
comparator network used can be tuned freely, e.g., to find the best size for a
given benchmark class. Experiments show accelerated optimization performance on
several benchmark problems.
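For readers unfamiliar with the building block: a comparator network is a fixed sequence of two-element compare-swap gates, and on 0/1 inputs the k-th output of a sorting network indicates that at least k inputs are true. The sketch below uses a simple odd-even transposition network purely for illustration; the paper's choice of network and its scheme for redistributing literal weights over the network structure are more refined.

```python
# What a comparator network computes: a fixed sequence of compare-swap gates
# applied to positions of a vector. Here: an odd-even transposition sorting
# network; on 0/1 inputs, output position k-1 is 1 exactly when at least k
# inputs are 1. The example input is hypothetical.
def odd_even_transposition_network(n):
    # The network as a list of comparators (i, j) with i < j.
    return [(i, i + 1) for rnd in range(n) for i in range(rnd % 2, n - 1, 2)]

def apply_network(network, vector):
    v = list(vector)
    for i, j in network:
        if v[i] < v[j]:               # compare-swap: larger value moves left
            v[i], v[j] = v[j], v[i]
    return v

bits = [0, 1, 0, 1, 1, 0]
sorted_bits = apply_network(odd_even_transposition_network(len(bits)), bits)
print(sorted_bits)                    # [1, 1, 1, 0, 0, 0]
```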
Cost-optimal constrained correlation clustering via weighted partial Maximum Satisfiability
The Performance Optimization of ASP Solving Based on Encoding Rewriting and Encoding Selection
Answer set programming (ASP) has long been used for modeling and solving hard search problems. These problems are modeled in ASP as encodings, a collection of rules that declaratively describe the logic of the problem without explicitly listing how to solve it. It is common that the same problem has several different but equivalent encodings in ASP. Experience shows that the performance of these ASP encodings may vary greatly from instance to instance when processed by current state-of-the-art ASP grounder/solver systems. In particular, it is rarely the case that one encoding outperforms all others. Moreover, running an ASP system on one encoding for a specific instance may "take forever," while running it on another encoding for this instance may yield a solution in a fraction of a second. The selection of a "good" encoding for each instance is crucial to the performance of ASP solving. In this dissertation, I propose methods to improve the performance of ASP solving that exploit these observations. First, I designed and implemented methods that, given an encoding for a problem, rewrite it in several ways into new different but equivalent encodings. Second, I designed and implemented a system that, given a set of input encodings of a problem, a set of problem instances, and an ASP grounder/solver system, automatically generates equivalent encodings and builds for each selected encoding its performance model. The model predicts for any instance the execution time that the grounder/solver system takes to process the instance under the corresponding encoding. These performance models are then used to improve solving efficiency: whenever a new instance arrives, the system selects the encoding predicted to perform the best on the instance and invokes the grounder/solver. The system also supports a scheduled execution and an interleaved execution of encodings, which are complementary to machine learning techniques. Third, I implemented algorithms that generate hard structured instances for several combinatorial problems selected for the experimental study of the efficacy of the methods I developed. Hard instances can serve as benchmarks for evaluating the hardness of specific problems and contribute as training data to the platform I created to help build encoding selection models. The process can also provide meaningful insights into finding hard instances of other combinatorial problems.
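A minimal, hypothetical sketch of the selection step described above: given one pretrained runtime-prediction model per encoding, pick the encoding predicted to be fastest for the incoming instance. The feature extraction and the model objects are placeholders, not the dissertation's implementation.

```python
# Per-instance encoding selection: choose the encoding whose performance
# model predicts the lowest runtime on the instance's features. The models
# here are stand-ins for learned regressors (anything with .predict).
from typing import Any, Dict

def select_encoding(instance_features: Dict[str, float],
                    models: Dict[str, Any]) -> str:
    """Return the name of the encoding with the lowest predicted runtime."""
    predictions = {
        name: model.predict(instance_features) for name, model in models.items()
    }
    return min(predictions, key=predictions.get)

class ConstantModel:
    # Placeholder for a learned performance model.
    def __init__(self, seconds: float):
        self.seconds = seconds
    def predict(self, features: Dict[str, float]) -> float:
        return self.seconds

models = {"enc_original": ConstantModel(42.0), "enc_rewritten": ConstantModel(3.5)}
features = {"num_atoms": 1200.0, "num_rules": 450.0}   # hypothetical features
print(select_encoding(features, models))               # enc_rewritten
```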