A Multi-Engine Approach to Answer Set Programming
Answer Set Programming (ASP) is a truly declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming that has recently been employed in many applications. The development of efficient ASP systems is thus crucial. With the aim of improving ASP solving methods, there are two usual ways to reach this goal: extending state-of-the-art techniques and ASP solvers, or designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers and to apply machine learning techniques to automatically choose the "best" available solver on a per-instance basis.
In this paper we pursue this latter direction. We first define a set of
cheap-to-compute syntactic features that characterize several aspects of ASP
programs. Then, we apply classification methods that, given the features of the
instances in a training set and the solvers' performance on these instances, inductively learn algorithm selection strategies to be applied to a test set. We report the results of a number of experiments considering
solvers and different training and test sets of instances taken from the ones
submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows
that, by applying machine learning techniques to ASP solving, it is possible to
obtain very robust performance: our approach can solve more instances compared
with any solver that entered the 3rd ASP Competition. (To appear in Theory and Practice of Logic Programming (TPLP).)
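As a rough illustration of the per-instance selection idea (not the system evaluated in the paper), the following sketch trains a scikit-learn classifier that maps the syntactic features of an instance to the solver that was fastest on it in the training set; the array names are hypothetical placeholders.

```python
# Illustrative sketch of per-instance algorithm selection with scikit-learn;
# not the classifiers or feature set used in the paper. Inputs are assumed to
# be precomputed: feature vectors and measured per-solver runtimes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_selector(train_features, train_runtimes):
    # train_features: (n_instances, n_features) cheap syntactic features
    # train_runtimes: (n_instances, n_solvers) runtimes in seconds
    best_solver = np.argmin(train_runtimes, axis=1)  # label = fastest solver per instance
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train_features, best_solver)
    return clf

def select_solver(clf, test_features):
    # Predict, for each test instance, the index of the solver to run.
    return clf.predict(test_features)
```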
Automated Deduction – CADE 28
This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.
On modal expansions of t-norm based logics with rational constants
According to Zadeh, the term "fuzzy logic" has two different meanings: wide and narrow. In the narrow sense it is a logical system that aims at a formalization of approximate reasoning, and so it can be considered an extension of many-valued logic. However, Zadeh also says that the agenda of fuzzy logic is quite different from that of traditional many-valued logic, as it addresses concepts like linguistic variables, fuzzy if-then rules, linguistic quantifiers, etc. Hájek, in the preface of his foundational book Metamathematics of Fuzzy Logic, agrees with Zadeh's distinction, but stresses that formal calculi of many-valued logics are the kernel of the so-called Basic Fuzzy Logic (BL), which has continuous triangular norms (t-norms) and their residua as semantics for conjunction and implication respectively, and of its most prominent extensions, namely Lukasiewicz, Gödel and Product fuzzy logics. Taking advantage of the fact that a t-norm has a residuum if, and only if, it is left-continuous, the logic of left-continuous t-norms, called MTL, was introduced soon after. On the other hand, classical modal logic is an active field of mathematical logic, originally introduced at the beginning of the 20th century for philosophical purposes, that has more recently proved very successful in many other areas, especially in computer science. Kripke relational structures are the most well-known semantics for classical modal logics. Modal expansions of non-classical logics, in particular of many-valued logics, have also been studied in the literature. In this thesis we focus on the study of some modal logics over MTL, using natural generalizations of the classical Kripke relational structures where propositions at possible worlds can be many-valued, but keeping classical accessibility relations. In more detail, the main goal of this thesis has been to study modal expansions of the logic of a left-continuous t-norm, defined over the language of MTL expanded with rational truth-constants and the Monteiro-Baaz Delta operator, whose intended (standard) semantics is given by Kripke models with crisp accessibility relations and the real unit interval [0, 1] as the set of truth values. To get complete axiomatizations, already known techniques based on the canonical model construction are used, but this requires ensuring that the underlying (propositional) fuzzy logic is strongly standard complete. This constraint leads us to consider axiomatic systems with infinitary inference rules, already at the propositional level. A second goal of the thesis has been to develop an automated reasoning software tool to solve satisfiability and logical consequence problems for some of the fuzzy modal logics considered. This dissertation is structured in four parts. After a gentle introduction, Part I contains the preliminaries needed for the thesis to be as self-contained as possible. Most of the theoretical results are developed in Parts II and III. Part II focuses on solving some problems concerning the strong standard completeness of the underlying non-modal expansions. We first present an axiomatic system for the non-modal propositional logic of a left-continuous t-norm that makes use of a unique infinitary inference rule, the "density rule", which solves several problems pointed out in the literature. We further expand this axiomatic system in order to also characterize arbitrary operations over [0, 1] satisfying certain regularity conditions.
However, since this axiomatic system turns out not to be well-behaved for the modal expansion, we search for alternative axiomatizations with a particular kind of inference rules (which will be called conjunctive). Unfortunately, this kind of axiomatization does not necessarily exist for all left-continuous t-norms (in particular, it does not exist for the Gödel logic case), but we identify a wide class of t-norms for which it works. These "well-behaved" t-norms include all ordinal sums of Lukasiewicz and Product t-norms. Part III focuses on the modal expansions of the logics presented before. We propose axiomatic systems (which are, as expected, modal expansions of the ones given in the previous part) respectively strongly complete with respect to local and global Kripke semantics defined over frames with crisp accessibility relations and worlds evaluated over a "well-behaved" left-continuous t-norm. We also study some properties and extensions of these logics and show how to use them for axiomatizing the possibilistic logic over the very same t-norm. Later on, we characterize the algebraic companions of these modal logics, provide some algebraic completeness results and study the relation between their Kripke and algebraic semantics. Finally, Part IV of the thesis is devoted to a software application, mNiB-LoS, which uses Satisfiability Modulo Theories to build an automated reasoning system for modal logics evaluated over BL algebras. The acronym stands for modal Nice BL-logics Solver. The use of BL logics throughout this part is motivated by the fact that continuous t-norms can be represented as ordinal sums of three particular t-norms: the Gödel, Lukasiewicz and Product ones. It is then possible to show that these t-norms have alternative characterizations that, although equivalent from the point of view of the logic, differ strongly with respect to the design, implementation and efficiency of the application. For practical reasons, the modal structures included in the solver are limited to finite ones (with no bound on their cardinality).
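For reference, the three basic continuous t-norms mentioned above and their residua have simple closed forms; the following standalone sketch (not code from the thesis or from mNiB-LoS) implements them on the real unit interval [0, 1].

```python
# Standalone sketch of the three basic continuous t-norms and their residua
# on [0, 1]; illustrative only, not code from mNiB-LoS.
# The residuum R of a t-norm T satisfies: T(x, z) <= y  iff  z <= R(x, y).

def lukasiewicz(x, y):
    return max(0.0, x + y - 1.0)

def lukasiewicz_residuum(x, y):
    return min(1.0, 1.0 - x + y)

def goedel(x, y):              # minimum t-norm
    return min(x, y)

def goedel_residuum(x, y):
    return 1.0 if x <= y else y

def product(x, y):
    return x * y

def product_residuum(x, y):
    return 1.0 if x <= y else y / x
```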
Efficient Sampling of SAT and SMT Solutions for Testing and Verification
The problem of generating a large number of diverse solutions to a logical constraint has important applications in testing, verification, and synthesis for both software and hardware. The solutions generated could be used as inputs that exercise some target functionality in a program or as random stimuli to a hardware module. This sampling of solutions can be combined with techniques such as fuzz testing, symbolic execution, and constrained-random verification to uncover bugs and vulnerabilities in real programs and hardware designs. Stimulus generation, in particular, is an essential part of hardware verification, being at the core of widely applied constrained-random verification techniques. For all these applications, the generation of multiple solutions instead of a single solution can lead to better coverage and a higher probability of finding bugs. However, generating such solutions efficiently, while achieving good coverage of the constraint space, is still a challenge today. Moreover, the problem is amplified when the constraints are complex formulas involving several different theories and when the application requires more refined coverage criteria from the solutions. This work presents three novel techniques developed to tackle the problem of efficient sampling of solutions to logical constraints. They allow the efficient generation of millions of solutions with only tens of queries to a constraint solver, being orders of magnitude faster than previous state-of-the-art samplers. First, a technique called QuickSampler, for sampling solutions to Boolean (SAT) constraints, with the goal of achieving a close-to-uniform distribution. Second, a technique called SMTSampler, which is designed to sample solutions to large and complex Satisfiability Modulo Theories (SMT) constraints and aims at providing good coverage of the constraint itself. Third, a technique called GuidedSampler, which enables coverage-guided sampling of SMT constraints by shaping the distribution of solutions on a problem-specific basis. The QuickSampler algorithm takes as input a Boolean constraint and uses only a small number of calls to a constraint solver in order to produce millions of samples in a few seconds or minutes. The samples satisfy the constraints with high probability (approximately 75%), and the invalid samples can easily be filtered out in a post-processing step. Our evaluation of QuickSampler on large real-world benchmarks shows that it can produce unique valid solutions orders of magnitude faster than other state-of-the-art sampling tools. We have also empirically verified that the distribution of solutions is close to uniform, which was our target distribution. SMTSampler is an extension of the technique that allows efficient sampling of solutions from Satisfiability Modulo Theories (SMT) constraints. This is important, since many constraints found in practical applications are more naturally represented by SMT formulas that include theories such as arrays and bit-vectors. By working over SMT formulas directly, without encoding them into Boolean (SAT) constraints, SMTSampler is able to sample solutions more efficiently, and also achieves better coverage of the constraint space. In our evaluation, we have also defined a new notion of coverage that better captures the diversity of SMT solutions, and have shown that SMTSampler helps improve this coverage. SMTSampler works similarly to QuickSampler, leveraging a small number of calls to a constraint solver in order to generate up to millions of stimuli.
However, SMTSampler can sample random solutions from large and complex SMT formulas with bit-vectors, arrays, and uninterpreted functions. It also checks all samples for validity, only outputting valid and unique solutions to the formula. Our evaluation on hundreds of benchmarks from SMT-LIB shows that SMTSampler can handle a larger class of SMT problems, outperforming QuickSampler in the number of samples produced and the coverage of the constraint space. GuidedSampler is an extension of SMTSampler that allows coverage-guided sampling of SMT solutions by letting the user specify a desired set of coverage points that will shape the distribution of solutions. This is important because most current sampling techniques lack a problem-specific notion of coverage, considering only general goals such as a uniform distribution, as in QuickSampler, or coverage of the SMT formula, as in SMTSampler. However, many applications would benefit from a more specific coverage definition, for example, based on coverage points specified by the hardware designer. Our tool GuidedSampler enables this greater flexibility by using the specified coverage points to guide the sampling algorithm into generating solutions from diverse coverage classes. And even for applications where a general notion of coverage suffices, our evaluation shows that the coverage-guided sampling approach is more effective at achieving this desired coverage. GuidedSampler is thus able to efficiently generate high-quality stimuli for constrained-random verification, by sampling solutions to SMT constraints that also cover a large number of user-defined coverage classes.
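To make the sampling setup concrete, here is a naive baseline sketch using the z3 Python API: it makes one solver call per sample by asserting random soft polarity preferences over the variables, in contrast to the QuickSampler, SMTSampler, and GuidedSampler algorithms described above, which obtain up to millions of samples from only tens of solver queries. The function and formula names are illustrative.

```python
# Naive randomized sampler for Boolean constraints via z3's Optimize (MaxSAT)
# interface: one solver call per sample. Baseline sketch only; NOT the
# QuickSampler, SMTSampler, or GuidedSampler algorithms from the abstract.
import random
from z3 import And, Bool, Not, Optimize, Or, is_true, sat

def sample_solutions(formula, variables, n_samples, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        opt = Optimize()
        opt.add(formula)                     # hard constraint: formula must hold
        for v in variables:                  # soft constraints: random target polarities
            opt.add_soft(v if rng.random() < 0.5 else Not(v), weight=1)
        if opt.check() == sat:
            m = opt.model()
            samples.append({str(v): is_true(m.eval(v, model_completion=True))
                            for v in variables})
    return samples

# Example: sample assignments satisfying (a or b) and (not a or c).
a, b, c = Bool('a'), Bool('b'), Bool('c')
phi = And(Or(a, b), Or(Not(a), c))
print(sample_solutions(phi, [a, b, c], n_samples=3))
```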