7 research outputs found
Recursive Online Enumeration of All Minimal Unsatisfiable Subsets
In various areas of computer science, we deal with a set of constraints to be
satisfied. If the constraints cannot be satisfied simultaneously, it is
desirable to identify the core problems among them. Such cores are called
minimal unsatisfiable subsets (MUSes). The more MUSes are identified, the more
information about the conflicts among the constraints is obtained. However, a
full enumeration of all MUSes is in general intractable due to the large number
(even exponential) of possible conflicts. Moreover, to identify MUSes,
algorithms must test sets of constraints for their simultaneous satisfiability.
The type of test depends on the application domain. The complexity of the
tests can be extremely high, especially for domains like temporal logics, model
checking, or SMT. In this paper, we propose a recursive algorithm that
identifies MUSes in an online manner (i.e., one by one) and can be terminated
at any time. The key feature of our algorithm is that it minimizes the number
of satisfiability tests and thus speeds up the computation. The algorithm is
applicable to an arbitrary constraint domain, and it is especially effective in
domains with expensive satisfiability checks. We benchmark our algorithm
against a state-of-the-art algorithm on Boolean and SMT constraint domains and
demonstrate that it indeed requires fewer satisfiability tests and consequently
finds more MUSes within given time limits.
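As a toy illustration of what a MUS is (this is a basic deletion-based shrink, not the recursive online algorithm proposed in the paper; the brute-force `is_sat` test and the constraint names are assumptions made for the example only):

```python
from itertools import product

def is_sat(constraints, n_vars=3):
    """Brute-force satisfiability test: does any assignment of
    n_vars Booleans satisfy every constraint simultaneously?"""
    return any(all(c(v) for c in constraints)
               for v in product([False, True], repeat=n_vars))

def shrink_to_mus(constraints):
    """Deletion-based shrink: drop each constraint in turn and keep the
    drop whenever the remainder is still unsatisfiable.  The result is
    a minimal unsatisfiable subset (MUS) of the input."""
    assert not is_sat(constraints)
    mus = list(constraints)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i + 1:]
        if not is_sat(candidate):
            mus = candidate      # constraint i was not needed
        else:
            i += 1               # constraint i is critical, keep it
    return mus

# Toy constraints over variables x0, x1, x2:
#   c0: x0      c1: not x0      c2: x1 (irrelevant to the conflict)
c0 = lambda v: v[0]
c1 = lambda v: not v[0]
c2 = lambda v: v[1]
mus = shrink_to_mus([c0, c1, c2])
print(len(mus))  # -> 2 (the conflicting pair {c0, c1})
```

Each iteration costs one satisfiability test, which is why algorithms that minimize the number of such tests matter in domains where each test is expensive.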
Core-guided minimal correction set and core enumeration
A set of constraints is unsatisfiable if there is no solution that satisfies these constraints. To analyse unsatisfiable problems, the user needs to understand where inconsistencies come from and how they can be repaired. Minimal unsatisfiable cores and correction sets are important subsets of constraints that enable such analysis. In this work, we propose a new algorithm for extracting minimal unsatisfiable cores and correction sets simultaneously. Building on top of the relaxation and strengthening framework, we introduce novel techniques for extracting these sets. Our new solver significantly outperforms several state-of-the-art algorithms on common benchmarks when it comes to extracting correction sets and compares favorably on core extraction.
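To illustrate the dual notion of a minimal correction set (a basic grow-based sketch, not the core-guided algorithm described in the paper; the brute-force `is_sat` test and the toy constraints are assumptions for the example):

```python
from itertools import product

def is_sat(constraints, n_vars=3):
    """Brute-force check: some Boolean assignment satisfies all constraints."""
    return any(all(c(v) for c in constraints)
               for v in product([False, True], repeat=n_vars))

def grow_to_mcs(constraints):
    """Grow a satisfiable subset to a maximal one; the complement is a
    minimal correction set (MCS): removing it repairs the formula."""
    sat_part = []
    for c in constraints:
        if is_sat(sat_part + [c]):
            sat_part.append(c)
    return [c for c in constraints if c not in sat_part]

# Toy constraints: c0 and c1 conflict, c2 is independent.
c0 = lambda v: v[0]
c1 = lambda v: not v[0]
c2 = lambda v: v[1]
mcs = grow_to_mcs([c0, c1, c2])
print(len(mcs))  # -> 1 (dropping one of the conflicting pair repairs the set)
```

Note that the MCS found depends on the order in which constraints are grown; enumerating several MCSes gives several distinct ways to repair the problem.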
Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples)
We build on a recently proposed method for stepwise explaining solutions of
Constraint Satisfaction Problems (CSP) in a human-understandable way. An
explanation here is a sequence of simple inference steps where simplicity is
quantified using a cost function. The algorithms for explanation generation
rely on extracting Minimal Unsatisfiable Subsets (MUS) of a derived
unsatisfiable formula, exploiting a one-to-one correspondence between so-called
non-redundant explanations and MUSs. However, MUS extraction algorithms do not
provide any guarantee of subset minimality or optimality with respect to a
given cost function. Therefore, we build on these formal foundations and tackle
the main points of improvement, namely how to efficiently generate explanations
that are provably optimal (with respect to the given cost metric). For that, we
developed (1) a hitting set-based algorithm for finding the optimal constrained
unsatisfiable subsets; (2) a method for re-using relevant information over
multiple algorithm calls; and (3) methods exploiting domain-specific
information to speed up the explanation sequence generation. We experimentally
validated our algorithms on a large number of CSP problems. We found that our
algorithms outperform the MUS approach in terms of explanation quality and
computational time (on average up to 56% faster than a standard MUS approach).
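The hitting set-based idea can be sketched via the well-known duality between unsatisfiable subsets and correction subsets: every unsatisfiable subset must hit every correction subset, so a minimum hitting set that is itself unsatisfiable is cardinality-optimal. The following brute-force Python sketch (an illustration under toy assumptions, not the authors' implementation, and using cardinality as the cost metric) realizes this loop:

```python
from itertools import combinations, product

def is_sat(constraints, n_vars=3):
    """Brute-force check: some Boolean assignment satisfies all constraints."""
    return any(all(c(v) for c in constraints)
               for v in product([False, True], repeat=n_vars))

def min_hitting_set(sets, universe):
    """Smallest subset of `universe` intersecting every set in `sets`
    (brute force; a MIP or SAT solver would be used in practice)."""
    for k in range(len(universe) + 1):
        for cand in combinations(universe, k):
            if all(any(e in cand for e in s) for s in sets):
                return list(cand)

def grow(subset, universe):
    """Extend a satisfiable subset to a maximal satisfiable subset."""
    sat_part = list(subset)
    for c in universe:
        if c not in sat_part and is_sat(sat_part + [c]):
            sat_part.append(c)
    return sat_part

def smallest_mus(constraints):
    """Implicit-hitting-set loop: collect correction subsets until a
    minimum hitting set of them is unsatisfiable; that set is a
    cardinality-minimal unsatisfiable subset."""
    correction_sets = []
    while True:
        hs = min_hitting_set(correction_sets, constraints)
        if not is_sat(hs):
            return hs            # unsat hitting set = smallest MUS
        mss = grow(hs, constraints)
        correction_sets.append([c for c in constraints if c not in mss])

# Toy constraints: two independent conflicts {c0, c1} and {c2, c3}.
c0 = lambda v: v[0]
c1 = lambda v: not v[0]
c2 = lambda v: v[1]
c3 = lambda v: not v[1]
mus = smallest_mus([c0, c1, c2, c3])
print(len(mus))  # -> 2
```

Replacing cardinality with a weighted hitting-set objective yields optimality with respect to an arbitrary cost function, which is the setting the paper targets.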
Logic-Based Explainability in Machine Learning
The last decade witnessed an ever-increasing stream of successes in Machine
Learning (ML). These successes offer clear evidence that ML is bound to become
pervasive in a wide range of practical uses, including many that directly
affect humans. Unfortunately, the operation of the most successful ML models is
incomprehensible for human decision makers. As a result, the use of ML models,
especially in high-risk and safety-critical settings, is not without concern. In
recent years, there have been efforts on devising approaches for explaining ML
models. Most of these efforts have focused on so-called model-agnostic
approaches. However, all model-agnostic and related approaches offer no
guarantees of rigor, hence being referred to as non-formal. For example, such
non-formal explanations can be consistent with different predictions, which
renders them useless in practice. This paper overviews the ongoing research
efforts on computing rigorous model-based explanations of ML models; these
being referred to as formal explanations. These efforts encompass a variety of
topics that include the actual definitions of explanations, the
characterization of the complexity of computing explanations, the currently
best logical encodings for reasoning about different ML models, and also how to
make explanations interpretable for human decision makers, among others.
Timed Automata Robustness Analysis via Model Checking
Timed automata (TA) have been widely adopted as a suitable formalism to model
time-critical systems. Furthermore, contemporary model-checking tools allow the
designer to check whether a TA complies with a system specification. However,
the exact timing constants are often uncertain during the design phase.
Consequently, the designer is often able to build a TA with a correct
structure; however, the timing constants need to be tuned to satisfy the
specification. Moreover, even if the TA initially satisfies the specification,
it can be the case that just a slight perturbation during the implementation
causes a violation of the specification. Unfortunately, model-checking tools
are usually not able to provide any reasonable guidance on how to fix the model
in such situations. In this paper, we propose several concepts and techniques
to cope with the above-mentioned design-phase issues when dealing with
reachability and safety specifications.
Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021
The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum to researchers in academia and industry for presenting and discussing groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing.
Tools and Algorithms for the Construction and Analysis of Systems
This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. A total of 41 full papers, carefully reviewed and selected from 141 submissions, are presented in the proceedings. The volume also contains 7 tool papers, 6 tool demo papers, and 9 SV-Comp competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.