
    Recursive Online Enumeration of All Minimal Unsatisfiable Subsets

    In various areas of computer science, we deal with a set of constraints to be satisfied. If the constraints cannot be satisfied simultaneously, it is desirable to identify the core problems among them. Such cores are called minimal unsatisfiable subsets (MUSes). The more MUSes are identified, the more information about the conflicts among the constraints is obtained. However, a full enumeration of all MUSes is in general intractable due to the large (even exponential) number of possible conflicts. Moreover, to identify MUSes, algorithms must test sets of constraints for their simultaneous satisfiability. The type of test depends on the application domain, and its complexity can be extremely high, especially in domains like temporal logics, model checking, or SMT. In this paper, we propose a recursive algorithm that identifies MUSes in an online manner (i.e., one by one) and can be terminated at any time. The key feature of our algorithm is that it minimizes the number of satisfiability tests and thus speeds up the computation. The algorithm is applicable to an arbitrary constraint domain, and its effectiveness is most pronounced in domains with expensive satisfiability checks. We benchmark our algorithm against state-of-the-art algorithms on Boolean and SMT constraint domains and demonstrate that it indeed requires fewer satisfiability tests and consequently finds more MUSes within given time limits.
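
    To make the cost of satisfiability tests concrete, below is a minimal sketch, not from the paper, of the classical deletion-based extraction of a single MUS over Boolean clauses. It assumes the PySAT library (pip install python-sat) and spends one solver call per candidate removal, which is exactly the kind of cost the paper's recursive online enumeration tries to reduce when many MUSes are requested.

    from pysat.solvers import Glucose3


    def is_sat(clauses):
        """One satisfiability test: a fresh SAT solver call over the given clauses."""
        with Glucose3(bootstrap_with=clauses) as solver:
            return solver.solve()


    def deletion_based_mus(clauses):
        """Shrink an unsatisfiable clause set to a single minimal unsatisfiable subset.

        Invariant: `core` stays unsatisfiable.  A clause is dropped for good
        whenever the remaining clauses are still unsatisfiable, so in the end
        removing any single clause of the result makes it satisfiable.
        """
        assert not is_sat(clauses), "input must be unsatisfiable"
        core = list(clauses)
        i = 0
        while i < len(core):
            candidate = core[:i] + core[i + 1:]   # tentatively remove clause i
            if not is_sat(candidate):
                core = candidate                  # still UNSAT: clause i not needed
            else:
                i += 1                            # clause i is necessary, keep it
        return core


    if __name__ == "__main__":
        # x1, (x1 -> x2), (x1 -> not x2), plus an irrelevant clause (x3 or x4);
        # the only MUS consists of the first three clauses.
        print(deletion_based_mus([[1], [-1, 2], [-1, -2], [3, 4]]))

    Each loop iteration costs one satisfiability test, so even a single MUS costs a number of tests linear in the number of constraints; enumerating many MUSes multiplies that cost, which is what motivates minimizing the number of tests.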

    On Exploiting Hitting Sets for Model Reconciliation

    In human-aware planning, a planning agent may need to provide an explanation to a human user on why its plan is optimal. A popular approach to do this is called model reconciliation, where the agent tries to reconcile the differences between its model and the human's model such that the plan is also optimal in the human's model. In this paper, we present a logic-based framework for model reconciliation that extends beyond the realm of planning. More specifically, given a knowledge base KB_1 entailing a formula φ and a second knowledge base KB_2 not entailing it, model reconciliation seeks an explanation, in the form of a cardinality-minimal subset of KB_1, whose integration into KB_2 makes the entailment possible. Our approach, based on ideas originating in the analysis of inconsistencies, exploits the existing hitting set duality between minimal correction sets (MCSes) and minimal unsatisfiable sets (MUSes) in order to identify an appropriate explanation. However, unlike those works, which target inconsistent formulas and assume a single knowledge base, here MCSes and MUSes are computed over two distinct knowledge bases. We conclude the paper with an empirical evaluation of the newly introduced approach on planning instances, where we show how it outperforms an existing state-of-the-art solver, and on generic non-planning instances from recent SAT competitions, for which no other solver exists.
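
    The hitting set duality exploited here states that the MUSes of a constraint set are exactly the minimal hitting sets of its MCSes, and vice versa. The brute-force sketch below only illustrates that correspondence on hand-written toy sets; it is not the paper's procedure, which computes these sets lazily over the two knowledge bases.

    from itertools import combinations


    def minimal_hitting_sets(collections):
        """All inclusion-minimal sets that intersect every set in `collections` (brute force)."""
        universe = sorted(set().union(*collections))
        hitting = []
        for size in range(1, len(universe) + 1):
            for cand in map(frozenset, combinations(universe, size)):
                if all(cand & s for s in collections) and \
                        not any(h <= cand for h in hitting):
                    hitting.append(cand)
        return hitting


    if __name__ == "__main__":
        # Two hypothetical MUSes over constraints c1..c4.  Their minimal hitting
        # sets are the MCSes, and hitting those again recovers the MUSes.
        muses = [frozenset({"c1", "c2"}), frozenset({"c2", "c3", "c4"})]
        mcses = minimal_hitting_sets(muses)
        print(mcses)                          # {c2}, {c1, c3}, {c1, c4}
        print(minimal_hitting_sets(mcses))    # the two MUSes again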

    Logic-Based Explainability in Machine Learning

    The last decade witnessed an ever-increasing stream of successes in Machine Learning (ML). These successes offer clear evidence that ML is bound to become pervasive in a wide range of practical uses, including many that directly affect humans. Unfortunately, the operation of the most successful ML models is incomprehensible to human decision makers. As a result, the use of ML models, especially in high-risk and safety-critical settings, is not without concern. In recent years, there have been efforts to devise approaches for explaining ML models. Most of these efforts have focused on so-called model-agnostic approaches. However, all model-agnostic and related approaches offer no guarantees of rigor, and are hence referred to as non-formal. For example, such non-formal explanations can be consistent with different predictions, which renders them useless in practice. This paper overviews the ongoing research efforts on computing rigorous model-based explanations of ML models, referred to as formal explanations. These efforts encompass a variety of topics, including the actual definitions of explanations, the characterization of the complexity of computing explanations, the currently best logical encodings for reasoning about different ML models, and how to make explanations interpretable for human decision makers, among others.

    Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations

    Counterfactual explanations (CFEs) exemplify how to minimally modify a feature vector to achieve a different prediction for an instance. CFEs can enhance informational fairness and trustworthiness, and provide suggestions for users who receive adverse predictions. However, recent research has shown that multiple CFEs can be offered for the same instance, or for instances with only slight differences. Multiple CFEs provide flexible choices and cover diverse desiderata for user selection. However, individual fairness and model reliability will be damaged if unstable CFEs with different costs are returned. Existing methods fail to exploit this flexibility and address the concern of non-robustness simultaneously. To address these issues, we propose a conceptually simple yet effective solution named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP). Specifically, CEMSP constrains the changed values of abnormal features with the help of their semantically meaningful normal ranges. For efficiency, we model the problem as a Boolean satisfiability problem so as to modify as few features as possible. Additionally, CEMSP is a general framework and can easily accommodate further practical requirements, e.g., causality and actionability. Through comprehensive experiments on both synthetic and real-world datasets, we demonstrate that our method provides more robust explanations than existing methods while preserving flexibility.
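
    As a rough, hypothetical illustration of the "modify as few features as possible" objective, the sketch below casts a toy instance as partial MaxSAT with PySAT's RC2 solver; the feature groups standing in for the classifier's decision boundary are invented, and the paper's actual CEMSP encoding is richer.

    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    # Variable i means "abnormal feature i is moved back into its normal range".
    # Hard clauses (a stand-in for flipping the prediction): at least one feature
    # in each group must be repaired.  Soft clauses prefer changing nothing.
    wcnf = WCNF()
    for group in [[1, 2], [2, 3], [4]]:
        wcnf.append(group)                     # hard
    for feature in range(1, 5):
        wcnf.append([-feature], weight=1)      # soft, unit cost per change

    with RC2(wcnf) as solver:
        model = solver.compute()               # optimal model: fewest changes
        print("repair features:", [v for v in model if v > 0])   # [2, 4]

    Replacing the unit weights with feature-specific costs would bias the optimum toward cheaper or more actionable changes, which is one natural way such practical requirements can be accommodated.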

    On Tackling Explanation Redundancy in Decision Trees

    Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models. The interpretability of decision trees motivates explainability approaches based on so-called intrinsic interpretability, and it is at the core of recent proposals for applying interpretable ML models in high-risk applications. The belief in DT interpretability is justified by the expectation that explanations for DT predictions are generally succinct. Indeed, in the case of DTs, explanations correspond to DT paths. Since decision trees are ideally shallow, and paths therefore contain far fewer features than the total number of features, explanations in DTs are expected to be succinct, and hence interpretable. This paper offers both theoretical and experimental arguments demonstrating that, if interpretability of decision trees is equated with succinctness of explanations, then decision trees ought not to be deemed interpretable. The paper introduces logically rigorous path explanations and path explanation redundancy, and proves that there exist functions for which decision trees must exhibit paths with arbitrarily large explanation redundancy. The paper also proves that only a very restricted class of functions can be represented by DTs that exhibit no explanation redundancy. In addition, the paper includes experimental results substantiating that path explanation redundancy is observed ubiquitously in decision trees, both in trees obtained using different tree learning algorithms and in a wide range of publicly available decision trees. The paper also proposes polynomial-time algorithms for eliminating path explanation redundancy, which in practice require negligible time to compute. These algorithms thus serve to indirectly attain irreducible, and so succinct, explanations for decision trees.
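
    The redundancy-elimination step can be pictured as a deletion loop over the path literals: a literal is dropped whenever every leaf still consistent with the remaining literals yields the same prediction. The sketch below follows that general recipe on a hand-written toy tree using interval propagation; it is an illustration, not the paper's exact algorithm.

    # Internal node: (feature, threshold, left, right); "x[feature] <= threshold" goes left.
    TREE = ("f0", 5.0,
            ("f1", 2.0, ("leaf", "A"), ("leaf", "A")),
            ("leaf", "B"))


    def path_literals(node, x):
        """Literals (feature, op, threshold) along the path taken by instance x."""
        lits = []
        while node[0] != "leaf":
            feat, thr, left, right = node
            if x[feat] <= thr:
                lits.append((feat, "<=", thr))
                node = left
            else:
                lits.append((feat, ">", thr))
                node = right
        return lits, node[1]


    def reachable_classes(node, lits):
        """Classes of all leaves consistent with the literal set `lits`."""
        lo, hi = {}, {}
        for f, op, t in lits:
            if op == "<=":
                hi[f] = min(hi.get(f, float("inf")), t)
            else:
                lo[f] = max(lo.get(f, float("-inf")), t)

        def walk(n, lo, hi):
            if n[0] == "leaf":
                return {n[1]}
            feat, thr, left, right = n
            out = set()
            if lo.get(feat, float("-inf")) < thr:          # left branch feasible
                out |= walk(left, lo, {**hi, feat: min(hi.get(feat, float("inf")), thr)})
            if hi.get(feat, float("inf")) > thr:           # right branch feasible
                out |= walk(right, {**lo, feat: max(lo.get(feat, float("-inf")), thr)}, hi)
            return out

        return walk(node, lo, hi)


    def irredundant_explanation(tree, x):
        """Drop every path literal whose removal cannot change the prediction."""
        lits, prediction = path_literals(tree, x)
        i = 0
        while i < len(lits):
            reduced = lits[:i] + lits[i + 1:]
            if reachable_classes(tree, reduced) == {prediction}:
                lits = reduced                             # literal i is redundant
            else:
                i += 1                                     # literal i is needed
        return lits, prediction


    if __name__ == "__main__":
        x = {"f0": 3.0, "f1": 1.0}
        print(path_literals(TREE, x)[0])        # two literals on the path
        print(irredundant_explanation(TREE, x)) # only ('f0', '<=', 5.0) remains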

    Program synthesis from polymorphic refinement types

    We present a method for synthesizing recursive functions that provably satisfy a given specification in the form of a polymorphic refinement type. We observe that such specifications are particularly suitable for program synthesis for two reasons. First, they offer a unique combination of expressive power and decidability, which enables automatic verification, and hence synthesis, of nontrivial programs. Second, a type-based specification for a program can often be effectively decomposed into independent specifications for its components, causing the synthesizer to consider fewer component combinations and leading to a combinatorial reduction in the size of the search space. At the core of our synthesis procedure is a new algorithm for refinement type checking that supports specification decomposition. We have evaluated our prototype implementation on a large set of synthesis problems and found that it exceeds the state of the art in terms of both scalability and usability. The tool was able to synthesize more complex programs than those reported in prior work (several sorting algorithms and operations on balanced search trees), as well as most of the benchmarks tackled by existing synthesizers, often starting from a more concise and intuitive user input.
    National Science Foundation (U.S.) (Grant CCF-1438969)
    National Science Foundation (U.S.) (Grant CCF-1139056)
    United States. Defense Advanced Research Projects Agency (Grant FA8750-14-2-0242)

    Explaining Satisfiability Queries for Software Product Lines (Erklären von Erfüllbarkeitsanfragen für Softwareproduktlinien)

    Many analyses have been proposed to ensure the correctness of the various models used throughout software product line development. However, these analyses often merely serve to detect such defects without providing any means for dealing with them once encountered. To aid the software product line developer in understanding the cause of defects, this thesis presents a new algorithm capable of explaining satisfiability queries in a software product line context. The algorithm finds explanations by using SAT solvers to extract minimal unsatisfiable subsets from the propositional formulas that express the defects. It is applied to feature model defects such as dead features and redundant constraints, to automatic truth value propagation in configurations, and to preprocessor annotations that are superfluous or cause dead code blocks. Using feature models and configurations from real software product lines of varying sizes, this approach is evaluated against an existing explanation approach based on Boolean constraint propagation. The results show that Boolean constraint propagation occasionally fails to find any explanation at all, but is orders of magnitude faster than minimal-unsatisfiable-subset extraction. In response, both algorithms are combined into a single one that is as fast as Boolean constraint propagation in the cases where the latter finds an explanation, but also finds an explanation in all remaining cases.
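
    A minimal sketch of the dead-feature case, assuming the PySAT library and a hand-written toy feature model: every constraint is guarded by a selector variable, and the SAT solver's unsatisfiable core over the selectors points at the constraints that explain the defect (the thesis extracts a truly minimal unsatisfiable subset, whereas the raw core below need not be minimal).

    from pysat.solvers import Glucose3

    # Features: 1 = Root, 2 = A, 3 = B.  Toy constraints:
    #   c1: Root is mandatory         (1)
    #   c2: A requires Root           (-2 v 1)
    #   c3: Root excludes A           (-1 v -2)
    #   c4: B requires Root           (-3 v 1)
    constraints = {"c1": [1], "c2": [-2, 1], "c3": [-1, -2], "c4": [-3, 1]}

    solver = Glucose3()
    selector_of = {}
    for sel, (name, clause) in enumerate(constraints.items(), start=10):
        selector_of[sel] = name
        solver.add_clause(clause + [-sel])          # selector true activates the clause

    # Is feature A (variable 2) dead?  Assert A together with all constraints.
    if not solver.solve(assumptions=list(selector_of) + [2]):
        core = [l for l in solver.get_core() if l in selector_of]
        print("feature A is dead; explanation:", [selector_of[l] for l in core])   # e.g. ['c1', 'c3']
    solver.delete()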

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.