12 research outputs found

    The KB paradigm and its application to interactive configuration

    The knowledge base paradigm aims to express domain knowledge in a rich formal language and to use this domain knowledge as a knowledge base for solving the various problems and tasks that arise in the domain by applying multiple forms of inference. As such, the paradigm applies a strict separation of concerns between information and problem solving. In this paper, we analyze the principles and feasibility of the knowledge base paradigm in the context of an important class of applications: interactive configuration problems. In an interactive configuration problem, a configuration of interrelated objects under constraints is searched for, with the system assisting the user in reaching an intended configuration. It is widely recognized in industry that good software solutions for these problems are very difficult to develop. We investigate such problems from the perspective of the KB paradigm and show that multiple functionalities in this domain can be achieved by applying different forms of logical inference to a single formal specification of the configuration domain. We report on a proof of concept of this approach in a real-life application with a banking company. To appear in Theory and Practice of Logic Programming (TPLP).
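
    As a hedged illustration of this separation of concerns, the Python sketch below states a toy configuration domain once and runs two inference tasks against it: model checking a finished configuration and propagating the consequences of a partial one. All names and constraints are invented for illustration; the paper's proof of concept uses a full knowledge base system, not this sketch.

        # One declarative specification, several inference tasks run against it.
        # Illustrative Python only; the paper uses a KB system with a rich
        # logic, not hand-coded lambdas.
        from itertools import product

        DOMAINS = {"card": ["debit", "credit"], "limit": [0, 1000, 5000]}

        # The "knowledge base": constraints stated once, independent of any task.
        CONSTRAINTS = [
            lambda c: c["card"] == "credit" or c["limit"] == 0,  # debit => no limit
            lambda c: c["limit"] <= 5000,
        ]

        def models(partial):
            """All full configurations consistent with a partial user choice."""
            fields = list(DOMAINS)
            for values in product(*(DOMAINS[f] for f in fields)):
                config = dict(zip(fields, values))
                if all(config[f] == v for f, v in partial.items()) and \
                   all(ok(config) for ok in CONSTRAINTS):
                    yield config

        # Inference task 1: model checking -- is a finished configuration valid?
        def check(config):
            return all(ok(config) for ok in CONSTRAINTS)

        # Inference task 2: propagation -- which values remain possible given
        # the user's choices so far? This drives the interactive assistance.
        def propagate(partial):
            possible = {f: set() for f in DOMAINS}
            for config in models(partial):
                for f in DOMAINS:
                    possible[f].add(config[f])
            return possible

        print(propagate({"card": "debit"}))  # the limit is forced to 0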

    The Value 1 Problem Under Finite-memory Strategies for Concurrent Mean-payoff Games

    We consider concurrent mean-payoff games, a very well-studied class of two-player (player 1 vs. player 2) zero-sum games on finite-state graphs where every transition is assigned a reward between 0 and 1, and the payoff function is the long-run average of the rewards. The value is the maximal expected payoff that player 1 can guarantee against all strategies of player 2. We consider the computation of the set of states with value 1 under finite-memory strategies for player 1, and our main results for the problem are as follows: (1) we present a polynomial-time algorithm; (2) we show that whenever there is a finite-memory strategy, there is a stationary strategy that does not need memory at all; and (3) we present an optimal bound (which is double exponential) on the patience of stationary strategies (where the patience of a distribution is the inverse of the smallest positive probability and represents a complexity measure of a stationary strategy).
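
    The two quantitative notions used above are easy to make concrete. A minimal Python sketch, with toy numbers that are not from the paper: the mean payoff is the long-run average of observed rewards, and the patience of a stationary strategy is the inverse of the smallest positive probability it assigns.

        # Toy illustration (invented numbers) of the two quantities in the
        # abstract: mean payoff of a reward sequence and patience of a
        # stationary (memoryless, randomized) strategy.

        def mean_payoff(rewards):
            """Arithmetic mean of a finite reward prefix; the payoff of a
            play is the long-run average (the limit of such means)."""
            return sum(rewards) / len(rewards)

        def patience(distribution):
            """Inverse of the smallest positive probability -- a complexity
            measure of a stationary strategy."""
            return 1 / min(p for p in distribution.values() if p > 0)

        # A stationary strategy fixes one action distribution per state.
        strategy = {"s0": {"a": 0.999, "b": 0.001}, "s1": {"a": 1.0}}

        print(mean_payoff([1, 0, 1, 1]))                    # 0.75
        print(max(patience(d) for d in strategy.values()))  # 1000.0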

    Core-guided minimal correction set and core enumeration

    A set of constraints is unsatisfiable if there is no solution that satisfies these constraints. To analyse an unsatisfiable problem, the user needs to understand where the inconsistencies come from and how they can be repaired. Minimal unsatisfiable cores and correction sets are important subsets of constraints that enable such analysis. In this work, we propose a new algorithm for extracting minimal unsatisfiable cores and correction sets simultaneously. Building on top of the relaxation and strengthening framework, we introduce novel techniques for extracting these sets. Our new solver significantly outperforms several state-of-the-art algorithms on common benchmarks when it comes to extracting correction sets, and compares favorably on core extraction.
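
    To make the two notions concrete: a minimal unsatisfiable core (MUS) is an unsatisfiable subset of the constraints that becomes satisfiable when any single constraint is dropped, and a minimal correction set (MCS) is a minimal set of constraints whose removal restores satisfiability. The brute-force Python sketch below only illustrates these definitions on a toy constraint set; it is nothing like the paper's algorithm, which works with an incremental SAT solver.

        # Brute-force enumeration of minimal unsatisfiable cores (MUSes) and
        # minimal correction sets (MCSes) over a toy constraint set.
        from itertools import combinations, product

        VARS = ["x", "y"]
        CONSTRAINTS = {                      # constraints over booleans x, y
            "c1": lambda a: a["x"],          # x
            "c2": lambda a: not a["x"],      # not x
            "c3": lambda a: a["y"],          # y
        }

        def satisfiable(names):
            return any(all(CONSTRAINTS[n](dict(zip(VARS, bits))) for n in names)
                       for bits in product([False, True], repeat=len(VARS)))

        def minimal_subsets(pred):
            """Inclusion-minimal subsets satisfying pred (pred must be
            monotone); enumerating by size and discarding supersets of
            earlier hits ensures minimality."""
            found = []
            for k in range(len(CONSTRAINTS) + 1):
                for s in map(set, combinations(CONSTRAINTS, k)):
                    if pred(s) and not any(f <= s for f in found):
                        found.append(s)
            return found

        muses = minimal_subsets(lambda s: not satisfiable(s))
        mcses = minimal_subsets(lambda s: satisfiable(set(CONSTRAINTS) - s))
        print(muses)   # [{'c1', 'c2'}]  (x and not-x clash)
        print(mcses)   # [{'c1'}, {'c2'}] -- dropping either repairs the set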

    Synthesizing Adaptive Test Strategies from Temporal Logic Specifications

    Constructing good test cases is difficult and time-consuming, especially if the system under test is still under development and its exact behavior is not yet fixed. We propose a new approach that uses formal methods to compute test strategies for reactive systems from a given temporal logic specification. The computed strategies are guaranteed to reveal certain simple faults in every realization of the specification and for every behavior of the uncontrollable part of the system's environment. The proposed approach supports different assumptions on occurrences of faults (ranging from a single transient fault to a persistent fault) and by default aims at unveiling the weakest one. Based on well-established hypotheses from fault-based testing, we argue that such tests are also sensitive to more complex bugs. Since the specification may not define the system behavior completely, we use reactive synthesis algorithms with partial information. The computed strategies are adaptive test strategies that react to behavior at runtime. We work out the underlying theory of adaptive test strategy synthesis and present experiments for a safety-critical component of a real-world satellite system. We demonstrate that our approach can be applied to industrial specifications and that the synthesized test strategies are capable of detecting bugs that are hard to detect with random testing.
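
    The key word is "adaptive": rather than a fixed input sequence, the strategy chooses each next test input from the outputs observed so far. A minimal hand-written sketch in Python (a toy reactive system and a toy oracle, invented for illustration; the paper synthesizes such strategies from temporal logic specifications rather than writing them by hand):

        # A toy adaptive test strategy: the next input depends on the outputs
        # observed so far. The SUT, its fault, and the spec are all invented.

        def system_under_test():
            """Tiny reactive SUT: answers 'req' with 'grant', except for a
            planted fault that drops the very first request."""
            first = True
            def step(inp):
                nonlocal first
                if inp == "req" and first:
                    first = False
                    return "idle"      # faulty: the spec demands "grant"
                return "grant" if inp == "req" else "idle"
            return step

        def adaptive_strategy(history):
            """Map the observation history to the next input: keep
            requesting until a grant has been observed, then stop."""
            return "stop" if "grant" in history else "req"

        sut, history = system_under_test(), []
        while (inp := adaptive_strategy(history)) != "stop" and len(history) < 10:
            out = sut(inp)
            history.append(out)
            # Oracle from the assumed spec: every request is granted at once.
            if inp == "req" and out != "grant":
                print("fault revealed after", len(history), "steps:", history)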

    Refined Core Relaxations for Core-Guided Maximum Satisfiability Algorithms

    The so-called declarative approach has proven to be a viable paradigm for solving various real-world NP-hard optimization problems in practice. In the declarative approach, the problem at hand is encoded using a mathematical constraint language, and an algorithm for that language is employed to obtain optimal solutions to an instance of the problem. One of the most viable declarative optimization paradigms of recent years is maximum satisfiability (MaxSAT), with propositional logic as the constraint language. So-called core-guided MaxSAT algorithms are arguably among the most effective MaxSAT-solving approaches in practice today. Core-guided algorithms iteratively detect and rule out (relax) sources of inconsistency (so-called unsatisfiable cores) in the instance being solved. Especially effective are recent algorithmic variants of the core-guided approach that employ so-called soft cardinality constraints for ruling out inconsistencies. In this thesis, we present a structure-sharing technique for the cardinality-based core relaxation steps performed by core-guided MaxSAT solvers. The technique aims at reducing the inherent growth in the size of the propositional formula resulting from the core relaxation steps. Additionally, it enables more efficient reasoning over the relationships between different cores. We empirically evaluate the proposed technique on two different core-guided algorithms and provide open-source implementations of our solvers employing the technique. Our results show that the proposed structure-sharing can improve the performance of the algorithms both in theory and in practice.
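
    The core-guided principle itself fits in a few lines: every unsatisfiable core found among the soft clauses forces at least one violation, so each disjoint core raises the lower bound on the optimal cost by one. The Python sketch below illustrates only that principle on a toy instance, with exhaustive search standing in for the incremental SAT oracle, and with cores simply removed instead of being relaxed through the soft cardinality constraints whose encodings the thesis's structure-sharing targets.

        # Toy illustration of the core-guided idea: disjoint unsatisfiable
        # cores among the soft clauses lower-bound the optimal MaxSAT cost.
        from itertools import combinations, product

        VARS = ["a", "b", "c"]
        HARD = [[("a", True), ("b", True)], [("c", False)]]     # (a or b), not c
        SOFT = [[("a", False)], [("b", False)], [("c", True)]]  # not a, not b, c

        def holds(clause, assign):
            return any(assign[v] == pol for v, pol in clause)

        def satisfiable(clauses):
            return any(all(holds(c, dict(zip(VARS, bits))) for c in clauses)
                       for bits in product([False, True], repeat=len(VARS)))

        def smallest_core(softs):
            """Smallest subset of the soft clauses that is unsatisfiable
            together with the hard clauses, or None if none exists."""
            for k in range(1, len(softs) + 1):
                for core in combinations(softs, k):
                    if not satisfiable(HARD + list(core)):
                        return list(core)
            return None

        pool, lower_bound = list(SOFT), 0
        while (core := smallest_core(pool)) is not None:
            lower_bound += 1          # each disjoint core costs at least 1
            for clause in core:       # a real solver would instead relax the
                pool.remove(clause)   # core with a soft cardinality constraint
        print("lower bound on optimal cost:", lower_bound)  # 2 (optimal here)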

    On the Design and Misuse of Microcoded (Embedded) Processors — A Cautionary Note

    Today's microprocessors often rely on microcode updates to address issues such as security or functional patches. Unfortunately, microcode update flexibility opens up new attack vectors through malicious microcode alterations. Such attacks share many features with hardware Trojans and have similarly devastating consequences for system security. However, due to microcode's opaque nature, little is known in the open literature about the capabilities and limitations of microcode Trojans. We introduce the design of a microcoded RISC-V processor architecture together with a microcode development and evaluation environment. Even though microcode typically has almost complete control of the processor hardware, the design of meaningful microcode Trojans is not straightforward. This somewhat counter-intuitive insight is due to the lack of information at the hardware level about the semantics of the executed software. In three security case studies, we demonstrate how to overcome these issues and give insights on how to design meaningful microcode Trojans that undermine system security. To foster future research and applications, we publicly release our implementation and evaluation platform.
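
    The counter-intuitive insight can be demonstrated at toy scale: microcode sees opcodes and register values, not what a program means, so a Trojan trigger must be assembled from such low-level patterns. A deliberately simplified Python sketch of a "microcoded" machine, invented for illustration and unrelated to the paper's RISC-V platform:

        # Tiny "microcoded" machine: each opcode dispatches to a microcode
        # routine. The malicious routine must trigger on a low-level pattern
        # (a magic operand), because program semantics are invisible here.

        REGS = {"r0": 0, "r1": 0, "secret": 0xDEAD}
        LEAK = []

        def uop_add(dst, src):                 # benign ADD microcode
            REGS[dst] = (REGS[dst] + REGS[src]) & 0xFFFF

        def uop_add_trojan(dst, src):
            if REGS[src] == 0x1337:            # trigger: magic operand value
                LEAK.append(REGS["secret"])    # payload: exfiltrate a secret
            uop_add(dst, src)                  # otherwise indistinguishable

        MICROCODE = {"ADD": uop_add_trojan}    # maliciously patched table

        def run(program):
            for op, dst, src in program:
                MICROCODE[op](dst, src)

        REGS["r1"] = 0x1337                    # attacker-chosen operand
        run([("ADD", "r0", "r1")])
        print([hex(v) for v in LEAK])          # ['0xdead']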

    Red Team vs. Blue Team: A Real-World Hardware Trojan Detection Case Study Across Four Modern CMOS Technology Generations

    Verifying the absence of maliciously inserted Trojans in ICs is a crucial task, especially for security-enabled products. Depending on the concrete threat model, different techniques can be applied for this purpose. Assuming that the original IC layout is benign and free of backdoors, the primary security threats are usually identified as the outsourced manufacturing and transportation. To ensure the absence of Trojans in commissioned chips, one straightforward solution is to compare the received semiconductor devices to the design files that were originally submitted to the foundry. Clearly, conducting such a comparison requires advanced laboratory equipment and qualified experts. Nevertheless, the fundamental techniques for detecting Trojans that require evident changes to the silicon layout are well understood today. Despite this, there is a glaring lack of public case studies describing the process in its entirety while making the underlying datasets publicly available. In this work, we aim to improve upon this state of the art by presenting a public and open hardware Trojan detection case study based on four different digital ICs, using a Red Team vs. Blue Team approach. The Red Team creates small changes acting as surrogates for inserted Trojans in the layouts of 90 nm, 65 nm, 40 nm, and 28 nm ICs. The Blue Team's task is to detect all differences between the digital layout and the manufactured device by means of a GDSII-versus-SEM-image comparison. Can the Blue Team perform this task efficiently? Our results spark optimism for the Trojan seekers and answer common questions about the efficiency of such techniques for relevant IC sizes. Further, they allow us to draw conclusions about the impact of technology scaling on detection performance.
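
    At its simplest, the core of such a GDSII-versus-SEM comparison is an aligned image difference: rasterize the submitted layout, register the SEM images of the prepared chip to it, and flag the regions where the two disagree. The heavily simplified Python sketch below shows only the per-tile diff on toy rasters; a real flow additionally needs image registration, denoising, and per-layer handling.

        # Naive per-tile comparison between a raster of the submitted GDSII
        # layout and a registered, binarized SEM image. Toy data throughout.

        TILE = 2          # tile edge length in pixels
        THRESHOLD = 1     # flag tiles with more differing pixels than this

        layout = [        # 4x4 binary raster of the design (1 = metal)
            [1, 1, 0, 0],
            [1, 1, 0, 0],
            [0, 0, 1, 1],
            [0, 0, 1, 1],
        ]
        sem = [           # what the microscope saw after manufacturing
            [1, 1, 0, 0],
            [1, 1, 0, 0],
            [0, 0, 0, 0], # the 2x2 block at (2, 2) lost its metal
            [0, 0, 0, 0],
        ]

        def flagged_tiles(a, b):
            """Yield (row, col) of tiles where the two images disagree."""
            n = len(a)
            for r in range(0, n, TILE):
                for c in range(0, n, TILE):
                    diff = sum(a[r + i][c + j] != b[r + i][c + j]
                               for i in range(TILE) for j in range(TILE))
                    if diff > THRESHOLD:
                        yield (r, c), diff

        for tile, diff in flagged_tiles(layout, sem):
            print("suspicious tile at", tile, "-", diff, "differing pixels")
        # suspicious tile at (2, 2) - 4 differing pixels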

    Rapport annuel 2008-2009 (Annual Report 2008-2009)


    Rapport annuel 2007-2008 (Annual Report 2007-2008)
