34 research outputs found

    Reordering Rule Makes OBDD Proof Systems Stronger

    Atserias, Kolaitis, and Vardi showed that the proof system of Ordered Binary Decision Diagrams with conjunction and weakening, OBDD(∧, weakening), simulates CP* (Cutting Planes with unary coefficients). We show that OBDD(∧, weakening) can give exponentially shorter proofs than dag-like Cutting Planes. This is proved by showing that the Clique-Coloring tautologies have polynomial-size proofs in the OBDD(∧, weakening) system. The reordering rule allows changing the variable order for OBDDs. We show that OBDD(∧, weakening, reordering) is strictly stronger than OBDD(∧, weakening). This is proved using the Clique-Coloring tautologies, and by transforming tautologies using coded permutations and orification. We also give CNF formulas which have polynomial-size OBDD(∧) proofs but require superpolynomial (in fact, quasipolynomial-size) resolution proofs, thus partially resolving an open question of Groote and Zantema. Applying dag-like and tree-like lifting techniques to these results, we completely analyze which of the systems among CP*, OBDD(∧), OBDD(∧, reordering), OBDD(∧, weakening) and OBDD(∧, weakening, reordering) polynomially simulate each other. For dag-like proof systems, some of our separations are quasipolynomial and some are exponential; for tree-like systems, all of our separations are exponential.
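
    The conjunction rule of these systems combines two OBDD lines via the standard Apply operation. As a purely illustrative aid (not taken from the paper), here is a minimal Python sketch of a reduced OBDD with conjunction under a fixed variable order; the node encoding and names are assumptions of the sketch:

```python
# Minimal sketch of a reduced OBDD with a conjunction Apply, the operation
# behind the conjunction rule of these proof systems. The node encoding
# (nested tuples) and the variable names are illustrative assumptions.

class OBDD:
    def __init__(self, order):
        # Fixed variable order; smaller index = tested earlier.
        self.order = {v: i for i, v in enumerate(order)}
        self.unique = {}   # hash-consing table, keeps the OBDD reduced

    def node(self, var, lo, hi):
        if lo == hi:                      # redundant test: skip the node
            return lo
        key = (var, lo, hi)
        return self.unique.setdefault(key, key)

    def var(self, v):
        return self.node(v, False, True)

    def conj(self, f, g):
        """Apply(AND): recurse on the topmost variable of the shared order."""
        if f is False or g is False:
            return False
        if f is True:
            return g
        if g is True:
            return f
        top = f[0] if self.order[f[0]] <= self.order[g[0]] else g[0]
        f_lo, f_hi = (f[1], f[2]) if f[0] == top else (f, f)
        g_lo, g_hi = (g[1], g[2]) if g[0] == top else (g, g)
        return self.node(top, self.conj(f_lo, g_lo), self.conj(f_hi, g_hi))

# Tiny usage example: the OBDD for (x AND y) under the order x < y.
bdd = OBDD(["x", "y"])
x, y = bdd.var("x"), bdd.var("y")
print(bdd.conj(x, y))   # ('x', False, ('y', False, True))
```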

    Proof Complexity of Systems of (Non-Deterministic) Decision Trees and Branching Programs

    This paper studies propositional proof systems in which lines are sequents of decision trees or branching programs, deterministic or non-deterministic. Decision trees (DTs) are represented by a natural term syntax, inducing the system LDT, and non-determinism is modelled by including disjunction, ∨, as a primitive (system LNDT). Branching programs generalise DTs to dag-like structures and are duly handled by extension variables in our setting, as is common in proof complexity (systems eLDT and eLNDT). Deterministic and non-deterministic branching programs are natural nonuniform analogues of log-space (L) and nondeterministic log-space (NL), respectively. Thus eLDT and eLNDT serve as natural systems of reasoning corresponding to L and NL, respectively. The main results of the paper are simulation and non-simulation results for tree-like and dag-like proofs in LDT, LNDT, eLDT and eLNDT. We also compare them with Frege systems, constant-depth Frege systems and extended Frege systems.
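
    To make the term syntax concrete, the following toy sketch renders decision-tree terms as nested if-then-else expressions, with a disjunction constructor for the non-deterministic case; the tuple encoding is an illustrative assumption, not the paper's notation:

```python
# Toy rendering of decision-tree terms as nested if-then-else expressions,
# evaluated under an assignment. The concrete syntax (tuples such as
# ("ite", var, hi, lo)) is an illustrative assumption, not the paper's.

def eval_dt(term, assignment):
    """Evaluate a decision-tree term: constants, literals, or branches."""
    if isinstance(term, bool):
        return term
    tag = term[0]
    if tag == "lit":                       # ("lit", var) queries one variable
        return assignment[term[1]]
    if tag == "ite":                       # ("ite", var, hi, lo): branch on var
        _, var, hi, lo = term
        return eval_dt(hi if assignment[var] else lo, assignment)
    if tag == "or":                        # non-deterministic choice (LNDT)
        return any(eval_dt(t, assignment) for t in term[1:])
    raise ValueError(f"unknown term tag: {tag}")

# (x ? y : not-y), i.e. x <-> y, encoded as a small decision tree.
t = ("ite", "x", ("lit", "y"), ("ite", "y", False, True))
print(eval_dt(t, {"x": True, "y": True}))   # True
print(eval_dt(t, {"x": False, "y": True}))  # False
```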

    Separating Incremental and Non-Incremental Bottom-Up Compilation

    The aim of a compiler is, given a function represented in some language, to generate an equivalent representation in a target language L. In bottom-up (BU) compilation of functions given as CNF formulas, constructing the new representation requires compiling several subformulas in L. The compiler starts by compiling the clauses in L and iteratively constructs representations for new subformulas using an "Apply" operator that performs conjunction in L, until all clauses are combined into one representation. In principle, BU compilation can generate representations for any subformulas and conjoin them in any way. But an attractive strategy from a practical point of view is to augment one main representation - which we call the core - by conjoining to it the clauses one at a time. We refer to this strategy as incremental BU compilation. We prove that, for known relevant languages L for BU compilation, there is a class of CNF formulas that admit BU compilations to L that generate only polynomial-size intermediate representations, while their incremental BU compilations all generate an exponential-size core.
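
    As a runnable illustration of the incremental strategy described above (not code from the paper), the sketch below conjoins the clauses one at a time into a single growing core; explicit sets of models stand in for a real target language L such as OBDDs or d-DNNF:

```python
# Illustrative sketch of incremental bottom-up compilation: clauses are
# conjoined into a single growing "core" via an Apply(AND) operator. Explicit
# sets of satisfying assignments stand in for the target language L, purely
# so the loop is runnable; a real compiler would use OBDDs, d-DNNF, etc.

from itertools import product

def clause_models(clause, variables):
    """All assignments (as value tuples) over `variables` satisfying a clause.
    A clause is a set of signed literals, e.g. {("x", True), ("y", False)}."""
    models = set()
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if any(a[v] == sign for v, sign in clause):
            models.add(values)
    return models

def apply_and(f, g):
    """Stand-in for L's Apply operator performing conjunction."""
    return f & g

def incremental_bu_compile(cnf, variables):
    """Conjoin clauses one at a time into a single core representation."""
    core = clause_models(cnf[0], variables)
    for clause in cnf[1:]:
        core = apply_and(core, clause_models(clause, variables))
    return core

# (x or y) and (not x or y): the compiled core is exactly the models of the CNF.
cnf = [{("x", True), ("y", True)}, {("x", False), ("y", True)}]
print(incremental_bu_compile(cnf, ["x", "y"]))   # {(False, True), (True, True)}
```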

    Model checking multi-agent systems

    A multi-agent system (MAS) is usually understood as a system composed of interacting autonomous agents. In this sense, MAS have been employed successfully as a modelling paradigm in a number of scenarios, especially in Computer Science. However, the process of modelling complex and heterogeneous systems is intrinsically prone to errors: for this reason, computer scientists are typically concerned with verifying that a system actually behaves as it is supposed to, especially when the system is complex. Techniques have been developed to perform this task: testing is the most common one, but in many circumstances a formal proof of correctness is needed. Techniques for formal verification include theorem proving and model checking. Model checking techniques, in particular, have been successfully employed in the formal verification of distributed systems, including hardware components, communication protocols, and security protocols. In contrast to traditional distributed systems, formal verification techniques for MAS are still in their infancy, due to the more complex nature of agents, their autonomy, and the richer languages used in the specification of properties. This thesis aims to contribute to the formal verification of properties of MAS via model checking. In particular, the following points are addressed:
    • Theoretical results about model checking methodologies for MAS, obtained by extending traditional methodologies based on Ordered Binary Decision Diagrams (OBDDs) for temporal logics to multi-modal logics for time, knowledge, correct behaviour, and strategies of agents, together with complexity results for model checking these logics (and their symbolic representations).
    • Development of a software tool (MCMAS) that permits the specification and verification of MAS described in the formalism of interpreted systems.
    • Examples of application of MCMAS to various MAS scenarios (communication, anonymity, games, hardware diagnosability), including experimental results and a comparison with other available tools.
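
    At the heart of such OBDD-based methodologies is a fixpoint labelling of states. The stand-in below is illustrative only: plain Python sets replace OBDDs and the three-state model is made up, but it shows the shape of the computation for a temporal operator EF and an epistemic operator K_i:

```python
# Very small stand-in for the OBDD-based labelling algorithm behind tools like
# MCMAS: states and relations are plain Python sets instead of OBDDs, but the
# fixpoint structure is the same. The toy model is an illustrative assumption.

def pre_exists(transitions, target):
    """States with at least one successor in `target` (the EX preimage)."""
    return {s for (s, t) in transitions if t in target}

def check_EF(transitions, goal):
    """Least fixpoint for EF goal: goal | EX(goal) | EX(EX(goal)) | ..."""
    reachable = set(goal)
    while True:
        new = reachable | pre_exists(transitions, reachable)
        if new == reachable:
            return reachable
        reachable = new

def check_K(agent_equiv, states, phi_states):
    """Epistemic K_i phi: states whose whole i-equivalence class satisfies phi."""
    return {s for s in states if all(t in phi_states for t in agent_equiv[s])}

# Toy 3-state model: 0 -> 1 -> 2, agent i cannot distinguish states 1 and 2.
states = {0, 1, 2}
transitions = {(0, 1), (1, 2), (2, 2)}
equiv = {0: {0}, 1: {1, 2}, 2: {1, 2}}
print(check_EF(transitions, {2}))        # {0, 1, 2}
print(check_K(equiv, states, {1, 2}))    # {1, 2}
```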

    Lightweight cryptography on ultra-constrained RFID devices

    Devices of extremely small computational power like RFID tags are used in practice to a rapidly growing extent, a trend commonly referred to as ubiquitous computing. Despite their severely constrained resources, the security burden which these devices have to carry is often enormous, as their fields of application range from everyday access control to human-implantable chips providing sensitive medical information about a person. Unfortunately, established cryptographic primitives such as AES are way too 'heavy' (e.g., in terms of circuit size or power consumption) to be used in corresponding RFID systems, calling for new solutions and thus initiating the research area of lightweight cryptography. In this thesis, we focus on the currently most restricted form of such devices and refer to them as ultra-constrained RFIDs. To make this notion concrete and to create a sound basis for our subsequent cryptographic development, we start this work by providing a comprehensive summary of conditions that should be met by lightweight cryptographic schemes targeting ultra-constrained RFID devices. Building on these insights, we then turn towards the two main topics of this thesis: lightweight authentication and lightweight stream ciphers. To this end, we first provide a general introduction to the broad field of authentication and study existing (allegedly) lightweight approaches. Drawing on this, we suggest our own lightweight authentication scheme, the (n,k,L)^-protocol, and, on the basis of corresponding hardware implementations for FPGAs and ASICs, demonstrate its suitability for ultra-constrained RFIDs. Subsequently, we leave the path of searching for dedicated authentication protocols and turn towards stream cipher design, where we first revisit some prominent classical examples and, in particular, analyze their state initialization algorithms. Following this, we investigate the rather young area of small-state stream ciphers, which try to overcome the limit imposed by time-memory-data tradeoff (TMD-TO) attacks on the security of classical stream ciphers. Here, we present some new attacks, but also corresponding design ideas for countering them. Paving the way for our own small-state stream cipher, we then propose and analyze the LIZARD-construction, which combines the explicit use of packet mode with a new type of state initialization algorithm. For corresponding keystream generator-based designs of inner state length n, we prove a tight (2n/3)-bound (i.e., a security level of 2^(2n/3)) against TMD-TO key recovery attacks. Building on these theoretical results, we finally present LIZARD, our new lightweight stream cipher for ultra-constrained RFIDs. Its hardware efficiency and security result from combining a Grain-like design with the LIZARD-construction. Most notably, besides lower area requirements, the estimated power consumption of LIZARD is also about 16 percent below that of Grain v1, making it particularly suitable for passive RFID tags, which obtain their energy exclusively through an electromagnetic field radiated by the reading device. The thesis is concluded by an extensive 'Future Research Directions' chapter, introducing various new ideas and thus showing that the search for lightweight cryptographic solutions is far from complete.
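
    For orientation, the trade-off limiting classical designs can be summarized as follows; this is the standard Babbage-Golić formulation together with the security level stated above, not necessarily the thesis's exact statement:

```latex
% Classical keystream generators with inner state length n admit a generic
% time-memory-data trade-off (a summary sketch; the thesis may state this
% differently):
\[
  M \cdot D \approx 2^{n}
  \quad\Longrightarrow\quad
  \max(T, M, D) \gtrsim 2^{n/2},
\]
% so the inner state must be at least twice the intended security level.
% The LIZARD-construction instead guarantees, against TMD-TO key recovery,
\[
  \text{a security level of approximately } 2^{\tfrac{2}{3} n}.
\]
```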

    Symmetry and induction in model checking


    Efficient local search for Pseudo Boolean Optimization

    Algorithms and the Foundations of Software Technology

    Abstraction in Model Checking Multi-Agent Systems

    This thesis presents existential abstraction techniques for multi-agent systems preserving temporal-epistemic specifications. Multi-agent systems, defined in the interpreted systems framework, are abstracted by collapsing the local states and actions of each agent. The goal of abstraction is to reduce the state space of the system under investigation in order to cope with the state explosion problem that impedes the verification of systems with very large state spaces. Theoretical results show that the resulting abstract system simulates the concrete one. Preservation and correctness theorems are proved in this thesis; they ensure that if a temporal-epistemic formula holds on the abstract system, then the formula also holds on the concrete one. These results make it possible to verify temporal-epistemic formulas on abstract systems instead of the concrete ones, thereby saving time and space in the verification process. To test the applicability, usefulness, power and effectiveness of the abstraction method presented, two different implementations are provided: a tool for data-abstraction and one for variable-abstraction. The first technique achieves a state space reduction by collapsing the values of the domains of the system variables. The second technique reduces the size of the model by collapsing groups of two or more variables; the abstract system therefore has a reduced number of variables, each taking values in a new domain built automatically by the tool. Both implementations perform abstraction in a fully automatic way. They operate on multi-agent models specified in a formal language called ISPL (Interpreted System Programming Language), the input language of MCMAS, a model checker for multi-agent systems. The output is an ISPL file as well (with a reduced state space). This thesis also presents several suitable temporal-epistemic examples to evaluate both techniques. The experiments show good results and point to the attractiveness of the temporal-epistemic abstraction techniques developed in this thesis. In particular, the contributions of the thesis are the following:
    • We produced correctness and preservation results for existential abstraction.
    • We introduced two algorithms to perform data-abstraction and variable-abstraction on multi-agent systems.
    • We developed two software toolkits for automatic abstraction of multi-agent scenarios: one performing data-abstraction and the other performing variable-abstraction.
    • We evaluated the methodologies introduced in this thesis by running experiments on several multi-agent system examples.
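
    The sketch below shows the existential-abstraction idea in its simplest form (a made-up numeric domain rather than an ISPL model): concrete values are collapsed through a surjection h, and an abstract transition is kept whenever some concrete transition it covers exists, which is precisely what makes the abstract model simulate the concrete one:

```python
# Minimal sketch of existential data-abstraction: concrete values are collapsed
# through a surjection h, and an abstract transition exists whenever some pair
# of concrete states it covers has a transition. The toy model and h are
# illustrative assumptions, not the ISPL models handled by the actual tool.

def abstract_states(states, h):
    return {h(s) for s in states}

def abstract_transitions(transitions, h):
    """Existential abstraction: keep (h(s), h(t)) for every concrete (s, t)."""
    return {(h(s), h(t)) for (s, t) in transitions}

# A concrete variable ranges over 0..5; the abstraction keeps only its parity.
h = lambda v: v % 2
concrete_states = set(range(6))
concrete_transitions = {(v, v + 1) for v in range(5)}

print(abstract_states(concrete_states, h))            # {0, 1}
print(abstract_transitions(concrete_transitions, h))  # {(0, 1), (1, 0)}
```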

    Selected Topics in Network Optimization: Aligning Binary Decision Diagrams for a Facility Location Problem and a Search Method for Dynamic Shortest Path Interdiction

    This work deals with three different combinatorial optimization problems: minimizing the total size of a pair of binary decision diagrams (BDDs) under a certain structural property, a variant of the facility location problem, and a dynamic version of the Shortest-Path Interdiction (DSPI) problem. However, these problems share a common core idea: each stems from representing an optimization problem as a decision diagram. We begin with cases in which a diagram representation of reasonable size might exist but finding a small diagram is difficult. The first problem develops a heuristic for enforcing a structural property on a collection of BDDs, which allows them to be merged into a single one efficiently. In the second problem, we consider a specific combinatorial problem that admits a natural representation by a pair of BDDs. We use the previous result and ideas developed earlier in the literature to reformulate this problem as a linear program over a single BDD. This approach enables us to obtain sensitivity information, while often enjoying runtimes comparable to a mixed integer program solved with a commercial solver, once the computational overhead of building the diagram has been paid (e.g., when re-solving the problem with different costs but the same graph topology). In the last part, we examine the DSPI, for which building the full decision diagram is generally impractical. We formalize the concept of a game tree for the DSPI and design a heuristic based on the idea of building only selected parts of this exponentially sized decision diagram (which is no longer binary). We use a Monte Carlo Tree Search framework to establish near-optimal policies. To mitigate the size of the game tree, we leverage previously derived bounds for the DSPI and employ an alpha-beta pruning technique for minimax optimization. We highlight the practicality of these ideas in a series of numerical experiments.
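
    As a generic illustration of the pruning step (not the thesis's actual DSPI implementation), the sketch below runs alpha-beta minimax over a tiny explicit game tree; in the DSPI setting the two players would correspond to the interdictor and the evader:

```python
# Generic alpha-beta minimax over an explicit game tree, i.e. the pruning
# technique mentioned for the DSPI heuristic. The tiny tree below is an
# illustrative assumption; in the thesis the tree would encode the players'
# alternating moves.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """`node` is either a numeric leaf value or a list of child nodes."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # remaining children cannot matter
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Depth-2 example: the maximizer picks the branch whose minimizer value is best.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, maximizing=True))   # 3
```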