
    An analysis and implementation of linear derivation strategies

    This study examines the efficacy of six linear derivation strategies: (i) s-linear resolution, (ii) the ME procedure, (iii) t-linear resolution, (iv) SL-resolution, (v) the GC procedure, and (vi) SLM. The analysis focuses on the different restrictions and operations employed by each derivation strategy. The major features found in these six strategies are the selection function, restrictive ancestor resolution, compulsory ancestor resolution on literals whose atoms are or become identical, compulsory merging operations, reuse of truncated literals, spreading of FALSE literals, the no-tautologies restriction, the restriction that no two non-B-literals have identical atoms, and the use of semantic information to trim irrelevant derivations from the search tree. Loop detection and the minimization of irrelevant derivations are identified as the weak points of SLM, and two variations of SLM are suggested to rectify these problems. The ME procedure, SL-resolution, the GC procedure, SLM, and one of the suggested variations of SLM were implemented using the Arity/Prolog compiler to produce the ME-TP, SL-TP, GC-TP, SLM-TP, and SLM5-TP theorem provers, respectively. In addition to the original features of each derivation strategy, the implementations include the following search strategies: the modified consecutively bounded depth-first search, the unit preference strategy, the set of support strategy, pure literal elimination, tautologous clause elimination, a selection function based on the computed weight of a literal, and a match check. The extension operation used by each theorem prover was extended to include subsumed unit extension and paramodulation. The performance of each theorem prover was measured in terms of memory use and execution time on twenty-four selected problems, and the five theorem provers were compared using the ME-TP as the baseline. The results show that none of the theorem provers consistently performs better than the others. Two of the selected problems were not proved by SL-TP and one was not proved by SLM-TP because of memory problems, whereas the ME-TP, GC-TP, and SLM5-TP proved all of the selected problems. On some problems the ME-TP and GC-TP performed better than SLM5-TP, while on others they had difficulties where SLM5-TP performed well.
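
    To make the shared core of these strategies concrete, the following is a minimal propositional sketch of linear resolution with input extension and ancestor resolution under a depth bound. It is illustrative only: the clause encoding and the depth bound are assumptions, and the selection functions, merging, truncation, and semantic pruning that distinguish the six strategies are omitted.

    def negate(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    def resolve(center, side, lit):
        # Resolve on lit: lit occurs in the center clause, its negation in the side clause.
        return frozenset((center - {lit}) | (side - {negate(lit)}))

    def linear_refute(input_clauses, center, ancestors=(), depth=6):
        # Depth-bounded linear derivation: the center clause is resolved against an
        # input clause (extension) or one of its own ancestors (ancestor resolution)
        # until the empty clause is derived.
        if not center:
            return True
        if depth == 0:
            return False
        for lit in center:
            for side in list(input_clauses) + list(ancestors):
                if negate(lit) in side:
                    if linear_refute(input_clauses, resolve(center, side, lit),
                                     ancestors + (center,), depth - 1):
                        return True
        return False

    # Example: {p}, {~p, q}, {~q} is unsatisfiable; start from the top clause {~q}.
    clauses = [frozenset({'p'}), frozenset({'~p', 'q'}), frozenset({'~q'})]
    print(linear_refute(clauses, frozenset({'~q'})))   # True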

    Implementation of the Depth Limited Search Algorithm in the Peg Solitaire Game

    Peg Solitaire is a single-player game consisting of a board and a number of marbles (pegs). Peg Solitaire boards come in many variants, including the English, European, and triangular boards, among others. Players sometimes find it hard to decide on the right move, so a hint feature is provided to assist them when choosing a move. One algorithm that can be applied to the Peg Solitaire hint feature is Depth Limited Search. Applied to hints on the English board of size 3 x 3 and on triangular boards of size 4 x 4, 5 x 5, and 7 x 7, the Depth Limited Search algorithm is able to find a solution, namely a single remaining marble, and is also able to handle the case where no solution is found. The implementation can likewise display the full sequence of moves leading to a single remaining marble. This was demonstrated by testing 10 problem instances on the system: 9 were solved successfully, and 1 failed because no solution with a single remaining marble was found.
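
    As a rough illustration of such a hint mechanism, the following is a generic depth-limited search sketch; the state representation, successor function, goal test, and depth limit are placeholders rather than details taken from the paper.

    def depth_limited_search(state, is_goal, successors, limit):
        # Returns a list of moves reaching a goal state, the string 'cutoff' if the
        # depth limit was reached, or None if no solution exists within the limit.
        if is_goal(state):
            return []
        if limit == 0:
            return 'cutoff'
        cutoff_hit = False
        for move, child in successors(state):
            result = depth_limited_search(child, is_goal, successors, limit - 1)
            if result == 'cutoff':
                cutoff_hit = True
            elif result is not None:
                return [move] + result
        return 'cutoff' if cutoff_hit else None

    # For Peg Solitaire, state would be the board, successors() the legal jumps,
    # and is_goal() the test that exactly one marble remains.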

    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for the presentation and discussion of all topics in and around termination; in this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination analysis (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted, and each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Representing scope in intuitionistic deductions

    Intuitionistic proofs can be segmented into scopes which describe when assumptions can be used. In standard descriptions of intuitionistic logic, these scopes occupy contiguous regions of proofs. This leads to an explosion in the search space for automated deduction, because of the difficulty of planning to apply a rule inside a particular scoped region of the proof. This paper investigates an alternative representation which assigns scope explicitly to formulas, and which is inspired in part by semantics-based translation methods for modal deduction. This calculus is simple and is justified by direct proof-theoretic arguments that transform proofs in the calculus so that scopes match standard descriptions. A Herbrand theorem, established straightforwardly, lifts this calculus to incorporate unification. The resulting system has no impermutabilities whatsoever: rules of inference may be applied equivalently anywhere in the proof. Nevertheless, a natural specification describes how λ-terms are to be extracted from its deductions.
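
    For orientation, the scopes in question are the ones opened by rules that discharge assumptions; in a standard sequent presentation of intuitionistic logic, the implication-right rule is the typical example (this is the textbook rule, not the annotated calculus of the paper):

        \frac{\Gamma, A \vdash B}{\Gamma \vdash A \supset B}\;(\supset R)

    The assumption A may only be used in the subproof above the line; representing that region by an explicit scope annotation on formulas, rather than as a contiguous block of the proof, is what removes the impermutabilities.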

    Boundary Algebra: A Simpler Approach to Boolean Algebra and the Sentential Connectives

    Boundary algebra [BA] is an algebra of type , and a simplified notation for Spencer-Brown’s (1969) primary algebra. The syntax of the primary arithmetic [PA] consists of two atoms, () and the blank page, concatenation, and enclosure between ‘(’ and ‘)’, denoting the primitive notion of distinction. Inserting letters denoting, indifferently, the presence or absence of () into a PA formula yields a BA formula. The BA axioms are A1: ()()=(), and A2: “(()) [abbreviated ‘⊥’] may be written or erased at will,” implying (⊥)=(). The repeated application of A1 and A2 simplifies any PA formula to either () or ⊥. The basis for BA is B1: abc=bca (concatenation commutes and associates); B2: ⊥a=a (BA has a lower bound, ⊥); B3: (a)a=() (BA is a complemented lattice); and B4: (ba)a=(b)a (implying that BA is a distributive lattice). BA has two intended models: (1) the Boolean algebra 2 with base set B={(),⊥}, such that () ⇔ 1 [dually 0], (a) ⇔ a′, and ab ⇔ a∪b [a∩b]; and (2) sentential logic, such that () ⇔ true [false], (a) ⇔ ~a, and ab ⇔ a∨b [a∧b]. BA is a self-dual notation, facilitates a calculational style of proof, and simplifies clausal reasoning and Quine’s truth value analysis. BA resembles C.S. Peirce’s graphical logic, the symbolic logics of Leibniz and W.E. Johnson, the 2 notation of Byrne (1946), and the Boolean term schemata of Quine (1982).
    Keywords: boundary algebra; boundary logic; primary algebra; primary arithmetic; Boolean algebra; calculation proof; G. Spencer-Brown; C.S. Peirce; existential graphs
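
    A short worked simplification using only the two axioms: the PA formula (())()() reduces to ()() by erasing (()) (A2), and then to () by A1.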

    Highly Automated Formal Verification of Arithmetic Circuits

    This dissertation investigates the problems of two distinct formal verification techniques for verifying large-scale multiplier circuits and proposes two approaches to overcome some of these problems. The first technique is equivalence checking based on recurrence relations, while the second is symbolic computation based on the theory of Gröbner bases. The investigation demonstrates that approaches based on symbolic computation scale better and are more robust than state-of-the-art equivalence checking techniques for the verification of arithmetic circuits. Building on this conclusion, the thesis leverages the symbolic computation technique to verify floating-point designs. It proposes a new algebraic equivalence checking technique; in contrast to classical combinational equivalence checking, the proposed technique is capable of checking the equivalence of two circuits that have different architectures in their arithmetic units as well as in their control logic, e.g., floating-point multipliers.
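
    As a toy illustration of the symbolic computation idea (not the dissertation’s algorithm), the following Python/SymPy sketch checks a gate-level 2x2 multiplier against its word-level specification by substituting gate polynomials and reducing exponents modulo x**2 - x; this small example is triangular, so no full Gröbner-basis computation is needed.

    from sympy import symbols, expand, Poly

    a0, a1, b0, b1 = symbols('a0 a1 b0 b1')

    def AND(x, y):
        return x * y                 # Boolean AND as a polynomial

    def XOR(x, y):
        return x + y - 2 * x * y     # Boolean XOR, valid on {0, 1}

    # Gate-level 2x2 multiplier: partial products plus two half adders.
    p00, p01, p10, p11 = AND(a0, b0), AND(a0, b1), AND(a1, b0), AND(a1, b1)
    s1, c1 = XOR(p01, p10), AND(p01, p10)
    s2, c2 = XOR(p11, c1), AND(p11, c1)
    z0, z1, z2, z3 = p00, s1, s2, c2

    # Word-level specification polynomial: circuit output minus a * b.
    spec = z0 + 2*z1 + 4*z2 + 8*z3 - (a0 + 2*a1) * (b0 + 2*b1)

    def boolean_reduce(expr, variables):
        # Reduce modulo the field polynomials x**2 - x, i.e. x**k -> x for k >= 1.
        total = 0
        for monom, coeff in Poly(expand(expr), *variables).terms():
            term = coeff
            for v, e in zip(variables, monom):
                if e:
                    term *= v
            total += term
        return expand(total)

    # The circuit implements the specification iff the polynomial reduces to zero.
    print(boolean_reduce(spec, [a0, a1, b0, b1]))   # 0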

    Boundary Algebra: A Simple Notation for Boolean Algebra and the Truth Functors

    Boundary algebra [BA] is a simpler notation for Spencer-Brown’s (1969) primary algebra [pa], the Boolean algebra 2, and the truth functors. The primary arithmetic [PA] consists of the atoms ‘()’ and the blank page, concatenation, and enclosure between ‘(’ and ‘)’, denoting the primitive notion of distinction. Inserting letters denoting the presence or absence of () into a PA formula yields a BA formula. The BA axioms are “()()=()” (A1), and “(()) [=⊥] may be written or erased at will” (A2). Repeated application of these axioms to a PA formula yields a member of B={(),⊥} called its simplification. (a) has two intended interpretations: (a) ⇔ a′ (Boolean algebra 2), and (a) ⇔ ~a (sentential logic). BA is self-dual: () ⇔ 1 [dually 0] so that B is the carrier for 2, ab ⇔ a∪b [a∩b], and (a)b [(a(b))] ⇔ a≤b, so that ⊥≤() [()≤⊥] follows trivially and B is a poset. The BA basis abc=bca (Dilworth 1938), a(ab)=a(b), and a()=() (Bricken 2002) facilitates clausal reasoning and proof by calculation. BA also simplifies normal forms and Quine’s (1982) truth value analysis. () ⇔ true [false] yields boundary logic.
    Keywords: G. Spencer-Brown; boundary algebra; boundary logic; primary algebra; primary arithmetic; Boolean algebra; calculation proof; C.S. Peirce; existential graphs.
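
    A one-line proof by calculation from this basis, assuming b in a(ab)=a(b) may be instantiated by the blank page: a(a) = a() = (), which under the sentential reading is the law of the excluded middle, a∨~a ⇔ true.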

    Formal Methods for Trustworthy Voting Systems : From Trusted Components to Reliable Software

    Voting is a prominent and important part of democratic societies, and its outcome may have a dramatic and broad impact on societal progress. It is therefore paramount that such a society has extensive trust in the electoral process and that the system functions reliably and stably with respect to the expectations within society. Yet, with or without the use of modern technology, voting is full of algorithmic and security challenges, and the failure to address these challenges in a controlled manner may produce fundamental flaws in the voting system and potentially undermine critical societal aspects. In this thesis, we argue for a development process for voting systems that is rooted in and assisted by formal methods which produce transparently checkable evidence for the guarantees the final system should provide, so that it can be deemed trustworthy. The goal of this thesis is to advance the state of the art in formal methods for the systematic development of trustworthy, provably verified voting systems. In the literature, voting systems are modeled in four comparatively separable and distinguishable layers: (1) the physical layer, (2) the computational layer, (3) the election layer, and (4) the human layer. Current research usually either stays mostly within one of these layers or lacks machine-checkable evidence; consequently, trusted and understandable criteria often lack formally proven and checkable guarantees at the software level, and vice versa. The contributions of this work are formal methods that fill the trust gap between the principal election layer and the computational layer through a reliable translation of trusted and understandable criteria into trustworthy software. This enables executable procedures to be formally traced back to criteria that election experts can understand without inspecting the code, so that trust is preserved down to the running system. The thesis consists of five distinct contributions: (I) a method for the generation of secure card-based communication schemes, (II) a method for the synthesis of reliable tallying procedures, (III) a method for the efficient verification of reliable tallying procedures, (IV) a method for the computation of dependable election margins for reliable audits, and (V) a case study on the security verification of the GI voter-anonymization software. These contributions provide formal methods, on illustrative examples, for each of the three principal components between the election layer and the computational layer: (1) voter-ballot box communication, (2) the election method, and (3) election management. Within the first component, the voter-ballot box communication channel, we build a bridge from the communication channel to the cryptographic scheme by automatically generating secure card-based schemes from a small formal model with a parameterization of the desired security requirements.
For the second component, the election method, we build a bridge from the election method to the tallying procedure by (1) automatically synthesizing a runnable tallying procedure from the desired requirements, given as properties that capture the intended intuitions or regulations of fairness, (2) automatically generating either comprehensible arguments or bounded proofs to compare tallying procedures based on user-definable fairness properties, and (3) automatically computing concrete election margins for a given tallying procedure, the collected ballots, and the computed election result, which enable efficient election audits. Finally, for the third component, the election management system, we perform a case study and apply state-of-the-art verification technology to a real-world e-voting system that was used for the 2019 annual elections of the German Informatics Society (GI – “Gesellschaft für Informatik”). The case study consists of the formal implementation-level security verification that the voter identities are securely anonymized and that the voters’ passwords cannot be leaked. The presented methods assist the systematic development and verification of provably trustworthy voting systems across the traditional layers, i.e., from the election layer to the computational layer, and they all pursue the goal of making voting systems trustworthy through reliable and explainable formal requirements. We evaluate the devised methods on minimal card-based protocols that compute a secure AND function for two different decks of cards, on a classical knock-out tournament and several Condorcet rules, on various plurality, scoring, and Condorcet rules from the literature, on the Danish national parliamentary elections in 2015, and on the state-of-the-art electronic voting system used for the German Informatics Society’s annual elections since 2019.
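
    As a simplified illustration of contribution (IV) only (the thesis treats far more general tallying rules), the margin of victory under plain plurality can be computed directly: it is the least number of ballots that must be changed so that the announced winner no longer wins, here counting a forced tie as a changed outcome. A minimal Python sketch under these assumptions:

    from collections import Counter
    from math import ceil

    def plurality_margin(ballots):
        # ballots: iterable of candidate names, one per ballot (>= 2 candidates).
        # Moving one ballot from the winner to the runner-up closes the gap by two,
        # so ceil(gap / 2) changes suffice and are necessary under the tie convention.
        tally = Counter(ballots)
        (winner, top), (_, second) = tally.most_common(2)
        return winner, ceil((top - second) / 2)

    print(plurality_margin(['A', 'A', 'A', 'B', 'B', 'C']))   # ('A', 1)

    An audit can then check whether the number of ballots in doubt stays below this margin.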

    Proof Theory at Work: Complexity Analysis of Term Rewrite Systems

    This thesis is concerned with investigations into the "complexity of term rewriting systems", and the majority of the presented work deals with the "automation" of such a complexity analysis. The aim of this introduction is to present the main ideas in an easily accessible fashion and to make the presented results accessible to a general audience; necessarily, some technical points are stated in an over-simplified way.
    Comment: Cumulative Habilitation Thesis, submitted to the University of Innsbruck
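
    A standard example of the kind of statement such an analysis establishes (not taken from the thesis): for the rewrite system with the rules add(0, y) -> y and add(s(x), y) -> s(add(x, y)), the term add(s^n(0), y) normalizes in exactly n + 1 rewrite steps, so the runtime complexity of the system is linear in the size of the start term; the automation question is how to find such bounds without manual reasoning.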

    A linear relaxation technique for the position analysis of multi-loop linkages

    This report presents a new method able to isolate all configurations that a multi-loop linkage can adopt. We tackle the problem by providing formulation and resolution techniques that fit particularly well together. The adopted formulation yields a system of simple equations (containing only linear and bilinear terms, and trivial trigonometric functions for the helical pair exclusively) whose special structure is later exploited by a branch-and-prune method based on linear relaxations. The method is general, as it can be applied to linkages with single or multiple loops of arbitrary topology, involving lower pairs of any kind, and complete, as all possible solutions get accurately bounded, irrespective of whether the linkage is rigid or mobile.
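
    For reference, one standard linear relaxation of a bilinear term (not necessarily the exact relaxation used in the report) replaces w = xy on the box x \in [x_L, x_U], y \in [y_L, y_U] by the four McCormick inequalities

        w \ge x_L y + y_L x - x_L y_L, \quad w \ge x_U y + y_U x - x_U y_U,
        w \le x_L y + y_U x - x_L y_U, \quad w \le x_U y + y_L x - x_U y_L,

    which a branch-and-prune method can tighten by repeatedly splitting the variable boxes and discarding boxes whose relaxation is infeasible.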