
    Unifying Functional Interpretations: Past and Future

    This article surveys work done in the last six years on the unification of various functional interpretations, including Gödel's Dialectica interpretation, its Diller-Nahm variant, Kreisel's modified realizability, Stein's family of functional interpretations, functional interpretations "with truth", and bounded functional interpretations. Our goal in the present paper is twofold: (1) to look back and single out the main lessons learnt so far, and (2) to look forward and list several open questions and possible directions for further research. (18 pages.)
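    As a hedged aside (standard textbook material, not a result of this survey): Gödel's Dialectica interpretation assigns to each formula $A$ a formula $A^D = \exists x\,\forall y\; A_D(x, y)$, where $x$ and $y$ range over tuples of functionals of finite type, and its characteristic clause is the one for implication,

    \[
      (A \to B)^D \;=\; \exists f, g\;\forall x, v\;\bigl(A_D(x, g\,x\,v) \to B_D(f\,x, v)\bigr),
    \]

    so a witness for an implication consists of a functional $f$ mapping witnesses of $A$ to witnesses of $B$, together with a functional $g$ mapping counterexamples to $B$ back to counterexamples to $A$. The interpretations surveyed above differ precisely in how such clauses are set up.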

    Finite Model Finding for Parameterized Verification

    In this paper we investigate to what extent a very simple and natural "reachability as deducibility" approach, which originated in research on formal methods for security, is applicable to the automated verification of large classes of infinite-state and parameterized systems. The approach is based on modeling the reachability between (parameterized) states as deducibility between suitable encodings of states by formulas of first-order predicate logic. The verification of a safety property is reduced to the purely logical problem of finding a countermodel for a first-order formula. The latter task is then delegated to generic automated finite model building procedures. We first establish the relative completeness of the finite countermodel finding method (FCM) for a class of parameterized linear arrays of finite automata. The method is shown to be at least as powerful as known methods based on monotonic abstraction and symbolic backward reachability. Further, we extend the relative completeness of the approach and show that it can solve all safety verification problems which can be solved by traditional regular model checking. (17 pages; a slightly different version of the paper was submitted to TACAS 201.)
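    As a schematic illustration of the reduction just described (notation assumed here for exposition, not taken from the paper): encode the initial states and the transition relation of the system as a set of first-order clauses $\Phi$ defining a reachability predicate $R$, and let $\mathit{unsafe}(\bar{x})$ encode the bad states. Provided the encoding is adequate, i.e. every reachable state yields a deducible $R$-fact, one has

    \[
      \Phi \not\vdash \exists \bar{x}\,\bigl(R(\bar{x}) \wedge \mathit{unsafe}(\bar{x})\bigr)
      \quad\Longrightarrow\quad \text{the system is safe},
    \]

    and non-deducibility is witnessed by any model of $\Phi \wedge \neg\exists \bar{x}\,(R(\bar{x}) \wedge \mathit{unsafe}(\bar{x}))$, which is exactly what a finite model builder is asked to find.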

    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted; each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Non uniform (hyper/multi)coherence spaces

    In (hyper)coherence semantics, proofs/terms are cliques in (hyper)graphs. Intuitively, vertices represent results of computations, and the edge relation witnesses the ability to be assembled into the same piece of data or, at arrow types, into the same (strongly) stable function. In (hyper)coherence semantics, the argument of a (strongly) stable functional is always a (strongly) stable function. As a consequence, compared with the relational semantics, where there is no edge relation, some vertices are missing. Recovering these vertices is essential for the purpose of reconstructing proofs/terms from their interpretations. It should also be useful for comparison with other semantics, such as game semantics. In [BE01], Bucciarelli and Ehrhard introduced a so-called non-uniform coherence space semantics in which no vertex is missing. By constructing the co-free exponential, we give a new version of this latter semantics, together with non-uniform versions of hypercoherences and of multicoherences, a new semantics in which an edge is a finite multiset. Thanks to the co-free construction, these non-uniform semantics are deterministic, in the sense that the intersection of a clique and an anti-clique contains at most one vertex (the result of an interaction), and they extensionally collapse onto the corresponding uniform semantics. (32 pages.)
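    For orientation, a hedged reminder of the standard definitions behind the abstract (not specific to this paper): a coherence space $X$ is a set $|X|$ of vertices (its web) together with a reflexive, symmetric coherence relation on $|X|$; a clique is a set of pairwise coherent vertices, and an anti-clique is a set in which any two distinct vertices are incoherent, i.e. a clique of the dual space $X^{\perp}$. The determinism property mentioned above then reads

    \[
      a \text{ a clique of } X,\quad b \text{ a clique of } X^{\perp}
      \quad\Longrightarrow\quad \#(a \cap b) \le 1,
    \]

    so the interaction of a program with its environment selects at most one result.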

    Knowledge Compilation of Logic Programs Using Approximation Fixpoint Theory

    (To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015.) Recent advances in knowledge compilation introduced techniques to compile \emph{positive} logic programs into propositional logic, essentially exploiting the constructive nature of the least fixpoint computation. This approach has several advantages over existing approaches: it maintains logical equivalence, does not require (expensive) loop-breaking preprocessing or the introduction of auxiliary variables, and significantly outperforms existing algorithms. Unfortunately, this technique is limited to \emph{negation-free} programs. In this paper, we show how to extend it to general logic programs under the well-founded semantics. We develop our work in approximation fixpoint theory, an algebraic framework that unifies the semantics of different logics. As such, our algebraic results are also applicable to autoepistemic logic, default logic, and abstract dialectical frameworks.
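    A toy illustration of the underlying idea (my own example, offered as a hedged sketch rather than the paper's construction): for the positive program

    \[
      P = \{\; p \leftarrow q, \quad q \leftarrow p \;\}
    \]

    the least fixpoint of the immediate consequence operator is reached at once, $T_P\!\uparrow\!\omega = \emptyset$, so a compilation that unfolds this constructive computation yields $p \equiv \bot$ and $q \equiv \bot$. By contrast, the naive completion $\{p \equiv q,\ q \equiv p\}$ admits a spurious model in which both atoms are true, which is why other approaches need loop-breaking or auxiliary variables.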

    An Algebraic Framework for Compositional Program Analysis

    The purpose of a program analysis is to compute an abstract meaning for a program which approximates its dynamic behaviour. A compositional program analysis accomplishes this task with a divide-and-conquer strategy: the meaning of a program is computed by dividing it into sub-programs, computing their meaning, and then combining the results. Compositional program analyses are desirable because they can yield scalable (and easily parallelizable) program analyses. This paper presents an algebraic framework for designing, implementing, and proving the correctness of compositional program analyses. A program analysis in our framework is defined by an algebraic structure equipped with sequencing, choice, and iteration operations. From the analysis design perspective, a particularly interesting consequence of this is that the meaning of a loop is computed by applying the iteration operator to the loop body. This style of compositional loop analysis can yield interesting ways of computing loop invariants that cannot be defined iteratively. We identify a class of algorithms, the so-called path-expression algorithms [Tarjan1981, Scholz2007], which can be used to efficiently implement analyses in our framework. Lastly, we develop a theory for proving the correctness of an analysis by establishing an approximation relationship between an algebra defining a concrete semantics and an algebra defining an analysis. (15 pages.)
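    A minimal sketch of the kind of interface such a framework suggests; the names and types below are assumptions made for illustration, not the paper's API.

        -- Hedged Haskell sketch: an "analysis algebra" with the three operations
        -- named in the abstract above.
        class AnalysisAlgebra a where
          seq_   :: a -> a -> a   -- sequential composition of two fragments
          choice :: a -> a -> a   -- branching: either fragment may execute
          iter   :: a -> a        -- loops: the meaning of a loop is iter of its body

        -- Free instance: path expressions (regular expressions over statements),
        -- the objects produced by path-expression algorithms [Tarjan1981, Scholz2007].
        data PathExp s
          = Stmt s
          | Seq  (PathExp s) (PathExp s)
          | Alt  (PathExp s) (PathExp s)
          | Star (PathExp s)
          deriving Show

        instance AnalysisAlgebra (PathExp s) where
          seq_   = Seq
          choice = Alt
          iter   = Star

    Because path expressions are essentially the free structure over this signature, a path-expression algorithm can compute them once per procedure, and any concrete analysis is then obtained by interpreting the three operators in its own domain.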

    Sequentiality vs. Concurrency in Games and Logic

    Connections between the sequentiality/concurrency distinction and the semantics of proofs are investigated, with particular reference to games and Linear Logic. (35 pages; appeared in Mathematical Structures in Computer Science.)

    On an Intuitionistic Logic for Pragmatics

    We reconsider the pragmatic interpretation of intuitionistic logic [21], regarded as a logic of assertions and their justifications, and its relations with classical logic. We recall an extension of this approach to a logic dealing with assertions and obligations, related by a notion of causal implication [14, 45]. We focus on the extension to co-intuitionistic logic, seen as a logic of hypotheses [8, 9, 13], and on polarized bi-intuitionistic logic as a logic of assertions and conjectures: looking at the S4 modal translation, we give a definition of a system AHL of bi-intuitionistic logic that correctly represents the duality between intuitionistic and co-intuitionistic logic, correcting a mistake in previous work [7, 10]. A computational interpretation of co-intuitionism as a distributed calculus of coroutines is then used to give an operational interpretation of subtraction. Work on linear co-intuitionism is then recalled, a linear calculus of co-intuitionistic coroutines is defined, and a probabilistic interpretation of linear co-intuitionism is given, as in [9]. We also remark that, by extending the language of intuitionistic logic, we can express the notion of expectation, an assertion that in all situations the truth of p is possible, and that in a logic of expectations the law of double negation holds. Similarly, extending co-intuitionistic logic, we can express the notion of a conjecture that p, defined as a hypothesis that in some situation the truth of p is epistemically necessary.
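    For readers unfamiliar with the subtraction connective mentioned above, a hedged reminder of standard facts about bi-intuitionistic logic (not claims specific to this paper): subtraction is the connective dual to implication, left adjoint to disjunction just as implication is right adjoint to conjunction,

    \[
      A \smallsetminus B \vdash C \;\iff\; A \vdash B \vee C,
      \qquad\text{dually to}\qquad
      C \vdash A \to B \;\iff\; C \wedge A \vdash B .
    \]

    In the pragmatic reading recalled in the abstract, the S4 translation sends assertions to the $\Box$ fragment and hypotheses to the $\Diamond$ fragment, and it is this duality that the system AHL is meant to represent faithfully.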