
    Axiom directed Focusing

    Superdeduction and deduction modulo are methods specially designed to ease the use of first-order theories in predicate logic. Superdeduction modulo, which combines both, enables the user to make a distinct use of computational and reasoning axioms. Although soundness is ensured, using superdeduction and deduction modulo to extend deduction with ill-behaved theories can jeopardize essential properties of the extended system, such as cut-elimination or completeness with respect to predicate logic. One therefore has to design criteria identifying the theories that can safely be used through superdeduction and deduction modulo. In this paper we revisit the superdeduction paradigm by comparing it with the focusing approach. In particular, we prove a focalization theorem for cut-free superdeduction modulo: we show that permutations of inference rules can transform any cut-free proof in deduction modulo into a cut-free proof in superdeduction modulo, and conversely, provided that certain hypotheses on the synchrony of the reasoning axioms are satisfied. It follows that cut-elimination for deduction modulo and for superdeduction modulo are equivalent. Since several criteria have already been proposed for theories that do not break cut-elimination of the corresponding deduction modulo system, these criteria also imply cut-elimination of the superdeduction modulo system, provided our synchrony hypotheses hold. Finally, we design a tableaux method for superdeduction modulo which is sound and complete provided cut-elimination holds.
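
    To fix intuitions, superdeduction compiles each reasoning axiom into dedicated inference rules. The following is a standard illustration from the superdeduction literature, not an excerpt from this paper: the axiom defining inclusion, A ⊆ B ⇔ ∀x (x ∈ A ⇒ x ∈ B), gives rise to a derived right rule, sketched here in LaTeX.

```latex
% Sketch of a superdeduction rule derived from the inclusion axiom
%   A ⊆ B  <=>  ∀x (x ∈ A ⇒ x ∈ B)
% (standard illustrative example, assumed rather than quoted from the paper):
\[
\frac{\Gamma,\; x \in A \;\vdash\; x \in B,\; \Delta}
     {\Gamma \;\vdash\; A \subseteq B,\; \Delta}
\;\subseteq_{R}
\qquad x \notin \mathrm{FV}(\Gamma,\Delta)
\]
```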

    The exp-log normal form of types

    Lambda calculi with algebraic data types lie at the core of functional programming languages and proof assistants, but they conceal at least two fundamental theoretical problems already in the presence of the simplest non-trivial data type, the sum type. First, we know of no explicit and implemented algorithm for deciding beta-eta-equality of terms, and this in spite of the first decidability results having been proven two decades ago. Second, it is not clear how to decide when two types are essentially the same, i.e. isomorphic, in spite of the meta-theoretic results on decidability of the isomorphism. In this paper, we present the exp-log normal form of types, derived from the representation of exponential polynomials via the unary exponential and logarithmic functions, to which any type built from arrows, products, and sums can be isomorphically mapped. The type normal form can be used as a simple heuristic for deciding type isomorphism, thanks to the fact that it is a systematic application of the high-school identities. We then show that the type normal form allows us to reduce the standard beta-eta equational theory of the lambda calculus to a specialized version of itself, while preserving the completeness of equality on terms. We end by describing an alternative representation of normal terms of the lambda calculus with sums, together with a Coq-implemented converter into/from our new term calculus. The difference with the only other previously implemented heuristic for deciding interesting instances of eta-equality, by Balat, Di Cosmo, and Fiore, is that we exploit the type information of terms substantially, which often allows us to obtain a canonical representation of terms without performing sophisticated term analyses.
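
    As a toy illustration of the high-school identities the abstract refers to, here is a minimal Haskell sketch of a type normalizer. It is our own simplification, not the paper's algorithm: unit and empty types, sorting of factors up to commutativity, and the log-based treatment of sums under arrows are all omitted.

```haskell
-- Types built from atoms, arrows (exponentials), products, and sums.
data Ty = Base String
        | Ty :-> Ty
        | Ty :*: Ty
        | Ty :+: Ty
  deriving (Eq, Show)

infixr 5 :->
infixl 7 :*:
infixl 6 :+:

-- Normalize subterms, then apply the identities via smart constructors.
normalize :: Ty -> Ty
normalize (Base x)  = Base x
normalize (a :-> b) = arrow (normalize a) (normalize b)
normalize (a :*: b) = prod  (normalize a) (normalize b)
normalize (a :+: b) = normalize a :+: normalize b

arrow :: Ty -> Ty -> Ty
arrow a (b :*: c) = arrow a b :*: arrow a c   -- (b*c)^a ~ b^a * c^a
arrow (a :+: b) c = arrow a c :*: arrow b c   -- c^(a+b) ~ c^a * c^b
arrow (a :*: b) c = arrow a (arrow b c)       -- c^(a*b) ~ (c^b)^a
arrow a b         = a :-> b

prod :: Ty -> Ty -> Ty
prod (a :+: b) c = prod a c :+: prod b c      -- (a+b)*c ~ a*c + b*c
prod a (b :+: c) = prod a b :+: prod a c      -- a*(b+c) ~ a*b + a*c
prod a b         = a :*: b
```

    For instance, `normalize (Base "a" :-> (Base "b" :*: Base "c"))` yields `(Base "a" :-> Base "b") :*: (Base "a" :-> Base "c")`, so these two isomorphic types compare equal after normalization.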

    Graph Based Reduction of Program Verification Conditions

    Increasing the automation of proofs in deductive verification of C programs is a challenging task. When applied to industrial C programs, the known heuristics for generating simpler verification conditions are not efficient enough, mainly because of the size of the conditions and the high number of irrelevant hypotheses they contain. This work presents a strategy to reduce program verification conditions by selecting their relevant hypotheses. The relevance of a hypothesis is determined by combining a syntactic analysis with two graph traversals; the first graph is labeled with constants and the second with the predicates occurring in the axioms. The approach is applied to a benchmark arising in industrial program verification.
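
    The flavor of such hypothesis selection can be sketched as follows. This is a deliberate simplification, propagating over one set of shared symbols rather than the paper's two labeled graphs; `Hyp` and `select` are names invented for the illustration.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A hypothesis (or the goal) abstracted by the symbols occurring in it,
-- e.g. constants and predicate names.
type Symbol = String
data Hyp = Hyp { hypName :: String, hypSyms :: Set Symbol }
  deriving Show

-- Keep the hypotheses whose symbols become reachable from the goal's
-- symbols within `depth` rounds of "shares a symbol" propagation.
select :: Int -> Set Symbol -> [Hyp] -> [Hyp]
select depth goalSyms hyps = go depth goalSyms
  where
    relevant reached =
      [ h | h <- hyps
          , not (Set.null (hypSyms h `Set.intersection` reached)) ]
    go 0 reached = relevant reached
    go n reached
      | reached' == reached = relevant reached
      | otherwise           = go (n - 1) reached'
      where
        reached' = Set.unions (reached : map hypSyms (relevant reached))
```

    A hypothesis sharing a constant with the goal is kept at depth 0; one linked to the goal only through another hypothesis is picked up at depth 1, and so on, so the bound trades completeness against the size of the reduced verification condition.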

    Metamodel-based model conformance and multiview consistency checking

    Model-driven development, using languages such as UML and BON, often makes use of multiple diagrams (e.g., class and sequence diagrams) when modeling systems. These diagrams, presenting different views of a system of interest, may be inconsistent. A metamodel provides a unifying framework in which to ensure and check consistency, while at the same time providing the means to distinguish between valid and invalid models, that is, conformance. Two formal specifications of the metamodel for an object-oriented modeling language are presented, and it is shown how to use these specifications for model conformance and multiview consistency checking. Comparisons are made in terms of completeness and the level of automation each specification provides for checking multiview consistency and model conformance. The lessons learned from applying formal techniques to the problems of metamodeling, model conformance, and multiview consistency checking are summarized.
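
    To make one multiview consistency rule concrete, here is a hedged toy check, far simpler than a metamodel-based formalization and not taken from the paper: every message in a sequence view must name an operation declared on the receiving class in the class view.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Class view: each class declares a list of operation names.
type ClassView = Map String [String]
-- Sequence view: messages as (receiver class, operation) pairs.
type SeqView = [(String, String)]

-- A message is inconsistent if its receiver is unknown or does not
-- declare the requested operation.
inconsistencies :: ClassView -> SeqView -> [(String, String)]
inconsistencies classes =
  filter (\(cls, op) -> maybe True (op `notElem`) (Map.lookup cls classes))
```

    For example, with a class view declaring only `deposit` on `Account`, the message `("Account", "close")` is reported as an inconsistency between the two views.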

    Splitting Proofs for Interpolation

    We study interpolant extraction from local first-order refutations. We present a new theoretical perspective on interpolation based on clearly separating the condition on the logical strength of the formula from the requirement on the common signature. This allows us to highlight the space of all interpolants that can be extracted from a refutation as a space of simple choices on how to split the refutation into two parts. We use this new insight to develop an algorithm for extracting interpolants that are linear in the size of the input refutation and can be further optimized using metrics such as the number of non-logical symbols or quantifiers. We implemented the new algorithm in the first-order theorem prover VAMPIRE and evaluated it on a large number of examples coming from the first-order proving community. Our experiments give practical evidence that our work improves the state of the art in first-order interpolation. (Comment: 26th Conference on Automated Deduction, 2017)
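
    For orientation, the standard interpolation property that such extraction targets (background fact, not the paper's exact formulation) reads as follows; the splitting perspective amounts to choosing, for each part of the refutation, which side of this partition it serves.

```latex
% Craig interpolation (standard background): if A -> B is valid, an
% interpolant I satisfies
\[
A \models I, \qquad I \models B, \qquad
\mathrm{sig}(I) \subseteq \mathrm{sig}(A) \cap \mathrm{sig}(B),
\]
% i.e. I lies logically between A and B and mentions only their common
% non-logical symbols.
```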

    Compensations in the Shapley value and the compensation solutions for graph games

    We consider an alternative expression of the Shapley value that reveals a system of compensations: each player receives an equal share of the worth of each coalition he belongs to, and has to compensate an equal share of the worth of any coalition he does not belong to. We give an interpretation in terms of the formation of the grand coalition according to an ordering of the players, and define the corresponding compensation vector. We then generalize this idea to cooperative games with a communication graph. First, we consider cooperative games with a forest (a cycle-free graph). We extend the compensation vector by considering all rooted spanning trees of the forest (see Demange 2004) instead of orderings of the players. The associated allocation rule, called the compensation solution, is characterized by component efficiency and relative fairness; the latter axiom takes into account the relative position of a player with respect to his component. Second, we consider cooperative games with arbitrary graphs and construct rooted spanning trees using the classical DFS and BFS algorithms. If the graph is complete, we show that the compensation solutions associated with DFS and BFS coincide with the Shapley value and the equal surplus division, respectively.
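
    The compensation scheme in the opening sentence matches a well-known rewriting of the Shapley value, quoted here as standard background rather than from the paper: writing n = |N| and s = |S|,

```latex
\[
\phi_i(v)
  = \sum_{\substack{S \subseteq N \\ i \in S}} \frac{v(S)}{s \binom{n}{s}}
  \;-\; \sum_{\substack{S \subseteq N \\ i \notin S}} \frac{v(S)}{(n-s) \binom{n}{s}},
\]
% each coalition S hands out a total of v(S)/C(n,s) in equal shares to its
% s members, and charges the same total, in equal shares, to the n - s
% players outside it.
```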