    On the logical definability of certain graph and poset languages

    We show that it is equivalent, for certain sets of finite graphs, to be definable in CMS (counting monadic second-order logic, a natural extension of monadic second-order logic) and to be recognizable in an algebraic framework induced by the notion of modular decomposition of a finite graph. More precisely, we consider the set F_\infty of composition operations on graphs which occur in the modular decomposition of finite graphs. If F is a subset of F_\infty, we say that a graph is an F-graph if it can be decomposed using only operations in F. A set of F-graphs is recognizable if it is a union of classes in a finite-index equivalence relation which is preserved by the operations in F. We show that if F is finite and its elements enjoy only a limited amount of commutativity -- a property which we call weak rigidity -- then recognizability is equivalent to CMS-definability. This requirement is weak enough to be satisfied whenever all F-graphs are posets, that is, transitive dags. In particular, our result generalizes Kuske's recent result on series-parallel poset languages.
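
    To make the notion of recognizability above concrete, here is a small illustrative sketch that is not taken from the paper: terms built from hypothetical series and parallel composition operations over single-vertex posets, with "even number of vertices" as an example of a recognizable set, since size parity is a finite-index equivalence preserved by both operations.

```python
# Illustrative toy only: the Series/Parallel operations and the property
# "even number of vertices" are chosen for illustration and are not taken
# from the paper.
from dataclasses import dataclass
from typing import Union

@dataclass
class Point:
    """A single-vertex poset (leaf of a decomposition term)."""

@dataclass
class Series:
    left: "Term"
    right: "Term"

@dataclass
class Parallel:
    left: "Term"
    right: "Term"

Term = Union[Point, Series, Parallel]

def size_parity(t: Term) -> int:
    """Evaluate a term in Z/2 (number of vertices mod 2).  Both operations
    act on parity classes by addition mod 2, so "same parity" is a
    finite-index equivalence preserved by the operations (a congruence),
    and "even size" is a union of its classes, hence recognizable."""
    if isinstance(t, Point):
        return 1
    return (size_parity(t.left) + size_parity(t.right)) % 2

def has_even_size(t: Term) -> bool:
    return size_parity(t) == 0

# (a ; b) || c has three vertices, so it is outside the recognizable set.
example = Parallel(Series(Point(), Point()), Point())
print(has_even_size(example))  # False
```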

    A synchronous program algebra: a basis for reasoning about shared-memory and event-based concurrency

    This research started with an algebra for reasoning about rely/guarantee concurrency for a shared-memory model. The approach taken led to a more abstract algebra of atomic steps, in which atomic steps synchronise (rather than interleave) when composed in parallel. The algebra of rely/guarantee concurrency then becomes an instantiation of the more abstract algebra. Many of the core properties needed for rely/guarantee reasoning can be shown to hold in the abstract algebra, where their proofs are simpler and hence allow a higher degree of automation. The algebra has been encoded in Isabelle/HOL to provide a basis for tool support for program verification. In rely/guarantee concurrency, programs are specified to guarantee certain behaviours until assumptions about the behaviour of their environment are violated. When assumptions are violated, program behaviour is unconstrained (aborting), and guarantees need no longer hold. To support these guarantees a second synchronous operator, weak conjunction, was introduced: both processes in a weak conjunction must agree to take each atomic step, unless one aborts, in which case the whole aborts. In developing the laws for parallel and weak conjunction we found that many properties were shared by the operators and that the proofs of many laws were essentially the same. This insight led to the idea of generalising synchronisation to an abstract operator with only the axioms that are shared by the parallel and weak conjunction operators, so that those two operators can be viewed as instantiations of the abstract synchronisation operator. The main differences between parallel and weak conjunction lie in how they combine individual atomic steps; that is left open in the axioms for the abstract operator. Comment: Extended version of a Formal Methods 2016 paper, "An algebra of synchronous atomic steps".
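
    As a rough illustration of the abstract synchronisation operator described above (a toy model only, not the paper's algebra or its Isabelle/HOL encoding), one can model a process as a set of traces of atomic steps and let parallel and weak conjunction differ only in how two atomic steps are combined; aborting behaviour is omitted here.

```python
# Toy model only -- not the paper's algebra or its Isabelle/HOL encoding.
# A process is a set of traces, a trace a tuple of atomic steps, and an
# atomic step a frozenset of event labels.  The generic operator `sync`
# is parameterised by how two atomic steps combine; parallel and weak
# conjunction differ only in that choice.  Aborting behaviour is omitted.
from typing import Callable, FrozenSet, Optional, Set, Tuple

Step = FrozenSet[str]
Trace = Tuple[Step, ...]
Process = Set[Trace]

def sync(p: Process, q: Process,
         combine: Callable[[Step, Step], Optional[Step]]) -> Process:
    """Combine traces step by step; a pair of traces contributes an outcome
    only if every pair of corresponding steps combines successfully."""
    result: Process = set()
    for t in p:
        for u in q:
            if len(t) != len(u):
                continue
            steps = [combine(a, b) for a, b in zip(t, u)]
            if all(s is not None for s in steps):
                result.add(tuple(steps))
    return result

def par_step(a: Step, b: Step) -> Optional[Step]:
    """Parallel: the two atomic steps synchronise by merging their events."""
    return a | b

def conj_step(a: Step, b: Step) -> Optional[Step]:
    """Weak conjunction: both sides must agree to take exactly the same step."""
    return a if a == b else None

p: Process = {(frozenset({"x:=1"}),)}
q: Process = {(frozenset({"x:=1"}),), (frozenset({"y:=2"}),)}
print(sync(p, q, par_step))   # both of q's steps synchronise with p's step
print(sync(p, q, conj_step))  # only the agreed step survives
```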

    A transformation-driven approach to automate feedback verification results

    The integration of formal verification methods into modeling activities is a key issue in ensuring the correctness of complex system design models. For this purpose, the most common approach consists in defining a translational semantics that maps the abstract syntax of the designer's Domain-Specific Modeling Language (DSML) to a semantic domain dedicated to formal verification, in order to reuse the available, powerful verification technologies. Formal verification is thus usually achieved using model transformations. However, the verification results are expressed in the formal domain, which significantly impairs their use by the system designer, who is usually not an expert in the formal technologies. In this paper, we introduce a novel approach based on higher-order transformations that analyze and instrument the transformation expressing the semantics, in order to produce traceability data that automates the back-propagation of verification results to the DSML end user.
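
    A minimal sketch of the traceability idea described above, with entirely hypothetical names and model elements (in the actual approach the links would be produced by a higher-order transformation instrumenting the semantics transformation): the forward translation records which formal element each DSML element maps to, and the recorded links are used to re-express a counterexample in DSML terms.

```python
# Illustrative sketch with hypothetical names; not the paper's tooling.
# The forward translation from a DSML model to a formal model records a
# traceability link for every element it creates, and a counterexample
# from the verification tool is mapped back through those links.
from typing import Dict, List, Tuple

def translate(dsml_tasks: List[str]) -> Tuple[List[str], Dict[str, str]]:
    """Map each DSML task to a formal 'place' and record the link."""
    formal_model: List[str] = []
    trace: Dict[str, str] = {}
    for task in dsml_tasks:
        place = f"place_{task}"
        formal_model.append(place)
        trace[place] = task
    return formal_model, trace

def feed_back(counterexample: List[str], trace: Dict[str, str]) -> List[str]:
    """Re-express a formal-domain counterexample in DSML terms."""
    return [trace[p] for p in counterexample if p in trace]

formal, trace = translate(["Acquire", "Compute", "Release"])
# Suppose the model checker reports a problem involving these places:
print(feed_back(["place_Compute", "place_Release"], trace))
# -> ['Compute', 'Release'], i.e. elements the designer actually knows
```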

    The Lambek calculus with iteration: two variants

    Formulae of the Lambek calculus are constructed using three binary connectives: multiplication and two divisions. We extend it with a unary connective, positive Kleene iteration. For this new operation, following its natural interpretation, we present two lines of calculi. The first is a fragment of infinitary action logic and includes an omega-rule for introducing iteration into the antecedent. We also consider a version with infinite (but finitely branching) derivations and prove the equivalence of these two versions. In Kleene algebras, this line of calculi corresponds to the *-continuous case. For the second line, we restrict our infinite derivations to cyclic (regular) ones. We show that this system is equivalent to a variant of action logic that corresponds to general residuated Kleene algebras, not necessarily *-continuous. Finally, we show that, in contrast with the case without division operations (considered by Kozen), the first system is strictly stronger than the second one. To prove this, we use a complexity argument. Namely, we show, using methods of Buszkowski and Palka, that the first system is \Pi_1^0-hard, and therefore is not recursively enumerable and cannot be described by a calculus with finite derivations.
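
    For orientation, the omega-rule for introducing iteration into the antecedent has, in infinitary action logic, roughly the following shape; this is a schematic rendering for positive iteration A^+, not a rule quoted from the paper.

```latex
% Schematic omega-rule for positive iteration A^+ on the left: one premise
% for every n >= 1, hence infinitely branching (omega) derivations.
\[
\frac{\Gamma, A, \Delta \vdash C \qquad
      \Gamma, A, A, \Delta \vdash C \qquad
      \Gamma, A, A, A, \Delta \vdash C \qquad \cdots}
     {\Gamma, A^{+}, \Delta \vdash C}
\;({+}\mathrm{L}_{\omega})
\]
% The second variant replaces such infinitely branching derivations by
% cyclic (regular) ones; as stated above, in the presence of the division
% operations the two resulting systems are no longer equivalent.
```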

    Inductive Definition and Domain Theoretic Properties of Fully Abstract Models for PCF and PCF^+

    A construction of fully abstract typed models for PCF and PCF^+ (i.e., PCF + "parallel conditional function"), respectively, is presented. It is based on general notions of sequential computational strategies and wittingly consistent non-deterministic strategies introduced by the author in the seventies. Although these notions of strategies are old, the definition of the fully abstract models is new, in that it is given level-by-level in the finite type hierarchy. To prove full abstraction and non-dcpo domain-theoretic properties of these models, a theory of computational strategies is developed. This is also an alternative to, and in a sense an analogue of, the later game strategy semantics approaches of Abramsky, Jagadeesan, and Malacaria; Hyland and Ong; and Nickau. In both cases, PCF and PCF^+, there are definable universal (surjective) functionals from numerical functions to any given type, which also makes each of these models unique up to isomorphism. Although such models are non-omega-complete and therefore not continuous in the traditional terminology, they are also proved to be sequentially complete (a weakened form of omega-completeness), "naturally" continuous (with respect to existing directed "pointwise", or "natural", lubs) and also "naturally" omega-algebraic and "naturally" bounded complete -- appropriate generalisations of the ordinary notions of domain theory to the case of non-dcpos. Comment: 50 pages.
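
    As a side note on PCF^+, the "parallel conditional function" can be illustrated by a small denotational toy (an illustrative sketch, not the author's construction): unlike the sequential conditional, it may return a result even when its guard is undefined, provided both branches agree.

```python
# Denotational toy only, not the author's construction: values are either
# ordinary Python values or BOTTOM (divergence).  PCF's sequential
# conditional is strict in its guard; the parallel conditional of PCF^+
# may still return a result when both branches agree, even if the guard
# is undefined.
BOTTOM = None  # stands for the undefined (divergent) value

def seq_if(b, x, y):
    """Sequential conditional: diverges whenever the guard diverges."""
    if b is BOTTOM:
        return BOTTOM
    return x if b else y

def par_if(b, x, y):
    """Parallel conditional: also defined when both branches agree."""
    if b is BOTTOM:
        return x if (x == y and x is not BOTTOM) else BOTTOM
    return x if b else y

print(seq_if(BOTTOM, 3, 3))  # None, i.e. divergence
print(par_if(BOTTOM, 3, 3))  # 3: agreement of the branches rescues the result
```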

    Web 2.0, language resources and standards to automatically build a multilingual named entity lexicon

    This paper proposes to advance the current state of the art in automatic Language Resource (LR) building by taking into consideration three elements: (i) the knowledge available in existing LRs, (ii) the vast amount of information available from the collaborative paradigm that has emerged from Web 2.0, and (iii) the use of standards to improve interoperability. We present a case study in which a set of LRs for different languages (WordNet for English and Spanish and Parole-Simple-Clips for Italian) is extended with Named Entities (NEs) by exploiting Wikipedia and the aforementioned LRs. The practical result is a multilingual NE lexicon connected to these LRs and to two ontologies: SUMO and SIMPLE. Furthermore, the paper addresses interoperability, an important problem currently affecting the Computational Linguistics area, by making use of the ISO LMF standard to encode this lexicon. The different steps of the procedure (mapping, disambiguation, extraction, NE identification and postprocessing) are comprehensively explained and evaluated. The resulting resource contains 974,567, 137,583 and 125,806 NEs for English, Spanish and Italian, respectively. Finally, in order to check the usefulness of the constructed resource, we apply it within a state-of-the-art Question Answering system and evaluate its impact; the NE lexicon improves the system’s accuracy by 28.1%. Compared to previous approaches to building NE repositories, the current proposal represents a step forward in terms of automation, language independence, the number of NEs acquired and the richness of the information represented.
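
    The pipeline sketched above (mapping, disambiguation, extraction, NE identification, postprocessing) can be illustrated with a toy example; all data, category names and the record shape below are invented stand-ins for Wikipedia, the wordnets and the LMF encoding used in the paper.

```python
# Toy illustration of the pipeline with invented stand-ins for Wikipedia
# and the wordnets; the real resource links NEs to WordNet/Parole-Simple-Clips
# and is encoded in ISO LMF, which is not reproduced here.
from typing import Dict

# Hypothetical Wikipedia articles: title -> categories.
WIKI = {
    "Paris": ["Capitals in Europe", "Cities in France"],
    "Paris (mythology)": ["Greek mythology"],
}

# Hypothetical WordNet-like sense inventory: synset id -> keywords.
SENSES = {
    "city.n.01": {"city", "cities", "capital", "capitals"},
    "mythical_being.n.01": {"mythology", "myth", "legend"},
}

def disambiguate(title: str) -> str:
    """Pick the sense whose keywords best overlap the article's categories."""
    words = {w.lower() for category in WIKI[title] for w in category.split()}
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

def lexicon_entry(title: str, lang: str) -> Dict[str, str]:
    """Assemble a minimal lexicon record for one named entity."""
    return {"lemma": title, "language": lang, "sense": disambiguate(title)}

for title in WIKI:
    print(lexicon_entry(title, "eng"))
# {'lemma': 'Paris', 'language': 'eng', 'sense': 'city.n.01'}
# {'lemma': 'Paris (mythology)', 'language': 'eng', 'sense': 'mythical_being.n.01'}
```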