
    Generalized Strong Preservation by Abstract Interpretation

    Standard abstract model checking relies on abstract Kripke structures which approximate concrete models by gluing together indistinguishable states, namely by a partition of the concrete state space. Strong preservation for a specification language L encodes the equivalence of concrete and abstract model checking of formulas in L. We show how abstract interpretation can be used to design abstract models that are more general than abstract Kripke structures. Accordingly, strong preservation is generalized to abstract interpretation-based models and precisely related to the concept of completeness in abstract interpretation. The problem of minimally refining an abstract model so that it becomes strongly preserving for some language L can be formulated as a minimal domain refinement in abstract interpretation that achieves completeness w.r.t. the logical/temporal operators of L. It turns out that this refined strongly preserving abstract model always exists and can be characterized as a greatest fixed point. As a consequence, some well-known behavioural equivalences, like bisimulation, simulation and stuttering equivalence, and their corresponding partition refinement algorithms can be elegantly characterized in abstract interpretation as completeness properties and refinements.
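
    To make the starting point concrete, the sketch below (a toy Kripke structure with hypothetical states and labels, not taken from the paper) shows the standard construction that the paper generalizes: building an abstract Kripke structure by gluing together the blocks of a partition of the concrete state space.

        # Minimal sketch: abstract (quotient) Kripke structure induced by a partition.
        # States, transitions, labels and the partition are illustrative only.
        transitions = {("s0", "s1"), ("s0", "s2"), ("s1", "s3"), ("s2", "s3"), ("s3", "s3")}
        labels = {"s0": {"init"}, "s1": {"busy"}, "s2": {"busy"}, "s3": {"done"}}

        # A partition gluing together the indistinguishable states s1 and s2.
        partition = [frozenset({"s0"}), frozenset({"s1", "s2"}), frozenset({"s3"})]

        def block_of(state):
            """Return the partition block containing a concrete state."""
            return next(b for b in partition if state in b)

        # Abstract states are blocks; an abstract transition exists iff some concrete
        # transition goes from one block to the other (the existential lift).
        abstract_transitions = {(block_of(s), block_of(t)) for (s, t) in transitions}

        # A block keeps only the propositions shared by all of its members.
        abstract_labels = {b: set.intersection(*(labels[s] for s in b)) for b in partition}

        for src, dst in sorted(abstract_transitions, key=str):
            print(set(src), "->", set(dst))
        for block, props in abstract_labels.items():
            print(set(block), "labelled", props)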

    Logic Meets Algebra: the Case of Regular Languages

    The study of finite automata and regular languages is a privileged meeting point of algebra and logic. Since the work of Büchi, regular languages have been classified according to their descriptive complexity, i.e. the type of logical formalism required to define them. The algebraic point of view on automata is an essential complement of this classification: by providing alternative, algebraic characterizations for the classes, it often yields the only opportunity for the design of algorithms that decide expressibility in some logical fragment. We survey the existing results relating the expressibility of regular languages in logical fragments of MSO[S] with algebraic properties of their minimal automata. In particular, we show that many of the best known results in this area share the same underlying mechanics and rely on a very strong relation between logical substitutions and block-products of pseudovarieties of monoids. We also explain the impact of these connections on circuit complexity theory. Comment: 37 pages.
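
    To illustrate how such decision procedures look in practice, here is a small sketch (toy automaton, not drawn from the survey) that computes the transition monoid of a deterministic automaton and checks aperiodicity, the kind of algebraic property that, by Schützenberger's theorem, characterizes definability in a logical fragment (first-order logic over words) when the automaton is minimal.

        # Sketch: transition monoid of a toy DFA and an aperiodicity test.
        # The automaton is illustrative; for Schützenberger's theorem one would
        # take the minimal automaton of the language under study.
        states = (0, 1)
        delta = {"a": {0: 1, 1: 1}, "b": {0: 0, 1: 0}}

        def letter_fn(letter):
            """The function on states induced by reading one letter."""
            return tuple(delta[letter][q] for q in states)

        def compose(f, g):
            """Apply f, then g (reading a word left to right)."""
            return tuple(g[f[q]] for q in states)

        # Close the generators under composition to obtain the transition monoid.
        identity = tuple(states)
        monoid, frontier = {identity}, [identity]
        generators = [letter_fn(c) for c in ("a", "b")]
        while frontier:
            f = frontier.pop()
            for g in generators:
                h = compose(f, g)
                if h not in monoid:
                    monoid.add(h)
                    frontier.append(h)

        # Aperiodicity: every element m satisfies m^n == m^(n+1) for some n.
        def aperiodic(monoid):
            for m in monoid:
                power = m
                for _ in range(len(monoid) + 1):
                    nxt = compose(power, m)
                    if nxt == power:
                        break
                    power = nxt
                else:
                    return False
            return True

        print("monoid size:", len(monoid), "aperiodic:", aperiodic(monoid))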

    Feature-based and Model-based Semantics for English, French and German Verb Phrases

    This paper considers the relative merits of using features and formal event models to characterise the semantics of English, French and German verb phrases, and considers the application of such semantics in machine translation. The feature-based approach represents the semantics in terms of feature systems, which have been widely used in computational linguistics for representing complex syntactic structures. The paper shows how a simple intuitive semantics of verb phrases may be encoded as a feature system, and how this can be used to support modular construction of automatic translation systems through feature look-up tables. This is illustrated by automated translation of English into either French or German. The paper continues to formalise the feature-based approach via a model-based, Montague semantics, which extends previous work on the semantics of English verb phrases. In so doing, repercussions of and to this framework in conducting a contrastive semantic study are considered. The model-based approach also promises to provide support for a more sophisticated approach to translation through logical proof; the paper indicates further work required for the fulfilment of this promise.
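
    As a rough illustration of the feature look-up idea (the feature names and target-language forms below are hypothetical, not the paper's actual tables), an English verb phrase can be analysed into a small bundle of semantic features which is then realised differently in each target language:

        # Hypothetical sketch of feature-based transfer for verb phrases.
        # Feature names and surface forms are illustrative only.

        # English verb phrases analysed into a small feature system.
        analysis = {
            "has written": {"lemma": "write", "tense": "present", "aspect": "perfect"},
            "is writing":  {"lemma": "write", "tense": "present", "aspect": "progressive"},
        }

        # Per-language look-up tables from (tense, aspect) features to a surface
        # pattern for the lemma "write"; a fuller table would also key on the lemma.
        french_patterns = {
            ("present", "perfect"):     "a écrit",
            ("present", "progressive"): "est en train d'écrire",
        }
        german_patterns = {
            ("present", "perfect"):     "hat geschrieben",
            ("present", "progressive"): "schreibt gerade",
        }

        def translate(vp, patterns):
            """Look up the feature bundle of an English verb phrase in a target table."""
            feats = analysis[vp]
            return patterns[(feats["tense"], feats["aspect"])]

        print(translate("has written", french_patterns))  # a écrit
        print(translate("is writing", german_patterns))   # schreibt gerade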

    Generalizing the Paige-Tarjan Algorithm by Abstract Interpretation

    The Paige and Tarjan algorithm (PT) for computing the coarsest refinement of a state partition which is a bisimulation on some Kripke structure is well known. It is also well known in model checking that bisimulation is equivalent to strong preservation of CTL, or, equivalently, of Hennessy-Milner logic. Drawing on these observations, we analyze the basic steps of the PT algorithm from an abstract interpretation perspective, which allows us to reason on strong preservation in the context of generic inductively defined (temporal) languages and of possibly non-partitioning abstract models specified by abstract interpretation. This leads us to design a generalized Paige-Tarjan algorithm, called GPT, for computing the minimal refinement of an abstract interpretation-based model that strongly preserves some given language. It turns out that PT is a straight instance of GPT on the domain of state partitions for the case of strong preservation of Hennessy-Milner logic. We provide a number of examples showing that GPT is of general use. We first show how a well-known efficient algorithm for computing stuttering equivalence can be viewed as a simple instance of GPT. We then instantiate GPT in order to design a new efficient algorithm for computing simulation equivalence that is competitive with the best available algorithms. Finally, we show how GPT allows us to compute new strongly preserving abstract models by providing an efficient algorithm that computes the coarsest refinement of a given partition that strongly preserves the language generated by the reachability operator. Comment: Keywords: Abstract interpretation, abstract model checking, strong preservation, Paige-Tarjan algorithm, refinement algorithm.
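
    For orientation, the refinement step that PT makes efficient can be sketched naively as follows (the toy transition system and initial partition are ours; this is the textbook splitter loop, not the paper's GPT algorithm):

        # Naive coarsest-partition refinement (the step PT makes efficient).
        # Transition system and initial partition are illustrative only.
        transitions = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}, "s3": {"s2"}}
        initial_partition = [frozenset({"s0", "s1", "s3"}), frozenset({"s2"})]

        def predecessors(block):
            """Concrete states with some successor inside the given block."""
            return {s for s, succs in transitions.items() if succs & block}

        def refine(partition):
            """Split blocks against every block used as a splitter, until stable."""
            partition = list(partition)
            changed = True
            while changed:
                changed = False
                for splitter in list(partition):
                    pre = predecessors(splitter)
                    new_partition = []
                    for block in partition:
                        inside, outside = block & pre, block - pre
                        if inside and outside:
                            new_partition += [frozenset(inside), frozenset(outside)]
                            changed = True
                        else:
                            new_partition.append(block)
                    partition = new_partition
            return partition

        for block in refine(initial_partition):
            print(set(block))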

    Real-time and Probabilistic Temporal Logics: An Overview

    Over the last two decades, there has been an extensive study of logical formalisms for specifying and verifying real-time systems. Temporal logics have been an important research subject within this direction. Although numerous logics have been introduced for the formal specification of real-time and complex systems, an up-to-date comprehensive analysis of these logics does not exist in the literature. In this paper we analyse real-time and probabilistic temporal logics which have been widely used in this field. We examine the notions of decidability, axiomatizability, expressiveness, model checking, etc. for each logic analysed. We also provide a comparison of features of the temporal logics discussed.
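
    By way of illustration (the concrete formulas are ours, not drawn from the paper), typical properties in these two families are a time-bounded response requirement in the style of Metric Temporal Logic and a probabilistic reachability threshold in the style of PCTL:

        \[
        \Box\,\bigl(\mathit{request} \rightarrow \Diamond_{\le 5}\,\mathit{grant}\bigr)
        \qquad\qquad
        \mathbf{P}_{\ge 0.99}\bigl[\,\Diamond\,\mathit{delivered}\,\bigr]
        \]

    The first says that every request is granted within 5 time units; the second that the message is eventually delivered with probability at least 0.99.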

    Changing a semantics: opportunism or courage?

    The generalized models for higher-order logics introduced by Leon Henkin, and their multiple offspring over the years, have become a standard tool in many areas of logic. Even so, discussion has persisted about their technical status, and perhaps even their conceptual legitimacy. This paper gives a systematic view of generalized model techniques, discusses what they mean in mathematical and philosophical terms, and presents a few technical themes and results about their role in algebraic representation, calibrating provability, lowering complexity, understanding fixed-point logics, and achieving set-theoretic absoluteness. We also show how thinking about Henkin's approach to semantics of logical systems in this generality can yield new results, dispelling the impression of ad hocness. This paper is dedicated to Leon Henkin, a deep logician who has changed the way we all work, while also being an always open, modest, and encouraging colleague and friend. Comment: 27 pages. To appear in: The life and work of Leon Henkin: Essays on his contributions (Studies in Universal Logic), eds. Manzano, M., Sain, I. and Alonso, E., 201
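
    For readers meeting general models here for the first time, the core idea can be stated in a line (our paraphrase of the standard definition, not a quotation from the paper): a Henkin general model over a domain $D$ fixes, besides $D$ and an interpretation $I$ of the vocabulary, a family of admissible relations over which the higher-order quantifiers range:

        \[
        \mathcal{M} = \bigl(D,\ \{\mathcal{D}_n \subseteq \mathcal{P}(D^{n})\}_{n \ge 1},\ I\bigr),
        \qquad
        \mathcal{M} \models \exists X^{n}\,\varphi
        \;\Longleftrightarrow\;
        \varphi \text{ holds for some } R \in \mathcal{D}_n .
        \]

    The standard (full) semantics is the special case $\mathcal{D}_n = \mathcal{P}(D^{n})$, and requiring each $\mathcal{D}_n$ to be closed under definability yields general models that validate the usual comprehension axioms.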

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial on the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
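
    As a toy example of the kind of property such verification work establishes (the forwarding rules below are hypothetical and the check is a generic graph traversal, not a specific tool from the survey), one can verify that a set of per-destination forwarding rules is free of loops and black holes:

        # Toy sketch: check that per-destination forwarding rules never loop
        # and never drop packets; real tools encode such checks symbolically.

        # next_hop[destination][router] = router the packet is forwarded to next.
        next_hop = {"d": {"r1": "r2", "r2": "r3", "r3": "d"}}

        def loop_and_blackhole_free(dest, rules, routers):
            """Follow the rules from each router until dest; fail on a repeat or a missing rule."""
            for start in routers:
                seen, node = set(), start
                while node != dest:
                    if node in seen or node not in rules:
                        return False  # forwarding loop or black hole
                    seen.add(node)
                    node = rules[node]
            return True

        print(loop_and_blackhole_free("d", next_hop["d"], ["r1", "r2", "r3"]))  # True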