    Specification and verification of context conditions for programming languages

    Context conditions, also called static semantics, are the constraints on computer programs that cannot reasonably be expressed by a context-free grammar but that can be checked statically, without considering the execution properties (the semantics) of the program. Such conditions tend to be arbitrary and complex. This thesis presents a new specification formalism called CFF/AML. The formalism is designed to be both useful for specifying programming languages to an environment generator and simple to use. The driving insight behind CFF/AML is that a language specifier conceives of the context condition checks associated with a programming language syntax description in procedural terms. CFF/AML supports this view of context condition specification, thus simplifying the task of the language specifier. CFF/AML has been formally defined by constructing a temporal proof system for the metalanguage; this proof system can also be used to verify CFF/AML specifications. The construction of the temporal proof system for CFF/AML uncovered a deficiency in the existing theory, namely that there was no way to prove subprograms, especially recursive subprograms, correct. The theory was extended to handle recursive subprograms: the approach developed in this thesis allows recursive subprograms to be proven correct in the same way as iterative constructs were handled previously. This thesis makes a number of contributions to Computer Science. An approach to language specification, CFF/AML, is developed that greatly reduces the problems associated with building a language specification for input to a programming language environment generator. The theory of temporal proof systems is extended with a methodology for handling proofs of recursive subprograms. A formal description of the CFF/AML metalanguage has been developed using temporal logic as the framework for the description; this is the first attempt to use temporal logic for such a task. As CFF/AML constructs can be dynamically scoped, this development differs from that required for statically scoped languages. The temporal proof system has also been used to formally prove that context condition specifications are correct. These proofs advance earlier work on formal reasoning about context condition specification, as they allow formal proof of the correctness of evaluations as well as of termination.
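
    As a rough illustration of the kind of check the abstract calls a context condition, and of the procedural view a specifier takes of it, the sketch below checks a declare-before-use rule over a toy abstract syntax tree; the AST shape and the rule are hypothetical and are not drawn from CFF/AML itself.

        # Hypothetical illustration: a "declared before use" context condition,
        # checked procedurally over a toy AST. This is not CFF/AML notation.

        from dataclasses import dataclass

        @dataclass
        class Decl:            # e.g. "var x"
            name: str

        @dataclass
        class Use:             # e.g. "x := x + 1" uses x
            name: str

        @dataclass
        class Block:           # a sequence of declarations, uses, and nested blocks
            items: list

        def check_declared_before_use(block, declared=None):
            """Return a list of error messages; an empty list means the condition holds."""
            declared = set() if declared is None else set(declared)
            errors = []
            for item in block.items:
                if isinstance(item, Decl):
                    if item.name in declared:
                        errors.append(f"duplicate declaration of '{item.name}'")
                    declared.add(item.name)
                elif isinstance(item, Use):
                    if item.name not in declared:
                        errors.append(f"'{item.name}' used before declaration")
                elif isinstance(item, Block):
                    # nested blocks see the enclosing declarations but do not leak their own
                    errors.extend(check_declared_before_use(item, declared))
            return errors

        program = Block([Decl("x"), Use("x"), Use("y")])
        print(check_declared_before_use(program))   # ["'y' used before declaration"]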

    Formal Specification Of Design Patterns: A Comparison Of Three Existing Approaches And Proposing Two-Level Grammars As A New Approach

    Patterns are object-oriented reusable units. The principal idea behind patterns is to capture and reuse the abstractions that expert programmers and designers have formed to solve problems that occur in particular contexts; these abstractions capture the experts' valuable experience in solving such problems. Although patterns are used successfully today, there is no general agreement in the software community on how patterns should be formalized or represented. Various formal specification schemes have been proposed to complement the natural-language description of patterns, aiming to alleviate its inherent ambiguities by reasoning rigorously about the structural and behavioral aspects of patterns. Existing formal specification languages for design patterns have generally failed to provide a standard definition, specification, or representation for patterns, precisely because of this lack of agreement. Moreover, each formal specification is based on a different mathematical formalism, so pattern users who want to understand a pattern must first understand that formalism. In addition to comparing three existing formal specification schemes, the main objective of this research was to lay the foundation for a formal specification scheme that can be understood without delving into the details of the underlying formalism. This work captures and represents the structural aspects of design patterns; capturing their behavioral aspects is a semantic issue and beyond its scope. Two-Level Grammar (TLG) was used to capture and represent the structural aspects of design patterns. The study was conducted on the GoF design patterns [Gamma et al. 1995]. It has already been demonstrated that TLGs can represent the building blocks of object-oriented software systems. The primary advantage of TLGs in defining design patterns is that specifications written in TLGs are understandable thanks to their natural-language-like vocabulary [Edupuganty 1987] [Lee 2003] [Maluszynski 1984]. A TLG representation of the Observer pattern was developed to gauge the feasibility of the proposed representation scheme. TLGs could help pattern users understand the formalized version of patterns more readily than other formal specification methods, whose arcane mathematical notations make them difficult to understand.
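
    For reference, the structural skeleton that the TLG specification targets, the GoF Observer pattern, looks roughly as follows in object-oriented code; this is an illustrative sketch using the usual GoF vocabulary, not the TLG specification itself.

        # Illustrative sketch of the GoF Observer pattern's structure,
        # i.e. the aspect a structural TLG specification would describe.

        class Subject:
            def __init__(self):
                self._observers = []

            def attach(self, observer):
                self._observers.append(observer)

            def detach(self, observer):
                self._observers.remove(observer)

            def notify(self):
                for observer in self._observers:
                    observer.update(self)

        class ConcreteSubject(Subject):
            def __init__(self, state=0):
                super().__init__()
                self._state = state

            def get_state(self):
                return self._state

            def set_state(self, state):
                self._state = state
                self.notify()            # a state change triggers notification

        class Observer:
            def update(self, subject):
                raise NotImplementedError

        class ConcreteObserver(Observer):
            def update(self, subject):
                print("observer saw state", subject.get_state())

        subject = ConcreteSubject()
        subject.attach(ConcreteObserver())
        subject.set_state(42)            # prints: observer saw state 42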

    The Liar Paradox: A Consistent and Semantically Closed Solution

    This thesis develops a new approach to the formal definition of a truth predicate that allows a consistent, semantically closed definition within classical logic. The approach is built on an analysis of the structural properties of languages that make Liar sentences and the paradoxical argument possible. By focusing on these conditions, standard formal definitions of semantics are shown to impose systematic limitations on the definition of formal truth predicates. The alternative approach to the formal definition of truth is developed by analysing our intuitive procedure for evaluating the truth value of sentences like "P is true". It is observed that the standard procedure breaks down in the case of the Liar Paradox as a side effect of the patterns of naming or reference necessary to the definition of truth as a predicate. This means there are two ways in which a sentence like "P is true" can fail to be true, which requires that the T-Schema be modified for such sentences. By modifying the T-Schema, and by taking seriously the effects of the patterns of naming/reference on truth values, the new approach to the definition of truth is developed. Formal truth definitions are constructed within classical logic that provide an explicit and adequate truth definition for their own language; every sentence of these languages has a truth value, and there is no Strengthened Liar Paradox. This approach to solving the Liar Paradox can be applied straightforwardly to a very wide range of languages, including natural languages.
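
    For orientation, the standard (unmodified) T-Schema and the familiar Liar argument it licenses are shown below in LaTeX; the thesis's modified schema is not reproduced here.

        % Standard Tarskian T-Schema: for every sentence P of the language,
        \[ T(\ulcorner P \urcorner) \leftrightarrow P \]
        % The Liar sentence L says of itself that it is not true:
        \[ L \leftrightarrow \neg T(\ulcorner L \urcorner) \]
        % Instantiating the T-Schema with L and chaining the biconditionals yields
        \[ T(\ulcorner L \urcorner) \leftrightarrow \neg T(\ulcorner L \urcorner) \]
        % a contradiction in classical logic; the thesis blocks this step by
        % modifying the T-Schema for sentences whose naming/reference pattern
        % makes them Liar-like.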

    Maia and Mandos: Tools for Integrity Protection on Arbitrary Files

    We present the results of our dissertation research, which focuses on practical means of protecting system data integrity. In particular, we present Maia, a language for describing integrity constraints on arbitrary file types, and Mandos, a Linux Security Module that uses verify-on-close to enforce mandatory integrity guarantees. We also provide details of a Maia-based verifier generator, demonstrate that Maia and Mandos introduce minimal delay in performing their tasks, and include a selection of sample Maia specifications.
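
    Maia's concrete syntax and Mandos's kernel interface are not given in the abstract, so the sketch below only illustrates the general idea at user level: a verifier encoding an integrity constraint for one file type, plus a verify-on-close wrapper that refuses to install a file failing that constraint. All names and the JSON-based constraint are invented; this is neither Maia nor a Linux Security Module.

        # Hypothetical user-level illustration of "integrity constraints on a
        # file type" and "verify-on-close"; not Maia syntax, not an LSM.

        import json
        import os

        def verify_json_config(path):
            """Integrity constraint for a hypothetical config-file type: the file
            must parse as JSON and contain a top-level "version" field."""
            try:
                with open(path, "r", encoding="utf-8") as f:
                    data = json.load(f)
            except (OSError, ValueError) as e:
                return False, f"unreadable or malformed JSON: {e}"
            if not isinstance(data, dict) or "version" not in data:
                return False, 'missing top-level "version" field'
            return True, "ok"

        class VerifiedWriter:
            """Write to a temporary file and verify it on close; only a file that
            passes its verifier replaces the real target (verify-on-close)."""
            def __init__(self, path, verifier):
                self.path, self.verifier = path, verifier
                self.tmp = path + ".tmp"
                self.f = open(self.tmp, "w", encoding="utf-8")

            def write(self, text):
                self.f.write(text)

            def close(self):
                self.f.close()
                ok, reason = self.verifier(self.tmp)
                if not ok:
                    os.remove(self.tmp)
                    raise ValueError(f"integrity check failed: {reason}")
                os.replace(self.tmp, self.path)

        w = VerifiedWriter("app.conf", verify_json_config)
        w.write('{"version": 1, "debug": false}')
        w.close()   # succeeds; a malformed file would be rejected instead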

    Vector Semantics

    This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. Although these two schools both have ‘linguistics’ in their name, there has so far been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures by treating each dictionary definition as an equation and the entire lexicon as a set of equations mutually constraining all meanings.
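
    The book's formal apparatus is built on linear polytopes; as a much simpler stand-in for the idea of "the lexicon as a set of equations mutually constraining all meanings", the sketch below treats a couple of toy definitions as linear equations over word vectors and solves for every vector at once by least squares. The words, equations, and dimensionality are all invented for illustration and are not taken from the book.

        # Toy illustration of "each dictionary definition is an equation":
        # a definition like  puppy = young + dog  becomes a linear constraint
        # on word vectors, and the whole lexicon is solved as one system.

        import numpy as np

        words = ["dog", "young", "puppy", "pup"]
        idx = {w: i for i, w in enumerate(words)}
        n, dim = len(words), 4
        rng = np.random.default_rng(0)

        # Each equation is a coefficient row over the vocabulary: sum_w coef[w] * vec(w) = 0.
        equations = [
            {"puppy": 1, "young": -1, "dog": -1},   # puppy = young + dog
            {"pup": 1, "puppy": -1},                # pup   = puppy (a synonym definition)
        ]
        # Anchor rows pin "dog" and "young" to fixed vectors so the all-zero
        # solution is excluded.
        anchors = {"dog": rng.normal(size=dim), "young": rng.normal(size=dim)}

        A = np.zeros((len(equations) + len(anchors), n))
        b = np.zeros((A.shape[0], dim))
        for r, eq in enumerate(equations):
            for w, c in eq.items():
                A[r, idx[w]] = c
        for r, (w, vec) in enumerate(anchors.items(), start=len(equations)):
            A[r, idx[w]] = 1.0
            b[r] = vec

        # Solve the whole "lexicon" at once by least squares: row i of V is vec(words[i]).
        V, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.allclose(V[idx["puppy"]], V[idx["young"]] + V[idx["dog"]]))   # True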

    Decidability of Conversion for Type Theory in Type Theory

    Type theory should be able to handle its own meta-theory, both to justify its foundational claims and to obtain a verified implementation. At the core of a type checker for intensional type theory lies an algorithm to check equality of types, or in other words, to check whether two types are convertible. We have formalized in Agda a practical conversion checking algorithm for a dependent type theory with one universe à la Russell, natural numbers, and η-equality for Π types. We prove the algorithm correct via a Kripke logical relation parameterized by a suitable notion of equivalence of terms. We then instantiate the parameterized fundamental lemma twice: once to obtain canonicity and injectivity of type formers, and once again to prove the completeness of the algorithm. Our proof relies on inductive-recursive definitions, but not on the uniqueness of identity proofs. Thus, it is valid in variants of intensional Martin-Löf Type Theory as long as they support induction-recursion, for instance, Extensional, Observational, or Homotopy Type Theory.
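
    The paper's algorithm is formalized in Agda for a full dependent type theory; as a loose analogue only, the sketch below decides β-convertibility of closed untyped lambda terms by normalization-by-evaluation followed by syntactic comparison of normal forms. It omits types, the universe, η for Π, and the Kripke logical relation entirely.

        # Loose analogue of conversion checking: decide beta-convertibility of
        # closed untyped lambda terms (de Bruijn indices) by normalization-by-
        # evaluation and comparison of normal forms.

        from dataclasses import dataclass

        @dataclass
        class Var:
            ix: int        # de Bruijn index

        @dataclass
        class Lam:
            body: object

        @dataclass
        class App:
            fn: object
            arg: object

        @dataclass
        class VLam:        # closure value
            env: tuple
            body: object

        @dataclass
        class VNe:         # neutral value: variable at level `lvl` applied to a spine
            lvl: int
            spine: tuple

        def eval_(t, env):
            if isinstance(t, Var):
                return env[-1 - t.ix]              # index 0 is the innermost binder
            if isinstance(t, Lam):
                return VLam(env, t.body)
            return apply_(eval_(t.fn, env), eval_(t.arg, env))

        def apply_(f, a):
            if isinstance(f, VLam):
                return eval_(f.body, f.env + (a,))
            return VNe(f.lvl, f.spine + (a,))      # stuck application

        def quote(v, depth):
            """Read a value back into a normal-form term under `depth` binders."""
            if isinstance(v, VLam):
                fresh = VNe(depth, ())
                return Lam(quote(apply_(v, fresh), depth + 1))
            t = Var(depth - 1 - v.lvl)             # convert level back to index
            for a in v.spine:
                t = App(t, quote(a, depth))
            return t

        def convertible(t1, t2):
            """Two closed terms are convertible iff their normal forms coincide."""
            return quote(eval_(t1, ()), 0) == quote(eval_(t2, ()), 0)

        church_k = Lam(Lam(Var(1)))                                 # \x. \y. x
        print(convertible(App(Lam(Var(0)), church_k), church_k))    # True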
