
    A foundation for synthesising programming language semantics

    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step in developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics, which can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling methods with the size of the language is very difficult due to state space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al. and re-formulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e. extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Two prerequisites are required for comparing search spaces. The first is a series of benchmarks: sets of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties a type system should have if typed programs are to be efficiently enumerable. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules: limiting the usage of variables so that each is used exactly once. I present a linear type system designed to efficiently enumerate translation rules in which this heuristic is enforced. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude.
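
    To make the object of synthesis concrete, the following is a minimal, hypothetical sketch of what desugaring translation rules look like (the constructor and function names are illustrative, not the thesis's own formalisation): a surface language with a 'let' construct is translated into a smaller lambda-calculus core, one rule per construct.

        -- Hypothetical desugaring sketch: a surface language is mapped to a
        -- simplified core whose semantics is easier to specify.
        data Surface
          = SVar String
          | SLam String Surface
          | SApp Surface Surface
          | SLet String Surface Surface   -- let x = e1 in e2

        data Core
          = CVar String
          | CLam String Core
          | CApp Core Core

        -- One translation rule per surface construct; the interesting rule
        -- rewrites 'let x = e1 in e2' as an immediately applied lambda.
        desugar :: Surface -> Core
        desugar (SVar x)       = CVar x
        desugar (SLam x e)     = CLam x (desugar e)
        desugar (SApp e1 e2)   = CApp (desugar e1) (desugar e2)
        desugar (SLet x e1 e2) = CApp (CLam x (desugar e2)) (desugar e1)

    In this reading, synthesising the 'SLet' rule while keeping the others fixed is an instance of the desugaring extension problem, and the linearity heuristic corresponds to each pattern variable (x, e1, e2) occurring exactly once on the right-hand side of its rule.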

    Fragments and frame classes: Towards a uniform proof theory for modal fixed point logics

    This thesis studies the proof theory of modal fixed point logics. In particular, we construct proof systems for various fragments of the modal mu-calculus, interpreted over various classes of frames. With an emphasis on uniform constructions and general results, we aim to bring the relatively underdeveloped proof theory of modal fixed point logics closer to the well-established proof theory of basic modal logic. We employ two main approaches. First, we seek to generalise existing methods for basic modal logic to accommodate fragments of the modal mu-calculus. We use this approach for obtaining Hilbert-style proof systems. Secondly, we adapt existing proof systems for the modal mu-calculus to various classes of frames. This approach yields proof systems which are non-well-founded, or cyclic. The thesis starts with an introduction and some mathematical preliminaries. In Chapter 3 we give hypersequent calculi for modal logic with the master modality, building on work by Ori Lahav. This is followed by an Intermezzo, where we present an abstract framework for cyclic proofs, in which we give sufficient conditions for establishing the bounded proof property. In Chapter 4 we generalise existing work on Hilbert-style proof systems for PDL to the level of the continuous modal mu-calculus. Chapter 5 contains a novel cyclic proof system for the alternation-free two-way modal mu-calculus. Finally, in Chapter 6, we present a cyclic proof system for Guarded Kleene Algebra with Tests and take a first step towards using it to establish the completeness of an algebraic counterpart.
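
    For orientation, the distinguishing feature of the modal mu-calculus is its explicit fixpoint binders. The following are standard background facts rather than results of the thesis: a greatest fixpoint formula is equivalent to its unfolding, and PDL's iteration modality (relevant to Chapter 4) is a canonical greatest fixpoint definition.

        % Fixpoint unfolding (greatest fixpoint; the least fixpoint \mu is dual):
        \nu x.\,\varphi \;\leftrightarrow\; \varphi[\nu x.\,\varphi / x]

        % PDL's iteration modality as a greatest fixpoint:
        [a^{*}]\varphi \;\equiv\; \nu x.\,(\varphi \wedge [a]x)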

    Summaries in English


    Dinaturality Meets Genericity: A Game Semantics of Bounded Polymorphism

    We study subtyping and parametric polymorphism, with the aim of providing direct and tractable semantic representations of type systems with these expressive features. The liveness order uses the Player-Opponent duality of game semantics to give a simple representation of subtyping: we generalize it to include graphs extracted directly from second-order intuitionistic types, and use the resulting complete lattice to interpret bounded polymorphic types in the style of System F_<:, but with a more tractable subtyping relation. To extend this to a semantics of terms, we use the type-derived graphs as arenas, on which strategies correspond to dinatural transformations with respect to the canonical coercions ("on the nose" copycats) induced by the liveness ordering. This relationship between the interpretation of generic and subtype polymorphism thus provides the basis of the semantics of our type system.
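
    As standard background for the bounded polymorphism discussed above (these are the usual System F_<: rules, not this paper's liveness-order reformulation), a bounded type abstraction is typed by extending the context with the bound, and kernel F_<: compares universals with equal bounds; subtyping for full F_<: is undecidable, which motivates the search for a more tractable relation.

        % Typing of bounded type abstraction:
        \frac{\Gamma,\, X <: T_1 \;\vdash\; t : T_2}
             {\Gamma \;\vdash\; \Lambda X <: T_1.\, t \;:\; \forall X <: T_1.\, T_2}

        % Kernel F_<: subtyping for bounded universals (equal bounds):
        \frac{\Gamma,\, X <: U \;\vdash\; S \,<:\, T}
             {\Gamma \;\vdash\; \forall X <: U.\, S \;<:\; \forall X <: U.\, T}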

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances within them. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
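
    To illustrate the property-based testing idea, here is a generic QuickCheck-style sketch, not Cogent's actual framework: a refinement property states that a low-level implementation agrees with a functional specification, and is checked on randomly generated inputs. In the Cogent setting, the specification role is played by the compiler-generated purely functional abstraction.

        import Test.QuickCheck

        -- Hypothetical specification/implementation pair; 'implSum' stands in
        -- for low-level code whose behaviour should refine the spec.
        specSum :: [Int] -> Int
        specSum = sum

        implSum :: [Int] -> Int
        implSum = foldl (+) 0

        -- Refinement property: implementation and specification agree.
        prop_refines :: [Int] -> Bool
        prop_refines xs = implSum xs == specSum xs

        main :: IO ()
        main = quickCheck prop_refines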

    Erasure in dependently typed programming

    It is important to reduce the cost of correctness in programming. Dependent types and related techniques, such as type-driven programming, offer ways to do so. Some parts of dependently typed programs constitute evidence of their type-correctness and, once checked, are unnecessary for execution. These parts can easily become asymptotically larger than the remaining runtime-useful computation, which can cause linear-time algorithms to run in exponential time, or worse. It would be unacceptable, and would contradict our goal of reducing the cost of correctness, to make programs run slower merely by describing them more precisely. Current systems cannot erase such computation satisfactorily. By modelling erasure indirectly through type universes or irrelevance, they inherit the limitations of those mechanisms. Some useless computation then cannot be erased, and idiomatic programs remain asymptotically sub-optimal. This dissertation explains why we need erasure and how it differs from related concepts like irrelevance, and proposes two ways of erasing non-computational data. One is an untyped flow-based useless-variable elimination, adapted for dependently typed languages and currently implemented in the Idris 1 compiler. The other is the main contribution of the dissertation: a dependently typed core calculus with erasure annotations, full dependent pattern matching, and an algorithm that infers erasure annotations from unannotated (or partially annotated) programs. I show that erasure in well-typed programs is sound in that it commutes with single-step reduction. Assuming the Church-Rosser property of reduction, I show that properties such as Subject Reduction hold, which extends the soundness result to multi-step reduction. I also show that the presented erasure inference is sound and complete with respect to the typing rules; that this approach can be extended with various forms of erasure polymorphism; that it works well with monadic I/O and foreign functions; and that it is effective: it not only removes the runtime overhead caused by dependent typing in the presented examples, but can also shorten compilation times.
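
    To illustrate the kind of data that erasure targets, consider length-indexed vectors, here approximated in Haskell (the thesis works in a dependently typed, Idris-like calculus where the index is a first-class term and erasure must be inferred): the length index participates in type checking but need not exist at run time.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        -- Type-level naturals used only as an index.
        data Nat = Z | S Nat

        -- The index n rules out ill-typed programs (e.g. taking the head of
        -- an empty vector), yet no Nat value is stored or passed at run time.
        data Vect (n :: Nat) a where
          VNil  :: Vect 'Z a
          VCons :: a -> Vect n a -> Vect ('S n) a

        vhead :: Vect ('S n) a -> a
        vhead (VCons x _) = x

    In a fully dependently typed language the same index is an ordinary term, and it is precisely the job of erasure inference to discover that it is never needed for computation.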

    Language policy in Luxembourg and the German-speaking Community of Belgium: Language ideologies

    The language policy discourses of Luxembourg and the German-speaking Community of Belgium (GC) exhibit fundamental differences, yet also interesting similarities that have so far not been subject to a discourse analysis from a mixed framework of linguistic anthropology and discourse linguistics (Diskurslinguistik). On the basis of a corpus consisting of current language policy texts and semi-structured interviews with key actors involved in current policy design and implementation, this research aims to answer the question of how ideology and discourse interact in the design and implementation of the language policy of Luxembourg and the GC. The bulk of the analysis is made up of three layers for each case. The starting point is a historical overview that identifies the ideologies and language policy discourses that emerged, predominated, and transformed from the 19th century to the 21st century in each case. The second layer is a discourse analysis of current language policy texts with a focus on the ideologies informing current discourses about Luxembourgish in Luxembourg and German in the GC. Finally, the third layer is a discourse analysis of interview extracts with an equal focus on ideologies. Through a combined thematic and discourse analysis based on the social semiotics of language, this research describes the discursive patterns in the linguistic structure of passages of each text and interview, with the aim of linking these patterns to the identified ideologies that inform the policy discourses. It was found that the connecting node between Luxembourg and the GC lies in the tension between the two themes of standardization and multilingualism: these are thematic centres from which discourses about language, identity, and nation emanate in both cases. Through the combination of the historical overview and the meticulous analysis of discursive patterns identified in the linguistic structure of language policy texts and interview extracts, it is shown not only how ideology informs current language policy discourses in Luxembourg and the GC, but also why language policy discourses transform or sediment over time.

    On marked declaratives, exclamatives, and discourse particles in Castilian Spanish

    This book provides a new perspective on prosodically marked declaratives, wh-exclamatives, and discourse particles in the Madrid variety of Spanish. It argues that some marked forms differ from unmarked forms in that they encode modal evaluations of the at-issue meaning. Two epistemic evaluations that can be shown to be encoded by intonation in Spanish are linguistically encoded surprise, or mirativity, and obviousness. An empirical investigation via an audio-enhanced production experiment finds that mirativity and obviousness are associated with distinct intonational features under constant focus scope, with stances of (dis)agreement showing an impact on obvious declaratives. Wh-exclamatives are found not to differ significantly in intonational marking from neutral declaratives, showing that they need not be miratives. Moreover, we find that intonational marking on different discourse particles in natural dialogue correlates with their meaning contribution without being fully determined by it. In part, these findings quantitatively confirm previous qualitative findings on the meaning of intonational configurations in Madrid Spanish. But they also add new insights into the role intonation plays in the negotiation of commitments and expectations between interlocutors.