Cyclic proof systems for modal fixpoint logics
This thesis is about cyclic and ill-founded proof systems for modal fixpoint logics, with and without explicit fixpoint quantifiers. Cyclic and ill-founded proof theory allows proofs with infinite branches or paths, as long as they satisfy correctness conditions ensuring the validity of the conclusion. In this dissertation we design several cyclic and ill-founded systems: a cyclic one for the weak Grzegorczyk modal logic K4Grz, based on our explanation of the phenomenon of cyclic companionship; and ill-founded and cyclic ones for the full computation tree logic CTL* and the intuitionistic linear-time temporal logic iLTL. All systems are cut-free, and the cyclic ones for K4Grz and iLTL have fully finitary correctness conditions. Lastly, we use a cyclic system for the modal mu-calculus to obtain a proof of the uniform interpolation property for the logic which differs from the original, automata-based one.
Probabilistic Programming Interfaces for Random Graphs: Markov Categories, Graphons, and Nominal Sets
We study semantic models of probabilistic programming languages over graphs, and establish a connection to graphons from graph theory and combinatorics. We show that every well-behaved equational theory for our graph probabilistic programming language corresponds to a graphon, and conversely, every graphon arises in this way. We provide three constructions for showing that every graphon arises from an equational theory. The first is an abstract construction, using Markov categories and monoidal indeterminates. The second and third are more concrete. The second is in terms of traditional measure-theoretic probability, which covers 'black-and-white' graphons. The third is in terms of probability monads on the nominal sets of Gabbay and Pitts. Specifically, we use a variation of nominal sets induced by the theory of graphs, which covers Erdős-Rényi graphons. In this way, we build new models of graph probabilistic programming from graphons.
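The sampling semantics connecting graphons to random graphs can be sketched concretely. A graphon is a symmetric measurable function w : [0,1]² → [0,1]; sampling a finite graph from it means drawing a latent position for each vertex and connecting pairs independently. The following minimal Python sketch is ours, not the paper's language; the constant graphon recovers the Erdős-Rényi model, and the 0/1-valued example is a 'black-and-white' graphon.

```python
import random

def sample_graph(w, n, seed=None):
    """Sample an n-vertex random graph from a graphon w : [0,1]^2 -> [0,1].

    Each vertex i gets a latent position u_i ~ Uniform[0,1]; edge {i, j}
    is included independently with probability w(u_i, u_j).
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < w(u[i], u[j]):
                edges.add((i, j))
    return edges

# The Erdos-Renyi graphon is the constant function w(x, y) = p.
erdos_renyi = lambda x, y: 0.5

# A 'black-and-white' (0/1-valued) graphon: a threshold (half) graphon.
threshold = lambda x, y: 1.0 if x + y > 1 else 0.0

g = sample_graph(threshold, 50, seed=1)
```

Different graphons thus induce different distributions on finite graphs, which is the object the equational theories in the paper classify.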
Investigations into Proof Structures
We introduce and elaborate a novel formalism for the manipulation and analysis of proofs as objects in a global manner. In this first approach the formalism is restricted to first-order problems characterized by condensed detachment. It is applied in an exemplary manner to a coherent and comprehensive formal reconstruction and analysis of historical proofs of a widely studied problem due to Łukasiewicz. The underlying approach opens the door towards new systematic ways of generating lemmas in the course of proof search, to the effect of reducing the search effort and finding shorter proofs. Among the numerous reported experiments along this line, a proof of Łukasiewicz's problem was automatically discovered that is much shorter than any proof found before by man or machine.
Comment: This article is a continuation of arXiv:2104.1364
Bridging Causal Reversibility and Time Reversibility: A Stochastic Process Algebraic Approach
Causal reversibility blends reversibility and causality for concurrent
systems. It indicates that an action can be undone provided that all of its
consequences have been undone already, thus making it possible to bring the
system back to a past consistent state. Time reversibility is instead
considered in the field of stochastic processes, mostly for efficient analysis
purposes. A performance model based on a continuous-time Markov chain is time
reversible if its stochastic behavior remains the same when the direction of
time is reversed. We bridge these two theories of reversibility by showing the
conditions under which causal reversibility and time reversibility are both
ensured by construction. This is done in the setting of a stochastic process
calculus, which is then equipped with a variant of stochastic bisimilarity
accounting for both forward and backward directions.
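The time-reversibility condition invoked above is the classical detailed-balance criterion: a CTMC with rate matrix Q and stationary distribution π is time reversible if and only if π_i · q_ij = π_j · q_ji for all i ≠ j. A minimal sketch (the example chains and names are ours); birth-death chains are the standard reversible example, while a biased cycle is the standard non-reversible one.

```python
def is_time_reversible(Q, pi, tol=1e-9):
    """Check detailed balance: pi[i]*Q[i][j] == pi[j]*Q[j][i] for i != j."""
    n = len(Q)
    return all(abs(pi[i] * Q[i][j] - pi[j] * Q[j][i]) <= tol
               for i in range(n) for j in range(n) if i != j)

# Birth-death CTMC with birth rate lam and death rate mu (always reversible).
lam, mu = 1.0, 2.0
Q_bd = [[-lam, lam, 0.0],
        [mu, -(mu + lam), lam],
        [0.0, mu, -mu]]
# Its stationary distribution: pi_i proportional to (lam/mu)**i.
w = [(lam / mu) ** i for i in range(3)]
pi_bd = [x / sum(w) for x in w]

# A 3-state cycle with unequal clockwise/counterclockwise rates: the uniform
# distribution is stationary, but detailed balance fails.
Q_cycle = [[-3.0, 2.0, 1.0],
           [1.0, -3.0, 2.0],
           [2.0, 1.0, -3.0]]
pi_cycle = [1 / 3, 1 / 3, 1 / 3]
```

Reversing time in the cycle swaps the two rotation rates, so its stochastic behavior changes; the birth-death chain's does not.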
An Infinite Needle in a Finite Haystack: Finding Infinite Counter-Models in Deductive Verification
First-order logic, and quantifiers in particular, are widely used in
deductive verification. Quantifiers are essential for describing systems with
unbounded domains, but prove difficult for automated solvers. Significant
effort has been dedicated to finding quantifier instantiations that establish
unsatisfiability, thus ensuring validity of a system's verification conditions.
However, in many cases the formulas are satisfiable: this is often the case in
intermediate steps of the verification process. For such cases, existing tools
are limited to finding finite models as counterexamples. Yet, some quantified
formulas are satisfiable but only have infinite models. Such infinite
counter-models are especially typical when first-order logic is used to
approximate inductive definitions such as linked lists or the natural numbers.
The inability of solvers to find infinite models makes them diverge in these
cases. In this paper, we tackle the problem of finding such infinite models.
These models allow the user to identify and fix bugs in the modeling of the
system and its properties. Our approach consists of three parts. First, we
introduce symbolic structures as a way to represent certain infinite models.
Second, we describe an effective model finding procedure that symbolically
explores a given family of symbolic structures. Finally, we identify a new
decidable fragment of first-order logic that extends and subsumes the
many-sorted variant of EPR, where satisfiable formulas always have a model
representable by a symbolic structure within a known family. We evaluate our
approach on examples from the domains of distributed consensus protocols and of
heap-manipulating programs. Our implementation quickly finds infinite
counter-models that demonstrate the source of verification failures in a simple
way, while SMT solvers and theorem provers such as Z3, cvc5, and Vampire
diverge.
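A classic example of a satisfiable formula with only infinite models, of the kind described above, is a strict order in which every element has a successor: irreflexivity, transitivity, and seriality together rule out every finite model (following successors in a finite domain forces a cycle, which transitivity turns into a reflexive edge), yet (ℕ, <) satisfies all three axioms. A small brute-force search (ours, not the paper's procedure) makes the failure of finite model finding concrete:

```python
from itertools import product

def is_model(n, R):
    """Check the three axioms on domain {0, ..., n-1} for relation R."""
    irreflexive = all((x, x) not in R for x in range(n))
    transitive = all((x, z) in R
                     for (x, y) in R for (y2, z) in R if y == y2)
    serial = all(any((x, y) in R for y in range(n)) for x in range(n))
    return irreflexive and transitive and serial

def find_finite_model(max_size):
    """Enumerate every binary relation on domains of size 1..max_size."""
    for n in range(1, max_size + 1):
        pairs = list(product(range(n), repeat=2))
        for bits in product([False, True], repeat=len(pairs)):
            R = {p for p, b in zip(pairs, bits) if b}
            if is_model(n, R):
                return n, R
    return None  # no finite model exists at any size
```

The exhaustive search comes back empty at every bound, which is exactly why a solver restricted to finite models diverges on such inputs.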
Bootstrapping extensionality
Intuitionistic type theory is a formal system designed by Per Martin-Löf to be a full-fledged foundation in which to develop constructive mathematics. One particular variant, intensional type theory (ITT), features nice computational properties like decidable type-checking, making it especially suitable for computer implementation. However, as traditionally defined, ITT lacks many vital extensionality principles, such as function extensionality. We would like to extend ITT with the desired extensionality principles while retaining its convenient computational behaviour. To do so, we must first understand the extent of its expressive power, from its strengths to its limitations.
The contents of this thesis are an investigation into intensional type theory, and in particular into its power to express extensional concepts. We begin, in the first part, by developing an extension to the strict setoid model of type theory with a universe of setoids. The model construction is carried out in a minimal intensional type theoretic metatheory, thus providing a way to bootstrap extensionality by ``compiling'' it down to a few building blocks such as inductive families and proof-irrelevance.
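To fix intuitions: a setoid is just a carrier type packaged with an equivalence relation, and in the setoid model function extensionality holds by construction, because functions are compared pointwise. A small Lean 4 sketch of the idea (our own definitions, not the thesis's construction):

```lean
-- A setoid: a carrier with an equivalence relation on it. "Extensional"
-- types are modelled as setoids, and functions between them as maps
-- respecting the relations.
structure MySetoid where
  carrier : Type
  rel : carrier → carrier → Prop
  refl : ∀ x, rel x x
  symm : ∀ {x y}, rel x y → rel y x
  trans : ∀ {x y z}, rel x y → rel y z → rel x z

-- In the setoid of functions, two functions are "equal" iff they are
-- pointwise related, so function extensionality holds by definition.
def funSetoid (A : Type) (B : MySetoid) : MySetoid where
  carrier := A → B.carrier
  rel f g := ∀ a, B.rel (f a) (g a)
  refl f a := B.refl (f a)
  symm h a := B.symm (h a)
  trans h₁ h₂ a := B.trans (h₁ a) (h₂ a)
```

The thesis's contribution is carrying this kind of construction out inside a minimal intensional metatheory, including a universe of setoids.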
In the second part of the thesis we explore inductive-inductive types (IITs) and their relation to simpler forms of induction in an intensional setting. We develop a general method to reduce a subclass of infinitary IITs to inductive families, via an encoding that can be expressed in ITT without any extensionality besides proof-irrelevance. Our results contribute to a further understanding of IITs and the expressive power of intensional type theory, and can be of practical use when formalizing mathematics in proof assistants that do not natively support induction-induction.
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools providing abstraction and automation, development can be extremely costly due to the sheer complexity of these systems and the nuances in them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
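The idea behind such a property-based testing framework can be sketched independently of Cogent: generate random inputs and check that a low-level implementation agrees with its functional specification on each of them, surfacing a counterexample before attempting a full refinement proof. A minimal Python sketch (a hypothetical example of the approach, not Cogent's actual API):

```python
import random

# Functional specification: sum a list.
def spec_sum(xs):
    return sum(xs)

# "Low-level" implementation standing in for compiled code under test:
# accumulates destructively over a buffer, as low-level code might.
def impl_sum(buf):
    acc = 0
    i = 0
    while i < len(buf):
        acc += buf[i]
        buf[i] = 0  # destructive update
        i += 1
    return acc

def refines(impl, spec, trials=1000, seed=0):
    """Random refinement check: on every generated input, the
    implementation's result must agree with the specification's."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if impl(list(xs)) != spec(xs):
            return False  # counterexample found
    return True
```

A failing check pinpoints a concrete input on which implementation and specification disagree, which is exactly the feedback a developer wants before investing in a formal proof.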
Revisiting Language Support for Generic Programming: When Genericity Is a Core Design Goal
Context: Generic programming, as defined by Stepanov, is a methodology for writing efficient and reusable algorithms by considering only the required properties of their underlying data types and operations. Generic programming has proven to be an effective means of constructing libraries of reusable software components in languages that support it. Generics-related language design choices play a major role in how conducive generic programming is in practice.
Inquiry: Several mainstream programming languages (e.g. Java and C++) were first created without generics; features to support generic programming were added later, gradually. Much of the existing literature on supporting generic programming thus focuses on retrofitting generic programming into existing languages and identifying related implementation challenges. Is the programming experience significantly better, or different, when programming with a language designed for generic programming without limitations from prior language design choices?
Approach: We examine Magnolia, a language designed to embody generic programming. Magnolia is representative of an approach to language design rooted in algebraic specifications. We repeat a well-known experiment, where we put Magnolia's generic programming facilities under scrutiny by implementing a subset of the Boost Graph Library, and reflect on our development experience.
Knowledge: We discover that the idioms identified as key features for supporting Stepanov-style generic programming in previous studies and work on the topic do not tell the full story. We clarify which of them are more of a means to an end, rather than fundamental features for supporting generic programming. Based on the development experience with Magnolia, we identify variadics as an additional key feature for generic programming and point out limitations and challenges of genericity by property.
Grounding: Our work uses a well-known framework from the literature for evaluating the generic programming facilities of a language to evaluate the algebraic approach through Magnolia, and we draw comparisons with well-known programming languages.
Importance: This work gives a fresh perspective on generic programming, and clarifies what the fundamental language properties and their trade-offs are when considering supporting Stepanov-style generic programming. The understanding of how to set the ground for generic programming will inform future language design.
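Stepanov-style genericity, writing an algorithm against only the properties it requires of its types, can be illustrated outside Magnolia as well. A minimal Python sketch using a structural protocol as the "concept" (our example, not drawn from the paper):

```python
from typing import Iterable, Protocol, TypeVar

class Comparable(Protocol):
    """The concept: the only operation the algorithm requires."""
    def __lt__(self, other) -> bool: ...

T = TypeVar("T", bound=Comparable)

def minimum(xs: Iterable[T]) -> T:
    """Generic minimum: works for any type providing '<', and for any
    iterable, because nothing else is assumed about T or the container."""
    it = iter(xs)
    best = next(it)  # raises StopIteration on an empty input
    for x in it:
        if x < best:
            best = x
    return best
```

The same algorithm now works unchanged for numbers, strings, or any user-defined type with an ordering; tightening or loosening the protocol is the language-level analogue of the concept-design choices the paper studies.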