Formalizing, Verifying and Applying ISA Security Guarantees as Universal Contracts
Progress has recently been made on specifying instruction set architectures
(ISAs) in executable formalisms rather than through prose. However, to date,
those formal specifications are limited to the functional aspects of the ISA
and do not cover its security guarantees. We present a novel, general method
for formally specifying an ISAs security guarantees to (1) balance the needs of
ISA implementations (hardware) and clients (software), (2) can be
semi-automatically verified to hold for the ISA operational semantics,
producing a high-assurance mechanically-verifiable proof, and (3) support
informal and formal reasoning about security-critical software in the presence
of adversarial code. Our method leverages universal contracts: software
contracts that express bounds on the authority of arbitrary untrusted code.
Universal contracts can be kept agnostic of software abstractions, and strike
the right balance between requiring sufficient detail for reasoning about
software and preserving implementation freedom of ISA designers and CPU
implementers. We semi-automatically verify universal contracts against Sail
implementations of ISA semantics using our Katamaran tool, a semi-automatic
separation logic verifier for Sail that produces machine-checked proofs for
successfully verified contracts. We demonstrate the generality of our method by
applying it to two ISAs that offer very different security primitives: (1)
MinimalCaps, a custom-built capability machine ISA, and (2) a (somewhat
simplified) version of RISC-V with PMP. We verify a femtokernel using the
security guarantee we have formalized for RISC-V with PMP.
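As a rough illustration of the second security primitive, the following Haskell sketch models a PMP-style access check over a simplified configuration. The types and the first-match rule are hypothetical simplifications for exposition; the paper's actual artifact is the Sail semantics and the Katamaran-verified contract, not this code.

    -- Toy model of a PMP-style access check (simplified, hypothetical;
    -- not the Sail semantics verified in the paper).
    data Perm = R | W | X deriving (Eq, Show)

    -- One PMP entry: an address range and the permissions it grants.
    data PmpEntry = PmpEntry { lo :: Int, hi :: Int, perms :: [Perm] }

    -- The first matching entry decides; no matching entry denies access.
    pmpCheck :: [PmpEntry] -> Int -> Perm -> Bool
    pmpCheck entries addr p =
      case [e | e <- entries, lo e <= addr, addr < hi e] of
        (e : _) -> p `elem` perms e
        []      -> False

    -- A universal contract would bound arbitrary unverified code by such
    -- a check: it may only access addresses the configuration allows.
    main :: IO ()
    main = print (pmpCheck [PmpEntry 0x1000 0x2000 [R, X]] 0x1800 W) -- False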
Bootstrapping extensionality
Intuitionistic type theory is a formal system designed by Per Martin-Löf to be a full-fledged foundation in which to develop constructive mathematics. One particular variant, intensional type theory (ITT), features nice computational properties like decidable type-checking, making it especially suitable for computer implementation. However, as traditionally defined, ITT lacks many vital extensionality principles, such as function extensionality. We would like to extend ITT with the desired extensionality principles while retaining its convenient computational behaviour. To do so, we must first understand the extent of its expressive power, from its strengths to its limitations.
The contents of this thesis are an investigation into intensional type theory, and in particular into its power to express extensional concepts. We begin, in the first part, by developing an extension to the strict setoid model of type theory with a universe of setoids. The model construction is carried out in a minimal intensional type theoretic metatheory, thus providing a way to bootstrap extensionality by ``compiling'' it down to a few building blocks such as inductive families and proof-irrelevance.
In the second part of the thesis we explore inductive-inductive types (IITs) and their relation to simpler forms of induction in an intensional setting. We develop a general method to reduce a subclass of infinitary IITs to inductive families, via an encoding that can be expressed in ITT without any extensionality besides proof-irrelevance. Our results contribute to a further understanding of IITs and the expressive power of intensional type theory, and can be of practical use when formalizing mathematics in proof assistants that do not natively support induction-induction.
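To convey the flavour of the setoid approach, here is a small Haskell analogy, with a Boolean relation and a finite test domain standing in for the proof-relevant equivalences and quantification the thesis actually works with: every type is paired with its own equivalence, and for the function setoid, extensionality holds by construction.

    -- A setoid: a carrier type paired with a chosen equivalence relation.
    -- (Analogy only: in the thesis the relation is a type-valued,
    -- proof-irrelevant equivalence, not a Boolean test over a finite domain.)
    newtype Setoid a = Setoid { eq :: a -> a -> Bool }

    -- The function setoid: two functions are equal iff they agree pointwise.
    -- Function extensionality thus holds by definition of the relation.
    funSetoid :: [a] -> Setoid b -> Setoid (a -> b)
    funSetoid domain sb = Setoid (\f g -> all (\x -> eq sb (f x) (g x)) domain)

    main :: IO ()
    main = do
      let intSetoid = Setoid ((==) :: Int -> Int -> Bool)
          s = funSetoid [0 .. 10] intSetoid
      -- Intensionally different programs, extensionally equal functions:
      print (eq s (\x -> x + x) (\x -> 2 * x)) -- True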
Erasure in dependently typed programming
It is important to reduce the cost of correctness in programming. Dependent types
and related techniques, such as type-driven programming, offer ways to do so.
Some parts of dependently typed programs constitute evidence of their type-correctness
and, once checked, are unnecessary for execution. These parts can easily
become asymptotically larger than the remaining runtime-useful computation, which
can cause linear-time algorithms to run in exponential time, or worse. It would be
unacceptable, and would contradict our goal of reducing the cost of correctness, to make
programs run slower merely by describing them more precisely.
Current systems cannot erase such computation satisfactorily. By modelling
erasure indirectly through type universes or irrelevance, they impose the limitations
of those mechanisms on erasure. Some useless computation then cannot be erased, and
idiomatic programs remain asymptotically sub-optimal.
This dissertation explains why we need erasure and how it differs from related
concepts such as irrelevance, and proposes two ways of erasing non-computational data.
One is an untyped, flow-based useless-variable elimination, adapted for dependently
typed languages and currently implemented in the Idris 1 compiler.
The other is the main contribution of the dissertation: a dependently typed core
calculus with erasure annotations, full dependent pattern matching, and an algorithm
that infers erasure annotations from unannotated (or partially annotated) programs.
I show that erasure in well-typed programs is sound in that it commutes with
single-step reduction. Assuming the Church-Rosser property of reduction, I show
that properties such as Subject Reduction hold, which extends the soundness result
to multi-step reduction. I also show that the presented erasure inference is sound
and complete with respect to the typing rules; that this approach can be extended
with various forms of erasure polymorphism; that it works well with monadic I/O
and foreign functions; and that it is effective in that it not only removes the runtime
overhead caused by dependent typing in the presented examples, but can also shorten
compilation times."This work was supported by the University of St Andrews (School of Computer
Science)." -- Acknowledgement
Modular Collaborative Program Analysis
With our world increasingly relying on computers, it is important to ensure the quality, correctness, security, and performance of software systems. Static analysis that computes properties of computer programs without executing them has been an important method to achieve this for decades. However, static analysis faces major challenges from increasingly complex programming languages and software systems, and from increasing and sometimes conflicting demands for soundness, precision, and scalability. In order to cope with these challenges, it is necessary to build static analyses for complex problems from small, independent, yet collaborating modules that can be developed in isolation and combined in a plug-and-play manner.
So far, no generic architecture exists to implement and combine a broad range of dissimilar static analyses. The goal of this thesis is thus to design such an architecture and implement it as a generic framework for developing modular, collaborative static analyses. We use several diverse case-study analyses from which we systematically derive requirements to guide the design of the framework. Based on this, we propose a blackboard-architecture style of collaboration between analyses, which we implement in the OPAL framework. We also develop a formal model of our architecture's core concepts and show how it enables freely composing analyses while retaining their soundness guarantees.
We showcase and evaluate our architecture using the case-study analyses, each of which shows how important and complex problems of static analysis can be addressed using a modular, collaborative implementation style. In particular, we show how a modular architecture for the construction of call graphs ensures consistent soundness of different algorithms. We show how modular analyses for different aspects of immutability mutually benefit each other. Finally, we show how the analysis of method purity can benefit from the use of other complex analyses in a collaborative manner and from exchanging different analysis implementations that exhibit different characteristics. Each of these case studies improves over the respective state of the art in terms of soundness, precision, and/or scalability and shows how our architecture enables experimenting with and fine-tuning trade-offs between these qualities.
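A toy Haskell rendering of the blackboard style may help fix intuitions (hypothetical property names and a plain integer lattice; OPAL itself is a Scala framework with a far richer lattice and scheduling model): analyses repeatedly post monotonically improving results for named properties until a fixed point is reached.

    import qualified Data.Map as M

    -- The blackboard: named properties mapped to current lattice values
    -- (here just Ints ordered by <=, so `max` is the join).
    type Board = M.Map String Int

    -- An analysis reads the board and recomputes its own property.
    newtype Analysis = Analysis { run :: Board -> (String, Int) }

    -- One round: every analysis posts its result; joining with `max`
    -- keeps updates monotone.
    step :: [Analysis] -> Board -> Board
    step as b = foldl post b as
      where post acc a = let (k, v) = run a acc in M.insertWith max k v acc

    -- Iterate to a fixed point; in a real framework termination follows
    -- from monotone updates over a lattice of finite height.
    solve :: [Analysis] -> Board -> Board
    solve as b = let b' = step as b in if b' == b then b else solve as b'

    main :: IO ()
    main = do
      -- Two collaborating "analyses" (hypothetical): the second refines
      -- its result based on whatever the first has posted so far.
      let a1 = Analysis (\_ -> ("callGraphSize", 3))
          a2 = Analysis (\b -> ("purity", M.findWithDefault 0 "callGraphSize" b + 1))
      print (solve [a1, a2] M.empty)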
Taming complexity of industrial printing systems using a constraint-based DSL: An industrial experience report
Flexible printing systems are highly complex systems that consist of printers, which print individual sheets of paper, and finishing equipment, which processes sheets after printing, for example, assembling a book. Integrating finishing equipment with printers involves the development of control software that configures the devices, taking hardware constraints into account. This control software is highly complex to realize due to (1) the intertwined nature of printing and finishing, (2) the large variety of print products and production options for a given product, and (3) the large range of finishers produced by different vendors. We have developed a domain-specific language called CSX that offers an interface to constraint solving specific to the printing domain. We use it to model printing and finishing devices and to automatically derive constraint solver-based environments for automatic configuration. We evaluate CSX on its coverage of the printing domain in an industrial context, and we report on lessons learned on using a constraint-based DSL in an industrial context.
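To suggest what "hardware constraints" look like in this domain, here is a minimal Haskell sketch of a configuration check. All names and limits are invented for illustration; CSX is a dedicated constraint DSL, and a solver searches the configuration space rather than testing one candidate.

    -- Invented, simplified device models for illustration only.
    data Sheet = Sheet { widthMm :: Int, heightMm :: Int }

    data Finisher = Finisher
      { maxSheetWidthMm :: Int -- widest sheet the finisher accepts
      , maxBookSheets   :: Int -- most sheets it can bind into one book
      }

    -- A booklet configuration is valid only if every constraint holds.
    -- A constraint solver would treat these as constraints to satisfy,
    -- searching over products and production options instead.
    validBooklet :: Finisher -> Sheet -> Int -> Bool
    validBooklet f s nSheets =
      widthMm s <= maxSheetWidthMm f
        && nSheets <= maxBookSheets f
        && even nSheets -- folded signatures need an even sheet count

    main :: IO ()
    main = print (validBooklet (Finisher 330 50) (Sheet 320 450) 24) -- True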
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient
means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and
control idioms as shareable libraries. Effect handlers provide a particularly structured
approach to programming with first-class control by naming control-reifying operations
and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational
foundations for programming and implementing effect handlers and explore
the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically
typed programming language with a structural notion of effect types, as opposed to the
nominal notion of effect types that dominates the literature. With the structural approach,
effects need not be declared before use. The usual safety properties of statically typed
programming are retained by making crucial use of row polymorphism to build and
track effect signatures. The calculus features three forms of handlers: deep, shallow,
and parameterised. They each offer a different approach to manipulating the control state
of programs. Traditional deep handlers are defined by folds over computation trees,
and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are
defined by case splits (rather than folds) over computation trees. Parameterised handlers
are deep handlers extended with a state value that is threaded through the folds over
computation trees. To demonstrate the usefulness of effects and handlers as a practical
programming abstraction I implement the essence of a small UNIX-style operating
system complete with multi-user environment, time-sharing, and file I/O.
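The fold-versus-case-split distinction can be made concrete with a small free-monad sketch in Haskell. This is a loose analogy only: the thesis works in a typed core calculus with row-polymorphic effect types, not in Haskell.

    {-# LANGUAGE DeriveFunctor #-}

    -- Computation trees over an operation signature f (the free monad).
    data Comp f a = Return a | Op (f (Comp f a)) deriving Functor

    instance Functor f => Applicative (Comp f) where
      pure = Return
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Functor f => Monad (Comp f) where
      Return x >>= k = k x
      Op op    >>= k = Op (fmap (>>= k) op)

    -- A deep handler is a fold over the computation tree: the handler is
    -- re-applied to the continuation at every operation node.
    deepHandle :: Functor f => (a -> b) -> (f b -> b) -> Comp f a -> b
    deepHandle ret _   (Return x) = ret x
    deepHandle ret alg (Op op)    = alg (fmap (deepHandle ret alg) op)

    -- A shallow handler is a single case split: it handles one operation
    -- and hands back the raw continuation, to be handled again explicitly.
    shallowHandle :: (a -> b) -> (f (Comp f a) -> b) -> Comp f a -> b
    shallowHandle ret _   (Return x) = ret x
    shallowHandle _   alg (Op op)    = alg op

    -- Example signature with one operation, Choose, and a deep handler
    -- that collects the results of both branches.
    newtype Choice k = Choose (Bool -> k) deriving Functor

    allResults :: Comp Choice a -> [a]
    allResults = deepHandle (\x -> [x]) (\(Choose k) -> k True ++ k False)

    main :: IO ()
    main = print (allResults (Op (Choose (\b -> Return (if b then 1 else 2 :: Int)))))
    -- prints [1,2]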
The second strand studies continuation passing style (CPS) and abstract machine
semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The
CPS translation is obtained through a series of refinements of a basic first-order CPS
translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually
arriving at the notion of generalised continuation, which admits simultaneous support for
deep, shallow, and parameterised handlers. The initial refinement adds support for deep
handlers by representing stacks of continuations and handlers as a curried sequence of
arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this the
CPS translation is refined once more to obtain an uncurried representation of stacks
of continuations and handlers. Finally, the translation is made higher-order in order to
contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for
deep, shallow, and parameterised effect handlers.
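To show the starting point of such a refinement chain, here is a basic first-order call-by-value CPS translation for a tiny untyped lambda calculus, a standard textbook translation sketched in Haskell with a counter for fresh names. The thesis's actual translations additionally thread handler stacks and generalised continuations, which this sketch does not attempt.

    -- Untyped lambda terms, used both as source and target.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Call-by-value CPS translation (Plotkin-style), threading an Int
    -- counter as a supply of fresh variable names. Administrative
    -- redexes are left in; contracting them at translation time is
    -- exactly what the higher-order refinement achieves.
    cps :: Term -> Int -> (Term, Int)
    cps (Var x) n =                       -- [[x]] = \k. k x
      let k = "k" ++ show n
      in (Lam k (App (Var k) (Var x)), n + 1)
    cps (Lam x e) n =                     -- [[\x.e]] = \k. k (\x. [[e]])
      let k        = "k" ++ show n
          (e', n') = cps e (n + 1)
      in (Lam k (App (Var k) (Lam x e')), n')
    cps (App e1 e2) n =                   -- [[e1 e2]] = \k. [[e1]] (\f. [[e2]] (\v. f v k))
      let k         = "k" ++ show n
          f         = "f" ++ show n
          v         = "v" ++ show n
          (e1', n1) = cps e1 (n + 1)
          (e2', n2) = cps e2 n1
      in ( Lam k (App e1' (Lam f (App e2' (Lam v
             (App (App (Var f) (Var v)) (Var k))))))
         , n2 )

    main :: IO ()
    main = print (fst (cps (App (Lam "x" (Var "x")) (Var "y")) 0))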
The third strand explores the expressiveness of effect handlers. First, I show that
deep, shallow, and parameterised notions of handlers are interdefinable by way of typed
macro-expressiveness, which provides a syntactic notion of expressiveness that affirms
the existence of encodings between handlers but provides no information about the
computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class
control (e.g. effect handlers) admits asymptotically faster implementations than are
possible in a language without first-class control.
Modular pre-processing for automated reasoning in dependent type theory
The power of modern automated theorem provers can be put at the service of
interactive theorem proving. But this requires, in particular, bridging the
expressivity gap between the logics on which these provers are respectively based.
This paper presents the implementation of a modular suite of pre-processing
transformations, which incrementally bring certain formulas expressed in the
Calculus of Inductive Constructions closer to the first-order logic of
Satisfiability Modulo Theories solvers. These transformations address issues
related to the axiomatization of inductive types, to polymorphic definitions, or
to different implementations of the same theory signature. The suite is
implemented as a plugin for the Coq proof assistant and integrated into the
SMTCoq toolchain.
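As a flavour of one such transformation, here is a toy monomorphisation pass in Haskell, over an invented mini-language of types rather than the plugin's actual Coq implementation: a polymorphic lemma is copied once for each ground type occurring in the goal, so that the result fits a first-order, monomorphic logic.

    import Data.List (nub)

    -- Invented mini type language for illustration.
    data Ty = TInt | TBool | TList Ty | TVar String
      deriving (Eq, Show)

    -- A named statement together with the type it talks about.
    data Formula = Formula { name :: String, ty :: Ty } deriving Show

    -- Substitute a ground type for a type variable.
    subst :: String -> Ty -> Ty -> Ty
    subst a t (TVar b) | a == b = t
    subst a t (TList u) = TList (subst a t u)
    subst _ _ u = u

    -- Ground types occurring in the goal's type.
    groundTys :: Ty -> [Ty]
    groundTys (TVar _)    = []
    groundTys t@(TList u) = t : groundTys u
    groundTys t           = [t]

    -- Instantiate a lemma polymorphic in `a` at every ground type the
    -- goal mentions, yielding monomorphic copies an SMT solver can use.
    monomorphise :: String -> Formula -> Formula -> [Formula]
    monomorphise a lemma goal =
      [ Formula (name lemma ++ "@" ++ show t) (subst a t (ty lemma))
      | t <- nub (groundTys (ty goal)) ]

    main :: IO ()
    main = mapM_ print
      (monomorphise "a"
         (Formula "length_app" (TList (TVar "a")))
         (Formula "goal" (TList TInt)))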