Verifying That a Compiler Preserves Concurrent Value-Dependent Information-Flow Security
It is common to prove by reasoning over source code that programs do not leak sensitive data. But doing so leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. This task is complicated when programs enforce value-dependent information-flow security properties (in which classification of locations can vary depending on values in other locations) and complicated further when programs exploit shared-variable concurrency.
Prior work has formally defined a notion of concurrency-aware refinement for preserving value-dependent security properties. However, that notion is considerably more complex than standard refinement definitions typically applied in the verification of semantics preservation by compilers. To date it remains unclear whether it can be applied to a realistic compiler, because there exist no general decomposition principles for separating it into smaller, more familiar, proof obligations.
In this work, we provide such a decomposition principle, which we show can almost halve the complexity of proving secure refinement. Further, we demonstrate its applicability to secure compilation, by proving in Isabelle/HOL the preservation of value-dependent security by a proof-of-concept compiler from an imperative While language to a generic RISC-style assembly language, for programs with shared-memory concurrency mediated by locking primitives. Finally, we execute our compiler in Isabelle on a While language model of the Cross Domain Desktop Compositor, demonstrating to our knowledge the first use of a compiler verification result to carry an information-flow security property down to the assembly-level model of a non-trivial concurrent program.
Secure Compilation (Dagstuhl Seminar 18201)
Secure compilation is an emerging field that brings together advances in security, programming languages, verification, systems, and hardware architectures in order to devise secure compilation chains that eliminate many of today's vulnerabilities. Secure compilation aims to protect a source language's abstractions in compiled code, even against low-level attacks. For a concrete example, all modern languages provide a notion of structured control flow, and an invoked procedure is expected to return to the right place. However, today's compilation chains (compilers, linkers, loaders, runtime systems, hardware) cannot efficiently enforce this abstraction: linked low-level code can call and return to arbitrary instructions or smash the stack, blatantly violating the high-level abstraction. The emerging secure compilation community aims to address such problems by devising formal security criteria, efficient enforcement mechanisms, and effective proof techniques.
This seminar strove to take a broad and inclusive view of secure compilation and to provide a forum for discussion on the topic. The goal was to identify interesting research directions and open challenges by bringing together people working on building secure compilation chains, on developing proof techniques and verification tools, and on designing security mechanisms.
Analysing Java's safety guarantees under concurrency
Two features distinguish Java from other mainstream programming languages like C and C++: its built-in support for concurrency and safety guarantees such as type safety or safe execution in a sandbox. In this work, we build a formal, unified model of Java concurrency, validate it empirically, and analyse it with respect to the safety guarantees using a proof assistant. We show that type safety and Java's data race freedom guarantee hold. Our analysis, however, revealed a weakness in the Java security architecture, because the Java memory model theoretically allows pointer forgery. As a result, this work clarifies the specification of the Java memory model.
Formal Verification of a Constant-Time Preserving C Compiler
Timing side-channels are arguably one of the main sources of vulnerabilities in cryptographic implementations. One effective mitigation against timing side-channels is to write programs that do not perform secret-dependent branches and memory accesses. This mitigation, known as "cryptographic constant-time", is adopted by several popular cryptographic libraries.
This paper focuses on compilation of cryptographic constant-time programs, and more specifically on the following question: is the code generated by a realistic compiler for a constant-time source program itself provably constant-time? Surprisingly, we answer the question positively for a mildly modified version of the CompCert compiler, a formally verified and moderately optimizing compiler for C. Concretely, we modify the CompCert compiler to eliminate sources of potential leakage. Then, we instrument the operational semantics of CompCert intermediate languages so as to be able to capture cryptographic constant-time. Finally, we prove that the modified CompCert compiler preserves constant-time. Our mechanization maximizes reuse of the CompCert correctness proof, through the use of new proof techniques for proving preservation of constant-time. These techniques achieve complementary trade-offs between generality and tractability of proof effort, and are of independent interest.
Secure Compilation of Side-Channel Countermeasures: The Case of Cryptographic “Constant-Time”
Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in the absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed. We consider the problem of preserving side-channel countermeasures by compilation for cryptographic "constant-time", a popular countermeasure against cache-based timing attacks. We present a general method, based on the notion of constant-time-simulation, for proving that a compilation pass preserves the constant-time countermeasure. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations.
Journey Beyond Full Abstraction: Exploring Robust Property Preservation for Secure Compilation
Good programming languages provide helpful abstractions for writing secure code, but the security properties of the source language are generally not preserved when compiling a program and linking it with adversarial code in a low-level target language (e.g., a library or a legacy application). Linked target code that is compromised or malicious may, for instance, read and write the compiled program's data and code, jump to arbitrary memory locations, or smash the stack, blatantly violating any source-level abstraction. By contrast, a fully abstract compilation chain protects source-level abstractions all the way down, ensuring that linked adversarial target code cannot observe more about the compiled program than what some linked source code could about the source program. However, while research in this area has so far focused on preserving observational equivalence, as needed for achieving full abstraction, there is a much larger space of security properties one can choose to preserve against linked adversarial code. And the precise class of security properties one chooses crucially impacts not only the supported security goals and the strength of the attacker model, but also the kind of protections a secure compilation chain has to introduce.
We are the first to thoroughly explore a large space of formal secure compilation criteria based on robust property preservation, i.e., the preservation of properties satisfied against arbitrary adversarial contexts. We study robustly preserving various classes of trace properties such as safety, of hyperproperties such as noninterference, and of relational hyperproperties such as trace equivalence. This leads to many new secure compilation criteria, some of which are easier to practically achieve and prove than full abstraction, and some of which provide strictly stronger security guarantees. For each of the studied criteria we propose an equivalent "property-free" characterization that clarifies which proof techniques apply. For relational properties and hyperproperties, which relate the behaviors of multiple programs, our formal definitions of the property classes themselves are novel. We order our criteria by their relative strength and show several collapse and separation results. Finally, we adapt existing proof techniques to show that even the strongest of our secure compilation criteria, the robust preservation of all relational hyperproperties, is achievable for a simple translation from a statically typed to a dynamically typed language.
Lessons from Formally Verified Deployed Software Systems (Extended version)
The technology of formal software verification has made spectacular advances,
but how much does it actually benefit the development of practical software?
Considerable disagreement remains about the practicality of building systems
with mechanically-checked proofs of correctness. Is this prospect confined to a
few expensive, life-critical projects, or can the idea be applied to a wide
segment of the software industry?
To help answer this question, the present survey examines a range of
projects, in various application areas, that have produced formally verified
systems and deployed them for actual use. It considers the technologies used,
the form of verification applied, the results obtained, and the lessons that
can be drawn for the software industry at large and its ability to benefit from
formal verification techniques and tools.
Note: a short version of this paper is also available, covering in detail
only a subset of the considered systems. The present version is intended for
full reference.
Mechanising and evolving the formal semantics of WebAssembly: the Web's new low-level language
WebAssembly is the first new programming language to be supported natively by all major Web browsers since JavaScript. It is designed to be a natural low-level compilation target for languages such as C, C++, and Rust, enabling programs written in these languages to be compiled and executed efficiently on the Web. WebAssembly’s specification is managed by the W3C WebAssembly Working Group (made up of representatives from a number of major tech companies). Uniquely, the language is specified by way of a full pen-and-paper formal semantics.
This thesis describes a number of ways in which I have both helped to shape the specification of WebAssembly, and built upon it. By mechanising the WebAssembly formal semantics in Isabelle/HOL while it was being drafted, I discovered a number of errors in the specification, drove the adoption of official corrections, and provided the first type soundness proof for the corrected language. This thesis also details a verified type checker and interpreter, and a security type system extension for cryptography primitives, all of which have been mechanised as extensions of my initial WebAssembly mechanisation.
A major component of the thesis is my work on the specification of shared memory concurrency in Web languages: correcting and verifying properties of JavaScript's existing relaxed memory model, and defining the WebAssembly-specific extensions to the corrected model which have been adopted as the basis of WebAssembly's official threads specification. A number of deficiencies in the original JavaScript model are detailed. Some errors have been corrected, with the verified fixes officially adopted into subsequent editions of the language specification. However, one discovered deficiency is fundamental to the model, an instance of the well-known "thin-air problem".
My work demonstrates the value of formalisation and mechanisation in industrial programming language design, not only in discovering and correcting specification errors, but also in building confidence both in the correctness of the language's design and in the design of proposed extensions.
2019 Google PhD Fellowship in Programming Technology and Software Engineering
Peterhouse Research Fellowship