Theorem proving support in programming language semantics
We describe several views of the semantics of a simple programming language
as formal documents in the calculus of inductive constructions that can be
verified by the Coq proof system. Covered aspects are natural semantics,
denotational semantics, axiomatic semantics, and abstract interpretation.
Descriptions as recursive functions are also provided whenever suitable, thus
yielding a verification condition generator and a static analyser that can be
run inside the theorem prover for use in reflective proofs. Extraction of an
interpreter from the denotational semantics is also described. All different
aspects are formally proved sound with respect to the natural semantics
specification.
Comment: Proposed for publication in the volume in memory of Gilles Kahn.
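As a flavour of the extraction step mentioned above: recursive functions
defined in Coq can be extracted to executable OCaml, yielding an interpreter
that provably agrees with the semantics. The following is a minimal sketch of
the kind of evaluator extraction produces, for a hypothetical toy expression
fragment of our own, not the paper's language:

(* Illustrative big-step evaluator for a toy arithmetic fragment, in the
   style of code extracted from a Coq formalization. Names are ours. *)
type aexp =
  | Num  of int
  | Var  of string
  | Plus of aexp * aexp

type env = (string * int) list

(* Mirrors a natural-semantics judgement  env |- e => n.
   Unbound variables default to 0, a common convention. *)
let rec aeval (r : env) (e : aexp) : int =
  match e with
  | Num n         -> n
  | Var x         -> (try List.assoc x r with Not_found -> 0)
  | Plus (e1, e2) -> aeval r e1 + aeval r e2

let () =
  Printf.printf "%d\n" (aeval [("x", 3)] (Plus (Var "x", Num 4)))  (* 7 *)

Soundness of such an interpreter with respect to the natural semantics is
exactly the kind of theorem the paper proves inside Coq before extracting.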
Formal Verification of Security Protocol Implementations: A Survey
Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, differ significantly from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. In these approaches, the libraries are assumed to correctly implement some abstract model of the cryptographic primitives; the aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches, model extraction and code generation, are presented, along with the main techniques adopted for each approach.
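To illustrate the shared assumption concretely: the cryptographic library is
modelled symbolically, in the Dolev-Yao style, where encryption is a term
constructor rather than a bit-level computation. A minimal OCaml sketch of
such a symbolic message algebra (all names hypothetical, not taken from any
surveyed tool):

(* Dolev-Yao-style symbolic messages: decryption succeeds exactly when
   the matching key is supplied; no cryptanalysis is modelled. *)
type msg =
  | Nonce of string          (* fresh value, e.g. Nonce "Na" *)
  | Key   of string          (* symbolic key name *)
  | Pair  of msg * msg       (* concatenation *)
  | Enc   of msg * msg       (* Enc (m, k): m encrypted under k *)

let dec (c : msg) (k : msg) : msg option =
  match c with
  | Enc (m, k') when k' = k -> Some m
  | _                       -> None

let () =
  let k = Key "kAB" in
  match dec (Enc (Nonce "Na", k)) k with
  | Some _ -> print_endline "decrypted"
  | None   -> print_endline "stuck"

Under this idealization, a proof about the application code says nothing about
the strength of the primitives themselves; it only transfers guarantees from
the model to the protocol logic.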
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet, which began as a research
experiment, was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
thus requiring a new protocol built from scratch for every new need. This led
to an unwieldy, ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
of interest in applying formal methods to the specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial on the formidable amount of work that has been done in
formal methods, and present a survey of its applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials.
Research in mathematical theory of computation
Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proof procedures with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first-order checker.
Higher-Order Termination: from Kruskal to Computability
Termination is a major question in both logic and computer science. In logic,
termination is at the heart of proof theory where it is usually called strong
normalization (of cut elimination). In computer science, termination has always
been an important issue for showing programs correct. In the early days of
logic, strong normalization was usually shown by assigning ordinals to
expressions in such a way that eliminating a cut would yield an expression with
a smaller ordinal. In the early days of verification, computer scientists used
similar ideas, interpreting the arguments of a program call by a natural
number, such as their size. Showing that the size of the arguments decreases
at each recursive call gives a termination proof of the program, which is,
however, rather weak, since it can only yield quite small ordinals. In the
sixties, Tait invented a new method for showing cut elimination of natural
deduction, based on a predicate over the set of terms such that membership of
an expression in the predicate implied the strong normalization property for
that expression. Since the predicate is defined by induction on types, or even
as a fixpoint, this method could yield much larger ordinals. Later generalized
by Girard under the name of reducibility or computability candidates, it
proved very effective in showing the strong normalization property of typed
lambda-calculi.
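For the simply-typed case, the computability predicate the abstract alludes to
has a standard textbook shape, defined by induction on types (this is the
classical formulation, not the paper's own notation):

\[
\begin{aligned}
\mathrm{Red}_{\iota}   &= \{\, t \mid t \text{ is strongly normalizing} \,\}\\
\mathrm{Red}_{A \to B} &= \{\, t \mid \forall u \in \mathrm{Red}_A.\ t\,u \in \mathrm{Red}_B \,\}
\end{aligned}
\]

One then proves that every Red_A contains only strongly normalizing terms and
that every well-typed term of type A belongs to Red_A, from which strong
normalization follows.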
Modular, Fully-abstract Compilation by Approximate Back-translation
A compiler is fully-abstract if the compilation from source language programs
to target language programs reflects and preserves behavioural equivalence.
Such compilers have important security benefits, as they limit the power of an
attacker interacting with the program in the target language to that of an
attacker interacting with the program in the source language. Proving compiler
full-abstraction is, however, rather complicated. A common proof technique is
based on the back-translation of target-level program contexts to
behaviourally-equivalent source-level contexts. However, constructing such a
back-translation is problematic when the source language is not strong enough
to embed an encoding of the target language. For instance, when compiling from
STLC to ULC, the lack of recursive types in the former prevents such a
back-translation.
We propose a general and elegant solution for this problem. The key insight
is that it suffices to construct an approximate back-translation. The
approximation is only accurate up to a certain number of steps and conservative
beyond that, in the sense that the context generated by the back-translation
may diverge when the original would not, but not vice versa. Based on this
insight, we describe a general technique for proving compiler full-abstraction
and demonstrate it on a compiler from STLC to ULC. The proof uses asymmetric
cross-language logical relations and makes innovative use of step-indexing to
express the relation between a context and its approximate back-translation.
The proof extends easily to common compiler patterns such as modular
compilation and, to the best of our knowledge, it is the first compiler
full-abstraction proof to have been fully mechanised in Coq. We believe this
proof technique can scale to challenging settings and enable simpler, more
scalable proofs of compiler full-abstraction.
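For orientation, the source-to-target direction discussed here (STLC to ULC)
is, in its simplest form, type erasure. A minimal OCaml sketch of such a
compiler, with our own hypothetical names and with source booleans compiled to
Church booleans (the paper's actual compiler and proof apparatus are far
richer):

(* Illustrative STLC-to-ULC compiler by type erasure. Names are ours. *)
type ty = Bool | Arrow of ty * ty

type stlc =                          (* simply-typed source terms *)
  | SVar of string
  | SLam of string * ty * stlc       (* \x:ty. e *)
  | SApp of stlc * stlc
  | STrue
  | SFalse

type ulc =                           (* untyped target terms *)
  | UVar of string
  | ULam of string * ulc
  | UApp of ulc * ulc

let rec compile (e : stlc) : ulc =
  match e with
  | SVar x         -> UVar x
  | SLam (x, _, b) -> ULam (x, compile b)   (* erase the annotation *)
  | SApp (f, a)    -> UApp (compile f, compile a)
  | STrue          -> ULam ("t", ULam ("f", UVar "t"))  (* Church true *)
  | SFalse         -> ULam ("t", ULam ("f", UVar "f"))  (* Church false *)

let _example : ulc = compile (SApp (SLam ("b", Bool, SVar "b"), STrue))

Full abstraction then demands that two source terms are contextually
equivalent iff their compilations are; the approximate back-translation is
what makes the "reflects" direction provable even though ULC contexts cannot
in general be translated exactly into STLC.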
A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations
We present a weakest-precondition-style calculus for reasoning about the
expected values (pre-expectations) of \emph{mixed-sign unbounded} random
variables after execution of a probabilistic program. The semantics of a
while-loop is well-defined as the limit of iteratively applying a functional to
a zero-element just as in the traditional weakest pre-expectation calculus,
even though a standard least fixed point argument is not applicable in this
context. A striking feature of our semantics is that it is always well-defined,
even if the expected values do not exist. We show that the calculus is sound
and allows for compositional reasoning, and we present an invariant-based
approach for reasoning about pre-expectations of loops.
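For reference, in the classical (non-negative) weakest pre-expectation
calculus the loop semantics being generalized here reads as follows, with [b]
the indicator function of the guard:

\[
\mathrm{wp}[\mathtt{while}\,(b)\,\{C\}](f) \;=\; \lim_{n \to \infty} \Phi_f^{\,n}(0),
\qquad
\Phi_f(X) \;=\; [b] \cdot \mathrm{wp}[C](X) + [\lnot b] \cdot f .
\]

In the classical setting this limit coincides with the least fixed point of
Phi_f by Kleene's fixed point theorem; for mixed-sign, unbounded
post-expectations that least-fixed-point reading is precisely what breaks
down, and the abstract's point is that the limit itself remains well-defined.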