31 research outputs found
A Theory of Termination via Indirection
Step-indexed models provide approximations to a class of domain
equations and can prove type safety, partial correctness, and program
equivalence; however, a common misconception is that they
are inapplicable to liveness problems. We disprove this by applying
step-indexing to develop the first Hoare logic of total correctness
for a language with function pointers and semantic assertions.
In fact, from a liveness perspective, our logic is stronger: we verify
explicit time resource bounds. We apply our logic to examples containing
nontrivial "higher-order" uses of function pointers and we
prove soundness with respect to a standard operational semantics.
Our core technique is very compact and may be applicable to other
liveness problems. Our results are machine checked in Coq.
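As an informal illustration (a toy Python sketch of my own, not the paper's Hoare logic), step-indexing can be read as running a program with a "fuel" budget: a total-correctness claim with an explicit time bound n is witnessed by the program terminating within n steps, even when every call is dispatched indirectly through a function-pointer table.

```python
# Toy illustration of step-indexing as "fuel" (names and encoding invented
# for this sketch).  A total-correctness triple with explicit time bound n
# is checked by running with fuel n: finishing within budget witnesses
# both termination and the bound.

def run_with_fuel(table, entry, arg, fuel):
    """Dispatch a call through a 'function pointer' table with a step budget.
    Each call consumes one unit of fuel; returns (result, fuel_left), or
    None when the budget is exhausted before the call completes."""
    if fuel == 0:
        return None  # out of fuel: the bound is not verified at this index
    f = table[entry]              # indirect call through the table
    return f(table, arg, fuel - 1)

def length(table, xs, fuel):
    # Recursion goes through the table, a simple "higher-order" use of
    # function pointers.
    if not xs:
        return (0, fuel)
    r = run_with_fuel(table, "length", xs[1:], fuel)
    if r is None:
        return None
    n, fuel_left = r
    return (n + 1, fuel_left)

table = {"length": length}

# Explicit time bound: len(xs) + 1 calls suffice, and len(xs) do not.
xs = [1, 2, 3, 4]
assert run_with_fuel(table, "length", xs, len(xs) + 1) == (4, 0)
assert run_with_fuel(table, "length", xs, len(xs)) is None
```

The paper's contribution is doing this kind of reasoning soundly inside a Hoare logic with semantic assertions; the fuel counter above only conveys the operational intuition behind the step index.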
Static Conflict Detection for a Policy Language
We present a static control flow analysis used in the Simple Unified Policy Programming Language (SUPPL) compiler to detect internally inconsistent policies. For example, an access control policy can decide to both “allow” and “deny” access for a user; such an inconsistency is called a conflict. Policies in Suppl follow the Event-Condition-Action paradigm: predicates model conditions, and event handlers are written in an imperative style. The analysis is twofold: it first computes a superset of all conflicts by looking for combinations of actions in the event handlers that might violate a user-supplied definition of conflicts; SMT solvers are then used to try to rule out the combinations that cannot possibly be executed. The analysis is formally proven sound in Coq, in the sense that no actual conflict is ruled out by the SMT solvers. Finally, we explain how we try to show the user what causes each conflict, to make conflicts easier to resolve.
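The two-phase shape of the analysis can be sketched as follows. This is a hypothetical miniature in Python (handler encoding, guard literals, and the `satisfiable` check are all invented for illustration; the real compiler discharges feasibility queries to SMT solvers rather than the naive literal check used here).

```python
# Sketch of a two-phase conflict analysis: phase 1 over-approximates the
# set of conflicts, phase 2 prunes combinations whose guards cannot hold
# simultaneously.  The naive satisfiability check stands in for an SMT call.
from itertools import combinations

# Each handler: (guard literal, action).  "not p" negates literal "p".
handlers = [
    ("admin",     "allow"),
    ("not admin", "deny"),
    ("audit_on",  "allow"),
]

def conflicts(actions):
    # User-supplied definition of conflict: both allow and deny fired.
    return "allow" in actions and "deny" in actions

def satisfiable(guards):
    # Stand-in for the SMT query: a set of literals is consistent iff
    # it never contains a literal together with its negation.
    pos = {g for g in guards if not g.startswith("not ")}
    neg = {g[4:] for g in guards if g.startswith("not ")}
    return not (pos & neg)

# Phase 1: every pair of handlers whose combined actions match the
# conflict definition -- a superset of the real conflicts.
candidates = [(h1, h2) for h1, h2 in combinations(handlers, 2)
              if conflicts({h1[1], h2[1]})]

# Phase 2: keep only pairs whose guards can be true at the same time.
real = [(h1, h2) for h1, h2 in candidates if satisfiable({h1[0], h2[0]})]

for (g1, a1), (g2, a2) in real:
    print(f"possible conflict: [{g1}] {a1}  vs  [{g2}] {a2}")
```

Here the `admin`/`not admin` pair is pruned in phase 2 because its guards are mutually exclusive, while the `not admin`/`audit_on` pair survives as a reported conflict, mirroring the soundness claim that pruning never discards an actual conflict.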
A list-machine benchmark for mechanized metatheory
We propose a benchmark to compare theorem-proving systems on their ability to express proofs of compiler correctness. In contrast to the first POPLmark, we emphasize the connection of proofs to compiler implementations, and we point out that much can be done without binders or alpha-conversion. We propose specific criteria for evaluating the utility of mechanized metatheory systems; we have constructed solutions in both Coq and Twelf metatheory, and we draw conclusions about those two systems in particular.
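To convey the flavor of the benchmark's object language, here is a hypothetical miniature "list machine" interpreter in Python. The instruction names and encoding are my own; the benchmark's machine operates on nil and cons cells with programs given as labeled basic blocks, which is the spirit reproduced here.

```python
# A tiny list machine: values are None (nil) or pairs (cons cells);
# a program maps labels to basic blocks of simple instructions.

def step_block(prog, regs, block):
    """Execute one basic block; return the next label, or None to halt."""
    for instr in block:
        op = instr[0]
        if op == "cons":             # r_dst := cons(r_a, r_b)
            _, dst, a, b = instr
            regs[dst] = (regs[a], regs[b])
        elif op == "fetch":          # r_dst := field of r_src (0=head, 1=tail)
            _, dst, src, field = instr
            regs[dst] = regs[src][field]
        elif op == "branch_nil":     # if register is nil, jump to label
            _, r, label = instr
            if regs[r] is None:
                return label
        elif op == "jump":
            return instr[1]
        elif op == "halt":
            return None
    return None

def run(prog, regs, label="start"):
    while label is not None:
        label = step_block(prog, regs, prog[label])
    return regs

# Example program: compute the length of the list in r0 as a unary
# number in r1 (nil-terminated cons lists; r2 stays nil throughout).
prog = {
    "start": [("branch_nil", "r0", "done"),
              ("fetch", "r0", "r0", 1),      # r0 := tail(r0)
              ("cons", "r1", "r2", "r1"),    # r1 := cons(nil, r1)
              ("jump", "start")],
    "done":  [("halt",)],
}
regs = run(prog, {"r0": (None, (None, None)), "r1": None, "r2": None})
assert regs["r1"] == (None, (None, None))    # unary 2: the input had 2 cells
```

The benchmark's point is not this interpreter but the metatheory around such a machine (type systems, safety proofs, compiler passes) and how conveniently different proof assistants express it; note that none of it requires binders or alpha-conversion.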
Step-Indexed Normalization for a Language with General Recursion
The Trellys project has produced several designs for practical dependently
typed languages. These languages are broken into two
fragments: a logical fragment, where every term normalizes and which is
consistent when interpreted as a logic, and a programmatic fragment with
general recursion and other convenient but unsound features. In this paper, we
present a small example language in this style. Our design allows the
programmer to explicitly mention and pass information between the two
fragments. We show that this feature substantially complicates the metatheory
and present a new technique, combining the traditional Girard-Tait method with
step-indexed logical relations, which we use to show normalization for the
logical fragment. Comment: In Proceedings MSFP 2012, arXiv:1202.240
Operational Refinement for Compiler Correctness
Compilers are an essential part of the software development
process. Programmers all over the world rely on compilers every day to correctly translate their intentions, expressed as high-level source code, into executable low-level machine code. But what does it mean for a compiler to be correct?
This question is surprisingly difficult to answer. Although various groups have made concerted efforts to prove the correctness of compilers since at least the early 1980s, no clear consensus has arisen about what it means for a compiler to be correct. As a result, it seems that no two compiler verification efforts have stated their correctness theorems in the same way.
In this dissertation, I will advance a new approach to compiler correctness based on refinements of the operational semantics of programs. The cornerstones of this approach are behavioral refinement, which allows programs to improve by going wrong less often, and choice refinement, which allows compilers to reduce the amount of internal nondeterminism present in a program. I take particular care to explain why these notions of refinement are the correct formal interpretations of the informal ideas above.
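One simple way to render these two notions concretely (a toy model of my own, not the dissertation's definitions) is to view a program as a set of observable behaviors, with a distinguished behavior marking "going wrong". Behavioral refinement then permits the compiled program anything when the source could go wrong, while choice refinement lets the compiler resolve nondeterminism by selecting a nonempty subset of behaviors.

```python
# Toy set-of-behaviors model of the two refinement notions.
WRONG = "wrong"

def behaviorally_refines(target, source):
    """Target improves on source by going wrong less often: either the
    source could already go wrong (so anything is allowed), or every
    target behavior is a source behavior."""
    return WRONG in source or target <= source

def choice_refines(target, source):
    """Target resolves internal nondeterminism: it keeps a nonempty
    subset of the source's behaviors."""
    return bool(target) and target <= source

# A source program that may print 1, print 2, or go wrong on some input.
source = {"print 1", "print 2", WRONG}

# A compiler may eliminate the wrong behavior...
safer = {"print 1", "print 2"}
assert behaviorally_refines(safer, source)

# ...and then commit to one of the remaining choices.
compiled = {"print 1"}
assert choice_refines(compiled, safer)
```

The dissertation develops these ideas over operational semantics rather than bare behavior sets, but the subset intuition is the informal content being formalized: compilation may shrink the space of outcomes, and may repair (never introduce) going wrong.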
In addition, I will show how these notions of refinement can be realistically applied to compiler verification efforts. First, I will present a toy language, WHILE-C, and show how choice and behavioral refinement can be used to verify the correctness of several interesting program transformations. The WHILE-C language and the transformations themselves are simple enough to be presented here in full detail. I will also show how the ideas of behavioral and choice refinement may be applied to the CompCert formally verified compiler, a realistic compiler for a significant subset of C.