First steps towards the certification of an ARM simulator using Compcert
The simulation of Systems-on-Chip (SoC) is nowadays a hot topic because,
beyond providing many debugging facilities, it allows the development of
dedicated software before the hardware is available. Low-power CPUs such as
ARM play a central role in SoCs. However, the effectiveness of simulation
depends on the faithfulness of the simulator. To this end, we propose here to
prove significant parts of such a simulator, SimSoC. On the one hand, we
develop a Coq formal model of the ARM architecture; on the other hand, we
consider a version of the simulator including components written in
Compcert-C. We then prove that the simulation of ARM operations, according to
the Compcert-C formal semantics, conforms to the expected formal model of ARM.
Size issues are partly dealt with by automatically generating significant
parts of the Coq model and of SimSoC from the official textual definition of
ARM.
However, this is still a long-term project. We report here on the current
stage of our efforts and discuss in particular the use of Compcert-C in this
framework.
Comment: First International Conference on Certified Programs and Proofs 7086 (2011).
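To give a flavour of what such a model involves, the semantics of each ARM operation is, roughly, a function from machine state to machine state, and the conformance proof relates the Compcert-C code executing an instruction to that function. Below is a toy sketch of one operation, written in Haskell rather than Coq, with an invented and heavily simplified state type:

    import Data.Bits (testBit)
    import Data.Word (Word32)
    import qualified Data.Map as M

    -- Heavily simplified ARM-like state: a register file and two status flags.
    data Cpu = Cpu { regs :: M.Map Int Word32, flagN :: Bool, flagZ :: Bool }
      deriving (Eq, Show)

    reg :: Int -> Cpu -> Word32
    reg r c = M.findWithDefault 0 r (regs c)

    -- Semantics of "ADD{S} rd, rn, rm": write rn + rm (modulo 2^32) into rd
    -- and, when the S bit is set, update the N and Z flags from the result.
    add :: Bool -> Int -> Int -> Int -> Cpu -> Cpu
    add sBit rd rn rm c =
      let result = reg rn c + reg rm c        -- Word32 addition wraps, as on ARM
          c'     = c { regs = M.insert rd result (regs c) }
      in if sBit
           then c' { flagN = testBit result 31, flagZ = result == 0 }
           else c'

The certification effort sketched in the abstract amounts to proving, instruction by instruction, that the simulator's Compcert-C code transforms its C-level state in the same way as the corresponding function of the Coq model transforms the formal state.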
On the correctness of a branch displacement algorithm
The branch displacement problem is a well-known problem in assembler design. It revolves around the feature, present in several processor families, of having different instructions, of different sizes, for jumps of different displacements. The problem, which is provably NP-hard, is then to select the instructions such that one ends up with the smallest possible program.
During our research with the CerCo project on formally verifying a C compiler, we have implemented and proven correct an algorithm for this problem. In this paper, we discuss the problem, possible solutions, our specific solutions and the proofs.
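As an illustration of the fixed-point flavour of typical solutions (this is not the CerCo algorithm, and the instruction sizes and ranges below are invented), one can start with every jump in its short encoding and repeatedly widen any jump whose displacement no longer fits, recomputing addresses until nothing changes:

    -- Sketch of fixed-point branch relaxation; sizes and ranges are made up.
    data Jump = Short | Long deriving (Eq, Show)

    size :: Jump -> Int
    size Short = 2        -- assumed size of a short-displacement jump
    size Long  = 5        -- assumed size of a long-displacement jump

    shortRange :: Int
    shortRange = 127      -- assumed maximum displacement of a short jump

    -- A toy program made only of jumps: the i-th element is the index of the
    -- instruction that jump i targets.
    type Program = [Int]

    -- Address of each instruction under the current choice of encodings.
    addresses :: [Jump] -> [Int]
    addresses = scanl (+) 0 . map size

    -- One pass: widen every short jump whose displacement is out of range.
    step :: Program -> [Jump] -> [Jump]
    step prog kinds = zipWith widen [0 ..] (zip prog kinds)
      where
        addr = addresses kinds
        widen i (target, k)
          | k == Short && abs (addr !! target - addr !! i) > shortRange = Long
          | otherwise = k

    -- Iterate to a fixed point; encodings only ever grow, so this terminates.
    relax :: Program -> [Jump]
    relax prog = go (map (const Short) prog)
      where
        go ks = let ks' = step prog ks in if ks' == ks then ks else go ks'

    main :: IO ()
    main = print (length (filter (== Long) (relax (replicate 100 99))))

The real problem is harder than this sketch suggests: widening one jump moves other code and can force further widenings, and in general (the problem being NP-hard, as noted above) an iteration like this need not yield the smallest possible program.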
Formalizing Size-Optimal Sorting Networks: Extracting a Certified Proof Checker
Since the proof of the four color theorem in 1976, computer-generated proofs
have become a reality in mathematics and computer science. During the last
decade, we have seen formal proofs using verified proof assistants being used
to verify the validity of such proofs.
In this paper, we describe a formalized theory of size-optimal sorting
networks. From this formalization we extract a certified checker that
successfully verifies computer-generated proofs of optimality on up to 8
inputs. The checker relies on an untrusted oracle to shortcut the search for
witnesses on more than 1.6 million NP-complete subproblems.
Comment: IMADA-preprint-c
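For context, checking that a given comparator network actually sorts is the easy half: by the zero-one principle, a network on n wires sorts every input if and only if it sorts all 2^n sequences of 0s and 1s. The Haskell sketch below illustrates the objects involved; it is not the certified checker extracted from the formalization, whose job is the much harder direction of confirming that no smaller network exists.

    import Data.List (sort)

    -- A comparator network is a list of comparators (i, j) with i < j:
    -- each one swaps positions i and j when they are out of order.
    type Network = [(Int, Int)]

    apply :: Network -> [Int] -> [Int]
    apply net xs = foldl compareSwap xs net
      where
        compareSwap v (i, j)
          | v !! i > v !! j = [ pick k x | (k, x) <- zip [0 ..] v ]
          | otherwise       = v
          where
            pick k x
              | k == i    = v !! j
              | k == j    = v !! i
              | otherwise = x

    -- Zero-one principle: test the network on every 0/1 input of length n.
    sortsAll :: Int -> Network -> Bool
    sortsAll n net = all sorted [ apply net w | w <- words01 n ]
      where
        words01 0 = [[]]
        words01 k = [ b : w | b <- [0, 1], w <- words01 (k - 1) ]
        sorted v  = v == sort v

    -- The standard optimal 5-comparator network on 4 inputs.
    example :: Network
    example = [(0,1), (2,3), (0,2), (1,3), (1,2)]

    main :: IO ()
    main = print (sortsAll 4 example)   -- True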
A formally verified compiler back-end
This article describes the development and formal verification (proof of
semantic preservation) of a compiler back-end from Cminor (a simple imperative
intermediate language) to PowerPC assembly code, using the Coq proof assistant
both for programming the compiler and for proving its correctness. Such a
verified compiler is useful in the context of formal methods applied to the
certification of critical software: the verification of the compiler guarantees
that the safety properties proved on the source code hold for the executable
compiled code as well.
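The guarantee described in the last sentence is usually phrased as a semantic preservation (refinement) theorem. A rough paraphrase, not the article's exact statement, is

    \[ \mathit{Comp}(S) = \mathrm{OK}(C) \;\wedge\; S\ \text{safe} \;\wedge\; C \Downarrow B \;\Longrightarrow\; S \Downarrow B \]

that is, whenever compilation succeeds and the source program S does not go wrong, every observable behaviour B of the compiled code C is an allowed behaviour of S, so properties proved about the behaviours of S carry over to C.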
Trading-Off Type-Inference Memory Complexity against Communication
While bringing considerable flexibility and extending the horizons of mobile computing, mobile code raises major security issues. Hence, mobile code, such as Java applets, needs to be analyzed before execution. The byte-code verifier checks low-level security properties that ensure that the downloaded code cannot bypass the virtual machine's security mechanisms. One of the statically ensured..
Open Transactions on Shared Memory
Transactional memory has arisen as a good way for solving many of the issues
of lock-based programming. However, most implementations admit isolated
transactions only, which are not adequate when we have to coordinate
communicating processes. To this end, in this paper we present OCTM, a
Haskell-like language with open transactions over shared transactional memory:
processes can join transactions at runtime just by accessing shared variables.
Thus a transaction can co-operate with the environment through shared
variables, but if it is rolled back, all its effects on the environment are
also retracted. To demonstrate the expressive power of OCTM, we give an
implementation of TCCS, a CCS-like calculus with open transactions.
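For readers unfamiliar with the baseline, GHC's standard STM library (Control.Concurrent.STM) provides exactly the isolated transactions mentioned above; the sketch below shows that baseline, not OCTM's syntax, which the paper defines:

    import Control.Concurrent.STM
    import Control.Monad (when)

    -- An isolated transaction over two shared variables: either both updates
    -- become visible to other threads, or (on abort/retry) neither does.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to n = do
      a <- readTVar from
      when (a < n) retry            -- block until enough is available
      writeTVar from (a - n)
      modifyTVar' to (+ n)

    main :: IO ()
    main = do
      from <- newTVarIO 10
      to   <- newTVarIO 0
      atomically (transfer from to 3)
      balances <- (,) <$> readTVarIO from <*> readTVarIO to
      print balances                -- (7,3)

In OCTM, by contrast, a second process that accesses one of these shared variables inside its own transaction is merged into the same open transaction, so the two can cooperate, and rolling the transaction back also retracts that process's effects.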
Liveness-Driven Random Program Generation
Randomly generated programs are popular for testing compilers and program
analysis tools, with hundreds of bugs in real-world C compilers found by random
testing. However, existing random program generators may generate large amounts
of dead code (computations whose result is never used). This leaves relatively
little code to exercise a target compiler's more complex optimizations.
To address this shortcoming, we introduce liveness-driven random program
generation. In this approach the random program is constructed bottom-up,
guided by a simultaneous structural data-flow analysis to ensure that the
generator never generates dead code.
The algorithm is implemented as a plugin for the Frama-C framework. We
evaluate it in comparison to Csmith, the standard random C program generator.
Our tool generates programs that compile to more machine code with a more
complex instruction mix.
Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
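A minimal sketch of the idea follows (an invented mini-language, not the Frama-C plugin): statements are generated back-to-front while threading a live-variable set, and each new statement assigns only to a variable that is live after it, so by construction no assignment is dead.

    import qualified Data.Set as S

    type Var = String

    -- The only statement form in this toy language: x = y + z.
    data Stmt = Assign Var Var Var

    instance Show Stmt where
      show (Assign x y z) = x ++ " = " ++ y ++ " + " ++ z ++ ";"

    pool :: [Var]
    pool = ["a", "b", "c", "d"]

    pick :: Int -> [a] -> a
    pick n xs = xs !! (n `mod` length xs)

    -- Generate up to n statements back-to-front. The seed list stands in for
    -- a random stream; live is the set of variables live after the statements
    -- generated so far, so assigning only to a member of it rules out dead code.
    generate :: Int -> [Int] -> S.Set Var -> [Stmt]
    generate 0 _ _ = []
    generate n (s1 : s2 : s3 : rest) live
      | S.null live = []
      | otherwise   = generate (n - 1) rest live' ++ [Assign x y z]
      where
        x     = pick s1 (S.toList live)                    -- target must be live
        y     = pick s2 pool
        z     = pick s3 pool
        live' = S.insert y (S.insert z (S.delete x live))  -- live set before x = y + z
    generate _ _ _ = []

    main :: IO ()
    main = mapM_ print
      (generate 5 [3,1,4,1,5,9,2,6,5,3,5,8,9,7,9] (S.fromList ["result"]))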
Efficient Set Sharing Using ZBDDs
Set sharing is an abstract domain in which each concrete object is represented by the set of local variables from which it might be reachable. It is a useful abstraction to detect parallelism opportunities, since it contains definite information about which variables do not share in memory, i.e., about when the memory regions reachable from those variables are disjoint. Set sharing is a more precise alternative to pair sharing, in which each domain element is a set of all pairs of local variables from which a common object may be reachable. However, the exponential complexity of some set sharing operations has limited its wider application. This work introduces an efficient implementation of the set sharing domain using Zero-suppressed Binary Decision Diagrams (ZBDDs). Because ZBDDs were designed to represent sets of combinations (i.e., sets of sets), they naturally represent elements of the set sharing domain. We show how to synthesize the operations needed in the set sharing transfer functions from basic ZBDD operations. For some of the operations, we devise custom ZBDD algorithms that perform better in practice. We also compare our implementation of the abstract domain with an efficient, compact, bit set-based alternative, and show that the ZBDD version scales better in terms of both memory usage and running time.
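A minimal sketch of the encoding (illustrative only: a real implementation adds node sharing and memoisation, and the custom algorithms mentioned above go further). Variables are numbered, each abstract value is a family of sets of variable numbers (the sharing groups), and the zero-suppression rule keeps variables that are absent from a group from taking any space:

    import Data.List (group, sort)

    -- A ZBDD over Int-numbered variables denotes a family of sets of variables.
    data ZBDD = Empty                -- the empty family
              | Base                 -- the family containing only the empty set
              | Node Int ZBDD ZBDD   -- variable, low (absent), high (present)
      deriving (Eq, Show)

    -- Smart constructor enforcing zero-suppression: a node whose "present"
    -- branch is Empty contributes nothing and collapses to its "absent" branch.
    node :: Int -> ZBDD -> ZBDD -> ZBDD
    node _ lo Empty = lo
    node v lo hi    = Node v lo hi

    -- The family containing exactly one set (smallest variable on top).
    single :: [Int] -> ZBDD
    single = foldr (\v t -> node v Empty t) Base . map head . group . sort

    -- Union of two families, recursing on the smaller top variable.
    union :: ZBDD -> ZBDD -> ZBDD
    union Empty t = t
    union t Empty = t
    union Base Base = Base
    union Base (Node v lo hi) = node v (union Base lo) hi
    union (Node v lo hi) Base = node v (union lo Base) hi
    union s@(Node v lo hi) t@(Node w lo' hi')
      | v == w    = node v (union lo lo') (union hi hi')
      | v <  w    = node v (union lo t) hi
      | otherwise = node w (union s lo') hi'

    main :: IO ()
    main = print (single [1, 2] `union` single [1] `union` single [2])
    -- prints the ZBDD denoting the family {{1}, {2}, {1,2}}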
Optimizing a Certified Proof Checker for a Large-Scale Computer-Generated Proof
In recent work, we formalized the theory of optimal-size sorting networks
with the goal of extracting a verified checker for the large-scale
computer-generated proof that 25 comparisons are optimal when sorting 9 inputs,
which required more than a decade of CPU time and produced 27 GB of proof
witnesses. The checker uses an untrusted oracle based on these witnesses and is
able to verify the smaller case of 8 inputs within a couple of days, but it did
not scale to the full proof for 9 inputs. In this paper, we describe several
non-trivial optimizations of the algorithm in the checker, obtained by
appropriately changing the formalization and capitalizing on the symbiosis with
an adequate implementation of the oracle. We provide experimental evidence of
orders of magnitude improvements to both runtime and memory footprint for 8
inputs, and actually manage to check the full proof for 9 inputs.
Comment: IMADA-preprint-c