Data refinement for true concurrency
The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
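The core forward-simulation obligation mentioned in the abstract can be illustrated, in a much simpler finite-state setting, as a check over a candidate simulation relation: every concrete step from related states must be matched by an abstract step with the same label that re-establishes the relation. The function and state names below are illustrative assumptions, not the paper's interval-based formulation (which also handles true concurrency and stuttering).

```python
def is_forward_simulation(R, conc_steps, abs_steps):
    """R: set of (concrete, abstract) state pairs.
    conc_steps / abs_steps: sets of (state, label, state') transitions."""
    for (c, a) in R:
        for (c1, lbl, c2) in conc_steps:
            if c1 != c:
                continue
            # some abstract step with the same label must re-establish R
            if not any(a1 == a and l == lbl and (c2, a2) in R
                       for (a1, l, a2) in abs_steps):
                return False
    return True

# One observable "inc" step at each level, with matching relation R.
conc = {("c0", "inc", "c1")}
abst = {("a0", "inc", "a1")}
R = {("c0", "a0"), ("c1", "a1")}
ok = is_forward_simulation(R, conc, abst)
```

A relation that maps the concrete post-state to the wrong abstract state, e.g. `{("c0", "a0"), ("c1", "a0")}`, fails the check, since no abstract step re-establishes it.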
A Verified Certificate Checker for Finite-Precision Error Bounds in Coq and HOL4
Being able to soundly estimate roundoff errors of finite-precision
computations is important for many applications in embedded systems and
scientific computing. Due to the discrepancy between continuous reals and
discrete finite-precision values, automated static analysis tools are highly
valuable to estimate roundoff errors. The results, however, are only as correct
as the implementations of the static analysis tools. This paper presents a
formally verified and modular tool which fully automatically checks the
correctness of finite-precision roundoff error bounds encoded in a certificate.
We present implementations of certificate generation and checking for both Coq
and HOL4 and evaluate them on a number of examples from the literature. The
experiments use both in-logic evaluation in Coq and HOL4, and execution of
extracted code outside of the logics: we benchmark Coq-extracted unverified
OCaml code and a CakeML-generated verified binary.
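The kind of bound such a certificate records can be sketched with elementary interval arithmetic: track each expression's real-valued range alongside an accumulated worst-case roundoff error. This is an illustrative assumption about the general approach, not the verified checker itself; `Interval` and `add_error` are invented names.

```python
EPS = 2 ** -53  # unit roundoff for IEEE 754 binary64 (double precision)

class Interval:
    """Closed real interval [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def magnitude(self):
        return max(abs(self.lo), abs(self.hi))

def add_error(x, ex, y, ey):
    """Range and worst-case roundoff of x + y, given the operands'
    ranges (x, y) and their accumulated errors (ex, ey)."""
    r = x + y
    # propagated operand errors plus one fresh rounding of the result
    err = ex + ey + r.magnitude() * EPS
    return r, err

x, y = Interval(1.0, 2.0), Interval(3.0, 4.0)
r, err = add_error(x, 0.0, y, 0.0)
# A certificate checker would compare err against the claimed bound.
```

A certificate would pair each subexpression with such a range and error bound; the verified checker's job is to confirm the arithmetic, not to rediscover the bounds.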
Reasoning algebraically about refinement on TSO architectures
The Total Store Order memory model is widely implemented by modern multicore architectures such as x86, where local buffers are used for optimisation, allowing limited forms of instruction reordering. The presence of buffers and hardware-controlled buffer flushes increases the level of non-determinism from the level specified by a program, complicating the already difficult task of concurrent programming. This paper presents a new notion of refinement for weak memory models, based on the observation that pending writes to a process's local variables may be treated as if the effect of the update has already occurred in shared memory. We develop an interval-based model with algebraic rules for various programming constructs. In this framework, several decomposition rules for our new notion of refinement are developed. We apply our approach to verify the spinlock algorithm from the literature.
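The key observation, that a pending buffered write can be treated as if it had already taken effect for the writing process, can be illustrated with a toy store-buffer model. All names here are illustrative; real TSO semantics also cover fences and ordering between processors.

```python
from collections import deque

class Processor:
    """Toy TSO processor: writes enter a FIFO buffer and reach shared
    memory only on flush, but the processor's own reads see its pending
    writes first (store forwarding)."""
    def __init__(self, memory):
        self.memory = memory
        self.buffer = deque()  # FIFO of pending (var, value) writes

    def write(self, var, value):
        self.buffer.append((var, value))

    def read(self, var):
        # most recent buffered write to var wins, if any
        for v, val in reversed(self.buffer):
            if v == var:
                return val
        return self.memory[var]

    def flush_one(self):
        var, val = self.buffer.popleft()
        self.memory[var] = val

memory = {"x": 0}
p = Processor(memory)
p.write("x", 1)
local = p.read("x")    # 1: the pending write behaves as if applied
shared = memory["x"]   # 0: other processors cannot see it yet
p.flush_one()          # now the write reaches shared memory
```

The gap between `local` and `shared` before the flush is exactly the non-determinism the abstract describes: other processors observe the update only at some later, hardware-chosen point.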
Real-time and Probabilistic Temporal Logics: An Overview
Over the last two decades, there has been an extensive study on logical
formalisms for specifying and verifying real-time systems. Temporal logics have
been an important research subject within this direction. Although numerous
logics have been introduced for the formal specification of real-time and
complex systems, an up-to-date, comprehensive analysis of these logics does not
exist in the literature. In this paper we analyse real-time and probabilistic
temporal logics which have been widely used in this field. We examine the
notions of decidability, axiomatizability, expressiveness, model checking, etc.
for each logic analysed. We also provide a comparison of the features of the
temporal logics discussed.
Simplifying proofs of linearisability using layers of abstraction
Linearisability has become the standard correctness criterion for concurrent
data structures, ensuring that every history of invocations and responses of
concurrent operations has a matching sequential history. Existing proofs of
linearisability require one to identify so-called linearisation points within
the operations under consideration, which are atomic statements whose execution
causes the effect of an operation to be felt. However, identification of
linearisation points is a non-trivial task, requiring a high degree of
expertise. For sophisticated algorithms such as Heller et al.'s lazy set, it
is even possible for an operation to be linearised by the concurrent execution
of a statement outside the operation being verified. This paper proposes an
alternative method for verifying linearisability that does not require
identification of linearisation points. Instead, using an interval-based logic,
we show that every behaviour of each concrete operation over any interval is a
possible behaviour of a corresponding abstraction that executes with
coarse-grained atomicity. This approach is applied to Heller et al.'s lazy set
to show that verification of linearisability is possible without having to
consider linearisation points within the program code.
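The classical definition that this paper works around, namely that every concurrent history must match some sequential history respecting real-time order, can be sketched as a brute-force check against a sequential set specification. This enumerates linearisation orders, precisely the style of reasoning the interval-based method avoids; the history encoding below is an illustrative assumption.

```python
from itertools import permutations

def seq_replay(ops):
    """Replay ops against a sequential set spec: add returns True iff
    the element was absent; contains returns membership."""
    s = set()
    for _, _, name, arg, expected in ops:
        if name == "add":
            actual = arg not in s
            s.add(arg)
        else:  # "contains"
            actual = arg in s
        if actual != expected:
            return False
    return True

def respects_realtime(order):
    """If op A finished before op B started, A must come first."""
    return all(not (order[q][1] < order[p][0])
               for p in range(len(order))
               for q in range(p + 1, len(order)))

def linearisable(history):
    """Each op: (start_time, end_time, name, arg, result)."""
    return any(respects_realtime(order) and seq_replay(order)
               for order in permutations(history))

# add(5) overlaps contains(5), so contains may legally observe the add
history = [(0, 4, "add", 5, True), (1, 2, "contains", 5, True)]
ok = linearisable(history)
```

By contrast, a history where `add(5)` completes strictly before a `contains(5)` that returns `False` admits no valid linearisation order.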
Forward Invariant Cuts to Simplify Proofs of Safety
The use of deductive techniques, such as theorem provers, has several
advantages in safety verification of hybrid systems; however,
state-of-the-art theorem provers require extensive manual intervention.
Furthermore, there is often a gap between the type of assistance that a theorem
prover requires to make progress on a proof task and the assistance that a
system designer is able to provide. This paper presents an extension to
KeYmaera, a deductive verification tool for differential dynamic logic; the new
technique allows local reasoning using system designer intuition about
performance within particular modes as part of a proof task. Our approach allows
the theorem prover to leverage forward invariants, discovered using numerical
techniques, as part of a proof of safety. We introduce a new inference rule
into the proof calculus of KeYmaera, the forward invariant cut rule, and we
present a methodology to discover useful forward invariants, which are then
used with the new cut rule to complete verification tasks. We demonstrate how
our new approach can be used to complete verification tasks that lie out of the
reach of existing deductive approaches using several examples, including one
involving an automotive powertrain control system.
Comment: Extended version of EMSOFT paper
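As a rough illustration of the numeric discovery step, one can sample the boundary of a candidate set and check that the vector field points inward; soundness then rests on the deductive cut rule, not on the sampling. The dynamics and candidate invariant below are invented for illustration.

```python
import math

def f(x, v):
    """Damped oscillator dynamics: x' = v, v' = -x - v."""
    return v, -x - v

def points_inward(x, v):
    """Candidate forward invariant: x^2 + v^2 <= r^2. The set is
    forward invariant if d/dt (x^2 + v^2) <= 0 on its boundary;
    here that derivative is 2*x*x' + 2*v*v'."""
    dx, dv = f(x, v)
    return 2 * x * dx + 2 * v * dv <= 0

# Sample the boundary circle of radius r at 1-degree resolution.
r = 1.0
samples = [(r * math.cos(t), r * math.sin(t))
           for t in (k * 2 * math.pi / 360 for k in range(360))]
invariant_holds = all(points_inward(x, v) for x, v in samples)
```

For this system the derivative works out to -2*v**2, which is nonpositive everywhere, so the sampled check succeeds; a deductive tool would then discharge the same condition symbolically when applying the cut.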
A Survey of Symbolic Execution Techniques
Many security and software testing applications require checking whether
certain properties of a program hold for any possible usage scenario. For
instance, a tool for identifying software vulnerabilities may need to rule out
the existence of any backdoor to bypass a program's authentication. One
approach would be to test the program using different, possibly random inputs.
As the backdoor may only be hit for very specific program workloads, automated
exploration of the space of possible inputs is of the essence. Symbolic
execution provides an elegant solution to the problem, by systematically
exploring many possible execution paths at the same time without necessarily
requiring concrete inputs. Rather than taking on fully specified input values,
the technique abstractly represents them as symbols, resorting to constraint
solvers to construct actual instances that would cause property violations.
Symbolic execution has been incubated in dozens of tools developed over the
last four decades, leading to major practical breakthroughs in a number of
prominent software reliability applications. The goal of this survey is to
provide an overview of the main ideas, challenges, and solutions developed in
the area, distilling them for a broad audience.
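The core idea described above, exploring both sides of every branch while collecting path conditions and then asking a solver for concrete witnesses, can be sketched in a few lines. The brute-force `solve` below is an illustrative stand-in for an SMT solver, and the program encoding is an assumption of this sketch, not any particular tool's representation.

```python
def explore(conds, depth=0, path=None):
    """Enumerate path conditions by taking both sides of each branch.
    conds: list of branch predicates over a single symbolic input x."""
    if path is None:
        path = []
    if depth == len(conds):
        return [path]
    c = conds[depth]
    return (explore(conds, depth + 1, path + [c]) +
            explore(conds, depth + 1, path + [lambda x, c=c: not c(x)]))

def solve(path_cond, domain=range(-100, 101)):
    """Brute-force stand-in for a constraint solver: find a concrete
    input satisfying every predicate on the path, or None."""
    for x in domain:
        if all(c(x) for c in path_cond):
            return x
    return None  # no witness in the domain: path treated as infeasible

# Symbolic run of:  if x > 10:  if 2 * x == 42:  reach_bug()
branches = [lambda x: x > 10, lambda x: 2 * x == 42]
witnesses = [solve(p) for p in explore(branches)]
```

The four explored paths yield a witness for the buggy path (x = 21), witnesses for two reachable non-buggy paths, and infeasibility for the path requiring both x <= 10 and 2x = 42, exactly the backdoor-style reasoning the abstract motivates.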
The present survey has been accepted for publication at ACM Computing
Surveys. If you are considering citing this survey, we would appreciate it if
you could use the following BibTeX entry: http://goo.gl/Hf5Fvc