An analysis of total correctness refinement models for partial relation semantics I
This is the first of a series of papers devoted to the thorough investigation of (total correctness) refinement based on an underlying partial relational model. In this paper we restrict attention to operation refinement. We explore four theories of refinement based on an underlying partial relation model for specifications, and we show that they are all equivalent. This, in particular, sheds some light on the relational completion operator (lifted-totalisation) due to Woodcock which underlies data refinement in, for example, the specification language Z. It further leads to two simple alternative models which are also equivalent to the others.
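As a brief sketch (our addition, following the usual Z conventions; the paper's own notation may differ), lifted-totalisation completes a partial relation by sending states outside its domain, together with an added bottom element, to arbitrary results:

```latex
% Lifted totalisation of a partial relation R on state space S,
% where S_\bot = S \cup \{\bot\}:
\overset{\bullet}{R} \;=\; R
  \;\cup\; \{\, (s, s') \mid s \notin \operatorname{dom} R,\ s' \in S_\bot \,\}
  \;\cup\; \bigl(\{\bot\} \times S_\bot\bigr)
% Operation refinement is then containment of the completed relations:
% C refines A  iff  \overset{\bullet}{C} \subseteq \overset{\bullet}{A}.
```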
CSM-361 - A Logic for Schema-based Program Development
We show how a theory of specification refinement and program development can be constructed as a conservative extension of our existing logic for Z. The resulting system can be set up as a development method for Z, or as a generalisation of a refinement calculus (with a novel semantics). In addition to the technical development, we illustrate how the theory can be used in practice.
UTP, Circus, and Isabelle
We dedicate this paper with great respect and friendship to He Jifeng on the occasion of his 80th birthday. Our research group owes much to him. The authors have over 150 publications on unifying theories of programming (UTP), a research topic Jifeng created with Tony Hoare. Our objective is to recount the history of Circus (a combination of Z, CSP, Dijkstra’s guarded command language, and Morgan’s refinement calculus) and the development of Isabelle/UTP. Our paper is in two parts. (1) We first discuss the activities needed to model systems: we need to formalise data models and their behaviours. We survey our work on these two aspects in the context of Circus. (2) Secondly, we describe our practical implementation of UTP in Isabelle/HOL. Mechanising UTP theories is the basis of novel verification tools. We also discuss ongoing and future work related to (1) and (2). Many colleagues have contributed to these works, and we acknowledge their support.
Reasoning about correctness properties of a coordination programming language
Safety critical systems place additional requirements on the programming
language used to implement them, compared with traditional environments.
Examples of features that influence the suitability of a programming language
in such environments include complexity of definitions, expressive
power, bounded space and time, and verifiability. Hume is a novel programming
language with a design which targets the first three of these, in some
ways, contradictory features: fully expressive languages cannot guarantee
bounds on time and space, and low-level languages which can guarantee
space and time bounds are often complex and thus error-prone. In Hume,
this contradiction is resolved by a two-layered architecture: a high-level, fully
expressive language is built on top of a low-level coordination language
which can guarantee space and time bounds.
This thesis explores the verification of Hume programs. It targets safety
properties, which are the most important type of correctness properties,
of the low-level coordination language, which is believed to be the most
error-prone. Deductive verification in Lamport's temporal logic of actions
(TLA) is utilised, in turn validated through algorithmic experiments. This
deductive verification is mechanised by first embedding TLA in the Isabelle
theorem prover, and then embedding Hume on top of this. Verification of
temporal invariants is explored in this setting.
In Hume, program transformation is a key feature, often required to guarantee
space and time bounds of high-level constructs. Verification of transformations
is thus an integral part of this thesis. The work on both invariant
verification and, in particular, transformation verification has pinpointed
several weaknesses of the Hume language. Motivated and influenced by
this, an extension to Hume, called Hierarchical Hume, is developed and
embedded in TLA. Several case studies of transformation and invariant
verification of Hierarchical Hume in Isabelle are conducted, and an approach
towards a calculus for transformations is examined.
James Watt Scholarship; Engineering and Physical Sciences Research Council (EPSRC) Platform grant GR/SO177
Invariant discovery and refinement plans for formal modelling in Event-B
The continuous growth of complex systems makes the development of correct software
increasingly challenging. In order to address this challenge, formal methods offer rigorous
mathematical techniques to model and verify the correctness of systems. Refinement
is one of these techniques. By allowing a developer to incrementally introduce design
details, refinement provides a powerful mechanism for mastering the complexities that
arise when formally modelling systems. Here the focus is on a posit-and-prove style of
refinement, where a design is developed as a series of abstract models introduced via
refinement steps. Each refinement step generates proof obligations which must be discharged
in order to verify its correctness – typically requiring a user to understand the
relationship between modelling and reasoning.
This thesis focuses on techniques to aid refinement-based formal modelling, specifically,
when a user requires guidance in order to overcome a failed refinement step. An integrated
approach has been followed: combining the complementary strengths of bottom-up
theory formation, in which theories about domains are built from basic background
information; and top-down planning, in which meta-level reasoning is used to guide the
search for correct models.
From the theory formation perspective, we have developed a technique for the automatic
discovery of invariants. Refinement requires the definition of properties, called invariants,
which relate to the design. Formulating correct and meaningful invariants can be a tedious
and challenging task. A heuristic approach to the automatic discovery of invariants has
been developed building upon simulation, proof-failure analysis and automated theory
formation. This approach exploits the close interplay between modelling and reasoning
in order to provide systematic guidance in tailoring the search for invariants for a given
model.
From the planning perspective, we propose a new technique called refinement plans.
Refinement plans provide a basis for automatically generating modelling guidance when
a step fails but is close to a known pattern of refinement. This technique combines both
modelling and reasoning knowledge and, contrary to traditional pattern techniques, allows
the analysis of failure and partial matching. Moreover, when the guidance is only partially
instantiated and it is suitable to do so, refinement plans provide specialised knowledge to
further tailor the theory formation process in an attempt to fully instantiate the guidance.
We also report on a series of experiments undertaken in order to evaluate the approaches,
and on the implementation of both techniques in prototype tools. We believe
the techniques presented here allow the developer to focus on design decisions rather than
on analysing low-level proof failures.
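The interplay of simulation and candidate filtering described above can be sketched in a few lines of Python. This is a hypothetical toy, not the thesis's actual tool: a bounded counter is animated, simple interval predicates are proposed as candidates, and only those that hold in every observed reachable state survive as candidate invariants.

```python
# Toy sketch of simulation-based invariant filtering (illustrative only):
# animate a model, generate candidate predicates, keep those that hold in
# every reachable state observed.
from itertools import product

# A small Event-B-style machine: a bounded counter with two events.
INITIAL = {"n": 0}
BOUND = 5

def events(state):
    """Enabled events: increment below the bound, reset at the bound."""
    n = state["n"]
    if n < BOUND:
        yield {"n": n + 1}      # event 'inc'
    if n == BOUND:
        yield {"n": 0}          # event 'reset'

def simulate(init, steps=50):
    """Bounded breadth-first exploration of the reachable states."""
    seen, frontier = {tuple(init.items())}, [init]
    for _ in range(steps):
        nxt = []
        for s in frontier:
            for t in events(s):
                key = tuple(t.items())
                if key not in seen:
                    seen.add(key)
                    nxt.append(t)
        frontier = nxt
    return [dict(k) for k in seen]

def candidates():
    """Candidate invariants: simple interval predicates over n."""
    for lo, hi in product(range(-1, 2), range(BOUND, BOUND + 3)):
        yield (f"{lo} <= n <= {hi}",
               lambda s, lo=lo, hi=hi: lo <= s["n"] <= hi)

def discover(init):
    """Keep only candidates that hold in all simulated states."""
    states = simulate(init)
    return [name for name, pred in candidates()
            if all(pred(s) for s in states)]

print(discover(INITIAL))
```

A real tool would of course also attempt to prove the surviving candidates rather than trust a finite simulation; the filtering step merely prunes the search space.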
Specification and verification issues in a process language
PhD Thesis
While specification formalisms for reactive concurrent systems are now reasonably
well-understood theoretically, they have not yet entered common, widespread
design practice. This motivates the attempt made in this work to enhance the
applicability of an important and popular formal framework: the CSP language,
endowed with a failure-based denotational semantics and a logic for describing
failures of processes.
The identification of behaviour with a set of failures is supported by a convincing
intuitive reason: processes with different failures can be distinguished by easily
realizable experiments. But, most importantly, many interesting systems can be
described and studied in terms of their failures. The main technique employed
for this purpose is a logic in which process expressions are required to satisfy an
assertion with each failure of the behaviour they describe. The theory of complete
partial orders, with its elegant treatment of recursion and fixpoint-based verification,
can be applied to this framework. However, in spite of the advantages
illustrated, the practical applicability of standard failure semantics is impaired by
two weaknesses.
The first is its inability to describe many important systems, constructed by
connecting modules that can exchange values of an infinite set across ports invisible
to the environment. This must often be assumed for design and verification
purposes (e.g. for the many protocols relying upon sequence numbers to cope with
out-of-sequence received messages). Such a deficiency is due to the definition of the
hiding operator in standard failure semantics. This thesis puts forward a solution
based on an interesting technical result about infinite sets of sequences.
Another difficulty with standard failure semantics is its treatment of divergence,
the phenomenon in which some components of a system interact by performing
an infinite, uninterrupted sequence of externally invisible actions. Within failure
semantics, divergence cannot be abstracted from on the basis of the implicit fairness
assumption that, if there is a choice leading out of divergence, it will eventually
be made. This 'fair abstraction' is essential for the verification of many important
systems, including communication protocols. The solution proposed in this thesis is
an extended failure semantics which records refused traces, rather than just actions.
Not only is this approach compatible with fair abstraction, but it also permits, like
ordinary failure semantics, verification in a compositional calculus with fixpoint
induction. Rather interestingly, these results can be obtained outside traditional
fixpoint theory, which cannot be applied in this case. The theory developed is
based on the novel notion of 'trace-based' process functions. These can be shown to
possess a particular fixpoint that, unlike the least fixpoint of traditional treatments,
is compatible with fair abstraction. Moreover, they form a large class, sufficient to
give a compositional denotational semantics to a useful CSP-like process language.
Finally, a logic is proposed in which the properties of a process' extended failures
can be expressed and analyzed; the methods developed are applied to the
verification of two example communication protocols: a toy one and a large case
study inspired by a real transport protocol.
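For readers unfamiliar with failure semantics, a minimal sketch in standard CSP notation (ordinary failures, not the extended refused-trace semantics of the thesis): a failure pairs a finite trace with a set of events the process may refuse after performing that trace.

```latex
% Failures of two simple processes over alphabet \Sigma:
failures(\mathit{STOP}) \;=\; \{\, (\langle\rangle, X) \mid X \subseteq \Sigma \,\}
failures(a \to \mathit{STOP}) \;=\;
    \{\, (\langle\rangle, X) \mid a \notin X \,\}
    \;\cup\; \{\, (\langle a \rangle, X) \mid X \subseteq \Sigma \,\}
% Failures refinement: P \sqsubseteq_F Q  iff  failures(Q) \subseteq failures(P).
```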