202 research outputs found
Deriving real-time action systems with multiple time bands using algebraic reasoning
The verify-while-develop paradigm allows one to incrementally develop programs from their specifications using a series of calculations against the remaining proof obligations. This paper presents a derivation method for real-time systems with realistic constraints on their behaviour. We develop a high-level interval-based logic that provides flexibility in an implementation, yet allows algebraic reasoning over multiple granularities and sampling multiple sensors with delay. The semantics of an action system is given in terms of interval predicates and algebraic operators to unify the logics for an action system and its properties, which in turn simplifies the calculations and derivations
Reasoning algebraically about refinement on TSO architectures
The Total Store Order memory model is widely implemented by modern multicore architectures such as x86, where local buffers are used for optimisation, allowing limited forms of instruction reordering. The presence of buffers and hardware-controlled buffer flushes increases the level of non-determinism beyond the level specified by a program, complicating the already difficult task of concurrent programming. This paper presents a new notion of refinement for weak memory models, based on the observation that pending writes to a process' local variables may be treated as if the effect of the update has already occurred in shared memory. We develop an interval-based model with algebraic rules for various programming constructs. In this framework, several decomposition rules for our new notion of refinement are developed. We apply our approach to verify a spinlock algorithm from the literature
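The buffering behaviour described above can be illustrated with a toy simulation: each process writes into a local FIFO buffer, reads see its own newest pending write first (store forwarding), and the hardware may flush the oldest buffered write to shared memory at any time. This is a minimal sketch for intuition only; the class and method names are assumptions, not the paper's formal model.

```python
from collections import deque

class TSOProcess:
    """Toy model of one process under TSO with a local store buffer."""

    def __init__(self, shared):
        self.shared = shared      # shared memory: dict var -> value
        self.buffer = deque()     # FIFO of pending (var, value) writes

    def write(self, var, value):
        # Writes go into the local buffer, not directly to shared memory.
        self.buffer.append((var, value))

    def read(self, var):
        # Reads see the newest pending local write first (store forwarding).
        for v, val in reversed(self.buffer):
            if v == var:
                return val
        return self.shared.get(var, 0)

    def flush_one(self):
        # Hardware may flush the oldest buffered write at any time.
        if self.buffer:
            var, val = self.buffer.popleft()
            self.shared[var] = val

shared = {}
p = TSOProcess(shared)
p.write("x", 1)
assert p.read("x") == 1           # p observes its own pending write...
assert shared.get("x", 0) == 0    # ...before it reaches shared memory
p.flush_one()
assert shared["x"] == 1           # the write is now globally visible
```

The gap between the two assertions, where p's write is visible to p but to no other process, is exactly the source of extra non-determinism that the paper's refinement notion accounts for.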
Defining correctness conditions for concurrent objects in multicore architectures
Correctness of concurrent objects is defined in terms of conditions that determine allowable relationships between histories of a concurrent object and those of the corresponding sequential object. Numerous correctness conditions have been proposed over the years, and more have been proposed recently as the algorithms implementing concurrent objects have been adapted to cope with multicore processors with relaxed memory architectures. We present a formal framework for defining correctness conditions for multicore architectures, covering both standard conditions for totally ordered memory and newer conditions for relaxed memory, which allows them to be expressed in a uniform manner, simplifying comparison. Our framework distinguishes between order and commitment properties, which in turn enables a hierarchy of correctness conditions to be established. We consider the Total Store Order (TSO) memory model in detail, formalise known conditions for TSO using our framework, and develop sequentially consistent variations of these. We present a work-stealing deque for TSO memory that is not linearizable, but is correct with respect to these new conditions. Using our framework, we identify a new non-blocking compositional condition, fence consistency, which lies between known conditions for TSO, and aims to capture the intention of a programmer-specified fence
Concurrent Program Design in the Extended Theory of Owicki and Gries
Feijen and van Gasteren have shown how to use the theory of Owicki and Gries to design concurrent programs; however, the lack of a formal theory of progress has meant that these designs are driven entirely by safety requirements. Proofs of progress requirements are made post-hoc to the derivation and are operational in nature. In this paper, we describe the use of an extended theory of Owicki and Gries in concurrent program design. The extended theory incorporates a logic of progress, which provides the opportunity to develop a program in a manner that gives proper consideration to progress requirements. Dekker's algorithm for two process mutual exclusion is chosen to illustrate the use of the extended theory
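Dekker's algorithm, the running example above, can be sketched with Python threads. This sketch assumes a sequentially consistent memory model; CPython's global interpreter lock makes that a workable approximation for illustration, though it is not how the algorithm would behave on real weak-memory hardware.

```python
import sys
import threading

# Dekker's algorithm for two-process mutual exclusion (illustrative sketch;
# assumes sequentially consistent memory, approximated here by the GIL).
sys.setswitchinterval(1e-4)   # switch threads often so busy-waits resolve

want = [False, False]   # want[i]: process i intends to enter
turn = 0                # which process yields on contention
counter = 0             # shared state protected by the critical section
N = 500

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        want[i] = True
        while want[other]:            # contention: the other wants in too
            if turn == other:
                want[i] = False       # back off until it is our turn
                while turn == other:
                    pass
                want[i] = True
        counter += 1                  # critical section
        turn = other                  # exit protocol: yield priority
        want[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 2 * N   # mutual exclusion held: no increments were lost
```

The safety requirement (mutual exclusion) is what the final assertion checks; the progress requirement, that a process wanting to enter eventually does, is exactly the kind of property the extended theory's logic of progress addresses, and is invisible to this test.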
Using coarse-grained abstractions to verify linearizability on TSO architectures
Most approaches to verifying linearizability assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study correctness on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in x86 multicore architectures. Our central result is a proof method that simplifies proofs of linearizability on TSO. This is necessary since the use of local buffers in TSO adds considerably to the verification overhead on top of the already subtle linearizability proofs. The proof method involves constructing a coarse-grained abstraction as an intermediate layer between an abstract description and the concurrent algorithm. This allows the linearizability proof to be split into two smaller components, where the effect of the local buffers in TSO is dealt with at a higher level of abstraction than it would have been otherwise
Proving opacity of a pessimistic STM
Transactional Memory (TM) is a high-level programming abstraction for concurrency control that provides programmers with the illusion of atomically executing blocks of code, called transactions. TMs come in two categories, optimistic and pessimistic, where in the latter transactions never abort. While this simplifies the programming model, high-performing pessimistic TMs can be complex. In this paper, we present the first formal verification of a pessimistic software TM algorithm, namely, an algorithm proposed by Matveev and Shavit. The correctness criterion used is opacity, formalising the transactional atomicity guarantees. We prove that this pessimistic TM is a refinement of an intermediate opaque I/O automaton, known as TMS2. To this end, we develop a rely-guarantee approach for reducing the complexity of the proof. Proofs are mechanised in the interactive prover Isabelle
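The defining feature of a pessimistic TM, that transactions never abort, can be shown with a deliberately naive sketch in which a single global lock serialises all transactions. Real pessimistic TMs such as the Matveev-Shavit algorithm verified in the paper permit far more concurrency (e.g. parallel read-only transactions); this toy, whose names are all hypothetical, only illustrates the abort-free interface.

```python
import threading

class NaivePessimisticTM:
    """Toy pessimistic TM: one global lock, so transactions never abort."""

    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}

    def atomic(self, txn):
        # The lock is held for the whole transaction, so the code in txn
        # runs atomically and can never be forced to abort and retry.
        with self._lock:
            return txn(self._store.get, self._store.__setitem__)

tm = NaivePessimisticTM()

def transfer(read, write):
    # A transaction that moves 10 units from "a" to "b" atomically.
    a, b = read("a", 0), read("b", 0)
    write("a", a - 10)
    write("b", b + 10)

tm.atomic(lambda read, write: write("a", 100))
tm.atomic(transfer)
assert tm.atomic(lambda read, write: (read("a"), read("b"))) == (90, 10)
```

Opacity demands more than this atomicity: even transactions that are still running must observe a consistent snapshot, which is what makes verifying genuinely concurrent pessimistic TMs non-trivial.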
Degree of Milling Effect on Cold Water Rice Quality
The aim of this study was to examine the effects of degree of milling on various rice parameters such as proximate composition, and cooking properties using mathematical model. The experiments were performed in the laboratory of Food Research Division, Nepal Agricultural Research Council. The three different medium type rice varieties of Nepal (Lumle-2, Chhomrong and Machhapuchre-3) were exposed to five different degrees of milling (0%, 6%, 8%, 10% and 12%). The degree of milling (DM) level significantly (P≤0.05) affected the milling recovery; head rice yield, nutrient content as well as cooking properties of the rice. Increase in DM resulted in further reduction of protein content, fat content, minerals, milled rice and head rice yield after bran layer was further removed. A positive correlation between DM used in present model, amylose content, kernel elongation and gruel solid loss was observed, however, with an increase in DM; amylose content, kernel elongation and gruel solid loss were found to be increased. Adopting 6 to 8% DM for commercial milling of rice might help to prevent quantitative, qualitative and nutritional loss along with retention of good cooking characteristics
A discrete geometric model of concurrent program execution
A trace of the execution of a concurrent object-oriented program can be displayed in two dimensions as a diagram of a non-metric finite geometry. The actions of a program are represented by points, its objects and threads by vertical lines, its transactions by horizontal lines, its communications and resource sharing by sloping arrows, and its partial traces by rectangular figures. We prove informally that the geometry satisfies the laws of Concurrent Kleene Algebra (CKA); these describe and justify the interleaved implementation of multithreaded programs on computer systems with fewer concurrent processors. More familiar forms of semantics (e.g., verification-oriented and operational) can be derived from CKA. Programs are represented as sets of all their possible traces of execution, and non-determinism is introduced as union of these sets. The geometry is extended to multiple levels of abstraction and granularity; a method call at a higher level can be modelled by a specification of the method body, which is implemented at a lower level. The final section describes how the axioms and definitions of the geometry have been encoded in the interactive proof tool Isabelle, and reports on progress towards automatic checking of the proofs in the paper
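The trace-set semantics mentioned above can be made concrete in a few lines: programs are sets of traces, sequential composition is pointwise concatenation, parallel composition is the set of all interleavings, and non-determinism is set union. The sketch below checks a finite instance of the CKA exchange law in this interleaving model; it is an illustration of the algebra, not the paper's geometric construction.

```python
from itertools import combinations

def seq(P, Q):
    # Sequential composition: every trace of P followed by every trace of Q.
    return {p + q for p in P for q in Q}

def interleave(u, v):
    # All interleavings of traces u and v, preserving the order within each.
    n = len(u) + len(v)
    out = set()
    for pos in combinations(range(n), len(u)):
        posset, ui, vi = set(pos), iter(u), iter(v)
        out.add("".join(next(ui) if i in posset else next(vi)
                        for i in range(n)))
    return out

def par(P, Q):
    # Parallel composition: interleavings of all trace pairs.
    return {t for p in P for q in Q for t in interleave(p, q)}

# A finite instance of the CKA exchange law:
#   (p || r) ; (q || s)  is included in  (p ; q) || (r ; s)
p, q, r, s = {"a"}, {"c"}, {"b"}, {"d"}
lhs = seq(par(p, r), par(q, s))
rhs = par(seq(p, q), seq(r, s))
assert lhs <= rhs          # the inclusion holds in the interleaving model
assert not rhs <= lhs      # and here it is strict (e.g. "acbd" is lost)
```

The exchange law is the characteristic CKA axiom: it licenses implementing the more parallel left-hand side by the more sequential right-hand side, which is what justifies interleaved execution on fewer processors.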
An Interval Logic for Stream-Processing Functions: A Convolution-Based Construction
We develop an interval-based logic for reasoning about systems consisting of components specified using stream-processing functions, which map streams of inputs to streams of outputs. The construction is algebraic and builds on a theory of convolution from formal power series. Using these algebraic foundations, we uniformly (and systematically) define operators for time- and space-based (de)composition. We also show that Banach's fixed point theory can be incorporated into the framework, building on an existing theory of partially ordered monoids, which enables a feedback operator to be defined algebraically. This research is supported by EPSRC Grant EP/N016661/1
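The convolution the construction builds on is, in its classic form, the Cauchy product of formal power series: the nth output coefficient sums over all ways of splitting n between the two operands. The paper lifts this splitting pattern to decompositions of intervals and streams; the sketch below shows only the standard power-series instance.

```python
# Cauchy product of formal power series given by coefficient lists:
#   (f * g)_n = sum over k <= n of f_k * g_(n-k)
def convolve(f, g):
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k]
                for k in range(i + 1)
                if k < len(f) and i - k < len(g))
            for i in range(n)]

# (1 + x)^2 = 1 + 2x + x^2
assert convolve([1, 1], [1, 1]) == [1, 2, 1]
# truncated 1/(1-x) squared begins 1 + 2x + 3x^2 + ...
assert convolve([1, 1, 1], [1, 1, 1])[:3] == [1, 2, 3]
```

Replacing "split n = k + (n-k)" with "split an interval into adjacent subintervals" gives the interval-based operators the abstract describes; the algebraic laws of convolution carry over unchanged.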
