Fair Testing
In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In fact, we present two distinct characterisations, which give rise to the same relation: an operational one based on a De Nicola-Hennessy-like testing modality which we call should-testing, and a denotational one based on a refined notion of failures. One of the distinguishing characteristics of the should-testing pre-congruence is that it abstracts from divergences in the same way as Milner's observation congruence, and as a consequence is strictly coarser than observation congruence. In other words, should-testing has a built-in fairness assumption. This is in itself a long-sought-after property; it is in notable contrast to the well-known must-testing of De Nicola and Hennessy (denotationally characterised by a combination of failures and divergences), which treats divergence as catastrophic and hence is incompatible with observation congruence. Due to these characteristics, should-testing supports modular reasoning and allows the use of the proof techniques of observation congruence, but also supports additional laws and techniques. Moreover, we show decidability of should-testing (on the basis of the denotational characterisation). Finally, we demonstrate its advantages by applying it to a number of examples, including a scheduling problem, a version of the Alternating Bit protocol, and a fair lossy communication channel.
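The operational intuition behind should-testing can be sketched on a finite labelled transition system: a process should-passes when, from every reachable state, success remains reachable, so that under a fairness assumption a divergent loop with an escape is harmless. A minimal Python sketch of this reachability check (the transition relation, state names, and `should_pass` helper are illustrative encodings, not taken from the paper):

```python
from collections import deque

def reachable(trans, start):
    """All states reachable from `start` via the transition relation."""
    seen, work = {start}, deque([start])
    while work:
        s = work.popleft()
        for t in trans.get(s, ()):
            if t not in seen:
                seen.add(t)
                work.append(t)
    return seen

def should_pass(trans, init, success):
    """Fair (should-) testing on a finite LTS: from every reachable
    state, a success state must remain reachable."""
    return all(reachable(trans, s) & success for s in reachable(trans, init))

# A process that may loop (diverge) in state 'a' but can always escape
# to the success state 'ok': should-testing passes, because fairness
# guarantees the loop is eventually left.
print(should_pass({'a': {'a', 'ok'}}, 'a', {'ok'}))   # True

# With the escape removed, divergence is inescapable and should fails.
print(should_pass({'a': {'a'}}, 'a', {'ok'}))         # False
```

The same example fails must-testing in both variants, since must-testing treats the mere possibility of divergence as catastrophic.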
Integrating model checking with HiP-HOPS in model-based safety analysis
The ability to perform an effective and robust safety analysis on the design of modern safety-critical systems is crucial. Model-based safety analysis (MBSA) has been introduced in recent years to support the assessment of complex system design by focusing on the system model as the central artefact, and by automating the synthesis and analysis of failure-extended models. Model checking and failure logic synthesis and analysis (FLSA) are two prominent MBSA paradigms. Extensive research has placed emphasis on the development of these techniques, but discussion on their integration remains limited. In this paper, we propose a technique in which model checking and Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS) – an advanced FLSA technique – can be applied synergistically, to the benefit of the MBSA process. The application of the technique is illustrated through an example of a brake-by-wire system.
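In FLSA techniques such as HiP-HOPS, components are annotated with failure expressions, and synthesis reduces an output deviation to its minimal cut sets: the minimal combinations of basic failure events that cause it. A toy Python sketch of that cut-set computation (the brake-by-wire fragment and all event names are invented for illustration; this is not the HiP-HOPS tool or its annotation language):

```python
from itertools import product

def minimize(cutsets):
    """Keep only minimal cut sets (drop any proper superset)."""
    return {c for c in cutsets if not any(o < c for o in cutsets)}

def basic(event):
    """A basic failure event, as a one-element cut-set family."""
    return {frozenset([event])}

def OR(*families):
    """Either cause suffices: union of the cut-set families."""
    return minimize(set().union(*families))

def AND(*families):
    """All causes needed: one cut set from each operand, merged."""
    return minimize({frozenset().union(*combo) for combo in product(*families)})

# Hypothetical fragment: braking is omitted if the bus fails, or if
# both redundant wheel controllers fail.
omission_braking = OR(basic('BusFail'),
                      AND(basic('Ctrl1Fail'), basic('Ctrl2Fail')))
print(sorted(sorted(c) for c in omission_braking))
# [['BusFail'], ['Ctrl1Fail', 'Ctrl2Fail']]
```

The `minimize` step is what distinguishes minimal cut sets from a raw expansion of the failure expression.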
Keeping Fairness Alive: Design and formal verification of optimistic fair exchange protocols
Fokkink, W.J. [Promotor]; Pol, J.C. van de [Promotor]
Foundations of the B method
B is a method for specifying, designing and coding software systems. It is based on Zermelo-Fraenkel set theory with the axiom of choice, the concept of generalized substitution, and structuring mechanisms (machine, refinement, implementation). The concept of refinement is the key notion for developing B models of (software) systems in an incremental way. B models are accompanied by mathematical proofs that justify them. Proofs of B models convince the user (designer or specifier) that the (software) system is effectively correct. We provide a survey of the underlying logic of the B method and the semantic concepts related to it; we detail the B development process, which is partially supported by the mechanical engine of the prover.
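B's generalized substitutions have a weakest-precondition reading: the simple substitution [x := E] maps a postcondition P to P with E substituted for x. A minimal Python sketch of this predicate-transformer view (encoding predicates as functions on a state dictionary is my own illustration, not part of the B notation):

```python
def assign(var, expr):
    """The generalized substitution [var := expr]: a predicate
    transformer mapping a postcondition on the after-state to its
    weakest precondition on the before-state."""
    def wp(post):
        # Evaluate the postcondition in the state updated by the assignment.
        return lambda state: post({**state, var: expr(state)})
    return wp

# [x := x + 1](x < 10)  is equivalent to  (x + 1 < 10), i.e. (x < 9).
pre = assign('x', lambda s: s['x'] + 1)(lambda s: s['x'] < 10)
print(pre({'x': 8}))   # True:  8 + 1 < 10
print(pre({'x': 9}))   # False: 9 + 1 is not < 10
```

Refinement proofs in B discharge obligations of exactly this shape: the precondition of an abstract operation must imply the weakest precondition of its refinement.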
Algorithmic Mechanism Construction bridging Secure Multiparty Computation and Intelligent Reasoning
This work presents the construction of an intelligent algorithmic mechanism based on a multidimensional view of intelligent reasoning, threat analytics, cryptographic solutions and secure multiparty computation. It is, in essence, an attempt at cross-fertilizing distributed AI, algorithmic game theory and cryptography. The mechanism evaluates innate and adaptive system immunity in terms of collective, machine, collaborative, business and security intelligence. It also presents a complexity analysis of the mechanism and experimental results on three test cases: (a) intrusion detection, (b) adaptively secure broadcast and (c) health security.
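A standard building block of the secure multiparty computation the abstract draws on is additive secret sharing: a value is split into random shares so that any proper subset reveals nothing, while all shares together reconstruct it, and parties can add shared values without seeing them. A minimal sketch (the modulus and helper names are illustrative choices, not from the paper):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(secret, n):
    """Split `secret` into n additive shares: any n-1 shares are
    uniformly random, and all n sum to the secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

print(reconstruct(share(42, 3)))                     # 42

# Additive homomorphism: each party locally adds its two shares,
# and the reconstructed sum is the sum of the secrets.
a, b = share(10, 3), share(32, 3)
print(reconstruct([x + y for x, y in zip(a, b)]))    # 42
```

This local-addition property is what lets multiparty protocols compute on inputs that no single party ever learns.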
IST Austria Thesis
Motivated by the analysis of highly dynamic message-passing systems (unbounded thread creation, mobility, etc.), we present a framework for the analysis of depth-bounded systems. Depth-bounded systems are one of the most expressive known fragments of the π-calculus for which interesting verification problems are still decidable. Even though they are infinite-state, depth-bounded systems are well-structured and can therefore be analyzed algorithmically. We give an interpretation of depth-bounded systems as graph-rewriting systems. This adds flexibility and makes it easier to apply depth-bounded systems to other kinds of systems, such as shared-memory concurrency.
First, we develop an adequate domain of limits for depth-bounded systems, a prerequisite for the effective representation of downward-closed sets. Downward-closed sets are needed by forward saturation-based algorithms to represent potentially infinite sets of states. Then, we present an abstract interpretation framework to compute the covering set of well-structured transition systems. Because, in general, the covering set is not computable, our abstraction over-approximates the actual covering set. Our abstraction captures the essence of acceleration-based algorithms while giving up enough precision to ensure convergence. We have implemented the analysis in the PICASSO tool and show that it is accurate in practice. Finally, we build some further analyses, such as termination, using the covering set as a starting point.
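The acceleration-based algorithms the abstraction generalises can be illustrated in the simplest well-structured setting, Petri nets, where the classic Karp-Miller construction computes a covering set by replacing a strictly growing coordinate with the limit ω. A compact Python sketch over marking vectors (a deliberate simplification, not the thesis's graph-based limit domain or the PICASSO implementation):

```python
OMEGA = float('inf')  # the limit ω: an unbounded, "accelerated" coordinate

def cover_set(transitions, init):
    """Karp-Miller-style covering set for a Petri net.
    `transitions` is a list of (guard, delta) pairs over markings."""
    tree = []

    def explore(m, ancestors):
        tree.append(m)
        for guard, delta in transitions:
            if all(m[i] >= guard[i] for i in range(len(m))):
                succ = tuple(m[i] + delta[i] for i in range(len(m)))
                # Acceleration: if an ancestor is strictly dominated,
                # the gain is repeatable forever, so jump to ω.
                for a in ancestors + [m]:
                    if a != succ and all(a[i] <= succ[i] for i in range(len(a))):
                        succ = tuple(OMEGA if succ[i] > a[i] else succ[i]
                                     for i in range(len(succ)))
                if succ not in tree:
                    explore(succ, ancestors + [m])

    explore(init, [])
    return set(tree)

# One transition that consumes nothing and produces a token in place 1:
# place 1 is unbounded, and the covering set records this with ω.
print(sorted(cover_set([((0, 0), (0, 1))], (1, 0))))
# [(1, 0), (1, inf)]
```

The downward closures of these ω-markings are exactly the downward-closed sets the abstract refers to; the thesis's contribution is making the same idea work for the richer limits needed by depth-bounded graph-rewriting systems.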
Decentralised computer systems
The architecture of the Web was designed to enable decentralised exchange of information. Early architects envisioned an egalitarian yet organic society thriving in cyberspace. The reality of the Web today, unfortunately, does not bear out these visions: information networks have repeatedly shown a tendency towards consolidation and centralisation with the current Web split between a handful of large corporations.
The advent of Bitcoin and successor blockchain networks re-ignited interest in developing alternatives to the centralised Web and paving a way back to the earlier architectural visions for the Web. This has led to immense hype around these technologies with the cryptocurrency market valued at several hundred billions of dollars at the time of writing. With great hype, apparently, come great scams. I start off by analysing the use of Bitcoin as an enabler for crime and then present both technical solutions as well as policy recommendations to mitigate the harm these crimes cause.
These policy recommendations then lead us on to look more closely at cryptocurrency's tamer cousin: permissioned blockchains. These systems, while less revolutionary in their premise, nevertheless aim to provide sweeping improvements in the efficiency and transparency of existing enterprise systems. To see whether they work in practice, I present the results of my work in delivering a production permissioned blockchain system to real users. This involves comparing several permissioned blockchain systems, exploring their deficiencies and developing solutions for the most egregious of those.
Lastly, I do a deep dive into one of the most persistent technical issues with permissioned blockchains, and decentralised networks in general: the lack of scalability in their consensus mechanisms. I present two novel consensus algorithms that aim to improve upon the state of the art in several ways. The first is designed to enable existing permissioned blockchain networks to scale to thousands of nodes. The second presents an entirely new way of building decentralised consensus systems, using a trie-based data structure at its core as opposed to the usual linear ledgers used in current systems.
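The "usual linear ledgers" the last paragraph contrasts against can be summarised as hash-linked blocks: each entry commits to its predecessor, so any mutation invalidates every later link. A minimal tamper-evidence sketch in Python (generic, and not any of the systems or algorithms described in the thesis):

```python
import hashlib
import json

def block(payload, prev_hash):
    """A minimal block: its hash commits to both the payload and the
    predecessor's hash, which is what links the ledger."""
    body = json.dumps({'payload': payload, 'prev': prev_hash}, sort_keys=True)
    return {'payload': payload, 'prev': prev_hash,
            'hash': hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Re-derive every link; a mutation anywhere breaks verification."""
    prev = '0' * 64  # genesis sentinel
    for b in chain:
        if b['prev'] != prev or b != block(b['payload'], b['prev']):
            return False
        prev = b['hash']
    return True

genesis = block('genesis', '0' * 64)
chain = [genesis, block('tx1', genesis['hash'])]
print(verify(chain))          # True
chain[0]['payload'] = 'evil'  # tamper with history...
print(verify(chain))          # False: the recomputed hash no longer matches
```

A trie-based design replaces this strictly linear dependency chain with a tree of commitments, which is what opens the door to the scalability improvements the thesis pursues.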