Integration of analysis techniques in security and fault-tolerance
This thesis studies the integration of formal methodologies in security protocol analysis and fault-tolerance analysis. The research develops in two directions: interdisciplinary and intra-disciplinary. In the former, we look for a beneficial interaction between analysis strategies for security protocols and fault tolerance; in the latter, we search for connections among different analysis approaches within the security area. In the following we summarize the main results of the research.
Synthesizing realistic verification tasks
This thesis by publications focuses on realistic benchmarks for software verification approaches.
Such benchmarks are crucial for evaluating verification tools: they help to assess tool capabilities and inform potential users.
This work provides an overview of the current landscape of verification tool evaluation and compares manual and automatic approaches to benchmark generation.
The main contribution of this thesis is a new framework to synthesize realistic verification tasks.
This framework makes it possible to generate verification tasks that target sequential or parallel programs.
Starting from a realistic formal specification,
a Büchi automaton is synthesized while ensuring realistic hardness characteristics such as the number of computation steps after which errors occur.
The resulting automaton is then transformed into a Mealy machine to produce either a sequential program in C or Java or a parallel composition of modal transition systems. A refinement of the latter is encoded in Promela or as a Petri net.
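The step from Mealy machine to sequential program can be illustrated on a toy scale. The following sketch is in Python rather than the generated C or Java, and the three-state machine is invented for illustration; it shows how a transition table becomes a sequential program whose error state is the verification target:

```python
# Hypothetical sketch (not the thesis framework): a tiny Mealy machine
# encoded as a sequential program, in the style of generated tasks.
# States, inputs, and outputs are invented.

MEALY = {  # (state, input) -> (next_state, output)
    ("q0", "a"): ("q1", "x"),
    ("q0", "b"): ("q0", "y"),
    ("q1", "a"): ("q2", "x"),
    ("q1", "b"): ("q0", "y"),
    ("q2", "a"): ("error", "z"),  # error reachable after 3 steps on "aaa"
    ("q2", "b"): ("q0", "y"),
}

def run(inputs):
    """Simulate the sequential program; signal when the error state is hit."""
    state, outputs = "q0", []
    for sym in inputs:
        state, out = MEALY[(state, sym)]
        outputs.append(out)
        if state == "error":
            raise AssertionError("error state reached")  # verification target
    return outputs
```

A generated task would embed such a machine in plain control flow of the target language; the lookup table merely keeps the sketch short, and the number of steps before the error controls the hardness characteristic mentioned above.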
A task that targets such a parallel system requires checking whether a given interruptible temporal property is satisfied or whether two parallel systems are weakly bisimilar.
Temporal properties may include branching-time and linear-time formulas.
For the latter, it can be ensured that every parallel component matters during verification.
This thesis contains additional contributions that build on top of attached publications.
These are (i) a generalization of interruptibility that covers branching-time properties, (ii) an improved generation of parallel contexts, and (iii) a definition of alphabet extension on a semantic level.
Alphabet extensions are a key ingredient in ensuring the hardness of generated tasks that target parallel systems.
Benchmarks that were synthesized using the presented framework have been employed in the international Rigorous Examination of Reactive Systems (RERS) Challenge during the last five years.
Several international teams attempted to solve the corresponding verification tasks and used ten different tools to verify the newly added parallel programs.
Apart from the evaluation of these tools, this endeavor motivated participants of RERS to conceive new formal techniques to verify parallel systems.
The results of this thesis thus help to improve the state of the art in software verification.
Reliable massively parallel symbolic computing : fault tolerance for a distributed Haskell
As the number of cores in manycore systems grows exponentially, the number of failures is
also predicted to grow exponentially. Hence massively parallel computations must be able to
tolerate faults. Moreover, new approaches to language design and system architecture are needed
to address the resilience of massively parallel heterogeneous architectures.
Symbolic computation has underpinned key advances in Mathematics and Computer Science,
for example in number theory, cryptography, and coding theory. Computer algebra software
systems facilitate symbolic mathematics. Developing these at scale has its own distinctive
set of challenges, as symbolic algorithms tend to employ complex irregular data and control
structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel
High Performance Computing platforms. A key element of SymGridParII is a domain specific
language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for
scalable distributed-memory parallelism, and employs work stealing to load balance dynamically
generated irregular task sizes.
To investigate providing scalable fault tolerant symbolic computation we design, implement
and evaluate a reliable version of HdpH, HdpH-RS. Its reliable scheduler detects and handles
faults, using task replication as a key recovery strategy. The scheduler supports load balancing
with a fault tolerant work stealing protocol. The reliable scheduler is invoked with two fault
tolerance primitives for implicit and explicit work placement, and 10 fault tolerant parallel
skeletons that encapsulate common parallel programming patterns. The user is oblivious to
many failures; they are handled transparently by the scheduler.
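The idea behind supervision with task replication can be sketched in miniature. The names and API below are invented for illustration and are not HdpH-RS's primitives; a supervisor tracks an empty future and replicates the task onto a live node when the original node fails:

```python
# Illustrative sketch of supervised futures with task replication
# (invented names; not the HdpH-RS API).

class Future:
    def __init__(self):
        self.value = None          # "empty" until a task result fills it

    @property
    def full(self):
        return self.value is not None

def supervise(task, nodes, failed):
    """Run `task` on the first live node; on failure, replicate elsewhere."""
    fut = Future()
    for node in nodes:
        if node in failed:         # failure detected: replicate on next node
            continue
        fut.value = task()         # task completes on a live node
        break
    return fut

# Node "n1" fails, so the task is replicated onto "n2" and the future fills.
fut = supervise(lambda: 42, nodes=["n1", "n2", "n3"], failed={"n1"})
```

The real scheduler must of course detect failures asynchronously and avoid duplicate results from racing replicas; the sketch only captures why an empty supervised future still becomes full despite a failure.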
An operational semantics describes small-step reductions on states. A simple abstract machine
for scheduling transitions and task evaluation is presented. It defines the semantics of
supervised futures, and the transition rules for recovering tasks in the presence of failure. The
transition rules are demonstrated with a fault-free execution, and three executions that recover
from faults.
The fault tolerant work stealing protocol has been abstracted into a Promela model. The SPIN
model checker is used to exhaustively search the state space of this model to
validate a key resiliency property of the protocol. It asserts that an initially empty supervised
future on the supervisor node will eventually be full in the presence of all possible combinations
of failures.
The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling
achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when
executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision
overheads remain consistently low when scaling up to 1400 cores. Low recovery overheads are observed in
the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey
mechanism has been developed for stress testing resiliency with random failure combinations.
All unit tests pass in the presence of random failure, terminating with the expected results.
Model checking quantum protocols
This thesis describes model checking techniques for protocols arising in quantum information
theory and quantum cryptography. We discuss the theory and implementation of a practical
model checker, QMC, for quantum protocols. In our framework, we assume that the quantum
operations performed in a protocol are restricted to those within the stabilizer formalism; while
this particular set of operations is not universal for quantum computation, it allows us to develop
models of several useful protocols as well as of systems involving both classical and quantum
information processing. We detail the syntax, semantics and type system of QMC’s modelling
language, the logic QCTL which is used for verification, and the verification algorithms that have
been implemented in the tool. We demonstrate our techniques with applications to a number of
case studies.
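To give a flavour of the stabilizer formalism the tool restricts to, the following sketch updates stabilizer generators under Clifford gates by conjugation, preparing a Bell state from |00⟩. Phase signs are deliberately omitted, so this is not a full stabilizer simulator and not QMC's implementation; it is a minimal illustration of the update rules:

```python
# Minimal stabilizer-update sketch (phase signs ignored; illustrative only).
# A stabilizer generator is a tuple of Pauli letters, one per qubit.

def mul(a, b):
    """Product of two Pauli letters, ignoring the global phase."""
    if a == "I":
        return b
    if b == "I":
        return a
    if a == b:
        return "I"
    return ({"X", "Y", "Z"} - {a, b}).pop()   # e.g. X * Z -> Y (up to phase)

def apply_h(p, q):
    """Hadamard on qubit q swaps X and Z."""
    swap = {"X": "Z", "Z": "X", "Y": "Y", "I": "I"}
    p = list(p)
    p[q] = swap[p[q]]
    return tuple(p)

def apply_cnot(p, c, t):
    """CNOT propagates X from control to target and Z from target to control."""
    p = list(p)
    if p[c] in ("X", "Y"):
        p[t] = mul(p[t], "X")
    if p[t] in ("Z", "Y"):
        p[c] = mul(p[c], "Z")
    return tuple(p)

# |00> is stabilized by {Z I, I Z}; apply H on qubit 0, then CNOT(0, 1).
gens = [("Z", "I"), ("I", "Z")]
gens = [apply_h(p, 0) for p in gens]
gens = [apply_cnot(p, 0, 1) for p in gens]
# gens is now [("X", "X"), ("Z", "Z")], the Bell-state stabilizers
```

Because stabilizer states admit such a compact generator representation, verification within this formalism stays tractable even though it is not universal for quantum computation.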
Formal verification techniques for model transformations: A tridimensional classification
In Model Driven Engineering (MDE), models are first-class citizens, and model transformation is MDE's "heart and soul". Since model transformations are executed for a family of (conforming) models, their validity becomes a crucial issue. This paper proposes to explore the question of the formal verification of model transformation properties through a tridimensional approach: the transformation involved, the properties of interest addressed, and the formal verification techniques used to establish the properties. This work is intended for a double audience. For newcomers, it provides a tutorial introduction to the field of formal verification of model transformations. For readers more familiar with formal methods and model transformations, it proposes a literature review (although not systematic) of the contributions of the field. Overall, this work makes it possible to better understand the evolution, trends and current practice in the domain of model transformation verification. This work opens an interesting research line for building an engineering of model transformation verification guided by the notion of model transformation intent.
Formal analysis of confidentiality conditions related to data leakage
The financial risk, social repercussions and legal ramifications resulting from data leakage are of great concern. Some experts believe that poor system designs are to blame. The goal of this thesis is to use applied formal methods to verify that data leakage related confidentiality properties of system designs are satisfied. This thesis presents a practically applicable approach for using Banks's confidentiality framework (BCF), instantiated using the Circus notation.
The thesis proposes a tool-chain for mechanizing the application of the framework and includes a custom tool and the Isabelle theorem prover that coordinate to verify a given system model. The practical applicability of the mechanization was evaluated by analysing a number of hand-crafted systems having literature related confidentiality requirements.
Without any reliable tool for using BCF or any Circus tool that can be extended for the same purpose, it was necessary to build a custom tool. Further, a lack of literature related descriptive case studies on confidentiality in systems compelled us to use hand-written system specifications with literature related confidentiality requirements.
The results of this study show that the tool-chain proposed in this thesis is practically applicable in terms of time required. Further, the efficiency of the proposed tool-chain has been shown by comparing the time taken to analyse a system using both the mechanised and the manual approach.
Using formal methods to support testing
Formal methods and testing are two important approaches that assist in the development of high quality software. While traditionally these approaches have been seen as rivals, in recent
years a new consensus has developed in which they are seen as complementary. This article reviews the state of the art regarding ways in which the presence of a formal specification can be used to assist testing.
Model Checking and Model-Based Testing : Improving Their Feasibility by Lazy Techniques, Parallelization, and Other Optimizations
This thesis focuses on the lightweight formal method of model-based testing for checking safety properties, and derives a new and more feasible approach.
For liveness properties, dynamic testing is impossible, so feasibility is increased by focusing on an important class of properties, livelock freedom, and deriving a more feasible model checking algorithm for it.
All mentioned improvements are substantiated by experiments.
SAVCBS 2003: Specification and Verification of Component-Based Systems
These are the proceedings of the SAVCBS 2003 workshop. The workshop was held at ESEC/FSE 2003 in Helsinki, Finland, in September 2003.