Resource allocation and scheduling of multiple composite web services in cloud computing using cooperative coevolution genetic algorithm
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud, where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points of time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
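The cooperative-coevolution idea behind a CCGA can be illustrated with a toy sketch. Everything below is invented for illustration (the task durations, cost rates, precedence constraints, deadline, and penalty scheme are not the paper's benchmark or encoding): one subpopulation evolves the resource assignment, the other the task order, and each is scored against the other's best representative.

```python
import random

random.seed(0)

# Toy instance (invented, not the paper's benchmark): 6 tasks, 3 resources.
DURATION = [4, 2, 5, 3, 6, 2]            # execution time of each task
COST_RATE = [1.0, 2.5, 4.0]              # cost per time unit of each resource
PRED = {2: [0], 4: [1], 5: [3]}          # task -> tasks that must finish first
DEADLINE = 12

def evaluate(assign, order):
    """Cost of running tasks in `order` on resources `assign`, with a large
    penalty for violating precedence constraints or the deadline."""
    pos = {t: i for i, t in enumerate(order)}
    if any(pos[p] > pos[t] for t, ps in PRED.items() for p in ps):
        return 10000.0                   # order violates a precedence edge
    free = [0.0] * len(COST_RATE)        # earliest free time per resource
    finish = [0.0] * len(DURATION)
    cost = 0.0
    for t in order:
        r = assign[t]
        start = max([free[r]] + [finish[p] for p in PRED.get(t, [])])
        finish[t] = start + DURATION[t]
        free[r] = finish[t]
        cost += DURATION[t] * COST_RATE[r]
    return cost + (1000.0 if max(finish) > DEADLINE else 0.0)

def mutate_assign(a):
    a = a[:]
    a[random.randrange(len(a))] = random.randrange(len(COST_RATE))
    return a

def mutate_order(o):
    o = o[:]
    i, j = random.sample(range(len(o)), 2)
    o[i], o[j] = o[j], o[i]
    return o

# Two cooperating subpopulations: resource assignments and task orders.
assigns = [[random.randrange(len(COST_RATE)) for _ in DURATION] for _ in range(10)]
orders = [random.sample(range(len(DURATION)), len(DURATION)) for _ in range(10)]

for gen in range(50):
    # Each subpopulation is scored against the other's best representative.
    best_a = min(assigns, key=lambda a: min(evaluate(a, o) for o in orders))
    best_o = min(orders, key=lambda o: evaluate(best_a, o))
    assigns = [best_a] + [mutate_assign(best_a) for _ in range(9)]
    orders = [best_o] + [mutate_order(best_o) for _ in range(9)]

print(evaluate(best_a, best_o))
```

Because each generation keeps both best representatives, the joint fitness of the elite pair never worsens, which is the standard elitist argument for this style of coevolution.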
A Survey of Verification Techniques for Security Protocols
Security protocols aim to allow secure electronic communication despite the potential presence of eavesdroppers. Guaranteeing their correctness is vital in many applications. This report briefly surveys the many formal specification and verification techniques proposed for describing and analysing security protocols.
Compilation of Specifications
Computer software now controls critical systems worldwide. International standards require such programs to be produced from mathematically-precise specifications, but the techniques and tools involved are highly complex and unfamiliar to most programmers. We present a formal basis for extending a tool already used by software developers, the program compiler, to undertake much of the task automatically. This is done by devising a code generation strategy, based on program refinement theory, capable of translating specification constructs embedded in programs into executable code, without the need for programmer intervention.
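The gap between a specification construct and its refined code can be shown on a textbook example. This sketch is not the paper's code generation strategy; it simply contrasts a declarative specification (integer square root as "any r with r*r <= n < (r+1)*(r+1)") with a loop that a refinement derivation would produce, both written in Python for illustration.

```python
def isqrt_spec(n):
    """Executable rendering of the specification by exhaustive choice:
    pick any r satisfying the postcondition (correct but inefficient)."""
    return next(r for r in range(n + 1) if r * r <= n < (r + 1) * (r + 1))

def isqrt_refined(n):
    """Refined code: a loop maintaining the invariant r*r <= n,
    decreasing the variant n - r*r, as a refinement derivation yields."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# The refinement preserves the specified behaviour on every input.
print(all(isqrt_spec(n) == isqrt_refined(n) for n in range(100)))  # True
```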
On reducing the complexity of matrix clocks
Matrix clocks are a generalization of the notion of vector clocks that allows the local representation of causal precedence to reach into an asynchronous distributed computation's past with depth x, where x ≥ 1 is an integer. Maintaining matrix clocks correctly in a system of n nodes requires that every message be accompanied by O(n^x) numbers, which reflects an exponential dependency of the complexity of matrix clocks upon the desired depth x. We introduce a novel type of matrix clock, one that requires only O(n) numbers to be attached to each message while maintaining what for many applications may be the most significant portion of the information that the original matrix clock carries. In order to illustrate the new clock's applicability, we demonstrate its use in the monitoring of certain resource-sharing computations.
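For background, the depth-1 special case (an ordinary vector clock) is easy to sketch, and makes the n-numbers-per-message cost explicit; the paper's reduced matrix clock itself is not reproduced here.

```python
def new_clock(n):
    return [0] * n

def tick(clock, i):
    """Local event at process i."""
    clock[i] += 1

def send(clock, i):
    tick(clock, i)
    return clock[:]            # timestamp attached to the message: n numbers

def receive(clock, i, stamp):
    for k in range(len(clock)):
        clock[k] = max(clock[k], stamp[k])   # merge causal knowledge
    tick(clock, i)

def happened_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

# Three processes; p0 sends to p1, then p1 sends to p2.
c = [new_clock(3) for _ in range(3)]
m01 = send(c[0], 0)            # p0's clock: [1, 0, 0]
receive(c[1], 1, m01)          # p1's clock: [1, 1, 0]
m12 = send(c[1], 1)            # p1's clock: [1, 2, 0]
receive(c[2], 2, m12)          # p2's clock: [1, 2, 1]
print(happened_before(m01, c[2]))   # True: p0's send causally precedes p2's state
```

A depth-x matrix clock generalizes this by tracking what each process knows about every other process's clock, which is where the O(n^x) message overhead comes from.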
Data refinement for true concurrency
The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
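The classical forward-simulation condition that such rules generalise can be checked mechanically on finite transition systems: whenever a concrete state related to an abstract state takes a step, some matching abstract step must re-establish the relation. A minimal sketch of that untimed, discrete condition (the invented two-step-counter example and all names are illustrative; the paper's interval-based rule is not reproduced):

```python
def forward_simulates(R, conc_steps, abs_steps, labels):
    """Classic forward simulation on finite transition systems:
    for every (c, a) in R and concrete step c -l-> c', there must be
    an abstract step a -l-> a' with (c', a') in R."""
    for (c, a) in R:
        for l in labels:
            for c2 in conc_steps.get((c, l), []):
                if not any((c2, a2) in R for a2 in abs_steps.get((a, l), [])):
                    return False
    return True

# Invented example: a concrete counter (0,1,2) refining an abstract parity toggle.
R = {(0, 'even'), (2, 'even'), (1, 'odd')}
conc = {(0, 'inc'): [1], (1, 'inc'): [2]}
abst = {('even', 'inc'): ['odd'], ('odd', 'inc'): ['even']}
print(forward_simulates(R, conc, abst, ['inc']))   # True
```

Dropping (1, 'odd') from the relation makes the check fail, since the concrete step 0 -inc-> 1 can no longer be matched.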
On Verifying Causal Consistency
Causal consistency is one of the most adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent.

We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction, these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption.

Comment: extended version of POPL 201
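One well-known necessary condition for causal consistency of the read-write memory can be checked by brute force: no read may return the value of a write w when another write w' to the same key satisfies w → w' → read in the causal order (session order plus reads-from). The sketch below checks only this single anomaly, not full causal consistency, and assumes every written value is distinct (as in the differentiated histories used under data independence); it is not the authors' algorithm.

```python
from itertools import product

def causal_violation(ops):
    """ops: list of (id, session, kind, key, value), listed in session order,
    kind in {'w', 'r'}.  Detects the 'read of a causally overwritten value'
    anomaly.  Assumes all written values are distinct."""
    n = len(ops)
    idx = {o[0]: i for i, o in enumerate(ops)}
    writes = {o[4]: o[0] for o in ops if o[2] == 'w'}    # value -> writer id
    hb = [[False] * n for _ in range(n)]
    # Base edges: session order and reads-from.
    for i, a in enumerate(ops):
        for j, b in enumerate(ops):
            if a[1] == b[1] and i < j:
                hb[i][j] = True
        if a[2] == 'r' and a[4] in writes:
            hb[idx[writes[a[4]]]][i] = True
    # Transitive closure (Floyd-Warshall style).
    for k, i, j in product(range(n), repeat=3):
        if hb[i][k] and hb[k][j]:
            hb[i][j] = True
    # A read of w's value is bad if some write w' to the same key has w -> w' -> read.
    for i, r in enumerate(ops):
        if r[2] != 'r' or r[4] not in writes:
            continue
        w = idx[writes[r[4]]]
        for j, w2 in enumerate(ops):
            if w2[2] == 'w' and w2[3] == r[3] and hb[w][j] and hb[j][i]:
                return True
    return False

# Classic violation: session B observes x=2 and then the older x=1.
bad = [(0, 'A', 'w', 'x', 1), (1, 'A', 'w', 'x', 2),
       (2, 'B', 'r', 'x', 2), (3, 'B', 'r', 'x', 1)]
print(causal_violation(bad))   # True
```

The NP-completeness result in the paper shows that no such polynomial check can be both sound and complete for single executions in general, which is why this sketch tests only a necessary condition.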
The Variety of Variables in Automated Real-Time Refinement
The refinement calculus is a well-established theory for deriving program code from specifications. Recent research has extended the theory to handle timing requirements, as well as functional ones, and we have developed an interactive programming tool based on these extensions. Through a number of case studies completed using the tool, this paper explains how the tool helps the programmer by supporting the many forms of variables needed in the theory. These include simple state variables as in the untimed calculus, trace variables that model the evolution of properties over time, auxiliary variables that exist only to support formal reasoning, subroutine parameters, and variables shared between parallel processes.
Intrusion detection in distributed systems, an approach based on taint marking
This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications of multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter, or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. We finally present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host level.
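The core taint-propagation discipline can be sketched in a few lines of user-level Python. This is a toy model only (the real system is an in-kernel LSM; the object names, labels, and policy below are invented): reading an object taints the process with the object's labels, writing propagates the process's labels onto the object, and the policy check flags a process holding a label it is not allowed.

```python
class TaintedObject:
    """A system object (file, socket, IPC channel) carrying taint labels."""
    def __init__(self, name, labels=()):
        self.name = name
        self.labels = set(labels)

class Process:
    def __init__(self, name, allowed):
        self.name = name
        self.allowed = set(allowed)       # labels this process may legally hold
        self.labels = set()

    def read(self, obj):
        self.labels |= obj.labels         # information flows object -> process
        return self.labels <= self.allowed   # False on a policy violation

    def write(self, obj):
        obj.labels |= self.labels         # information flows process -> object

# Invented scenario: a secret file, a shared pipe, and two processes.
secret = TaintedObject('payroll.db', {'secret'})
pipe = TaintedObject('pipe:[1234]')
editor = Process('editor', {'secret', 'public'})
browser = Process('browser', {'public'})

editor.read(secret)       # editor is now tainted 'secret' (allowed for it)
editor.write(pipe)        # the taint is carried onto the shared pipe
ok = browser.read(pipe)   # browser now holds 'secret': violation detected
print(ok)                 # False
```

Carrying the label sets inside network packets, as the paper describes, extends exactly this propagation across host boundaries without a central monitor.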
