Organic Design of Massively Distributed Systems: A Complex Networks Perspective
The vision of Organic Computing addresses challenges that arise in the design
of future information systems that are comprised of numerous, heterogeneous,
resource-constrained and error-prone components or devices. Here, the term
organic particularly highlights the idea that, in order to be manageable, such
systems should exhibit self-organization, self-adaptation and self-healing
characteristics similar to those of biological systems. In recent years, the
principles underlying many of the interesting characteristics of natural
systems have been investigated from the perspective of complex systems science,
particularly using the conceptual framework of statistical physics and
statistical mechanics. In this article, we review some of the interesting
relations between statistical physics and networked systems and discuss
applications in the engineering of organic networked computing systems with
predictable, quantifiable and controllable self-* properties.

Comment: 17 pages, 14 figures, preprint of submission to Informatik-Spektrum, published by Springer
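One of the statistical-physics results that work of this kind builds on is the emergence of heavy-tailed degree distributions in growing networks. The sketch below is illustrative only; the growth rule (Barabási–Albert-style preferential attachment) and all parameters are assumptions, not taken from the article.

```python
import random

def preferential_attachment(n, m, seed=42):
    """Grow a network in which each new node attaches to m existing
    nodes, chosen with probability proportional to current degree."""
    rng = random.Random(seed)
    # start from a small clique of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # each node appears in `stubs` once per incident edge, so uniform
    # sampling from this list is degree-proportional sampling
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend([new, t])
    return edges

edges = preferential_attachment(200, 2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
```

Early nodes accumulate far more links than the minimum m, producing the hub-dominated topologies studied in the complex-networks literature.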
Distributed execution of bigraphical reactive systems
The bigraph embedding problem is crucial for many results and tools about
bigraphs and bigraphical reactive systems (BRS). Current algorithms for
computing bigraphical embeddings are centralized, i.e. designed to run locally
with a complete view of the guest and host bigraphs. In order to deal with
large bigraphs, and to parallelize reactions, we present a decentralized
algorithm, which distributes both state and computation over several concurrent
processes. This allows for distributed, parallel simulations where
non-interfering reactions can be carried out concurrently; nevertheless, even
in the worst case the complexity of this distributed algorithm is no worse than
that of a centralized algorithm.
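The claim that non-interfering reactions can be carried out concurrently can be illustrated with a toy scheduler. This is a hedged sketch, not the paper's algorithm: a match is modelled simply as the set of host nodes it touches, and two matches are taken to interfere when those sets overlap.

```python
def interferes(m1, m2):
    """Two matches interfere if they touch a common node of the host."""
    return not set(m1).isdisjoint(m2)

def parallel_rounds(matches):
    """Greedily group pairwise non-interfering matches; each group
    could then be rewritten concurrently by separate workers."""
    rounds = []
    for m in matches:
        for r in rounds:
            if all(not interferes(m, other) for other in r):
                r.append(m)
                break
        else:
            rounds.append([m])
    return rounds

# matches over host nodes: {2,3} conflicts with both {1,2} and {3,4}
matches = [{1, 2}, {3, 4}, {2, 3}, {5}]
rounds = parallel_rounds(matches)
```

Here the first round contains the three mutually disjoint matches, while the conflicting match is deferred to a second round.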
Multicast in DKS(N, k, f) Overlay Networks
Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables scale well and can serve as infrastructures for Internet-scale applications. We are developing a family of infrastructures, DKS(N, k, f), for the construction of peer-to-peer applications. An instance of DKS(N, k, f) is an overlay network that implements a distributed hash table and has a number of desirable properties: low cost of communication, scalability, logarithmic lookup length, fault tolerance, and strong guarantees of locating any data item that was inserted in the system. In this paper, we show how multicast is achieved in DKS(N, k, f) overlay networks. The design presented here is attractive in three main respects. First, members of a multicast group self-organize in an instance of DKS(N, k, f) in a way that allows the co-existence of groups with different sizes, degrees of fault tolerance, and maintenance costs, thereby providing flexibility. Second, each member of a group can multicast, rather than there being a single multicast source. Third, within a group, dissemination of a multicast message is optimal under normal system operation in the sense that there are no redundant messages despite the presence of outdated routing information.
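The logarithmic lookup length of such overlays can be seen already in the binary special case of Chord-style greedy routing, sketched below on an idealized, fully populated identifier ring. This is an assumption made for illustration: DKS routes with configurable arity k between live nodes only, whereas the sketch fixes k = 2 and treats every identifier as a node.

```python
import math

def lookup_hops(src, key, num_ids):
    """Greedy binary routing on a ring of num_ids identifiers: at each
    hop, jump by the largest power of two that does not overshoot the
    remaining clockwise distance to the key."""
    hops = 0
    cur = src
    while cur != key:
        dist = (key - cur) % num_ids
        step = 1 << (dist.bit_length() - 1)  # largest 2^i <= dist
        cur = (cur + step) % num_ids
        hops += 1
    return hops

N = 1024
worst = max(lookup_hops(0, k, N) for k in range(N))
```

Each hop clears one bit of the remaining distance, so no lookup takes more than log2(N) hops.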
Resource-Aware Multimedia Content Delivery: A Gambling Approach
In this paper, we propose a resource-aware solution for achieving reliable and scalable stream diffusion in a probabilistic model, i.e. one where communication links and processes are subject to message losses and crashes, respectively. Our solution is resource-aware in the sense that it limits memory consumption, by strictly scoping the knowledge each process has about the system, and the bandwidth available to each process, by assigning a fixed quota of messages to each process. We describe our approach as gambling in the sense that it accepts giving up on a few processes some of the time, in the hope of better serving all processes most of the time. That is, our solution deliberately takes the risk of not reaching some processes in some executions, in order to reach every process in most executions. The underlying stream diffusion algorithm is based on a tree-construction technique that dynamically distributes the load of forwarding stream packets among processes, based on their respective available bandwidths. Simulations show that this approach pays off when compared to traditional gossiping, when the latter faces identical bandwidth constraints.
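The tree-construction idea, spreading the forwarding load under per-process message quotas, can be sketched as follows. The greedy attachment rule and the example quotas are assumptions for illustration, not the authors' algorithm; the sketch also assumes the total quota suffices to cover all receivers.

```python
def build_forwarding_tree(source, quota):
    """Greedy sketch: each process may forward at most quota[p] copies
    of a packet. Receivers join in decreasing-quota order and attach to
    the earliest tree node that still has spare forwarding quota, so
    high-bandwidth processes end up near the root."""
    remaining = dict(quota)
    tree = {source: []}
    frontier = [source]
    pending = [p for p in sorted(quota, key=quota.get, reverse=True)
               if p != source]
    for p in pending:
        parent = next(n for n in frontier if remaining[n] > 0)
        tree[parent].append(p)
        tree[p] = []
        remaining[parent] -= 1
        frontier.append(p)
    return tree

# hypothetical per-process forwarding quotas
bw = {"s": 2, "a": 2, "b": 1, "c": 0, "d": 0}
tree = build_forwarding_tree("s", bw)
```

Because the structure is a tree, every process receives each packet exactly once; there are no redundant messages, and no process ever exceeds its quota.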