Distributed Creation and Adaptation of Random Scale-Free Overlay Networks
Abstract: Random scale-free overlay topologies provide a number of properties, for example high resilience against failures of random nodes, small (average) diameter, and good expansion and congestion characteristics, that make them interesting for use in large-scale distributed systems. A number of these properties have been shown to be influenced by the exponent of their power law degree distribution. In this article, we present a distributed rewiring scheme that is suitable to effectuate random scale-free overlay topologies with an adjustable degree distribution exponent. The scheme uses a biased random walk strategy to sample new endpoints of edges being rewired and relies on an equilibrium model for scale-free networks. The bias of the random walk strategy can be tuned to produce random scale-free networks with arbitrary degree distribution exponents greater than two. We argue that the rewiring strategy can be implemented in a distributed fashion based on a node's information about its immediate neighbors. We present both analytical arguments and results obtained in simulations of the proposed protocol.

Keywords: scale-free; overlay networks; adaptation; self-organization; Peer-to-Peer

During the last decade, the increasing spread and importance of large-scale Peer-to-Peer systems has raised significant research interest in the design and analysis of robust and efficient overlay networks. In this research, structured and unstructured approaches can be distinguished. Mimicking the use of data structures in traditional computing, highly structured overlay topologies facilitate the use of efficient distributed algorithms with deterministic performance. However, the overhead entailed by the construction and maintenance of such deterministically structured topologies calls their usability into question in large-scale scenarios with dynamic and potentially faulty participants.
Constituting a different approach, unstructured overlay networks do not impose constraints on the detailed structure of the emerging network topology. Rather than using costly and potentially complex routines for building and maintaining sophisticated network structures, in such unstructured overlays links can arise in a seemingly random and uncoordinated fashion. They are thus particularly suitable for highly dynamic scenarios in which the operational overhead entailed by structured approaches can possibly dominate a system's overall performance. While the use of unstructured overlays can reduce construction and maintenance overhead, designing efficient distributed algorithms with predictable performance is hardly possible when making no assumptions whatsoever about an overlay's structure. Interestingly, based on a stochastic model of the system in question and arguments from random graph theory and complex network science, it is often possible to reason about structural properties of the resulting network topology that hold almost surely in the limit of large systems. Similarly, the performance of a number of dynamical processes, many of them relevant to distributed computing systems, has been studied in random network structures. For sufficiently large systems, based on randomized overlay topologies one can thus obtain strong, though probabilistic, guarantees about their structure and performance. Considering the classical taxonomy of deterministically structured and completely unstructured overlay networks, this suggests an intermediate class of probabilistically structured topologies that promises to combine the benefits of both. During the last decade, much of the work in the field of random networks has focused on scale-free networks that are characterized by a power law degree distribution P(k) ∝ k^(−γ).
The fact that networks with such scale-free characteristics seem to emerge naturally in a variety of natural, social and technological contexts has awakened the interest of researchers in disciplines as diverse as mathematics, statistical physics, biology, sociology, and computing. It has since been shown that scale-free networks provide a number of interesting properties, like a remarkable robustness against random failures. Based on this observation, during the last couple of years, the performance of distributed algorithms operating in scale-free networks has been studied.
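The rewiring protocol itself is not reproduced in this listing. Its core building block, sampling a node by a degree-biased random walk using only each node's knowledge of its immediate neighbors, can be sketched as follows. This is a toy illustration with all names ours; how the bias maps to the resulting exponent comes from the paper's equilibrium model and is not derived here.

```python
import random

def biased_random_walk(adj, start, steps, bias):
    """Sample a node by a degree-biased random walk.

    adj   : dict mapping node -> list of neighbours (local knowledge only)
    bias  : exponent applied to neighbour degrees; bias = 0 gives an
            unbiased walk, positive values favour high-degree nodes.
    """
    node = start
    for _ in range(steps):
        nbrs = adj[node]
        # Each hop only needs the degrees of the current node's immediate
        # neighbours, so the walk can run in a fully distributed fashion.
        weights = [len(adj[n]) ** bias for n in nbrs]
        node = random.choices(nbrs, weights=weights, k=1)[0]
    return node

# Illustrative use: sample a rewiring endpoint on a small star graph.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
endpoint = biased_random_walk(adj, start=1, steps=10, bias=1.0)
```

Tuning `bias` skews the walk's stationary distribution toward (or away from) high-degree nodes, which is what lets the rewiring scheme shape the degree distribution exponent.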
Organic Design of Massively Distributed Systems: A Complex Networks Perspective
The vision of Organic Computing addresses challenges that arise in the design
of future information systems that are comprised of numerous, heterogeneous,
resource-constrained and error-prone components or devices. Here, the notion
organic particularly highlights the idea that, in order to be manageable, such
systems should exhibit self-organization, self-adaptation and self-healing
characteristics similar to those of biological systems. In recent years, the
principles underlying many of the interesting characteristics of natural
systems have been investigated from the perspective of complex systems science,
particularly using the conceptual framework of statistical physics and
statistical mechanics. In this article, we review some of the interesting
relations between statistical physics and networked systems and discuss
applications in the engineering of organic networked computing systems with
predictable, quantifiable and controllable self-* properties. Comment: 17 pages, 14 figures, preprint of submission to Informatik-Spektrum,
published by Springer
On the Topology Maintenance of Dynamic P2P Overlays through Self-Healing Local Interactions
This paper deals with the use of self-organizing protocols to improve the
reliability of dynamic Peer-to-Peer (P2P) overlay networks. We present two
approaches that employ local knowledge of the 2nd neighborhood of nodes. The
first scheme is a simple protocol requiring interactions among nodes and their
direct neighbors. The second scheme extends this approach by resorting to the
Edge Clustering Coefficient (ECC), a local measure that makes it possible to
identify those edges that connect different clusters in an overlay. A simulation
assessment is presented, which evaluates these protocols over uniform networks,
clustered networks and scale-free networks. Different failure modes are
considered. Results demonstrate the viability of the proposal. Comment: A revised version of the paper appears in Proc. of the IFIP
Networking 2014 Conference, IEEE, Trondheim (Norway), June 2014
Self-Healing Protocols for Connectivity Maintenance in Unstructured Overlays
In this paper, we discuss the use of self-organizing protocols to improve
the reliability of dynamic Peer-to-Peer (P2P) overlay networks. Two similar
approaches are studied, which are based on local knowledge of the nodes' 2nd
neighborhood. The first scheme is a simple protocol requiring interactions
among nodes and their direct neighbors. The second scheme adds a check on the
Edge Clustering Coefficient (ECC), a local measure that identifies edges
connecting different clusters in the network. The simulation assessment
evaluates these protocols over uniform networks, clustered networks
and scale-free networks. Different failure modes are considered. Results
demonstrate the effectiveness of the proposal. Comment: The paper has been accepted to the journal Peer-to-Peer Networking
and Applications. The final publication is available at Springer via
http://dx.doi.org/10.1007/s12083-015-0384-
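The abstracts above do not spell out how the ECC is computed. A common triangle-based definition from the community-detection literature (due to Radicchi et al.) can be sketched as follows; whether these papers use exactly this variant is an assumption on our part, and the function name is ours.

```python
def edge_clustering_coefficient(adj, u, v):
    """Edge Clustering Coefficient of edge (u, v).

    Triangle-based definition: (triangles containing the edge + 1)
    divided by the maximum number of triangles the edge could belong
    to, given the endpoint degrees. Low values suggest the edge
    bridges different clusters, which is exactly the signal the
    self-healing protocols above exploit.
    """
    triangles = len(set(adj[u]) & set(adj[v]))        # common neighbours
    possible = min(len(adj[u]) - 1, len(adj[v]) - 1)  # upper bound on triangles
    if possible == 0:
        # A degree-1 endpoint can never close a triangle; treat the
        # edge as maximally "intra-cluster" so it is never pruned.
        return float('inf')
    return (triangles + 1) / possible

# A triangle edge scores high; a square (cluster-bridging) edge scores low.
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

Only the 2nd neighborhood of the edge's endpoints is needed, which matches the local-knowledge requirement stated in the abstracts.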
Clustering Algorithms for Scale-free Networks and Applications to Cloud Resource Management
In this paper we introduce algorithms for the construction of scale-free
networks and for clustering around the nerve centers, nodes with high
connectivity in a scale-free network. We argue that such overlay networks
could support self-organization in a complex system like a cloud computing
infrastructure and allow the implementation of optimal resource management
policies. Comment: 14 pages, 8 Figures, Journal
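The paper's specific algorithms are not reproduced in this listing. As an illustration of the two ingredients it names, here is a standard preferential-attachment construction together with a naive clustering that assigns each node to its best-connected "nerve center" neighbor. All function names are ours, and this is a sketch, not the authors' method.

```python
import random

def preferential_attachment(n, m=2, seed=None):
    """Grow a scale-free graph: each new node attaches to m existing
    nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small clique of m + 1 mutually connected nodes.
    adj = {i: set(j for j in range(m + 1) if j != i) for i in range(m + 1)}
    # Degree-weighted pool: each node appears once per incident edge end.
    targets = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:              # m distinct attachment points
            chosen.add(rng.choice(targets))
        adj[new] = set(chosen)
        for t in chosen:
            adj[t].add(new)
            targets.append(t)               # t's degree grew by one
        targets.extend([new] * m)           # new node has degree m
    return adj

def cluster_around_hubs(adj, n_hubs):
    """Assign every node to the highest-degree hub in its neighborhood.
    Nodes with no hub neighbor stay unassigned in this naive sketch."""
    hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:n_hubs]
    hub_set = set(hubs)
    assignment = {h: h for h in hubs}       # hubs anchor their own cluster
    for v in adj:
        if v in hub_set:
            continue
        nbr_hubs = hub_set & adj[v]
        if nbr_hubs:
            assignment[v] = max(nbr_hubs, key=lambda h: len(adj[h]))
    return assignment
```

In a cloud-management setting, the hub assignment could, for instance, determine which node aggregates resource-usage reports for its cluster.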
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. It is expected that Internet content will increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming and edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content Aware media delivery platforms
Mesmerizer: An Effective Tool for a Complete Peer-to-Peer Software Development Life-cycle
In this paper we present what are, in our experience, the best
practices in Peer-To-Peer(P2P) application development and
how we combined them in a middleware platform called Mesmerizer. We explain how simulation is an integral part of
the development process and not just an assessment tool.
We then present our component-based event-driven framework for P2P application development, which can be used
to execute multiple instances of the same application in a
strictly controlled manner over an emulated network layer
for simulation/testing, or a single application in a concurrent
environment for deployment purposes. We highlight modeling aspects that are of critical importance for designing and
testing P2P applications, e.g. the emulation of Network Address Translation and bandwidth dynamics. We show how
our simulator scales when emulating low-level bandwidth
characteristics of thousands of concurrent peers while preserving a good degree of accuracy compared to a packet-level
simulator.
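The abstract highlights emulating bandwidth dynamics as a critical modeling aspect. One common building block for such emulation is a token bucket that caps a peer's sending rate. The sketch below is our illustration of the general technique, not Mesmerizer's actual implementation.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter of the kind an emulated
    network layer can use to model per-peer bandwidth caps."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum accumulated credit
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # simulated time of last refill

    def try_send(self, size_bytes, now):
        # Refill proportionally to elapsed simulated time, capped at burst.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True    # message departs immediately
        return False       # message must wait until enough tokens accrue

# Illustrative use: a 8 kbit/s link with a 1000-byte burst allowance.
link = TokenBucket(rate_bps=8000, burst_bytes=1000)
```

Driving `now` from the simulator's virtual clock (rather than wall time) is what lets thousands of such limiters run deterministically in one process.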
Spectra: Robust Estimation of Distribution Functions in Networks
Distributed aggregation allows the derivation of a given global aggregate
property from many individual local values in nodes of an interconnected
network system. Simple aggregates such as minima/maxima, counts, sums and
averages have been thoroughly studied in the past and are important tools for
distributed algorithms and network coordination. Nonetheless, these aggregates
may not be comprehensive enough to characterize biased data distributions or to
cope with outliers, making the case for richer estimates of the values on the
network. This work presents Spectra, a
distributed algorithm for the estimation of distribution functions over large
scale networks. The estimate is available at all nodes and the technique
exhibits important properties, namely: robustness to high levels of message
loss, fast convergence and fine precision in the estimate. It can
also dynamically cope with changes of the sampled local property, not requiring
algorithm restarts, and is highly resilient to node churn. The proposed
approach is experimentally evaluated and contrasted to a competing state of the
art distribution aggregation technique. Comment: Full version of the paper published at the 12th IFIP International
Conference on Distributed Applications and Interoperable Systems (DAIS),
Stockholm (Sweden), June 2012
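Spectra's actual protocol is not reproduced in this listing. To make the problem concrete, here is a generic gossip-averaging sketch (all names ours, and deliberately simpler than Spectra) of how nodes can estimate a distribution function so that the result becomes available at every node: for each threshold t, nodes average the indicator "my value <= t", which converges to the empirical CDF at t.

```python
import random

def gossip_cdf(values, thresholds, rounds=300, seed=0):
    """Estimate the empirical CDF of node-local values by pairwise
    averaging gossip. For every threshold t, each node keeps an
    estimate of the fraction of values <= t; pairwise averaging
    preserves the global mean, so all estimates converge to the
    network-wide empirical CDF at t."""
    rng = random.Random(seed)
    n = len(values)
    # Initial estimate at node i: the indicator 1[value_i <= t] per threshold.
    est = [[1.0 if v <= t else 0.0 for t in thresholds] for v in values]
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)  # random gossip pair
        if i == j:
            continue
        for k in range(len(thresholds)):
            avg = (est[i][k] + est[j][k]) / 2.0
            est[i][k] = est[j][k] = avg
    return est  # est[node][k] approximates F(thresholds[k]) at every node

# Illustrative use: four nodes holding values 1..4, one threshold at 2.5.
estimates = gossip_cdf([1.0, 2.0, 3.0, 4.0], [2.5])
```

This sketch uses a central loop for brevity; in a real deployment each exchange would be a message between two peers, and Spectra additionally handles message loss, churn and changing local values, which this toy does not.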
Organic Design of Massively Distributed Systems: A Complex Networks Perspective
The vision of Organic Computing addresses challenges that arise in the design of future information systems that are comprised of numerous, heterogeneous, resource-constrained and error-prone components. The notion organic highlights the idea that, in order to be manageable, such systems should exhibit self-organization, self-adaptation and self-healing characteristics similar to those of biological systems. In recent years, the principles underlying these characteristics have increasingly been investigated from the perspective of complex systems science, particularly using the conceptual framework of statistical physics and statistical mechanics. In this article, we review some of the interesting relations between statistical physics and networked systems and discuss applications in the engineering of organic overlay networks with predictable macroscopic properties.