    Vitis: A Gossip-based Hybrid Overlay for Internet-scale Publish/Subscribe

    Peer-to-peer overlay networks are attractive solutions for building Internet-scale publish/subscribe systems. However, scalability comes at a cost: a message published on a certain topic often needs to traverse a large number of uninterested (unsubscribed) nodes before reaching all its subscribers. This can sharply increase resource consumption for such relay nodes (in terms of bandwidth, CPU, etc.) and can ultimately lead to rapid deterioration of the system's performance once the relay nodes start dropping messages or choose to permanently abandon the system. In this paper, we introduce Vitis, a gossip-based publish/subscribe system that significantly decreases the number of relay messages and scales to an unbounded number of nodes and topics. This is achieved by the novel approach of enabling rendezvous routing on unstructured overlays. We construct a hybrid system by injecting structure into an otherwise unstructured network. The resulting structure resembles a navigable small-world network, which spans clusters of nodes that have similar subscriptions. The properties of such an overlay make it an ideal platform for efficient data dissemination in large-scale systems. We perform extensive simulations and evaluate Vitis by comparing its performance against two baseline publish/subscribe systems: one that is oblivious to node subscriptions, and another that exploits the subscription similarities. Our measurements show that Vitis significantly outperforms the baseline solutions under various subscription and churn scenarios, drawn from both synthetic models and real-world traces.
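    The rendezvous idea above can be illustrated with a minimal sketch: topics are hashed onto the same identifier space as the nodes, and a publish message is greedily forwarded to the neighbour closest to the topic's rendezvous point. The identifier space, function names and toy overlay below are illustrative assumptions, not Vitis's actual design.

```python
import hashlib

ID_SPACE = 2 ** 32  # hypothetical circular identifier space shared by nodes and topics

def topic_rendezvous(topic: str) -> int:
    """Hash a topic name onto the node identifier space."""
    digest = hashlib.sha1(topic.encode()).digest()
    return int.from_bytes(digest[:4], "big") % ID_SPACE

def ring_distance(a: int, b: int) -> int:
    """Distance between two identifiers on the ring."""
    d = abs(a - b)
    return min(d, ID_SPACE - d)

def greedy_route(start: int, neighbours: dict, topic: str) -> list:
    """Greedily forward towards the topic's rendezvous point; stop at a local minimum.

    `neighbours` maps a node id to the ids it knows about (cluster links plus
    long-range, small-world links)."""
    target = topic_rendezvous(topic)
    path, current = [start], start
    while neighbours.get(current):
        best = min(neighbours[current], key=lambda n: ring_distance(n, target))
        if ring_distance(best, target) >= ring_distance(current, target):
            break  # no neighbour is closer: the current node acts as the rendezvous
        path.append(best)
        current = best
    return path

# Toy overlay: node 0 reaches the rendezvous for "weather" through its links.
overlay = {0: [100, 2**31], 100: [0, 2**30], 2**31: [2**31 + 5000], 2**30: [2**31]}
print(greedy_route(0, overlay, "weather"))
```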

    Self-Healing Protocols for Connectivity Maintenance in Unstructured Overlays

    In this paper, we discuss the use of self-organizing protocols to improve the reliability of dynamic Peer-to-Peer (P2P) overlay networks. Two similar approaches are studied, both based on local knowledge of the nodes' second neighborhood. The first scheme is a simple protocol requiring interactions only among nodes and their direct neighbors. The second scheme adds a check on the Edge Clustering Coefficient (ECC), a local measure that allows identifying edges connecting different clusters in the network. The simulation assessment evaluates these protocols over uniform networks, clustered networks, and scale-free networks, considering different failure modes. Results demonstrate the effectiveness of the proposal. Comment: The paper has been accepted to the journal Peer-to-Peer Networking and Applications. The final publication is available at Springer via http://dx.doi.org/10.1007/s12083-015-0384-
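    As a rough illustration of the ECC check described above, the sketch below computes an edge clustering coefficient for a single edge, following the common (triangles + 1) / min(deg(u) - 1, deg(v) - 1) formulation; the exact measure and thresholds used by the paper's protocol may differ.

```python
# Minimal sketch, assuming the common ECC formulation: edges that close few
# triangles relative to their endpoints' degrees are likely inter-cluster links.

def edge_clustering_coefficient(adjacency: dict, u, v) -> float:
    """ECC of edge (u, v); `adjacency` maps each node to the set of its neighbours."""
    triangles = len(adjacency[u] & adjacency[v])          # common neighbours close triangles
    max_possible = min(len(adjacency[u]) - 1, len(adjacency[v]) - 1)
    if max_possible <= 0:
        return float("inf")                               # degree-1 endpoint: no triangle possible
    return (triangles + 1) / max_possible

# Example: the edge (a, d) shares no neighbours, so it scores lower than (a, b)
# and would be flagged as a bridge between clusters worth protecting.
graph = {
    "a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"},
    "d": {"a", "e"}, "e": {"d", "f"}, "f": {"e"},
}
print(edge_clustering_coefficient(graph, "a", "b"))  # 2.0: closes a triangle
print(edge_clustering_coefficient(graph, "a", "d"))  # 1.0: no shared neighbours
```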

    Highly intensive data dissemination in complex networks

    This paper presents a study on data dissemination in unstructured Peer-to-Peer (P2P) network overlays. The absence of structure in unstructured overlays eases network management, at the cost of non-optimal mechanisms for spreading messages through the network. Thus, dissemination schemes must be employed that cover a large portion of the network with high probability (e.g. gossip-based approaches). We identify the principal metrics, provide a theoretical model, and perform an assessment using a high-performance simulator based on a parallel and distributed architecture. A main point of this study is that our simulation model considers implementation details, such as the use of caching and Time To Live (TTL) in message dissemination, that are usually neglected in simulations due to the additional overhead they cause. The outcomes confirm that these technical details have an important influence on the performance of dissemination schemes, and that the studied schemes are quite effective at spreading information in P2P overlay networks, whatever their topology. Moreover, the practical use of such dissemination mechanisms requires fine tuning of many parameters, a choice between different network topologies, and the assessment of behaviors such as free riding. All this can be done only with efficient simulation tools that support both the network design phase and, in some cases, runtime operation.
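    The role of the TTL and the message cache mentioned above can be seen in a small push-gossip sketch (not the paper's simulator): each relay step decrements the TTL, and nodes that have already seen a message do not forward it again. The fan-out value and data structures are illustrative.

```python
import random

FANOUT = 3  # illustrative fan-out: peers contacted per relay step

def gossip(overlay: dict, origin, ttl: int) -> set:
    """Spread one message by push gossip; return the set of nodes reached.

    `overlay` maps each node to the list of its neighbours. The `seen` set
    stands in for each node's per-message cache: a node that already received
    the message discards duplicates instead of relaying them."""
    seen = {origin}
    frontier = [(origin, ttl)]
    while frontier:
        node, hops_left = frontier.pop()
        if hops_left == 0:
            continue                      # TTL expired: stop relaying on this branch
        neighbours = overlay.get(node, [])
        for peer in random.sample(neighbours, min(FANOUT, len(neighbours))):
            if peer in seen:
                continue                  # cache hit: peer already handled the message
            seen.add(peer)
            frontier.append((peer, hops_left - 1))
    return seen

# Toy ring overlay of 10 nodes: coverage depends on the TTL and the fan-out.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(len(gossip(ring, origin=0, ttl=4)))
```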

    Organic Design of Massively Distributed Systems: A Complex Networks Perspective

    The vision of Organic Computing addresses challenges that arise in the design of future information systems comprised of numerous, heterogeneous, resource-constrained and error-prone components or devices. Here, the notion 'organic' particularly highlights the idea that, in order to be manageable, such systems should exhibit self-organization, self-adaptation and self-healing characteristics similar to those of biological systems. In recent years, the principles underlying many of the interesting characteristics of natural systems have been investigated from the perspective of complex systems science, particularly using the conceptual framework of statistical physics and statistical mechanics. In this article, we review some of the interesting relations between statistical physics and networked systems and discuss applications in the engineering of organic networked computing systems with predictable, quantifiable and controllable self-* properties. Comment: 17 pages, 14 figures, preprint of a submission to Informatik-Spektrum, published by Springer

    A cooperative approach for distributed task execution in autonomic clouds

    Virtualization and distributed computing are two key pillars that guarantee scalability of applications deployed in the Cloud. In Autonomous Cooperative Cloud-based Platforms, autonomous computing nodes cooperate to offer a PaaS Cloud for the deployment of user applications. Each node must allocate the necessary resources for customer applications to be executed with certain QoS guarantees. If the QoS of an application cannot be guaranteed, a node has two main options: to allocate more resources (if possible) or to rely on the collaboration of other nodes. Making this decision is not trivial, since it involves many factors (e.g. the cost of setting up virtual machines, migrating applications, or discovering collaborators). In this paper we present a model of such scenarios and experimental results validating the advantage of cooperative strategies over selfish ones, in which nodes do not help each other. We describe the architecture of the platform of autonomous clouds and the main features of the model, which has been implemented and evaluated in the DEUS discrete-event simulator. From the experimental evaluation, based on workload data from the Google Cloud Backend, we conclude that (modulo our assumptions and simplifications) the performance of a volunteer cloud is comparable to that of a Google cluster.
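    The local-versus-cooperative decision sketched in the abstract can be caricatured as a cost comparison. The cost fields, return values and numbers below are hypothetical placeholders, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    free_capacity: float     # resource share still available locally
    vm_setup_cost: float     # cost of starting another local virtual machine
    migration_cost: float    # cost of moving the application to a collaborator
    discovery_cost: float    # cost of finding a node willing to help

def decide(node: NodeState, required_capacity: float) -> str:
    """Return 'run_locally', 'scale_up_locally' or 'delegate' for one application."""
    if required_capacity <= node.free_capacity:
        return "run_locally"                        # QoS already satisfied
    local_cost = node.vm_setup_cost
    remote_cost = node.discovery_cost + node.migration_cost
    return "scale_up_locally" if local_cost <= remote_cost else "delegate"

# A node short on capacity for which cooperation is cheaper than a new VM:
print(decide(NodeState(free_capacity=0.2, vm_setup_cost=5.0,
                       migration_cost=3.0, discovery_cost=1.0),
             required_capacity=0.5))                # -> delegate
```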

    Network layer access control for context-aware IPv6 applications

    As part of the Lancaster GUIDE II project, we have developed a novel wireless access point protocol designed to support the development of next-generation mobile context-aware applications in our local environs. Once deployed, this architecture will allow ordinary citizens secure, accountable and convenient access to a set of tailored applications, including location, multimedia and context-based services, as well as the public Internet. Our architecture utilises packet marking and network-level packet filtering techniques within a modified Mobile IPv6 protocol stack to perform access control over a range of wireless network technologies. In this paper, we describe the rationale for, and components of, our architecture and contrast our approach with other state-of-the-art systems. The paper also contains details of our current implementation work, including preliminary performance measurements.
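    As a toy illustration of the packet-marking and filtering idea (not the GUIDE II implementation), the sketch below maps a packet mark to the set of services a client is authorised to reach and rejects anything else; the mark values and service names are invented.

```python
# Mark values and service names are invented for the example; in the real system
# marks are applied at the access point and checked by network-level filters.
ALLOWED_BY_MARK = {
    0x01: {"location-service", "multimedia-service"},               # basic registered access
    0x02: {"location-service", "multimedia-service", "internet"},   # full access
}

def filter_packet(mark: int, destination_service: str) -> bool:
    """Return True if a packet carrying `mark` may be forwarded to the destination."""
    return destination_service in ALLOWED_BY_MARK.get(mark, set())

print(filter_packet(0x01, "internet"))   # False: this class is not authorised for Internet access
print(filter_packet(0x02, "internet"))   # True
```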

    Astrophysicists and physicists as creators of ArXiv-based commenting resources for their research communities. An initial survey

    This paper conveys the outcomes of what appears to be the first, though initial, overview of commenting platforms and related web 2.0 resources born within and for the astrophysical community (from 2004 to 2016). Experiences from the physics domain were also added, for a total of 22 major items, including four epijournals, and four supplementary resources, casting some light onto an unexpectedly rich and consonant set of endeavours. These experiences rest almost entirely on the contents of the ArXiv database, which adds to its merits that of potentially laying the groundwork for new web 2.0 resources and research behaviours to be explored. Most of the experiences retrieved are UK- and US-based, but the resulting picture is international, as various European countries, China and Australia have been actively involved. Final remarks about the creation patterns and outcomes of these resources are outlined. The results complement previous studies, according to which web 2.0 is presently of limited use for communication in astrophysics, and vouch for a greater-than-expected role of researchers in shaping their own professional communication tools. Collaterally, some aspects of ArXiv's recent pathway towards partial inclusion of web 2.0 features are touched upon. Further investigation is hoped for. Comment: Journal article, 16 pages