5,257 research outputs found

    Stochastic Analysis of a Churn-Tolerant Structured Peer-to-Peer Scheme

    Full text link
    We present and analyze a simple and general scheme to build a churn (fault)-tolerant structured Peer-to-Peer (P2P) network. Our scheme shows how to "convert" a static network into a dynamic distributed hash table (DHT)-based P2P network such that all the good properties of the static network are guaranteed with high probability (w.h.p.). Applying our scheme to a cube-connected cycles network, for example, yields an $O(\log N)$-degree connected network in which every search succeeds in $O(\log N)$ hops w.h.p., using $O(\log N)$ messages, where $N$ is the expected stable network size. Our scheme has a constant storage overhead (the number of nodes responsible for servicing a data item), an $O(\log N)$ overhead (messages and time) per insertion, and essentially no overhead for deletions. All these bounds are essentially optimal. While DHT schemes with similar guarantees are already known in the literature, this work is new in the following aspects: (1) it presents a rigorous mathematical analysis of the scheme under a general stochastic model of churn and shows the above guarantees; (2) the theoretical analysis is complemented by a simulation-based analysis that validates the asymptotic bounds even in moderately sized networks and also studies performance under a changing stable network size; (3) the presented scheme seems especially suitable for efficiently maintaining dynamic structures under churn. In particular, we show that a spanning tree of low diameter can be maintained in constant time and a logarithmic number of messages per insertion or deletion w.h.p.
    Keywords: P2P Network, DHT Scheme, Churn, Dynamic Spanning Tree, Stochastic Analysis
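
    The abstract's headline property is logarithmic-hop search over a logarithmic-degree structure. The sketch below is not the paper's scheme; it is a minimal, assumed illustration of why such a bound arises, using greedy bit-fixing routing on a plain hypercube overlay (a close relative of cube-connected cycles): each hop corrects one differing ID bit, so routing takes at most log N hops.

    ```python
    # Illustrative sketch only: greedy bit-fixing routing on a hypercube overlay,
    # showing how a logarithmic-degree structure yields O(log N) lookup hops.
    # This is NOT the paper's churn-tolerant scheme; the IDs, link table, and
    # routing rule here are assumptions made for the example.

    LOG_N = 4  # 2**LOG_N nodes in the static topology

    def neighbors(node_id: int) -> list[int]:
        """Each node links to the LOG_N nodes whose ID differs in exactly one bit."""
        return [node_id ^ (1 << bit) for bit in range(LOG_N)]

    def route(src: int, dst: int) -> list[int]:
        """Greedy bit-fixing: correct the highest differing bit at each hop.
        Terminates in at most LOG_N hops, i.e. O(log N)."""
        path = [src]
        current = src
        while current != dst:
            diff = current ^ dst
            bit = diff.bit_length() - 1      # highest bit that still differs
            current ^= (1 << bit)            # hop to the neighbor fixing that bit
            path.append(current)
        return path

    if __name__ == "__main__":
        print(route(0b0000, 0b1011))  # [0, 8, 10, 11] -- 3 hops <= LOG_N
    ```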

    Multicast in DKS(N, k, f) Overlay Networks

    Get PDF
    Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables scale well and can serve as infrastructures for Internet-scale applications. We are developing a family of infrastructures, DKS(N, k, f), for the construction of peer-to-peer applications. An instance of DKS(N, k, f) is an overlay network that implements a distributed hash table and has a number of desirable properties: low cost of communication, scalability, logarithmic lookup length, fault tolerance, and strong guarantees of locating any data item that was inserted in the system. In this paper, we show how multicast is achieved in DKS(N, k, f) overlay networks. The design presented here is attractive in three main respects. First, members of a multicast group self-organize in an instance of DKS(N, k, f) in a way that allows the co-existence of groups with different sizes, degrees of fault tolerance, and maintenance costs, thereby providing flexibility. Second, each member of a group can multicast, rather than multicast being restricted to a single source. Third, within a group, dissemination of a multicast message is optimal under normal system operation in the sense that there are no redundant messages despite the presence of outdated routing information.
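
    The "no redundant messages" claim is the interesting one: each group member should receive a multicast exactly once. The following is a simplified, assumed sketch of duplicate-free dissemination by recursive interval splitting over a sorted member list; it is not the actual DKS(N, k, f) routing-table-based algorithm, only an illustration of the exactly-once delivery pattern.

    ```python
    # Illustrative sketch only: duplicate-free group dissemination by recursively
    # splitting the member interval. Each member "receives" the message exactly
    # once, mirroring the no-redundant-messages property described above. The
    # group layout and splitting rule are assumptions, not DKS(N, k, f) internals.

    def disseminate(members, msg, delivered, lo=0, hi=None):
        """Deliver msg to members[lo:hi]: the head of the interval delivers
        locally, then hands each half of the remaining interval onward."""
        if hi is None:
            hi = len(members)
        if lo >= hi:
            return
        delivered.append((members[lo], msg))        # local delivery at the head
        rest_lo, rest_hi = lo + 1, hi
        mid = (rest_lo + rest_hi) // 2
        disseminate(members, msg, delivered, rest_lo, mid)   # forward to left half
        disseminate(members, msg, delivered, mid, rest_hi)   # forward to right half

    if __name__ == "__main__":
        group = ["n%02d" % i for i in range(10)]
        got = []
        disseminate(group, "hello", got)
        names = [member for member, _ in got]
        assert sorted(names) == sorted(group) and len(names) == len(set(names))
        print(names)  # every member exactly once, no duplicates
    ```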

    Knowledge is at the Edge! How to Search in Distributed Machine Learning Models

    Full text link
    With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin, raising a new research question: how can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also be personalized for a human user, a specific context, or an application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, since it is not shared with a third party. This new form of distributed learning partitions knowledge across many devices, which makes access difficult. In this paper, we tackle the problem of finding specific knowledge by forwarding a search request (query) to the device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario in which we use data from 30,000 people commuting in the city of Trento, Italy.
    Comment: Published in CoopIS 201
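
    The core idea is to score candidate devices with an entropy-based quality metric and forward the query to the best one. The sketch below is a simplified assumption of that step, not the paper's exact metric: it treats a device's predictive distribution for the query context as the quality signal and picks the device with the lowest Shannon entropy (highest confidence); the device names and model stubs are hypothetical.

    ```python
    # Illustrative sketch only: forward a query to the device whose local model is
    # most confident for the query's context, using Shannon entropy as the quality
    # signal. The paper's metric combines query context and per-device learning
    # quality; the scoring below is a simplified assumption.
    import math

    def entropy(probs):
        """Shannon entropy of a predictive distribution (lower = more confident)."""
        return -sum(p * math.log(p, 2) for p in probs if p > 0)

    def choose_device(query_context, devices):
        """devices: mapping device_id -> callable(context) returning a class
        distribution from that device's local model. Returns the device id with
        the lowest predictive entropy for this context."""
        return min(devices, key=lambda d: entropy(devices[d](query_context)))

    if __name__ == "__main__":
        # Hypothetical local models: each returns a probability distribution.
        devices = {
            "phone_a": lambda ctx: [0.90, 0.05, 0.05],   # confident -> low entropy
            "phone_b": lambda ctx: [0.40, 0.35, 0.25],   # uncertain -> high entropy
        }
        print(choose_device({"location": "Trento"}, devices))  # -> "phone_a"
    ```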