
    NextBestOnce: Achieving Polylog Routing despite Non-greedy Embeddings

    Social Overlays suffer from high message delivery delays due to insufficient routing strategies. Because they limit connections to device pairs whose owners share a mutual real-life trust relationship, their topologies are restricted to a subgraph of the social network of their users. Whereas centralized, highly successful social networking services entail a complete loss of privacy for their users, Social Overlays with better performance would represent an ideal private and censorship-resistant communication substrate for the same purpose. Routing in such restricted topologies is facilitated by embedding the social graph into a metric space. Decentralized routing algorithms have to date mainly been analyzed under the assumption of a perfect lattice structure. However, currently deployed embedding algorithms for privacy-preserving Social Overlays cannot achieve a sufficiently accurate embedding, and hence conventional routing algorithms fail. Developing Social Overlays with acceptable performance therefore requires better models and enhanced algorithms that guarantee convergence in the presence of local optima with regard to the distance to the target. We suggest a model for Social Overlays that includes inaccurate embeddings and arbitrary degree distributions. We further propose NextBestOnce, a routing algorithm that achieves a polylog routing length despite local optima. We provide analytical bounds on the performance of NextBestOnce assuming a scale-free degree distribution, and furthermore show that its performance can be improved by more than a constant factor by including Neighbor-of-Neighbor information in the routing decisions.
    Comment: 23 pages, 2 figures
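
    A minimal sketch of the kind of routing the abstract describes: greedy forwarding over an embedded graph that never revisits a node and also considers neighbor-of-neighbor candidates to escape local optima. The representation (`graph` as an adjacency dict, `coords` as the embedding, `dist` as the metric) is an illustrative assumption, not the paper's formal model or its exact algorithm.

```python
def route(graph, coords, dist, source, target, max_hops=1000):
    """Greedily forward towards `target`; each node is visited at most once."""
    current, visited, path = source, {source}, [source]
    for _ in range(max_hops):
        if current == target:
            return path
        best_nbr, best_d = None, float("inf")
        for nbr in graph[current]:
            if nbr in visited:
                continue
            # Consider the neighbor itself and its unvisited neighbors (2-hop lookahead).
            for cand in [nbr] + [v for v in graph[nbr] if v not in visited]:
                d = dist(coords[cand], coords[target])
                if d < best_d:
                    best_nbr, best_d = nbr, d  # forward via the direct neighbor
        if best_nbr is None:
            return None  # stuck: every reachable candidate was already visited
        visited.add(best_nbr)
        path.append(best_nbr)
        current = best_nbr
    return None
```

    The `visited` set is what keeps the walk from oscillating around a local optimum, and the two-hop lookahead loosely corresponds to the Neighbor-of-Neighbor variant mentioned in the abstract.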

    Distributed Connectivity Decomposition

    We present time-efficient distributed algorithms for decomposing graphs with large edge or vertex connectivity into multiple spanning or dominating trees, respectively. As their primary applications, these decompositions allow us to achieve information flow with size close to the connectivity by parallelizing it along the trees. More specifically, our distributed decomposition algorithms are as follows: (I) A decomposition of each undirected graph with vertex-connectivity $k$ into (fractionally) vertex-disjoint weighted dominating trees with total weight $\Omega(\frac{k}{\log n})$, in $\widetilde{O}(D+\sqrt{n})$ rounds. (II) A decomposition of each undirected graph with edge-connectivity $\lambda$ into (fractionally) edge-disjoint weighted spanning trees with total weight $\lceil\frac{\lambda-1}{2}\rceil(1-\varepsilon)$, in $\widetilde{O}(D+\sqrt{n\lambda})$ rounds. We also show round complexity lower bounds of $\widetilde{\Omega}(D+\sqrt{\frac{n}{k}})$ and $\widetilde{\Omega}(D+\sqrt{\frac{n}{\lambda}})$ for the above two decompositions, using techniques of [Das Sarma et al., STOC'11]. Moreover, our vertex-connectivity decomposition extends to centralized algorithms and improves the time complexity of [Censor-Hillel et al., SODA'14] from $O(n^3)$ to near-optimal $\widetilde{O}(m)$. As corollaries, we also get distributed oblivious routing broadcast with $O(1)$-competitive edge-congestion and $O(\log n)$-competitive vertex-congestion. Furthermore, the vertex connectivity decomposition leads to near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized $\widetilde{O}(m)$ and distributed $\widetilde{O}(D+\sqrt{n})$. The former moves toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$ centralized exact algorithm, while the latter is the first distributed vertex connectivity approximation.
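
    To make the notion of packing a graph into edge-disjoint spanning trees concrete, here is a naive, centralized sketch that greedily peels spanning trees off the edge set with union-find. It is purely illustrative: it is not the paper's distributed algorithm, uses unweighted rather than fractional trees, and gives none of the weight or round guarantees stated above.

```python
def _find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def peel_spanning_trees(n, edges):
    """Greedily peel edge-disjoint spanning trees of an n-vertex graph
    (vertices 0..n-1, edges given as (u, v) pairs)."""
    remaining, trees = list(edges), []
    while remaining:
        parent = list(range(n))
        tree, leftover = [], []
        for u, v in remaining:
            ru, rv = _find(parent, u), _find(parent, v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v))
            else:
                leftover.append((u, v))
        if len(tree) < n - 1:  # the remaining edges no longer span all vertices
            break
        trees.append(tree)
        remaining = leftover
    return trees

# Example: K4 listed as a cycle plus diagonals yields two edge-disjoint spanning trees.
# peel_spanning_trees(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)])
```

    Greedy peeling is order-dependent and can fall well short of the optimal packing, which is part of why the actual decomposition algorithms are considerably more involved.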

    Tiny Groups Tackle Byzantine Adversaries

    A popular technique for tolerating malicious faults in open distributed systems is to establish small groups of participants, each of which has a non-faulty majority. These groups are used as building blocks to design attack-resistant algorithms. Despite over a decade of active research, current constructions require group sizes of $O(\log n)$, where $n$ is the number of participants in the system. This group size is important, since communication and state costs scale polynomially with this parameter. Given the stubbornness of this logarithmic barrier, a natural question is whether better bounds are possible. Here, we consider an attacker that controls a constant fraction of the total computational resources in the system. By leveraging proof-of-work (PoW), we demonstrate how to reduce the group size exponentially to $O(\log\log n)$ while maintaining strong security guarantees. This reduction in group size yields a significant improvement in communication and state costs.
    Comment: This work is supported by the National Science Foundation grant CCF 1613772 and a C Spire Research Gift.
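
    The construction leverages proof-of-work as its basic primitive. Below is a minimal hash-based PoW sketch (find a nonce whose SHA-256 digest clears a difficulty threshold); the function names and the 8-byte nonce encoding are illustrative assumptions, not the paper's protocol.

```python
import hashlib
from itertools import count

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has
    at least `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap check that a claimed nonce really clears the difficulty threshold."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

    Intuitively, requiring each identity to present such a solution bounds the number of identities an attacker with a constant fraction of the computational power can sustain, which is what the group-size reduction exploits.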

    Stochastic Analysis of a Churn-Tolerant Structured Peer-to-Peer Scheme

    We present and analyze a simple and general scheme to build a churn (fault)-tolerant structured Peer-to-Peer (P2P) network. Our scheme shows how to "convert" a static network into a dynamic distributed hash table (DHT)-based P2P network such that all the good properties of the static network are guaranteed with high probability (w.h.p.). Applying our scheme to a cube-connected cycles network, for example, yields an $O(\log N)$-degree connected network in which every search succeeds in $O(\log N)$ hops w.h.p., using $O(\log N)$ messages, where $N$ is the expected stable network size. Our scheme has a constant storage overhead (the number of nodes responsible for servicing a data item) and an $O(\log N)$ overhead (messages and time) per insertion, and essentially no overhead for deletions. All these bounds are essentially optimal. While DHT schemes with similar guarantees are already known in the literature, this work is new in the following aspects: (1) it presents a rigorous mathematical analysis of the scheme under a general stochastic model of churn and shows the above guarantees; (2) the theoretical analysis is complemented by a simulation-based analysis that validates the asymptotic bounds even in moderately sized networks and also studies performance under changing stable network size; (3) the presented scheme seems especially suitable for efficiently maintaining dynamic structures under churn. In particular, we show that a spanning tree of low diameter can be efficiently maintained in constant time and a logarithmic number of messages per insertion or deletion w.h.p.
    Keywords: P2P Network, DHT Scheme, Churn, Dynamic Spanning Tree, Stochastic Analysis
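
    For readers unfamiliar with DHTs, the following consistent-hashing sketch shows the general idea of mapping each data item to a small set of responsible peers on a hash ring, with cheap joins and leaves under churn. It is an illustrative assumption for background only, not the paper's cube-connected-cycles construction or its stochastic analysis.

```python
import bisect
import hashlib

def h(x: str) -> int:
    """Hash a string to a point on the ring."""
    return int.from_bytes(hashlib.sha256(x.encode()).digest(), "big")

class Ring:
    def __init__(self, peers):
        self.points = sorted((h(p), p) for p in peers)

    def owners(self, key: str, replicas: int = 3):
        """Peers responsible for `key`: its next `replicas` successors on the ring."""
        ring_keys = [pt for pt, _ in self.points]
        i = bisect.bisect_right(ring_keys, h(key))
        return [self.points[(i + j) % len(self.points)][1] for j in range(replicas)]

    def join(self, peer):
        bisect.insort(self.points, (h(peer), peer))

    def leave(self, peer):
        self.points.remove((h(peer), peer))

if __name__ == "__main__":
    ring = Ring([f"peer-{i}" for i in range(8)])
    print(ring.owners("item-42"))  # the peers currently storing "item-42"
```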