
    Distributed Computing on Core-Periphery Networks: Axiom-based Design

    Full text link
    Inspired by social networks and complex systems, we propose a core-periphery network architecture that supports fast computation for many distributed algorithms and is robust and efficient in the number of links. Rather than providing a concrete network model, we take an axiom-based design approach. We provide three intuitive (and independent) algorithmic axioms and prove that any network satisfying all three enjoys an efficient algorithm for a range of tasks (e.g., MST, sparse matrix multiplication, etc.). We also show the minimality of our axiom set: for networks that satisfy only a proper subset of the axioms, the same efficiency cannot be guaranteed for any deterministic algorithm.
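    For intuition only, the sketch below builds one hypothetical core-periphery topology (a clique core of about sqrt(n) nodes, with every periphery node attached to a random core node). The paper deliberately avoids fixing such a construction and instead characterizes networks by axioms, so the sizes, structure, and function names here are illustrative assumptions, not the paper's model.

```python
import math
import random

def core_periphery_graph(n: int, seed: int = 0) -> dict[int, set[int]]:
    """Hypothetical core-periphery topology: a clique core of ~sqrt(n) nodes,
    each periphery node linked to one random core node. Illustrative only;
    the paper characterizes such networks by axioms, not by a construction."""
    rng = random.Random(seed)
    core_size = max(1, math.isqrt(n))
    adj: dict[int, set[int]] = {v: set() for v in range(n)}
    core = range(core_size)
    # Core: a clique, so core nodes can talk to each other directly.
    for i in core:
        for j in core:
            if i != j:
                adj[i].add(j)
    # Periphery: each remaining node attaches to one random core node,
    # keeping the total number of links close to linear in n.
    for v in range(core_size, n):
        c = rng.choice(core)
        adj[v].add(c)
        adj[c].add(v)
    return adj

if __name__ == "__main__":
    g = core_periphery_graph(100)
    print(sum(len(nbrs) for nbrs in g.values()) // 2, "edges")
```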

    Distributed Edge Connectivity in Sublinear Time

    Full text link
    We present the first sublinear-time algorithm for a distributed message-passing network to compute its edge connectivity $\lambda$ exactly in the CONGEST model, as long as there are no parallel edges. Our algorithm takes $\tilde O(n^{1-1/353}D^{1/353}+n^{1-1/706})$ time to compute $\lambda$ and a cut of cardinality $\lambda$ with high probability, where $n$ and $D$ are the number of nodes and the diameter of the network, respectively, and $\tilde O$ hides polylogarithmic factors. This running time is sublinear in $n$ (i.e., $\tilde O(n^{1-\epsilon})$) whenever $D$ is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when $\lambda=O(n^{1/8-\epsilon})$ [Thurimella PODC'95; Pritchard, Thurimella, ACM Trans. Algorithms'11; Nanongkai, Su, DISC'14] or (ii) approximately [Ghaffari, Kuhn, DISC'13; Nanongkai, Su, DISC'14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a $k$-edge connectivity certificate for any $k=O(n^{1-\epsilon})$ in time $\tilde O(\sqrt{nk}+D)$. Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang, SODA'19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup, STOC'15], we can decompose the network into a sublinear number of clusters with small average diameter and without any mincut separating a cluster (except the `trivial' ones). Finally, by extending the tree packing technique from [Karger STOC'96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an $\tilde O(n)$-time algorithm for computing an exact minimum cut for weighted graphs.
    Comment: Accepted at the 51st ACM Symposium on Theory of Computing (STOC 2019)
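    The k-edge-connectivity certificate used in the first step can be illustrated with the classic sequential construction (in the style of Nagamochi-Ibaraki and Thurimella): take the union of k edge-disjoint maximal spanning forests; the result has at most k(n-1) edges yet preserves every cut of value at most k. The sketch below shows only that sequential construction, not the paper's distributed $\tilde O(\sqrt{nk}+D)$-round algorithm, and the function names are ours.

```python
def sparse_certificate(n, edges, k):
    """Union of k edge-disjoint maximal spanning forests of an n-node graph
    given as a list of (u, v) pairs.  Keeps at most k*(n-1) edges while
    preserving all cuts of value <= k.  Sequential illustration only."""
    remaining = list(edges)
    certificate = []
    for _ in range(k):
        if not remaining:
            break
        # Build one maximal spanning forest of the remaining edges with
        # union-find, add it to the certificate, and repeat on the rest.
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        forest, rest = [], []
        for u, v in remaining:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                forest.append((u, v))
            else:
                rest.append((u, v))
        certificate.extend(forest)
        remaining = rest
    return certificate

# Example: a 4-cycle with a chord; the 2-certificate preserves all cuts of value <= 2.
print(sparse_certificate(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 2))
```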

    Parallel and Distributed Algorithms for the Housing Allocation Problem

    Get PDF
    We give parallel and distributed algorithms for the housing allocation problem. In this problem, there is a set of agents and a set of houses. Each agent has a strict preference list for a subset of houses. We need to find a matching such that some criterion is optimized. One such criterion is Pareto optimality. A matching is Pareto optimal if no coalition of agents can be strictly better off by exchanging houses among themselves. We also study the housing market problem, a variant of the housing allocation problem, where each agent initially owns a house. In addition to Pareto optimality, we are also interested in finding the core of a housing market. A matching is in the core if there is no coalition of agents that can be better off by breaking away from other agents and switching houses only among themselves. In the first part of this work, we show that computing a Pareto optimal matching of a house allocation is in $\mathbf{CC}$ and computing the core of a housing market is $\mathbf{CC}$-hard. Given a matching, we also show that verifying whether it is in the core can be done in $\mathbf{NC}$. We then give an algorithm to show that computing a maximum Pareto optimal matching for the housing allocation problem is in $\mathbf{RNC}^2$ and quasi-$\mathbf{NC}^2$. In the second part of this work, we present a distributed version of the top trading cycle algorithm for finding the core of a housing market. To that end, we first present two algorithms for finding all the disjoint cycles in a functional graph: a Las Vegas algorithm which terminates in $O(\log l)$ rounds with high probability, where $l$ is the length of the longest cycle, and a deterministic algorithm which terminates in $O(\log^* n \log l)$ rounds, where $n$ is the number of nodes in the graph. Both algorithms work in the synchronous distributed model and use messages of size $O(\log n)$.
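    The Top Trading Cycle (TTC) procedure that the second part distributes is simple to state sequentially: every remaining agent points at the current owner of its most-preferred remaining house, the cycles of the resulting functional graph trade and drop out, and the process repeats. Below is a minimal sequential TTC sketch, assuming every agent strictly ranks every house; it is not the paper's distributed $O(\log l)$- or $O(\log^* n \log l)$-round version, and the names are illustrative.

```python
def top_trading_cycles(prefs, owner_of):
    """Sequential Top Trading Cycles for a housing market.

    prefs[a]    -- agent a's houses in strictly decreasing preference order
    owner_of[h] -- the agent who initially owns house h
    Returns the core allocation as a dict: agent -> house.
    Assumes every agent ranks every house (sketch only).
    """
    allocation = {}
    agents = set(prefs)
    houses = set(owner_of)
    while agents:
        # Each remaining agent points at the owner of its top remaining house,
        # which defines a functional graph on the remaining agents.
        top_house = {a: next(h for h in prefs[a] if h in houses) for a in agents}
        points_to = {a: owner_of[top_house[a]] for a in agents}
        # Walking from any agent must reach a cycle; everyone on that cycle
        # receives the house they point at, then those agents and houses drop out.
        seen, a = [], next(iter(agents))
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        for agent in seen[seen.index(a):]:
            allocation[agent] = top_house[agent]
            agents.discard(agent)
            houses.discard(top_house[agent])
    return allocation

# Example: agents 0 and 1 prefer each other's houses and swap; agent 2 keeps h2.
prefs = {0: ["h1", "h0", "h2"], 1: ["h0", "h1", "h2"], 2: ["h1", "h2", "h0"]}
owner = {"h0": 0, "h1": 1, "h2": 2}
print(top_trading_cycles(prefs, owner))
```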

    Asymptotically Faster Quantum Distributed Algorithms for Approximate Steiner Trees and Directed Minimum Spanning Trees

    Full text link
    The CONGEST and CONGEST-CLIQUE models have been carefully studied to represent situations where the communication bandwidth between processors in a network is severely limited. Messages of only $O(\log n)$ bits of information each may be sent between processors in each round. The quantum versions of these models allow the processors instead to communicate and compute with quantum bits under the same bandwidth limitations. This leads to the following natural research question: What problems can be solved more efficiently in these quantum models than in the classical ones? Building on existing work, we contribute to this question in two ways. Firstly, we present two algorithms in the Quantum CONGEST-CLIQUE model of distributed computation that succeed with high probability; one for producing an approximately optimal Steiner tree, and one for producing an exact directed minimum spanning tree, each of which uses $\tilde{O}(n^{1/4})$ rounds of communication and $\tilde{O}(n^{9/4})$ messages, where $n$ is the number of nodes in the network. The algorithms thus achieve a lower asymptotic round and message complexity than any known algorithms in the classical CONGEST-CLIQUE model. At a high level, we achieve these results by combining classical algorithmic frameworks with quantum subroutines. An existing framework that uses a distributed version of Grover's search algorithm to accelerate triangle finding lies at the core of the asymptotic speedup. Secondly, we carefully characterize the constants and logarithmic factors involved in our algorithms as well as related algorithms, which are otherwise commonly obscured by $\tilde{O}$ notation. The analysis shows that some improvements are needed to render both our and existing related quantum and classical algorithms practical, as their asymptotic speedups only help for very large values of $n$.
    Comment: 23 pages, 0 figures
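    The quantum ingredient referenced here is Grover-style search, which finds a marked item among N candidates with about (pi/4)*sqrt(N) oracle calls instead of roughly N/2 classical probes. The numpy sketch below simulates plain, non-distributed Grover search on a state vector purely to convey that quadratic speedup; it is not the paper's CONGEST-CLIQUE triangle-finding routine, and all names are ours.

```python
import numpy as np

def grover_search(num_items: int, marked: set, seed: int = 0) -> int:
    """Simulate textbook Grover search over `num_items` entries.

    The oracle flips the phase of marked entries; the diffusion step reflects
    the state vector about the uniform superposition.  After roughly
    (pi/4)*sqrt(N/M) iterations a marked item is measured with high
    probability.  Plain state-vector simulation, illustrative only.
    """
    rng = np.random.default_rng(seed)
    n, m = num_items, len(marked)
    marked_idx = np.fromiter(marked, dtype=int)
    state = np.full(n, 1.0 / np.sqrt(n))          # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(n / m))
    for _ in range(iterations):
        state[marked_idx] *= -1.0                 # oracle: phase-flip marked items
        state = 2 * state.mean() - state          # diffusion: reflect about the mean
    probs = state ** 2
    return int(rng.choice(n, p=probs / probs.sum()))

# Example: one of two marked items among 1024 is found after ~17 oracle calls.
print(grover_search(1024, marked={137, 901}))
```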

    A Distributed Algorithm for Directed Minimum-Weight Spanning Tree

    Get PDF

    Storage and Search in Dynamic Peer-to-Peer Networks

    Full text link
    We study robust and efficient distributed algorithms for searching, storing, and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to guarantee, despite a high node churn rate, that a large number of nodes in the network can store, retrieve, and maintain a large number of data items. Our main contributions are fast randomized distributed algorithms that guarantee the above with high probability (whp) even under high adversarial churn: 1. A randomized distributed search algorithm that (whp) guarantees that searches from as many as $n - o(n)$ nodes ($n$ is the stable network size) succeed in $O(\log n)$ rounds despite $O(n/\log^{1+\delta} n)$ churn, for any small constant $\delta > 0$, per round. We assume that the churn is controlled by an oblivious adversary (that has complete knowledge and control of what nodes join and leave and at what time, but is oblivious to the random choices made by the algorithm). 2. A storage and maintenance algorithm that guarantees (whp) that data items can be efficiently stored (with only $\Theta(\log n)$ copies of each data item) and maintained in a dynamic P2P network with churn rate up to $O(n/\log^{1+\delta} n)$ per round. Our search algorithm together with our storage and maintenance algorithm guarantees that as many as $n - o(n)$ nodes can efficiently store, maintain, and search even under $O(n/\log^{1+\delta} n)$ churn per round. Our algorithms require only polylogarithmic in $n$ bits to be processed and sent (per round) by each node. To the best of our knowledge, our algorithms are the first known fully-distributed storage and search algorithms that provably work under highly dynamic settings (i.e., high churn rates per step).
    Comment: to appear at SPAA 201
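    To give a feel for why $\Theta(\log n)$ random replicas per item can survive heavy churn when paired with maintenance, the toy simulation below replicates each item on about $c\log n$ uniformly random nodes, replaces a random batch of nodes each round, and then tops surviving items back up. It uses random rather than oblivious-adversarial churn and models none of the paper's actual machinery (committees, random walks, expansion arguments); every constant and name here is an assumption for illustration.

```python
import math
import random

def simulate(n=10_000, items=200, c=4, rounds=50, seed=1):
    """Toy replication-and-maintenance simulation.

    Each item keeps ~c*log(n) replicas on random live nodes.  Every round a
    random batch of nodes leaves (and the same number of fresh nodes joins),
    after which items with at least one surviving replica are topped back up.
    Returns how many items lost every copy.  Illustration only.
    """
    rng = random.Random(seed)
    copies = int(c * math.log(n))
    churn = int(n / math.log(n) ** 2)             # nodes replaced per round
    live = set(range(n))
    pool = sorted(live)
    next_id = n
    replicas = {i: set(rng.sample(pool, copies)) for i in range(items)}
    for _ in range(rounds):
        leaving = set(rng.sample(pool, churn))
        live -= leaving
        live |= set(range(next_id, next_id + churn))
        next_id += churn
        pool = sorted(live)
        for reps in replicas.values():
            reps &= live                          # replicas on departed nodes die
            if reps and len(reps) < copies:       # maintenance: top back up
                reps |= set(rng.sample(pool, copies - len(reps)))
    return sum(1 for reps in replicas.values() if not reps)

print("items lost:", simulate())   # expected: 0
```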

    A Review of the Energy Efficient and Secure Multicast Routing Protocols for Mobile Ad hoc Networks

    Full text link
    This paper presents a thorough survey of recent work addressing energy-efficient multicast routing protocols and secure multicast routing protocols in Mobile Ad hoc Networks (MANETs). Many open issues and proposed solutions underscore the need for energy management and security in ad hoc wireless networks. The objective of a multicast routing protocol for MANETs is to support the propagation of data from a sender to all the receivers of a multicast group while trying to use the available bandwidth efficiently in the presence of frequent topology changes. Multicasting can improve the efficiency of the wireless link when multiple copies of a message must be sent, by exploiting the inherent broadcast property of wireless transmission. Secure multicast routing plays a significant role in MANETs. However, offering energy-efficient and secure multicast routing is a difficult and challenging task. In recent years, various multicast routing protocols have been proposed for MANETs. These protocols have distinguishing features and use different mechanisms.
    Comment: 15 pages