
    Fast Distributed PageRank Computation

    Over the last decade, PageRank has gained importance in a wide range of applications and domains, ever since it first proved effective in determining node importance in large graphs (and was a pioneering idea behind Google's search engine). In distributed computing alone, the PageRank vector, or more generally random-walk-based quantities, have been used for several different applications, ranging from determining important nodes and load balancing to search and identifying connectivity structures. Surprisingly, however, there has been little work towards designing provably efficient, fully distributed algorithms for computing PageRank. The difficulty is that traditional matrix-vector-multiplication-style iterative methods may not always adapt well to the distributed setting, owing to communication bandwidth restrictions and convergence rates. In this paper, we present fast random-walk-based distributed algorithms for computing PageRank in general graphs and prove strong bounds on the round complexity. We first present a distributed algorithm that takes O(log n / ε) rounds with high probability on any graph (directed or undirected), where n is the network size and ε is the reset probability used in the PageRank computation (typically a fixed constant). We then present a faster algorithm that takes O(√(log n) / ε) rounds on undirected graphs. Both algorithms are scalable, as each node sends only a small (polylog n) number of bits over each edge per round. To the best of our knowledge, these are the first fully distributed algorithms for computing the PageRank vector with provably efficient running time.
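
    The paper's distributed algorithms are not reproduced here, but the random-walk view of PageRank that they build on can be sketched in a few lines. The following sequential Monte Carlo estimator is an illustrative sketch only (eps plays the role of the reset probability; walks_per_node is an assumed tuning parameter), not the distributed algorithm of the paper; in the distributed setting each node would instead forward walk tokens to its neighbours round by round.

        import random
        from collections import defaultdict

        def monte_carlo_pagerank(adj, eps=0.15, walks_per_node=100, seed=0):
            """Estimate PageRank by simulating random walks that terminate
            with probability eps at each step (the reset probability).
            adj maps each node to a list of out-neighbours."""
            rng = random.Random(seed)
            visits = defaultdict(int)
            for u in adj:
                for _ in range(walks_per_node):
                    v = u
                    while True:
                        visits[v] += 1
                        # the walk ends on a reset or at a dangling node
                        if rng.random() < eps or not adj[v]:
                            break
                        v = rng.choice(adj[v])
            total = len(adj) * walks_per_node
            # each walk makes about 1/eps visits, so this normalisation
            # yields a distribution summing to roughly 1
            return {v: eps * c / total for v, c in visits.items()}

        # toy directed graph
        graph = {0: [1], 1: [2], 2: [0, 3], 3: [0]}
        print(monte_carlo_pagerank(graph))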

    Distributed Computation as Hierarchy

    This paper presents a new computational model of distributed systems, called the phase web, that extends V. Pratt's orthocurrence relation from 1986. The model uses mutual exclusion to express sequence, and a new kind of hierarchy to replace event sequences, posets, and pomsets. The model explicitly connects computation to a discrete Clifford algebra that is in turn extended into homology and cohomology, wherein the recursive nature of objects and boundaries becomes apparent and itself subject to hierarchical recursion. Topsy, a programming environment embodying the phase web, is available from www.cs.auc.dk/topsy.

    Distributed anonymous discrete function computation

    We propose a model for deterministic distributed function computation by a network of identical and anonymous nodes. In this model, each node has bounded computation and storage capabilities that do not grow with the network size. Furthermore, each node knows only its neighbors, not the entire graph. Our goal is to characterize the class of functions that can be computed within this model. In our main result, we provide a necessary condition for computability, which we show to be nearly sufficient in the sense that every function satisfying the condition can at least be approximated. The problem of computing suitably rounded averages in a distributed manner plays a central role in our development; we provide an algorithm that solves it in time that grows quadratically with the size of the network.
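
    Since distributed averaging is the central primitive in the development above, a minimal sketch of neighbour-to-neighbour averaging may help fix ideas. This is plain synchronous consensus with Metropolis weights, not the rounded-average algorithm of the paper; adj, values and rounds are assumed names.

        def gossip_average(adj, values, rounds=200):
            """Synchronous local averaging: each node repeatedly moves toward
            its neighbours' values using Metropolis weights, so all nodes
            converge to the global average while talking only to neighbours.
            adj maps node -> list of neighbours (undirected, connected graph)."""
            x = dict(values)
            deg = {u: len(adj[u]) for u in adj}
            for _ in range(rounds):
                new = {}
                for u in adj:
                    s = x[u]
                    for v in adj[u]:
                        # Metropolis weights keep the update doubly stochastic
                        s += (x[v] - x[u]) / (1 + max(deg[u], deg[v]))
                    new[u] = s
                x = new
            return x

        # toy path graph 0-1-2-3 with values 0..3 (true average 1.5)
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(gossip_average(adj, {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}))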

    Distributed computation of persistent homology

    Persistent homology is a popular and powerful tool for capturing topological features of data. Advances in algorithms for computing persistent homology have reduced the computation time drastically -- as long as the algorithm does not exhaust the available memory. Following up on a recently presented parallel method for persistence computation on shared-memory systems, we demonstrate that a simple adaptation of the standard reduction algorithm leads to a variant for distributed systems. Our algorithmic design ensures that the data is distributed over the nodes without redundancy; this permits the computation of much larger instances than on a single machine. Moreover, we observe that the parallelism at least compensates for the overhead caused by communication between nodes, and often even speeds up the computation compared to sequential and even parallel shared-memory algorithms. In our experiments, we were able to compute the persistent homology of filtrations with more than a billion (10^9) elements within seconds on a cluster with 32 nodes using less than 10 GB of memory per node.
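
    The "standard reduction algorithm" referred to above is a left-to-right column reduction of the boundary matrix over Z/2. The sequential sketch below (not the distributed variant of the paper; the toy filtration at the end is an assumed example) shows the core rule: repeatedly add the earlier column with the same pivot (lowest nonzero entry) until pivots are unique or the column becomes zero.

        def reduce_boundary_matrix(columns):
            """Standard persistence reduction over Z/2.
            columns[j] is the set of row indices with a 1 in column j of the
            boundary matrix; returns the (birth, death) persistence pairs."""
            low_to_col = {}            # lowest row index -> owning column
            pairs = []
            cols = [set(c) for c in columns]
            for j, col in enumerate(cols):
                while col:
                    low = max(col)
                    if low not in low_to_col:
                        low_to_col[low] = j
                        pairs.append((low, j))    # simplex `low` dies at simplex j
                        break
                    col ^= cols[low_to_col[low]]  # column addition over Z/2
                cols[j] = col
            return pairs

        # toy filtration of a filled triangle:
        # vertices 0,1,2; edges 3={0,1}, 4={1,2}, 5={0,2}; face 6={3,4,5}
        boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
        print(reduce_boundary_matrix(boundary))   # -> [(1, 3), (2, 4), (5, 6)]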

    Distributed Function Computation with Confidentiality

    A set of terminals observe correlated data and seek to compute functions of the data using interactive public communication. At the same time, it is required that the value of a private function of the data remains concealed from an eavesdropper observing this communication. In general, the private function and the functions computed by the nodes can all be different. We show that a class of functions is securely computable if and only if the conditional entropy of the data, given the value of the private function, is greater than the least rate of interactive communication required for a related multiterminal source-coding task. A single-letter formula is provided for this rate in special cases.
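
    The characterization above compares the conditional entropy of the data given the private function's value against a communication rate. As a toy illustration of that first quantity only (not the paper's single-letter formula; pmf and g are assumed names), the snippet below computes H(X | g(X)) for a small distribution.

        import math
        from collections import defaultdict

        def conditional_entropy(pmf, g):
            """H(X | g(X)) in bits for a finite pmf: x -> P(x) and a function g."""
            mass = defaultdict(float)        # probability mass of each value of g
            for x, p in pmf.items():
                mass[g(x)] += p
            return -sum(p * math.log2(p / mass[g(x)])
                        for x, p in pmf.items() if p > 0)

        # X uniform on {0,1,2,3}, private function g(x) = x mod 2:
        # one bit of X stays hidden given g(X)
        pmf = {x: 0.25 for x in range(4)}
        print(conditional_entropy(pmf, lambda x: x % 2))   # 1.0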

    Distributed measurement-based quantum computation

    We develop a formal model for distributed measurement-based quantum computations, adopting an agent-based view, such that computations are described locally where possible. Because the network quantum state is in general entangled, we need to model it as a global structure, reminiscent of global memory in classical agent systems. Local quantum computations are described as measurement patterns. Since measurement-based quantum computation is inherently distributed, this allows us to extend naturally several concepts of the measurement calculus, a formal model for such computations. Our goal is to define an assembly language, i.e. we assume that computations are well defined and we do not concern ourselves with verification techniques. The operational semantics for systems of agents is given by a probabilistic transition system, and we define operational equivalence in a way that corresponds to the notion of bisimilarity. With this in place, we prove that teleportation is bisimilar to a direct quantum channel, and that this also holds within the context of larger networks.
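
    The bisimilarity claim concerns teleportation viewed as a distributed measurement-based computation. The state-vector simulation below is only a plain illustration that the teleportation protocol reproduces a direct quantum channel; it does not use the paper's measurement calculus or agent semantics, and all helper names are assumptions.

        import numpy as np

        zero = np.array([1, 0], dtype=complex)
        one = np.array([0, 1], dtype=complex)
        I2 = np.eye(2, dtype=complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def kron(*ops):
            out = np.array([1], dtype=complex)
            for op in ops:
                out = np.kron(out, op)
            return out

        def measure(state, qubit, n, rng):
            """Projectively measure one qubit; return (outcome, collapsed state)."""
            amps = state.reshape([2] * n)
            probs = np.array([np.sum(np.abs(np.take(amps, o, axis=qubit)) ** 2)
                              for o in (0, 1)])
            outcome = rng.choice(2, p=probs / probs.sum())
            idx = [slice(None)] * n
            idx[qubit] = 1 - outcome
            amps = amps.copy()
            amps[tuple(idx)] = 0          # discard the branch that was not observed
            amps /= np.linalg.norm(amps)
            return outcome, amps.reshape(-1)

        rng = np.random.default_rng(0)
        psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # random input state
        psi /= np.linalg.norm(psi)

        bell = (kron(zero, zero) + kron(one, one)) / np.sqrt(2)
        state = np.kron(psi, bell)        # qubits 0,1 at the sender; qubit 2 at the receiver

        # sender: CNOT(0 -> 1), then H on qubit 0
        cnot = np.kron(np.outer(zero, zero), kron(I2, I2)) + \
               np.kron(np.outer(one, one), kron(X, I2))
        state = kron(H, I2, I2) @ (cnot @ state)

        # measure the sender's two qubits; the outcomes form the classical message
        m0, state = measure(state, 0, 3, rng)
        m1, state = measure(state, 1, 3, rng)

        # receiver applies X^m1 then Z^m0 to its qubit
        if m1:
            state = kron(I2, I2, X) @ state
        if m0:
            state = kron(I2, I2, Z) @ state

        # the receiver's qubit now matches the input state (fidelity ~ 1)
        received = state.reshape(2, 2, 2)[m0, m1, :]
        print(abs(np.vdot(psi, received)) ** 2)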

    On Distributed Computation in Noisy Random Planar Networks

    We consider distributed computation of functions of distributed data in random planar networks with noisy wireless links. We present a new algorithm for computing the maximum value which is order-optimal in the number of transmissions and in computation time. We also adapt the histogram computation algorithm of Ying et al. to make the histogram computation time-optimal.
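
    As a rough illustration of the setting (not the order-optimal algorithm of the paper; flip_p, reps and rounds are assumed parameters), the sketch below computes the maximum by flooding, protecting each message against bit flips on the noisy links with simple repetition coding.

        import random

        def noisy_send(value, bits=8, flip_p=0.02, reps=7, rng=random):
            """Send an integer over a binary symmetric channel, repeating each
            bit `reps` times and decoding by majority vote."""
            out = 0
            for i in range(bits):
                b = (value >> i) & 1
                ones = sum((b ^ (rng.random() < flip_p)) for _ in range(reps))
                out |= (1 if ones > reps // 2 else 0) << i
            return out

        def noisy_max(adj, values, rounds=10, seed=0):
            """Flooding-style max computation: each round every node sends its
            current estimate to all neighbours over noisy links and keeps the
            largest value it has seen."""
            rng = random.Random(seed)
            est = dict(values)
            for _ in range(rounds):
                inbox = {u: [noisy_send(est[v], rng=rng) for v in adj[u]] for u in adj}
                for u in adj:
                    est[u] = max([est[u]] + inbox[u])
            return est

        # toy 5-node ring with values 3..7; all estimates should reach 7
        adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
        print(noisy_max(adj, {i: i + 3 for i in range(5)}))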