
    Data-Oblivious Graph Algorithms in Outsourced External Memory

    Motivated by privacy preservation for outsourced data, data-oblivious external memory is a computational framework in which a client performs computations on data stored at a semi-trusted server without revealing her data to the server. This approach facilitates collaboration and reliability over traditional frameworks, and it provides privacy protection even though the server has full access to the data and can monitor how the client accesses it. The challenge is that even if the data is encrypted, the server can learn information from the client's data access pattern; hence, access patterns must also be obfuscated. We investigate privacy-preserving algorithms for outsourced external memory that are based on data-oblivious algorithms, that is, algorithms in which every possible sequence of data accesses is independent of the data values. We give new efficient data-oblivious algorithms in the outsourced external memory model for a number of fundamental graph problems. Our results include new data-oblivious external-memory methods for constructing minimum spanning trees, performing various traversals on rooted trees, answering least common ancestor queries on trees, computing biconnected components, and forming open ear decompositions. None of our algorithms makes use of constant-time random oracles. Comment: 20 pages
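
    To make the access-pattern leak concrete, here is a minimal illustrative sketch (ours, not the paper's construction): a position-dependent lookup versus a data-oblivious one that touches every cell in a fixed order, so a server logging accesses sees the same sequence for every query. The names leaky_read and oblivious_read are hypothetical.

        # Illustrative sketch, not the paper's construction: a data-oblivious
        # array lookup. "log" stands in for a server that records every
        # index the client touches.

        def leaky_read(store, i, log):
            log.append(i)                # the access pattern reveals i
            return store[i]

        def oblivious_read(store, i, log):
            # Touch every cell in a fixed order: the log is identical for
            # every query index i, so the pattern leaks nothing about i.
            # (Real oblivious code would also avoid the data-dependent
            # branch below, e.g. with an arithmetic select.)
            result = None
            for j in range(len(store)):
                log.append(j)
                value = store[j]
                result = value if j == i else result
            return result

        store = [10, 20, 30, 40]
        log_a, log_b = [], []
        assert oblivious_read(store, 1, log_a) == 20
        assert oblivious_read(store, 3, log_b) == 40
        assert log_a == log_b            # same access sequence either way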

    Finding 3-edge-connected components in parallel

    A parallel algorithm for finding the 3-edge-connected components of an undirected graph on a CRCW PRAM is presented. The time and work complexity of this algorithm are O(log n) and O((m+n) log log n), respectively, where n is the number of vertices and m is the number of edges in the input graph. The algorithm is based on ear decomposition and a reduction of 3-edge-connectivity to 1-vertex-connectivity. This is the first parallel algorithm for finding 3-edge-connected components.
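
    The parallel algorithm itself does not fit in a short sketch, but the object it computes sits one level above cut edges: a graph is 3-edge-connected iff it has no cut of one or two edges. As background (and plainly not the CRCW PRAM algorithm of the paper), here is the standard sequential low-link DFS that finds all cut edges (bridges); the function name bridges is ours.

        # Standard sequential bridge (cut-edge) finding via DFS low-link
        # values; background for edge connectivity, not the parallel
        # algorithm described above.

        def bridges(n, edges):
            adj = [[] for _ in range(n)]
            for k, (u, v) in enumerate(edges):
                adj[u].append((v, k))
                adj[v].append((u, k))
            disc = [-1] * n        # DFS discovery times
            low = [0] * n          # earliest discovery time reachable
            out, timer = [], [0]

            def dfs(u, parent_edge):
                disc[u] = low[u] = timer[0]
                timer[0] += 1
                for v, k in adj[u]:
                    if k == parent_edge:
                        continue
                    if disc[v] == -1:
                        dfs(v, k)
                        low[u] = min(low[u], low[v])
                        if low[v] > disc[u]:   # no back edge spans (u, v)
                            out.append((u, v))
                    else:
                        low[u] = min(low[u], disc[v])

            for s in range(n):
                if disc[s] == -1:
                    dfs(s, -1)
            return out

        # Triangle 0-1-2 with a pendant edge: only (2, 3) is a bridge.
        print(bridges(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))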

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly available real-world graph (the Hyperlink Web graph, with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the works that do process the Hyperlink graph use distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018

    Hammock-on-ears decomposition: a technique for the efficient parallel solution of shortest paths and other problems

    We show how to decompose efficiently in parallel any graph into a number, γ̃, of outerplanar subgraphs (called "hammocks") satisfying certain separator properties. Our work combines and extends the sequential hammock decomposition technique introduced by G. Frederickson and the parallel ear decomposition technique; thus we call it the hammock-on-ears decomposition. We mention that the hammock-on-ears decomposition also draws on techniques from computational geometry and that an embedding of the graph need not be provided with the input. We achieve this decomposition in O(log n log log n) time using O(n+m) CREW PRAM processors, for an n-vertex, m-edge graph or digraph. The hammock-on-ears decomposition implies a general framework for solving graph problems efficiently. Its value is demonstrated by a variety of applications on a significant class of (di)graphs, namely sparse (di)graphs. This class consists of all (di)graphs whose γ̃ lies between 1 and Θ(n), and includes planar graphs and graphs with genus o(n). We improve previous bounds for certain instances of shortest paths and related problems in this class of graphs. These problems include all pairs shortest paths and all pairs reachability.

    Parallel Algorithmic Techniques for Combinatorial Computation

    Parallel computation offers the promise of great improvements in the solution of problems that, if we were restricted to sequential computation, would take so much time that solving them would be impractical. Parallel computers have a drawback, however: they seem to be harder to program. For this reason, parallel algorithms in practice are often restricted to simple problems such as matrix multiplication. Certainly this is useful, and in fact we shall later see some non-obvious uses of matrix manipulation, but many of the large problems requiring solution are of a more complex nature. In particular, an instance of a problem may be structured as an arbitrary graph or tree, rather than in the regular order of a matrix. In this paper we describe a number of algorithmic techniques that have been developed for solving such combinatorial problems. The intent of the paper is to show how the algorithmic tools we present can be used as building blocks for higher-level algorithms, and to provide pointers to the literature where the reader can look up the specifics of these algorithms. We make no claim to completeness; a number of techniques have been omitted for brevity or because their chief application is not combinatorial in nature. In particular, we give very little attention to parallel sorting, although sorting is used as a subroutine in a number of the algorithms we describe. We also describe only algorithms, and not lower bounds, for solving problems in parallel.
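
    As one concrete instance of the "non-obvious uses of matrix manipulation" alluded to above, the classic reduction of graph reachability to repeated boolean matrix squaring is sketched below: each squaring is a matrix product, which parallelizes well, and O(log n) squarings suffice. This is textbook material, not code from the paper.

        # Classic sketch: transitive closure (all-pairs reachability) by
        # repeated boolean matrix squaring; ceil(log2 n) squarings, each a
        # highly parallelizable matrix product.

        def boolean_mult(a, b):
            n = len(a)
            return [[any(a[i][k] and b[k][j] for k in range(n))
                     for j in range(n)] for i in range(n)]

        def transitive_closure(adj):
            n = len(adj)
            # start from reachability in at most one step (allow staying put)
            r = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
            steps = 1
            while steps < n:       # r covers paths of length <= steps
                r = boolean_mult(r, r)
                steps *= 2
            return r

        # Path 0 -> 1 -> 2 -> 3: vertex 0 reaches vertex 3.
        adj = [[False] * 4 for _ in range(4)]
        for u, v in [(0, 1), (1, 2), (2, 3)]:
            adj[u][v] = True
        print(transitive_closure(adj)[0][3])   # True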

    Fast Computation of Small Cuts via Cycle Space Sampling

    We describe a new sampling-based method for determining cuts in an undirected graph. For a graph (V, E), its cycle space is the family of all subsets of E that have even degree at each vertex. We prove that, with high probability, sampling the cycle space identifies the cuts of a graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (pairs of edges that together form a cut) of a graph. In the model of distributed computing in a graph G = (V, E) with O(log V)-bit messages, our approach yields faster algorithms for several problems. The diameter of G is denoted by Diam, and the maximum degree by Delta. We obtain simple O(Diam)-time distributed algorithms for finding all cut edges, 2-edge-connected components, and cut pairs, matching or improving upon previous time bounds. Under natural conditions these new algorithms are universally optimal, i.e., an Omega(Diam)-time lower bound holds on every graph. We obtain an O(Diam + Delta/log V)-time distributed algorithm for finding cut vertices; this is faster than the best previous algorithm when Delta, Diam = O(sqrt(V)). A simple extension of our work yields the first distributed algorithm with sub-linear time for 3-edge-connected components. The basic distributed algorithms are Monte Carlo, but they can be made Las Vegas without increasing the asymptotic complexity. In the model of parallel computing on the EREW PRAM, our approach yields a simple algorithm with optimal time complexity O(log V) for finding cut pairs and 3-edge-connected components. Comment: A previous version appeared in Proc. 35th ICALP, pages 145--160, 2008
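
    A minimal sequential sketch of the sampling idea, as we read the abstract: give each non-tree edge a random w-bit label, let every tree edge inherit the XOR of the labels of the non-tree edges whose fundamental cycle contains it, and report tree edges with label zero as cut edges. A non-bridge tree edge gets a nonzero label except with probability 2^-w, and non-tree edges are never cut edges. The function name and label width w are our illustrative choices, and this sequential version stands in for the distributed and PRAM algorithms of the paper.

        import random

        # Monte Carlo cut-edge detection by sampling the cycle space:
        # a tree edge is a bridge iff no non-tree edge's fundamental
        # cycle covers it, i.e. iff its XOR label is zero (whp).

        def cut_edges_sampled(n, edges, w=32):
            adj = [[] for _ in range(n)]
            for k, (u, v) in enumerate(edges):
                adj[u].append((v, k))
                adj[v].append((u, k))

            # Build a spanning forest, recording discovery order.
            parent = [-1] * n
            parent_edge = [-1] * n
            order, seen = [], [False] * n
            for s in range(n):
                if seen[s]:
                    continue
                seen[s] = True
                stack = [s]
                while stack:
                    u = stack.pop()
                    order.append(u)
                    for v, k in adj[u]:
                        if not seen[v]:
                            seen[v] = True
                            parent[v], parent_edge[v] = u, k
                            stack.append(v)

            tree = {parent_edge[v] for v in range(n) if parent_edge[v] != -1}
            mark = [0] * n
            for k, (u, v) in enumerate(edges):
                if k not in tree:              # non-tree edge: random label
                    r = random.getrandbits(w)
                    mark[u] ^= r               # contributes to every tree
                    mark[v] ^= r               # edge on its cycle's path

            # XOR of marks over the subtree of v = label of edge (parent, v);
            # edges with both endpoints inside the subtree cancel out.
            sub = mark[:]
            for u in reversed(order):          # children before parents
                if parent[u] != -1:
                    sub[parent[u]] ^= sub[u]
            return [edges[parent_edge[v]] for v in range(n)
                    if parent_edge[v] != -1 and sub[v] == 0]

        # Triangle 0-1-2 with a pendant edge: (2, 3) is the only cut edge.
        print(cut_edges_sampled(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))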

    A Parallel Algorithm for Computing Minimum Spanning Trees

    We present a simple and implementable algorithm that computes a minimum spanning tree of an undirected weighted graph G = (V, E) with n = |V| vertices and m = |E| edges on an EREW PRAM in O(log^(3/2) n) time using n + m processors. This represents a substantial improvement in the running time over previous results for this problem while using, at the same time, the weakest of the PRAM models. It also implies the existence of algorithms with the same complexity bounds on the EREW PRAM for the connectivity, ear decomposition, biconnectivity, strong orientation, st-numbering, and Euler tour problems.
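
    For intuition about where the logarithmic round structure comes from, here is a sequential sketch of Boruvka rounds, the skeleton most parallel MST algorithms build on: every component simultaneously picks its minimum-weight incident edge, so the number of components at least halves per round. This is the textbook pattern, not the paper's EREW PRAM algorithm.

        # Boruvka rounds, sketched sequentially: O(log n) rounds, each
        # contracting every component along its cheapest outgoing edge.
        # Ties are broken by comparing (weight, u, v) tuples, which keeps
        # the chosen edge set cycle-free.

        def boruvka_mst(n, edges):             # edges are (weight, u, v)
            comp = list(range(n))
            def find(x):                       # union-find with path halving
                while comp[x] != x:
                    comp[x] = comp[comp[x]]
                    x = comp[x]
                return x

            mst, done = [], False
            while not done:
                best = [None] * n              # cheapest edge per component
                for e in edges:
                    _, u, v = e
                    cu, cv = find(u), find(v)
                    if cu == cv:
                        continue
                    if best[cu] is None or e < best[cu]:
                        best[cu] = e
                    if best[cv] is None or e < best[cv]:
                        best[cv] = e
                done = True
                for e in {e for e in best if e is not None}:
                    _, u, v = e
                    cu, cv = find(u), find(v)
                    if cu != cv:               # merge the two components
                        comp[cu] = cv
                        mst.append(e)
                        done = False
            return mst

        edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
        print(sorted(boruvka_mst(4, edges)))   # [(1,1,2), (2,2,3), (3,0,2)]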

    An efficient parallel algorithm for planarity

    We describe a parallel algorithm for testing a graph for planarity and for finding an embedding of a planar graph. For a graph on n vertices, the algorithm runs in O(log^2 n) steps on n processors of a parallel RAM. The previous best parallel algorithm for planarity testing also ran in O(log^2 n) time (J. Ja'Ja' and J. Simon, SIAM J. Comput. 11, No. 2 (1982), 313–328), but used a reduction to solving linear systems and hence required Ω(M(n) log^2 n) processors, where M(n) is the sequential time for n × n matrix multiplication, whereas our processor bounds are within a polylog factor of optimal. The most significant aspect of our parallel algorithms is the use of a sophisticated data structure for representing sets of embeddings, the PQ-tree of K. Booth and G. Lueker (J. Comput. System Sci. 13, No. 3 (1976), 335–379). Previously no parallel algorithms for PQ-trees were known. We give efficient parallel algorithms for manipulating PQ-trees, which we use in our planarity algorithm.
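
    Planarity testing proper needs machinery like the PQ-trees above, but Euler's formula gives a one-line necessary condition that any tester can apply first: a simple planar graph with n >= 3 vertices has at most 3n - 6 edges. A sketch (ours, purely a pre-check; passing it proves nothing):

        # Necessary condition from Euler's formula: a simple planar graph
        # with n >= 3 vertices has m <= 3n - 6 edges. This can only reject;
        # K_{3,3} (n=6, m=9) passes the count yet is non-planar.

        def maybe_planar(n, m):
            if n < 3:
                return True          # at most one edge: trivially planar
            return m <= 3 * n - 6

        print(maybe_planar(5, 10))   # K_5: 10 > 9, so definitely non-planar
        print(maybe_planar(6, 9))    # K_{3,3}: inconclusive (True)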