
    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the work on the Hyperlink graph uses distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly-available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018
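
    The workhorse pattern behind many of these implementations is frontier-based graph traversal. As a rough illustration only (the function name and adjacency-dict representation below are made up for this sketch, not the GBBS API, and the frontier loop runs serially rather than in parallel), a level-synchronous BFS looks like this:

```python
from collections import defaultdict

def bfs_levels(adj, source):
    """Frontier-based BFS: returns the hop distance of every reachable
    vertex. GBBS-style codes implement this pattern with work-efficient
    parallel primitives; here the frontier is processed serially,
    purely for illustration."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:              # a parallel loop in real implementations
            for v in adj[u]:
                if v not in level:      # parallel versions use compare-and-swap here
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# toy usage
adj = defaultdict(list)
for u, v in [(0, 1), (1, 2), (0, 3), (3, 4)]:
    adj[u].append(v)
    adj[v].append(u)
print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 3: 1, 2: 2, 4: 2}
```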

    An event-based architecture for solving constraint satisfaction problems

    Constraint satisfaction problems (CSPs) are typically solved using conventional von Neumann computing architectures. However, these architectures do not reflect the distributed nature of many of these problems and are thus ill-suited to solving them. In this paper we present a hybrid analog/digital hardware architecture specifically designed to solve such problems. We cast CSPs as networks of stereotyped multi-stable oscillatory elements that communicate using digital pulses, or events. The oscillatory elements are implemented using analog non-stochastic circuits. The non-repeating phase relations among the oscillatory elements drive the exploration of the solution space. We show that this hardware architecture can yield state-of-the-art performance on a number of CSPs under reasonable assumptions on the implementation. We present measurements from a prototype electronic chip to demonstrate that a physical implementation of the proposed architecture is robust to practical non-idealities, and to validate the proposed theory. Comment: First two authors contributed equally to this work
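
    To make the idea of stateful elements that interact only through discrete events more concrete, here is a minimal software analogue for a toy graph-coloring CSP; the function name and update rule are illustrative inventions (essentially decentralized min-conflicts) and say nothing about the actual analog/digital circuits described in the paper:

```python
import random

def event_driven_coloring(adj, n_colors, max_events=10000, seed=0):
    """Toy discrete-event analogue (assumed, simplified): each CSP
    variable is a multi-stable element; when an element hears a
    conflicting event from a neighbour, it hops to a non-conflicting
    state. This is decentralised min-conflicts, not the hardware."""
    rng = random.Random(seed)
    state = {v: rng.randrange(n_colors) for v in adj}
    for _ in range(max_events):
        if all(state[a] != state[b] for a in adj for b in adj[a]):
            return state                          # proper colouring found
        v = rng.choice(list(adj))                 # element v fires an event
        if all(state[u] != state[v] for u in adj[v]):
            continue                              # no conflict: nothing to do
        taken = {state[u] for u in adj[v]}
        free = [c for c in range(n_colors) if c not in taken]
        state[v] = rng.choice(free) if free else rng.randrange(n_colors)
    return None

# toy usage: 3-colour a 5-cycle
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(event_driven_coloring(cycle5, 3))
```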

    Fast Dynamic Graph Algorithms for Parameterized Problems

    A fully dynamic graph is a data structure that (1) supports edge insertions and deletions and (2) answers problem-specific queries. The time complexities of (1) and (2) are referred to as the update time and the query time, respectively. There is a large body of research on dynamic graphs whose update time and query time are $o(|G|)$, that is, sublinear in the graph size. However, almost all of this work concerns problems in P. In this paper, we investigate dynamic graphs for NP-hard problems, exploiting the notion of fixed-parameter tractability (FPT). We give dynamic graphs for Vertex Cover and Cluster Vertex Deletion parameterized by the solution size $k$. These dynamic graphs achieve almost the best possible update time $O(\mathrm{poly}(k)\log n)$ and query time $O(f(\mathrm{poly}(k),k))$, where $f(n,k)$ is the time complexity of any static graph algorithm for the problems. We obtain these results by dynamically maintaining an approximate solution which can be used to construct a small problem kernel. Exploiting the dynamic graph for Cluster Vertex Deletion, as a corollary, we obtain a quasilinear-time (polynomial) kernelization algorithm for Cluster Vertex Deletion. Until now, only quadratic-time kernelization algorithms were known for this problem. We also give a dynamic graph for Chromatic Number parameterized by the solution size of Cluster Vertex Deletion, and a dynamic graph for bounded-degree Feedback Vertex Set parameterized by the solution size. Assuming the parameter is a constant, each dynamic graph can be updated in $O(\log n)$ time and can compute a solution in $O(1)$ time. These results are obtained by another approach. Comment: SWAT 2014, to appear
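
    The core idea of maintaining an approximate solution that seeds a small kernel can be illustrated with the classic dynamic maximal-matching baseline for Vertex Cover. The sketch below (class name and the O(degree) deletion cost are assumptions for illustration, not the paper's $O(\mathrm{poly}(k)\log n)$ structure) keeps the matched endpoints as a 2-approximate cover under insertions and deletions:

```python
class DynamicVertexCover:
    """Maintains a maximal matching under edge insertions/deletions;
    the matched endpoints form a 2-approximate vertex cover, which can
    then seed a kernel. Simplified baseline, not the paper's data
    structure."""

    def __init__(self):
        self.adj = {}        # vertex -> set of neighbours
        self.mate = {}       # vertex -> matched partner (if any)

    def _try_match(self, u):
        if u in self.mate:
            return
        for w in self.adj.get(u, ()):
            if w not in self.mate:
                self.mate[u] = w
                self.mate[w] = u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:        # the deleted edge was matched
            del self.mate[u], self.mate[v]
            self._try_match(u)           # greedily restore maximality
            self._try_match(v)

    def cover(self):
        return set(self.mate)            # matched vertices: a 2-approx cover

# toy usage
dvc = DynamicVertexCover()
for e in [(1, 2), (2, 3), (3, 4)]:
    dvc.insert(*e)
dvc.delete(1, 2)
print(dvc.cover())   # covers the remaining path 2-3-4
```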

    Fast Local Computation Algorithms

    For input $x$, let $F(x)$ denote the set of outputs that are the "legal" answers for a computational problem $F$. Suppose $x$ and the members of $F(x)$ are so large that there is not enough time to read them in their entirety. We propose a model of {\em local computation algorithms} which, for a given input $x$, support queries by a user to values of specified locations $y_i$ in a legal output $y \in F(x)$. When more than one legal output $y$ exists for a given $x$, the local computation algorithm should answer in a way that is consistent with at least one such $y$. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of $k$-wise independent random variables and Beck's analysis in his algorithmic approach to the Lovász Local Lemma, which under certain conditions can be applied to construct local computation algorithms that run in {\em polylogarithmic} time and space. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring, and satisfying $k$-SAT formulas. Comment: A preliminary version of this paper appeared in ICS 2011, pp. 223-23
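
    A standard way to see the query model in action is the local simulation of greedy maximal independent set over a random vertex ordering; the sketch below is an assumed illustration that answers membership queries consistently without reading the whole graph, and stands in for the paper's $k$-wise-independence and Beck-style construction rather than reproducing it:

```python
import hashlib
from functools import lru_cache

def rank(v, seed=0):
    """Pseudo-random rank of vertex v (a stand-in for the small sample
    spaces of k-wise independent variables used in the paper)."""
    return hashlib.sha256(f"{seed}:{v}".encode()).hexdigest()

def make_mis_oracle(adj):
    """Local computation oracle for a maximal independent set: answers
    'is v in the MIS?' by simulating greedy MIS in random-rank order,
    exploring only the part of the graph the query depends on."""
    @lru_cache(maxsize=None)
    def in_mis(v):
        # v joins the MIS iff no lower-ranked neighbour is in the MIS
        return all(not in_mis(u) for u in adj[v] if rank(u) < rank(v))
    return in_mis

# toy usage: answers are consistent with one fixed legal output
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
oracle = make_mis_oracle(adj)
print([v for v in adj if oracle(v)])    # a maximal independent set of the path
```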

    Message passing for the coloring problem: Gallager meets Alon and Kahale

    Message passing algorithms are popular in many combinatorial optimization problems. For example, experimental results show that {\em survey propagation} (a certain message passing algorithm) is effective in finding proper $k$-colorings of random graphs in the near-threshold regime. In 1962 Gallager introduced the concept of Low Density Parity Check (LDPC) codes and suggested a simple decoding algorithm based on message passing. In 1994 Alon and Kahale exhibited a coloring algorithm and proved its usefulness for finding a $k$-coloring of graphs drawn from a certain planted-solution distribution over $k$-colorable graphs. In this work we give an interpretation of Alon and Kahale's coloring algorithm in light of Gallager's decoding algorithm, thus showing a connection between the two problems - coloring and decoding. This also provides rigorous evidence for the usefulness of the message passing paradigm for the graph coloring problem. Our techniques can be applied to several other combinatorial optimization problems and networking-related issues. Comment: 11 pages
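
    For intuition, the local-improvement phase used in Alon-Kahale style algorithms after their spectral start can be mimicked by a simple iterative update in which every vertex moves to the color least represented among its neighbors. The code below is a toy, randomly initialized version of that message-passing step, not the analyzed algorithm:

```python
import random
from collections import Counter

def iterative_recoloring(adj, k, rounds=50, seed=0):
    """Toy local-improvement pass: each vertex repeatedly adopts the
    colour least represented among its neighbours (a simple
    message-passing update). The spectral initialisation and the
    planted-distribution analysis from the paper are omitted; colours
    start uniformly at random here."""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in adj}
    for _ in range(rounds):
        changed = False
        for v in adj:
            votes = Counter(colour[u] for u in adj[v])   # messages from neighbours
            best = min(range(k), key=lambda c: votes[c])
            if votes[best] < votes[colour[v]]:
                colour[v] = best
                changed = True
        if not changed:
            break
    return colour
```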

    Algorithmic and enumerative aspects of the Moser-Tardos distribution

    Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovasz Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that in certain conditions when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show in some cases this MT variant can run faster than the original MT algorithm itself, and develop the first-known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before by proving that the MT-distribution has "large" min-entropy and hence that its support-size is large