
    Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs


    Algebraic Methods in the Congested Clique

    In this work, we use algebraic methods for studying distance computation and subgraph detection tasks in the congested clique model. Specifically, we adapt parallel matrix multiplication implementations to the congested clique, obtaining an O(n^{1-2/\omega})-round matrix multiplication algorithm, where \omega < 2.3728639 is the exponent of matrix multiplication. In conjunction with known techniques from centralised algorithmics, this gives significant improvements over previous best upper bounds in the congested clique model. The highlight results include: -- triangle and 4-cycle counting in O(n^{0.158}) rounds, improving upon the O(n^{1/3}) triangle detection algorithm of Dolev et al. [DISC 2012], -- a (1 + o(1))-approximation of all-pairs shortest paths in O(n^{0.158}) rounds, improving upon the \tilde{O}(n^{1/2})-round (2 + o(1))-approximation algorithm of Nanongkai [STOC 2014], and -- computing the girth in O(n^{0.158}) rounds, which is the first non-trivial solution in this model. In addition, we present a novel constant-round combinatorial algorithm for detecting 4-cycles. Comment: This work is a merger of arxiv:1412.2109 and arxiv:1412.266
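
    As a side note on the algebraic idea these results build on (not the congested-clique protocol itself): in the centralised setting, the number of triangles in a simple graph equals trace(A^3)/6, so any fast matrix multiplication routine immediately yields a triangle counter. A minimal Python sketch, assuming a dense numpy adjacency matrix:

# Toy sequential illustration of algebraic triangle counting:
# the number of triangles in a simple undirected graph is trace(A^3) / 6,
# since each triangle is counted once per starting vertex and per orientation.
# This is NOT the congested-clique protocol from the paper, only the
# centralised algebraic idea it builds on.
import numpy as np

def count_triangles(adj: np.ndarray) -> int:
    """adj: symmetric 0/1 adjacency matrix with zero diagonal."""
    a = adj.astype(np.int64)
    a3 = a @ a @ a              # fast matrix multiplication does the work here
    return int(np.trace(a3)) // 6

# Example: a 4-cycle with one chord contains exactly 2 triangles.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])
assert count_triangles(A) == 2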

    Finding Even Cycles Faster via Capped k-Walks

    In this paper, we consider the problem of finding a cycle of length 2k (a C_{2k}) in an undirected graph G with n nodes and m edges for constant k \ge 2. A classic result by Bondy and Simonovits [J.Comb.Th.'74] implies that if m \ge 100k n^{1+1/k}, then G contains a C_{2k}, further implying that one needs to consider only graphs with m = O(n^{1+1/k}). Previously the best known algorithms were an O(n^2) algorithm due to Yuster and Zwick [J.Disc.Math'97] as well as an O(m^{2-(1+\lceil k/2\rceil^{-1})/(k+1)}) algorithm by Alon et al. [Algorithmica'97]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a C_{2k} if one exists. This bound is O(n^2) exactly when m = \Theta(n^{1+1/k}). For 4-cycles our new bound coincides with Alon et al., while for every k > 2 our bound yields a polynomial improvement in m. Yuster and Zwick noted that it is "plausible to conjecture that O(n^2) is the best possible bound in terms of n". We show "conditional optimality": if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2-\epsilon}) for any \epsilon > 0 under the widely believed combinatorial BMM conjecture. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles, giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. Our main technical contribution is an involved analysis proving several properties of such walks, which may be of independent interest. Comment: To appear at STOC'1
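
    For intuition on why 4-cycles are the easy case: the folklore combinatorial routine enumerates 2-walks u-v-w and reports a C_4 as soon as some pair {u, w} has been seen through two different midpoints, running in O(\sum_v deg(v)^2) time, which the paper's capped k-walk technique improves on. A minimal Python sketch of that folklore routine (an illustration, not the algorithm from the paper):

# Folklore 4-cycle detection: two distinct midpoints for the same pair {u, w}
# of endpoints form a 4-cycle u - v1 - w - v2 - u.
from itertools import combinations

def find_c4(adj):
    """adj: dict mapping each vertex to a set of neighbours (undirected graph).
    Returns a 4-cycle as a vertex tuple, or None if no C_4 exists."""
    seen = {}  # unordered endpoint pair (u, w) -> first midpoint joining them
    for v, nbrs in adj.items():
        for u, w in combinations(sorted(nbrs), 2):
            if (u, w) in seen and seen[(u, w)] != v:
                return (u, seen[(u, w)], w, v)   # cycle u - seen[(u,w)] - w - v - u
            seen[(u, w)] = v
    return None

# Example: the 4-cycle 0-1-2-3-0 plus a pendant vertex 4.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2, 4}, 4: {3}}
print(find_c4(G))  # -> (1, 0, 3, 2), i.e. the cycle 1-0-3-2-1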

    Balanced Families of Perfect Hash Functions and Their Applications

    The construction of perfect hash functions is a well-studied topic. In this paper, this concept is generalized with the following definition. We say that a family of functions from [n] to [k] is a \delta-balanced (n,k)-family of perfect hash functions if for every S \subseteq [n], |S| = k, the number of functions that are 1-1 on S is between T/\delta and \delta T for some constant T > 0. The standard definition of a family of perfect hash functions requires that there will be at least one function that is 1-1 on S, for each S of size k. In the new notion of balanced families, we require the number of 1-1 functions to be almost the same (taking \delta to be close to 1) for every such S. Our main result is that for any constant \delta > 1, a \delta-balanced (n,k)-family of perfect hash functions of size 2^{O(k \log \log k)} \log n can be constructed in time 2^{O(k \log \log k)} n \log n. Using the technique of color-coding we can apply our explicit constructions to devise approximation algorithms for various counting problems in graphs. In particular, we exhibit a deterministic polynomial time algorithm for approximating both the number of simple paths of length k and the number of simple cycles of size k for any k \le O(\log n / \log \log \log n) in a graph with n vertices. The approximation is up to any fixed desirable relative error.
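
    For context, the randomised colour-coding technique that the paper derandomises works as follows: colour the vertices with k colours uniformly at random and use dynamic programming over colour subsets to search for a "colourful" path, which is automatically simple. A minimal randomised Python sketch is below; the paper replaces the random colouring with an explicit balanced family to obtain deterministic approximate counting, which this toy version does not attempt.

# Randomised colour-coding test for a simple path on k vertices.
import random

def has_k_path(adj, k, trials=200):
    """adj: dict vertex -> iterable of neighbours (undirected graph).
    A fixed k-vertex path becomes colourful with probability >= k!/k^k,
    so repeating O(e^k) independent trials gives constant error probability;
    'trials' here is just a toy knob."""
    vertices = list(adj)
    for _ in range(trials):
        colour = {v: random.randrange(k) for v in vertices}
        # dp[v] = set of colour subsets (bitmasks) realisable by a colourful
        # path ending at v; after i extension rounds, paths have i+1 vertices.
        dp = {v: {1 << colour[v]} for v in vertices}
        for _ in range(k - 1):
            new = {v: set() for v in vertices}
            for v in vertices:
                for u in adj[v]:
                    for mask in dp[u]:
                        if not mask & (1 << colour[v]):
                            new[v].add(mask | (1 << colour[v]))
            dp = new
        full = (1 << k) - 1
        if any(full in masks for masks in dp.values()):
            return True
    return False

# Example: a path on 5 vertices contains a simple path on k = 4 vertices.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
print(has_k_path(path, 4))  # True (with high probability)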

    Faster Algorithms for Rectangular Matrix Multiplication

    Let \alpha be the maximal value such that the product of an n x n^{\alpha} matrix by an n^{\alpha} x n matrix can be computed with n^{2+o(1)} arithmetic operations. In this paper we show that \alpha > 0.30298, which improves the previous record \alpha > 0.29462 by Coppersmith (Journal of Complexity, 1997). More generally, we construct a new algorithm for multiplying an n x n^k matrix by an n^k x n matrix, for any value k \neq 1. The complexity of this algorithm is better than all known algorithms for rectangular matrix multiplication. In the case of square matrix multiplication (i.e., for k = 1), we recover exactly the complexity of the algorithm by Coppersmith and Winograd (Journal of Symbolic Computation, 1990). These new upper bounds can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. For example, we directly obtain a O(n^{2.5302})-time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, improving over the O(n^{2.575})-time algorithm by Zwick (JACM 2002), and also improve the time complexity of sparse square matrix multiplication. Comment: 37 pages; v2: some additions in the acknowledgment
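
    In the standard notation (an assumption about conventions; the abstract itself does not spell it out), \omega(1,k,1) denotes the exponent of multiplying an n x n^k matrix by an n^k x n matrix, so the two quantities in the abstract can be written as

        \omega(1,k,1) = \inf \{ \tau : \text{an } n x n^k \text{ by } n^k x n \text{ product is computable with } O(n^{\tau}) \text{ arithmetic operations} \},
        \alpha = \sup \{ k : \omega(1,k,1) = 2 \}.

    In particular \omega = \omega(1,1,1), and \alpha > 0.30298 means that rectangular products whose inner dimension is at most n^{0.30298} cost only n^{2+o(1)} operations.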

    Fast Sparse Matrix Multiplication

    Let A and B be two n x n matrices over a ring R (e.g., the reals or the integers), each containing at most m non-zero elements. We present a new algorithm that multiplies A and B using O(m^{0.7} n^{1.2} + n^{2+o(1)}) algebraic operations (i.e., multiplications, additions and subtractions) over R. The naive matrix multiplication algorithm, on the other hand, may need to perform \Omega(mn) operations to accomplish the same task. For m \le n^{1.14}, the new algorithm performs an almost optimal number of only n^{2+o(1)} operations. For m \le n^{1.68}, the new algorithm is also faster than the best known matrix multiplication algorithm for dense matrices, which uses O(n^{2.38}) algebraic operations. The new algorithm is obtained using a surprisingly straightforward combination of a simple combinatorial idea and existing fast rectangular matrix multiplication algorithms. We also obtain improved algorithms for the multiplication of more than two sparse matrices.
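
    A hedged sketch of the high-level recipe the abstract alludes to: split the inner index by density, send the dense part to a matrix-multiplication routine and handle the sparse remainder naively. The split criterion and threshold below are illustrative simplifications, and the dense product uses numpy's general routine rather than a fast rectangular algorithm, so this mirrors the structure only, not the stated running time.

# Split-by-density sparse matrix product (structural illustration only).
import numpy as np

def sparse_matmul(A, B, heavy_threshold):
    """Multiply n x n numeric matrices A and B, exploiting sparsity.
    Inner indices l where column l of A and row l of B carry many nonzeros
    are 'heavy' and go through one dense rectangular product; the remaining
    'light' indices are accumulated naively, touching only nonzero entries."""
    n = A.shape[0]
    nnz = [np.count_nonzero(A[:, l]) + np.count_nonzero(B[l, :]) for l in range(n)]
    heavy = [l for l in range(n) if nnz[l] > heavy_threshold]
    light = [l for l in range(n) if nnz[l] <= heavy_threshold]

    # Heavy part: a rectangular product, n x |heavy| times |heavy| x n.
    C = A[:, heavy] @ B[heavy, :] if heavy else np.zeros((n, n), dtype=A.dtype)

    # Light part: naive accumulation over nonzero entries only.
    for l in light:
        rows = np.nonzero(A[:, l])[0]
        cols = np.nonzero(B[l, :])[0]
        for i in rows:
            C[i, cols] += A[i, l] * B[l, cols]
    return C

# Sanity check against the dense product on a random sparse instance.
rng = np.random.default_rng(0)
M = (rng.random((50, 50)) < 0.05).astype(float)
N = (rng.random((50, 50)) < 0.05).astype(float)
assert np.allclose(sparse_matmul(M, N, heavy_threshold=10), M @ N)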