
    Low Randomness Rumor Spreading via Hashing

    We consider the classical rumor spreading problem, where a piece of information must be disseminated from a single node to all $n$ nodes of a given network. We devise two simple push-based protocols, in which nodes choose the neighbor they send the information to in each round using pairwise independent hash functions, or a pseudo-random generator, respectively. For several well-studied topologies our algorithms use exponentially fewer random bits than previous protocols. For example, in complete graphs, expanders, and random graphs only a polylogarithmic number of random bits are needed in total to spread the rumor in $O(\log n)$ rounds with high probability. Previous explicit algorithms require $\Omega(n)$ random bits to achieve the same round complexity. For complete graphs, the amount of randomness used by our hashing-based algorithm is within an $O(\log n)$-factor of the theoretical minimum determined by [Giakkoupis and Woelfel, 2011].
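    The hashing idea lends itself to a short simulation. The sketch below is only an illustration, not the paper's exact protocol or analysis: a single pairwise independent hash function $h(x)=((ax+b)\bmod p)\bmod M$ is shared by all nodes, and in round $t$ an informed node $v$ pushes the rumor to the neighbor with index $h(\mathrm{encode}(v,t))\bmod\deg(v)$, so the seed $(a,b)$ is essentially the only randomness consumed. The encoding of $(v,t)$, the round cap, and the use of networkx are assumptions made for this example.

```python
import random
import networkx as nx

P = (1 << 61) - 1  # a Mersenne prime; h(x) = ((a*x + b) mod P) mod M is pairwise independent

def make_hash(M):
    """Draw one pairwise independent hash function h: {0,...,P-1} -> {0,...,M-1}."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % M

def push_spread(G, source):
    """Push protocol sketch: in round t, every informed node v forwards the rumor
    to the neighbor with index h(encode(v, t)) mod deg(v), where h is one shared
    hash function.  Assumes integer node labels 0..n-1 and minimum degree >= 1."""
    n = G.number_of_nodes()
    nbrs = {v: list(G.neighbors(v)) for v in G}
    h = make_hash(P)                      # shared function; reduce modulo deg(v) below
    informed, rounds, cap = {source}, 0, 10 * n
    while len(informed) < n and rounds < cap:
        rounds += 1
        # encode(v, t) = v*cap + t is injective because 1 <= t <= cap
        informed |= {nbrs[v][h(v * cap + rounds) % len(nbrs[v])] for v in informed}
    return rounds

if __name__ == "__main__":
    print("rounds to inform K_256:", push_spread(nx.complete_graph(256), source=0))
```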

    Benchmark Graphs for Practical Graph Isomorphism

    The state-of-the-art solvers for the graph isomorphism problem can readily solve generic instances with tens of thousands of vertices. Indeed, experiments show that on inputs without particular combinatorial structure the algorithms scale almost linearly. In fact, it is non-trivial to create challenging instances for such solvers, and the number of difficult benchmark graphs available is quite limited. We describe a construction to efficiently generate small instances for the graph isomorphism problem that are difficult or even infeasible for said solvers. Up to this point the only other available instances posing challenges for isomorphism solvers were certain incidence structures of combinatorial objects (such as projective planes, Hadamard matrices, Latin squares, etc.). Experiments show that starting from 1500 vertices our new instances are several orders of magnitude more difficult on comparable input sizes. More importantly, our method is generic and efficient in the sense that one can quickly create many isomorphism instances on a desired number of vertices. In contrast, said combinatorial objects are rare and difficult to generate, whereas with the new construction it is possible to generate an abundance of instances of arbitrary size. Our construction hinges on the multipedes of Gurevich and Shelah and the Cai-Fürer-Immerman gadgets, which realize a certain abelian automorphism group and have repeatedly played a role in the context of graph isomorphism. Exploring the limits of such constructions, we also explain that there are group-theoretic obstructions to generalizing the construction with non-abelian gadgets. Comment: 32 pages.
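    For readers unfamiliar with the Cai-Fürer-Immerman gadgets mentioned above, the sketch below builds the textbook (abelian) CFI pair over a small base graph: an untwisted and a twisted copy that are non-isomorphic yet hard to separate by refinement-based techniques. This is only the classical construction, not the multipede-based generator of the paper; the helper names and the use of networkx are illustrative.

```python
from itertools import combinations
import networkx as nx

def cfi_graph(base, twist=False):
    """Classical Cai-Fuerer-Immerman construction over a connected base graph.

    Each base edge e contributes a pair of vertices (e, 0), (e, 1).  Each base
    vertex v contributes one gadget vertex (v, S) for every even-size subset S
    of the edges incident to v; (v, S) is joined to (e, 1) if e is in S and to
    (e, 0) otherwise.  twist=True flips these connections at one fixed edge,
    at one of its endpoints, which yields a non-isomorphic twisted copy.
    """
    G = nx.Graph()
    edges = [frozenset(e) for e in base.edges()]
    twisted_edge = edges[0] if twist else None
    for e in edges:
        G.add_node((e, 0)); G.add_node((e, 1))
    for v in base.nodes():
        inc = [e for e in edges if v in e]
        for r in range(0, len(inc) + 1, 2):          # even-size subsets only
            for S in combinations(inc, r):
                gadget = (v, frozenset(S))
                for e in inc:
                    bit = 1 if e in S else 0
                    if e == twisted_edge and v == min(e):  # apply the twist once
                        bit ^= 1
                    G.add_edge(gadget, (e, bit))
    return G

if __name__ == "__main__":
    base = nx.complete_graph(4)
    A, B = cfi_graph(base, twist=False), cfi_graph(base, twist=True)
    print(A.number_of_nodes(), B.number_of_nodes())   # equal sizes
    print(nx.is_isomorphic(A, B))                     # False: the twist matters
```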

    A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter

    In the Vertex Cover problem we are given a graph $G=(V,E)$ and an integer $k$ and have to determine whether there is a set $X\subseteq V$ of size at most $k$ such that each edge in $E$ has at least one endpoint in $X$. The problem can be easily solved in time $O^*(2^k)$, making it fixed-parameter tractable (FPT) with respect to $k$. While the fastest known algorithm takes only time $O^*(1.2738^k)$, much stronger improvements have been obtained by studying parameters that are smaller than $k$. Apart from treewidth-related results, the arguably best algorithm for Vertex Cover runs in time $O^*(2.3146^p)$, where $p=k-LP(G)$ is only the excess of the solution size $k$ over the best fractional vertex cover (Lokshtanov et al., TALG 2014). Since $p\leq k$ but $k$ cannot be bounded in terms of $p$ alone, this strictly increases the range of tractable instances. Recently, Garg and Philip (SODA 2016) greatly contributed to understanding the parameterized complexity of the Vertex Cover problem. They prove that $2LP(G)-MM(G)$ is a lower bound for the vertex cover size of $G$, where $MM(G)$ is the size of a largest matching of $G$, and proceed to study the parameter $\ell=k-(2LP(G)-MM(G))$. They give an algorithm of running time $O^*(3^\ell)$, proving that Vertex Cover is FPT in $\ell$. It can be easily observed that $\ell\leq p$ whereas $p$ cannot be bounded in terms of $\ell$ alone. We complement the work of Garg and Philip by proving that Vertex Cover admits a randomized polynomial kernelization in terms of $\ell$, i.e., an efficient preprocessing to size polynomial in $\ell$. This improves over the parameter $p=k-LP(G)$, for which this was previously known (Kratsch and Wahlström, FOCS 2012). Comment: Full version of ESA 2016 paper.
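    To make the parameters concrete, the sketch below computes $LP(G)$ by solving the fractional vertex cover LP, $MM(G)$ via a maximum matching, and then $p=k-LP(G)$ and $\ell=k-(2LP(G)-MM(G))$. It is purely illustrative (scipy and networkx are assumed to be available) and says nothing about the kernelization itself.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def vc_parameters(G, k):
    """Return (p, l) with p = k - LP(G) and l = k - (2*LP(G) - MM(G)), where
    LP(G) is the optimum of the fractional vertex cover LP (minimize sum x_v
    s.t. x_u + x_v >= 1 for each edge, 0 <= x_v <= 1) and MM(G) is the size of
    a maximum matching."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    # linprog uses A_ub @ x <= b_ub, so each edge gives -x_u - x_v <= -1
    A = np.zeros((G.number_of_edges(), len(nodes)))
    for row, (u, v) in enumerate(G.edges()):
        A[row, idx[u]] = A[row, idx[v]] = -1.0
    res = linprog(c=np.ones(len(nodes)), A_ub=A, b_ub=-np.ones(G.number_of_edges()),
                  bounds=[(0.0, 1.0)] * len(nodes), method="highs")
    lp = res.fun
    mm = len(nx.max_weight_matching(G, maxcardinality=True))  # matching as a set of edges
    return k - lp, k - (2 * lp - mm)

if __name__ == "__main__":
    G = nx.cycle_graph(5)            # C5: vertex cover number 3, LP = 2.5, MM = 2
    p, l = vc_parameters(G, k=3)
    print(f"p = {p:.1f}, l = {l:.1f}")   # here l = 0 < p = 0.5, so l can be strictly smaller
```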

    An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials

    For a "large" class $\mathcal{C}$ of continuous probability density functions (p.d.f.), we demonstrate that for every $w\in\mathcal{C}$ there is a mixture of discrete Binomial distributions (MDBD) with $T\geq N\sqrt{\phi_{w}/\delta}$ distinct Binomial distributions $B(\cdot,N)$ that $\delta$-approximates a discretized p.d.f. $\widehat{w}(i/N)\triangleq w(i/N)/[\sum_{\ell=0}^{N}w(\ell/N)]$ for all $i\in[3:N-3]$, where $\phi_{w}\geq\max_{x\in[0,1]}|w(x)|$. Also, we give two efficient parallel algorithms to find such an MDBD. Moreover, we propose a sequential algorithm that, on input an MDBD with $N=2^k$ for $k\in\mathbb{N}_{+}$ that induces a discretized p.d.f. $\beta$, a matrix $B=D-M$ that is either a Laplacian or an SDDM matrix, and a parameter $\epsilon\in(0,1)$, outputs in $\widehat{O}(\epsilon^{-2}m + \epsilon^{-4}nT)$ time a spectral sparsifier $D-\widehat{M}_{N} \approx_{\epsilon} D-D\sum_{i=0}^{N}\beta_{i}(D^{-1} M)^i$ of a matrix polynomial, where the $\widehat{O}(\cdot)$ notation hides $\mathrm{poly}(\log n,\log N)$ factors. This improves on the algorithm of Cheng et al. [CCLPT15], whose running time is $\widehat{O}(\epsilon^{-2} m N^2 + NT)$. Furthermore, our algorithm is parallelizable and runs in work $\widehat{O}(\epsilon^{-2}m + \epsilon^{-4}nT)$ and depth $O(\log N\cdot\mathrm{poly}(\log n)+\log T)$. Our main algorithmic contribution is the first efficient parallel algorithm that, on input a continuous p.d.f. $w\in\mathcal{C}$ and a matrix $B=D-M$ as above, outputs a spectral sparsifier of a matrix polynomial whose coefficients approximate component-wise the discretized p.d.f. $\widehat{w}$. Our results yield the first efficient and parallel algorithm that runs in nearly linear work and poly-logarithmic depth and analyzes the long-term behaviour of Markov chains in non-trivial settings. In addition, we strengthen Spielman and Peng's [PS14] parallel SDD solver.
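    The sketch below only fixes the notation of the abstract: it computes the discretized p.d.f. $\widehat{w}(i/N)$ and densely evaluates the matrix polynomial $D - D\sum_{i=0}^{N}\beta_i(D^{-1}M)^i$ for a toy graph, i.e. the object whose spectral sparsifier the paper constructs. No sparsification is performed; the toy density and graph are assumptions for the example.

```python
import numpy as np
import networkx as nx

def discretize_pdf(w, N):
    """Compute widehat{w}(i/N) = w(i/N) / sum_{l=0}^{N} w(l/N), as in the abstract."""
    vals = np.array([w(i / N) for i in range(N + 1)])
    return vals / vals.sum()

def matrix_polynomial(G, beta):
    """Dense evaluation of D - D * sum_i beta_i (D^{-1} M)^i for the adjacency
    matrix M and degree matrix D of G -- the matrix the paper sparsifies in
    nearly linear work (here computed naively, for illustration only)."""
    M = nx.to_numpy_array(G)
    D = np.diag(M.sum(axis=1))
    W = np.linalg.inv(D) @ M                 # random-walk matrix D^{-1} M
    S = sum(b * np.linalg.matrix_power(W, i) for i, b in enumerate(beta))
    return D - D @ S

if __name__ == "__main__":
    N = 8
    beta = discretize_pdf(lambda x: np.exp(-x), N)   # toy density on [0, 1]
    print(np.round(matrix_polynomial(nx.cycle_graph(6), beta), 3))
```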

    LDRD final report: combinatorial optimization with demands.


    Balanced Allocation on Hypergraphs

    We consider a variation of balls-into-bins which randomly allocates $m$ balls into $n$ bins. Following Godfrey's model (SODA, 2008), we assume that each ball $t$, $1\le t\le m$, comes with a hypergraph $\mathcal{H}^{(t)}=\{B_1,B_2,\ldots,B_{s_t}\}$, and each edge $B\in\mathcal{H}^{(t)}$ contains at least a logarithmic number of bins. Given $d\ge 2$, our $d$-choice algorithm chooses an edge $B\in\mathcal{H}^{(t)}$ uniformly at random, and then chooses a set $D$ of $d$ random bins from the selected edge $B$. The ball is allocated to a least-loaded bin from $D$, with ties broken randomly. We prove that if the hypergraphs $\mathcal{H}^{(1)},\ldots,\mathcal{H}^{(m)}$ satisfy a \emph{balancedness} condition and have low \emph{pair visibility}, then after allocating $m=\Theta(n)$ balls, the maximum number of balls at any bin, called the \emph{maximum load}, is at most $\log_d\log n+O(1)$, with high probability. The balancedness condition enforces that bins appear almost uniformly within the hyperedges of $\mathcal{H}^{(t)}$, $1\le t\le m$, while the pair visibility condition measures how frequently a pair of bins is chosen during the allocation of balls. Moreover, we establish a lower bound on the maximum load attained by the balanced allocation for a sequence of hypergraphs in terms of pair visibility, showing the relevance of the visibility parameter to the maximum load. In Godfrey's model, each ball is forced to probe all bins in a randomly selected hyperedge, and the ball is then allocated to a least-loaded bin. Godfrey showed that if each $\mathcal{H}^{(t)}$, $1\le t\le m$, is balanced and $m=O(n)$, then the maximum load is at most one, with high probability. However, we apply the power of $d$ choices paradigm and only query the load information of $d$ random bins per ball, while achieving very slow growth in the maximum load.
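    The allocation rule itself is easy to simulate. The toy sketch below follows the $d$-choice rule from the abstract: each ball draws one hyperedge uniformly at random from its hypergraph, samples $d$ bins from it, and goes to a least-loaded one, breaking ties randomly. The random logarithmic-size hyperedges used here are an assumption for illustration and need not satisfy the balancedness or pair-visibility conditions.

```python
import math
import random

def allocate(n, m, d=2, edges_per_ball=4):
    """Simulate the d-choice allocation on hypergraphs: each ball draws a random
    hyperedge from its own small hypergraph (here: random edges of size ~log n),
    samples d bins from that edge, and goes to a least-loaded bin among them."""
    load = [0] * n
    edge_size = max(d, int(math.log2(n)))
    for _ in range(m):
        # ball's hypergraph H^(t): a few random hyperedges of logarithmic size
        H = [random.sample(range(n), edge_size) for _ in range(edges_per_ball)]
        B = random.choice(H)                  # one hyperedge, uniformly at random
        D = random.sample(B, d)               # query the load of only d bins
        least = min(load[b] for b in D)
        target = random.choice([b for b in D if load[b] == least])  # break ties randomly
        load[target] += 1
    return max(load)

if __name__ == "__main__":
    n = 1 << 14
    print("maximum load for m = n balls:", allocate(n, m=n, d=2))
```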

    Uniformly automatic classes of finite structures

    We investigate the recently introduced concept of uniformly tree-automatic classes in the realm of parameterized complexity theory. Roughly speaking, a class of finite structures is uniformly tree-automatic if it can be presented by a set of finite trees and a tuple of automata. A tree t encodes a structure, and an element of this structure is encoded by a labeling of t. The automata are used to present the relations of the structure. We use this formalism to obtain algorithmic meta-theorems for first-order logic, and in some cases also monadic second-order logic, on classes of finite Boolean algebras, finite groups, and graphs of bounded tree-depth. Our main concern is the efficiency of this approach with respect to the hidden parameter dependence (size of the formula). We develop a method to analyze the complexity of uniformly tree-automatic presentations, which allows us to give upper bounds for the runtime of the automata-based model checking algorithm on the presented class. It turns out that the parameter dependence is elementary for all the above mentioned classes. Additionally, we show that one can lift the FPT results obtained by our method from a class C to the closure of C under direct products, with only a singly exponential blow-up in the parameter dependence.

    On space efficiency of algorithms working on structural decompositions of graphs

    Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, '14], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., '03], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem parameterized by the number of input strings does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on the access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth. Comment: An extended abstract appeared in the proceedings of STACS'16. The new version is augmented with a space-efficient algorithm for Dominating Set using the Chinese remainder theorem.
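    As an illustration of the space blow-up discussed above, the sketch below runs the textbook dynamic program for Maximum Independent Set over a nice path decomposition, given as a sequence of introduce/forget operations: the table is indexed by subsets of the current bag, so it may hold up to $2^{\mathrm{width}+1}$ entries. The encoding of the decomposition and the tiny example are assumptions made for the illustration.

```python
def max_independent_set(adj, operations):
    """Maximum independent set via DP over a *nice* path decomposition, given as a
    sequence of ("introduce", v) / ("forget", v) operations.  Table keys are the
    subsets of the current bag chosen into the solution, so the table can hold
    up to 2^(width+1) entries -- the exponential space usage discussed above."""
    table = {frozenset(): 0}
    for op, v in operations:
        new = {}
        if op == "introduce":
            new = dict(table)                          # option 1: v is not taken
            for S, val in table.items():
                if not (adj[v] & S):                   # option 2: take v if no chosen neighbor
                    T = S | {v}
                    new[T] = max(new.get(T, -1), val + 1)
        else:                                          # forget v: project it out of the keys
            for S, val in table.items():
                T = S - {v}
                new[T] = max(new.get(T, -1), val)
        table = new
    return max(table.values())

if __name__ == "__main__":
    # Path 0-1-2-3 (pathwidth 1); its maximum independent set has size 2.
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    ops = [("introduce", 0), ("introduce", 1), ("forget", 0),
           ("introduce", 2), ("forget", 1), ("introduce", 3)]
    print(max_independent_set(adj, ops))               # -> 2
```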