61 research outputs found

    Bipartite Perfect Matching in Pseudo-Deterministic NC

    We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm that uses poly(n) processors, poly(log n) depth, and poly(log n) random bits, and outputs a unique perfect matching for each bipartite input graph with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also obtain a pseudo-deterministic NC algorithm for constructing a depth-first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC, and a novel set of weight assignments which, in combination, isolate a unique matching in a graph. We then show how pseudo-deterministic algorithms can be used to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use this technique to give an RNC algorithm for constructing a DFS tree that uses only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), and a new sequential randomized algorithm for the set-maxima problem that uses fewer random bits than the previous state of the art. Furthermore, we prove that establishing NC = RNC would imply NC algorithms for finding a bipartite perfect matching and for constructing a DFS tree. This is not implied by previously known randomized NC search algorithms for bipartite perfect matching, but it does follow from the existence of a pseudo-deterministic NC search algorithm.
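
    The contract just described (same input, same output for almost all random strings) can be illustrated on a toy problem much smaller than matching. The Python sketch below is an illustration of pseudo-determinism, not the paper's algorithm: it finds a nonzero point of a black-box low-degree polynomial by fixing coordinates one at a time with randomized tests, so independent runs agree on the lexicographically smallest such point with high probability. All names and parameters are illustrative.

```python
import random

# Toy illustration of pseudo-determinism (not the paper's matching algorithm):
# find a nonzero point of a low-degree polynomial over Z_q, given only
# black-box evaluation access and assuming the polynomial is not identically
# zero.  The randomized search outputs the lexicographically smallest nonzero
# point with high probability, so almost all choices of randomness lead to
# the *same* answer -- the defining property of a pseudo-deterministic
# algorithm.

def is_nonzero(poly, prefix, n, q, trials=50):
    """Randomly test whether poly is nonzero once its first len(prefix)
    variables are fixed to `prefix` (one-sided error, Schwartz-Zippel)."""
    for _ in range(trials):
        point = prefix + [random.randrange(q) for _ in range(n - len(prefix))]
        if poly(point) % q != 0:
            return True
    return False

def canonical_nonzero_point(poly, n, q):
    """Fix variables one by one to the smallest value that keeps the
    restricted polynomial nonzero; the output is canonical w.h.p."""
    prefix = []
    for _ in range(n):
        for v in range(q):
            if is_nonzero(poly, prefix + [v], n, q):
                prefix.append(v)
                break
    return prefix

if __name__ == "__main__":
    q, n = 101, 3
    poly = lambda x: (x[0] - 2) * x[1] + x[2] * x[2]   # example polynomial
    # Independent runs use fresh randomness yet agree on the output.
    outputs = {tuple(canonical_nonzero_point(poly, n, q)) for _ in range(5)}
    print(outputs)   # expected: a single canonical point, here {(0, 0, 1)}
```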

    Matching Is as Easy as the Decision Problem, in the NC Model

    Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of randomized NC matching algorithms [KUW85, MVV87]. Over the last five years, the theoretical computer science community has launched a relentless attack on this question, leading to the discovery of several powerful ideas. We give what appears to be the culmination of this line of work: an NC algorithm for finding a minimum-weight perfect matching in a general graph with polynomially bounded edge weights, provided it is given an oracle for the decision problem. Consequently, to settle the main open problem, it suffices to obtain an NC algorithm for the decision problem. We believe this new fact has qualitatively changed the nature of this open problem. All known efficient matching algorithms for general graphs follow one of two approaches, due to Edmonds [Edm65] and Lovász [Lov79]. Our oracle-based algorithm follows a new approach and uses many of the ideas discovered in the last five years. The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-à-vis clever relaxations of the class NC. In this vein, Goldwasser and Grossman [GG15] recently gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that, on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs. Comment: Appeared in ITCS 202
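
    The reduction above is a parallel (NC) construction; the Python sketch below only conveys the sequential intuition for why a decision oracle suffices for search: repeatedly discard any edge whose removal preserves the existence of a perfect matching, and the surviving edges form a single perfect matching. As an assumption, the decision oracle is instantiated here by a Lovász-style randomized Tutte-matrix test; the prime P and all names are illustrative.

```python
import random

# Sequential illustration of search-to-decision for perfect matching (the
# paper's contribution is doing this in NC; this sketch only shows why a
# decision oracle suffices).  The oracle is instantiated, as an assumption,
# by Lovasz-style randomized testing: a graph has a perfect matching iff its
# Tutte matrix is nonsingular, which we check at random points modulo a prime.

P = 2**61 - 1   # prime modulus for the randomized test

def _det_mod_p(mat):
    """Determinant modulo the prime P by Gaussian elimination."""
    a = [row[:] for row in mat]
    n, det = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = (-det) % P
        det = det * a[col][col] % P
        inv = pow(a[col][col], P - 2, P)
        for r in range(col + 1, n):
            f = a[r][col] * inv % P
            for c in range(col, n):
                a[r][c] = (a[r][c] - f * a[col][c]) % P
    return det

def has_perfect_matching(n, edges, trials=3):
    """Decision oracle: evaluate the Tutte matrix at random points over Z_P
    and test whether its determinant is nonzero (one-sided error)."""
    for _ in range(trials):
        T = [[0] * n for _ in range(n)]
        for (u, v) in edges:
            x = random.randrange(1, P)
            T[u][v], T[v][u] = x, (-x) % P
        if _det_mod_p(T) != 0:
            return True
    return False

def find_perfect_matching(n, edges):
    """Search via the decision oracle: drop every edge whose removal keeps
    some perfect matching; the surviving edges are one perfect matching."""
    kept = list(edges)
    for e in list(edges):
        rest = [f for f in kept if f != e]
        if has_perfect_matching(n, rest):
            kept = rest
    return kept

if __name__ == "__main__":
    # 4-cycle plus a chord; the chord lies in no perfect matching.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(find_perfect_matching(4, edges))   # expected: [(1, 2), (3, 0)]
```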

    Matroid Intersection: A Pseudo-Deterministic Parallel Reduction from Search to Weighted-Decision

    We study the matroid intersection problem from the parallel complexity perspective. Given two matroids over the same ground set, the problem asks to decide whether they have a common base, and its search version asks to find a common base, if one exists. Another widely studied variant is the weighted decision version, in which we are also given small weights on the ground-set elements and a target weight W, and the question is to decide whether there is a common base of weight at least W. From the perspective of parallel complexity, the relation between the search and the decision versions is not well understood. We make significant progress on this question by giving a pseudo-deterministic parallel (NC) algorithm for the search version that uses oracle access to the weighted decision version. The notion of pseudo-deterministic NC, a relaxation of NC, was recently introduced by Goldwasser and Grossman [Shafi Goldwasser and Ofer Grossman, 2017]. A pseudo-deterministic NC algorithm for a search problem is a randomized NC algorithm that, for a given input, outputs a fixed solution with high probability. In case the given matroids are linearly representable, our result implies a pseudo-deterministic NC algorithm (without the weighted decision oracle). This resolves an open question posed by Anari and Vazirani [Nima Anari and Vijay V. Vazirani, 2020].
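
    A sequential toy version of this reduction is sketched below, assuming linear matroids over GF(2) and a brute-force stand-in for the decision oracle: discard every ground-set element whose removal still leaves a common base among the survivors, and the surviving elements are exactly one common base. The contained-in-a-subset question used here is the 0/1-weight special case of the weighted decision version above (weight 1 on surviving elements, target equal to the rank). All names are illustrative, and nothing below is the paper's parallel construction.

```python
from itertools import combinations

# Toy sequential sketch of reducing matroid-intersection search to decision
# queries.  Matroids are linear over GF(2): ground-set element i is the i-th
# column of a matrix, encoded as a bitmask of its rows.

def rank_gf2(vectors):
    """Rank over GF(2) of bitmask-encoded vectors (xor elimination)."""
    basis = {}                      # highest set bit -> basis vector
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
    return len(basis)

def has_common_base_within(cols1, cols2, allowed):
    """Brute-force stand-in for the decision oracle: is some index set inside
    `allowed` simultaneously a base of both linear matroids?  (This is the
    0/1-weight special case of the weighted-decision oracle.)"""
    r = rank_gf2(cols1)
    if rank_gf2(cols2) != r:
        return False
    for cand in combinations(sorted(allowed), r):
        if (rank_gf2([cols1[i] for i in cand]) == r and
                rank_gf2([cols2[i] for i in cand]) == r):
            return True
    return False

def find_common_base(cols1, cols2):
    """Search via the oracle: discard every element whose removal keeps a
    common base; the survivors form exactly one common base."""
    alive = set(range(len(cols1)))
    if not has_common_base_within(cols1, cols2, alive):
        return None
    for e in range(len(cols1)):
        if has_common_base_within(cols1, cols2, alive - {e}):
            alive.discard(e)
    return sorted(alive)

if __name__ == "__main__":
    # Ground set {0, 1, 2, 3}: columns of two 2x4 matrices over GF(2),
    # encoded as bitmasks (bit 0 = first row, bit 1 = second row).
    cols1 = [0b01, 0b10, 0b11, 0b01]
    cols2 = [0b01, 0b01, 0b10, 0b11]
    print(find_common_base(cols1, cols2))   # expected: [2, 3]
```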

    Planar Maximum Matching: Towards a Parallel Algorithm

    Parallel Algorithms for Depth-First Search

    In this paper we examine parallel algorithms for performing a depth-first search (DFS) of a directed or undirected graph in sub-linear time. This subject is interesting in part because DFS seemed at first to be an inherently sequential process, and for a long time many researchers believed that no such algorithms existed. We survey three seminal papers on the subject. The first proves that a special case of DFS is (in all likelihood) inherently sequential; the second shows that DFS for planar undirected graphs is in NC; and the third shows that DFS for general undirected graphs is in RNC. We also discuss randomized algorithms, P-completeness and matching, three topics that are essential for understanding and appreciating the results in these papers.
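
    The sense in which DFS seems inherently sequential is easiest to see in the lexicographically-first variant, the special case referred to above: whether an edge becomes a tree edge depends on every choice made earlier in the traversal. A minimal Python sketch of that variant, on a hypothetical adjacency-list input, follows.

```python
# Lexicographically-first DFS: neighbours are always scanned in increasing
# order.  This is the special case mentioned above as (in all likelihood)
# inherently sequential; whether an edge enters the tree depends on every
# decision made earlier in the traversal, which is the intuition behind its
# P-completeness.  The adjacency-list input is a hypothetical example.

def lex_first_dfs(adj, root=0):
    """Return the tree edges of the lexicographically-first DFS from root."""
    visited = {root}
    tree = []
    stack = [(root, iter(sorted(adj[root])))]
    while stack:
        u, neighbours = stack[-1]
        for v in neighbours:
            if v not in visited:           # first unvisited neighbour wins
                visited.add(v)
                tree.append((u, v))
                stack.append((v, iter(sorted(adj[v]))))
                break
        else:                              # all neighbours explored: backtrack
            stack.pop()
    return tree

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    print(lex_first_dfs(adj))   # expected: [(0, 1), (1, 2), (2, 3)]
```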

    Algebraic and Combinatorial Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Another surprising connection is that the algebraic techniques invented to show lower bounds now prove useful for developing efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-path algorithms. This emphasizes once again the central role of algebra in computer science. The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and this seminar can play an important role in educating a diverse community about the latest techniques, spurring further progress.
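
    As a concrete taste of the randomness-in-algebra theme mentioned above, the sketch below shows the classic randomized polynomial identity test (Schwartz-Zippel): two black-box low-degree polynomials are declared identical if they agree at a few random points of a large prime field. The specific polynomials, field size, and names are illustrative only.

```python
import random

# Classic randomized polynomial identity test (Schwartz-Zippel), mentioned
# above as a central example of the power of randomness in algebraic
# computation.  If two low-degree black-box polynomials differ, a random
# evaluation point exposes the difference with high probability.

P = 2**61 - 1   # a large prime field keeps the error probability tiny

def probably_identical(f, g, n_vars, degree_bound, trials=20):
    """One-sided test: returns False only when a disagreeing point is found;
    the error probability per trial is at most degree_bound / P."""
    for _ in range(trials):
        x = [random.randrange(P) for _ in range(n_vars)]
        if f(x) % P != g(x) % P:
            return False
    return True

if __name__ == "__main__":
    f = lambda x: (x[0] + x[1]) ** 2
    g = lambda x: x[0] ** 2 + 2 * x[0] * x[1] + x[1] ** 2
    h = lambda x: x[0] ** 2 + x[1] ** 2
    print(probably_identical(f, g, 2, 2))   # True: the same polynomial
    print(probably_identical(f, h, 2, 2))   # False with high probability
```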

    NC Algorithms for Weighted Planar Perfect Matching and Related Problems

    Consider a planar graph G=(V,E) with a polynomially bounded edge weight function w: E -> [0, poly(n)]. The main result of this paper is an NC algorithm for finding a minimum-weight perfect matching in G. To solve this problem we develop a new, relatively simple but versatile framework that is combinatorial in spirit. It handles the combinatorial structure of matchings directly and only needs to know the weights of appropriately defined matchings, which it obtains from algebraic subroutines. Moreover, using novel planarity-preserving reductions, we show how to find: a maximum-weight matching in G when G is bipartite; a maximum multiple-source multiple-sink flow in G, where c: E -> [1, poly(n)] is a polynomially bounded edge capacity function; a minimum-weight f-factor in G, where f: V -> [1, poly(n)]; and a min-cost flow in G, where c: E -> [1, poly(n)] is a polynomially bounded edge capacity function and b: V -> [1, poly(n)] is a polynomially bounded vertex demand function. No NC algorithms were previously known for any of these problems.
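
    One of the corollaries listed above reduces maximum-weight matching to minimum-weight perfect matching. The Python sketch below shows the standard textbook doubling construction for that kind of reduction (it is not planarity-preserving and is not the paper's reduction), together with a brute-force matcher used only to demonstrate it on a tiny instance; all names are illustrative.

```python
# Standard doubling reduction from maximum-weight matching to minimum-weight
# perfect matching (a textbook construction, not the paper's
# planarity-preserving reduction).  The brute-force solver exists only to
# demonstrate the reduction on a tiny instance.

def doubling_reduction(n, wedges):
    """Build H from G: two copies of every vertex, each weighted edge (u,v,w)
    appears in both copies with weight -w, and every vertex is joined to its
    twin with weight 0.  A minimum-weight perfect matching of H selects, in
    the first copy, a maximum-weight matching of G."""
    h_edges = []
    for (u, v, w) in wedges:
        h_edges.append((u, v, -w))            # first copy
        h_edges.append((u + n, v + n, -w))    # second copy
    for v in range(n):
        h_edges.append((v, v + n, 0))         # vertex joined to its twin
    return 2 * n, h_edges

def min_weight_perfect_matching(n, wedges):
    """Brute force over all perfect matchings (tiny instances only)."""
    best = (None, None)

    def rec(unmatched, chosen, total):
        nonlocal best
        if not unmatched:
            if best[0] is None or total < best[0]:
                best = (total, list(chosen))
            return
        u = min(unmatched)
        for (a, b, w) in wedges:
            a, b = (a, b) if a == u else (b, a)
            if a == u and b != u and b in unmatched:
                rec(unmatched - {u, b}, chosen + [(u, b, w)], total + w)

    rec(frozenset(range(n)), [], 0)
    return best

if __name__ == "__main__":
    # Path 0-1-2-3 with weights 5, 1, 5; the maximum-weight matching of G
    # is {(0,1), (2,3)} with total weight 10.
    g_edges = [(0, 1, 5), (1, 2, 1), (2, 3, 5)]
    hn, h_edges = doubling_reduction(4, g_edges)
    cost, pm = min_weight_perfect_matching(hn, h_edges)
    picked = [(u, v, -w) for (u, v, w) in pm if u < 4 and v < 4 and w != 0]
    print(picked, "total weight", -cost // 2)   # [(0, 1, 5), (2, 3, 5)] ... 10
```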

    Linearly Representable Submodular Functions: An Algebraic Algorithm for Minimization
