
    New geometric techniques for linear programming and graph partitioning

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 79-82).

    In this thesis, we advance a collection of new geometric techniques for the analysis of combinatorial algorithms. Using these techniques, we resolve several longstanding questions in the theory of linear programming, polytope theory, spectral graph theory, and graph partitioning. The thesis consists of two main parts.

    In the first part, which is joint work with Daniel Spielman, we present the first randomized polynomial-time simplex algorithm for linear programming, answering a question that has been open for over fifty years. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. To do this, we begin by reducing the input linear program to a special form in which we merely need to certify boundedness of the linear program. As boundedness does not depend upon the right-hand-side vector, we run a modified version of the shadow-vertex simplex method in which we start with a random right-hand-side vector and then modify this vector during the course of the algorithm. This allows us to avoid bounding the diameter of the original polytope. Our analysis rests on a geometric statement of independent interest: given a polytope ... in isotropic position, if one makes a polynomially small perturbation to b, then the number of edges of the projection of the perturbed polytope onto a random 2-dimensional subspace is expected to be polynomial.

    In the second part of the thesis, we address two long-open questions about finding good separators in graphs of bounded genus and degree:

    1. It is a classical result of Gilbert, Hutchinson, and Tarjan [25] that one can find asymptotically optimal separators on these graphs if one is given both the graph and an embedding of it onto a low-genus surface. Does there exist a simple, efficient algorithm to find these separators given only the graph and not the embedding?

    2. In practice, spectral partitioning heuristics work extremely well on these graphs. Is there a theoretical reason why this should be the case?

    We resolve these two questions by showing that a simple spectral algorithm finds separators of cut ratio O(√(g/n)) and vertex bisectors of size O(√(ng)) in these graphs, both of which are optimal. As our main technical lemma, we prove an O(g/n) bound on the second smallest eigenvalue of the Laplacian of such graphs and show that this is tight, thereby resolving a conjecture of Spielman and Teng. While this lemma is essentially combinatorial in nature, its proof comes from continuous mathematics, drawing on the theory of circle packings and the geometry of compact Riemann surfaces.

    While the questions addressed in the two parts of the thesis are quite different, we show that a common methodology runs through their solutions. We believe that this methodology provides a powerful approach to the analysis of algorithms that will prove useful in a variety of broader contexts.

    by Jonathan A. Kelner. Ph.D.
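    The spectral algorithm referred to in the second part is, at its core, the classical Fiedler-vector sweep cut. The sketch below is my own illustration of that generic heuristic (using numpy and networkx, with a grid graph as an example), not the thesis's exact procedure or its analysis: compute the eigenvector for the second smallest Laplacian eigenvalue, order the vertices by their coordinates, and return the best prefix cut.

```python
# Spectral sweep-cut sketch (illustrative only; library and example are my own choices).
import numpy as np
import networkx as nx

def spectral_sweep_cut(G):
    """Return (S, ratio) for the best sweep cut along the Fiedler-vector ordering."""
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    vals, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    fiedler = vecs[:, 1]                       # eigenvector of the 2nd smallest eigenvalue
    order = [nodes[i] for i in np.argsort(fiedler)]
    best_ratio, best_set = float("inf"), None
    for k in range(1, len(order)):
        S = set(order[:k])
        cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
        ratio = cut / min(k, len(order) - k)   # edges cut per vertex on the smaller side
        if ratio < best_ratio:
            best_ratio, best_set = ratio, S
    return best_set, best_ratio

if __name__ == "__main__":
    G = nx.grid_2d_graph(8, 8)                 # small genus-0, bounded-degree example
    S, ratio = spectral_sweep_cut(G)
    print(len(S), ratio)
```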

    Scalable Kernelization for Maximum Independent Sets

    The most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms, or by repeatedly kernelizing recursively in the branch-and-reduce paradigm. It is of critical importance for these algorithms that kernelization is fast and returns a small kernel. Current algorithms are either slow but produce a small kernel, or fast but give a large kernel. We attempt to accomplish both of these goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two techniques to accelerate kernelization further: dependency checking that prunes reductions that cannot be applied, and reduction tracking that allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than those of the fastest kernelization methods, while having a similar execution time. Furthermore, our algorithm is able to compute kernels with size comparable to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can be used to accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances.
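    For intuition, kernelization here means exhaustively applying safe reduction rules until none fires; whatever remains is the kernel. The minimal sketch below is my own illustration using only the classic degree-0 and degree-1 rules on an adjacency-dict graph; the paper's algorithm uses a much richer rule set and applies it in parallel.

```python
# Minimal independent-set kernelization sketch (illustrative; not the paper's rule set).
def kernelize(adj):
    """adj: dict mapping each vertex to a set of neighbors (undirected, symmetric)."""
    adj = {v: set(ns) for v, ns in adj.items()}     # work on a copy
    chosen = set()                                  # vertices forced into the independent set
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if len(adj[v]) == 0:                    # degree-0 rule: always safe to take v
                chosen.add(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:                  # degree-1 rule: take v, discard its neighbor
                (u,) = adj[v]
                chosen.add(v)
                for w in (u, v):                    # remove both endpoints from the graph
                    for x in adj.get(w, ()):
                        adj[x].discard(w)
                    adj.pop(w, None)
                changed = True
    return adj, chosen                              # the remaining graph is the kernel

# Example: the path a-b-c-d kernelizes to the empty graph with {a, c} forced.
print(kernelize({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}))
```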

    High-Quality Shared-Memory Graph Partitioning

    Partitioning graphs into blocks of roughly equal size such that few edges run between blocks is a frequently needed operation in processing graphs. Recently, the size, variety, and structural complexity of these networks have grown dramatically. Unfortunately, previous approaches to parallel graph partitioning have problems in this context since they often show a negative trade-off between speed and quality. We present an approach to multi-level shared-memory parallel graph partitioning that guarantees balanced solutions, shows high speed-ups for a variety of large graphs, and yields very good quality independently of the number of cores used. For example, on 31 cores, our algorithm partitions our largest test instance into 16 blocks while cutting less than half as many edges as our main competitor when both algorithms are given the same amount of time. Important ingredients include parallel label propagation for both coarsening and improvement, parallel initial partitioning, a simple yet effective approach to parallel localized local search, and fast locality-preserving hash tables.
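    The coarsening ingredient mentioned above, label propagation, is easy to state. The sequential sketch below is my own simplification (the paper runs a size-constrained, parallel variant): each vertex repeatedly adopts the cluster label most common among its neighbors, subject to a cap on cluster size, and the resulting clusters are then contracted into the vertices of the next coarser graph.

```python
# Size-constrained label propagation clustering (sequential sketch; illustrative only).
import random
from collections import Counter, defaultdict

def label_propagation_clusters(adj, max_cluster_size, rounds=10, seed=0):
    """adj: dict vertex -> iterable of neighbors. Returns a list of vertex clusters."""
    rng = random.Random(seed)
    label = {v: v for v in adj}                    # every vertex starts in its own cluster
    size = {v: 1 for v in adj}                     # current size of each cluster
    order = list(adj)
    for _ in range(rounds):
        rng.shuffle(order)                         # random vertex order each round
        moved = False
        for v in order:
            counts = Counter(label[u] for u in adj[v])
            # Move v to the heaviest neighboring cluster that still has room.
            for lab, _ in counts.most_common():
                if lab != label[v] and size[lab] + 1 <= max_cluster_size:
                    size[label[v]] -= 1
                    size[lab] += 1
                    label[v] = lab
                    moved = True
                    break
        if not moved:                              # stop early once labels stabilize
            break
    clusters = defaultdict(list)
    for v, lab in label.items():
        clusters[lab].append(v)
    return list(clusters.values())
```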

    On Characterizing the Data Access Complexity of Programs

    Technology trends will cause data movement to account for the majority of energy expenditure and execution time on emerging computers. Therefore, computational complexity will no longer be a sufficient metric for comparing algorithms, and a fundamental characterization of data access complexity will be increasingly important. The problem of developing lower bounds for data access complexity has been modeled using the formalism of Hong & Kung's red/blue pebble game for computational directed acyclic graphs (CDAGs). However, previously developed approaches to lower-bound analysis for the red/blue pebble game are very limited in effectiveness when applied to CDAGs of real programs, whose computations are composed of multiple sub-computations with differing DAG structure. We address this problem by developing an approach for effectively composing lower bounds based on graph decomposition. We also develop a static analysis algorithm to derive the asymptotic data-access lower bounds of programs as a function of the problem size and cache size.
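    To make the notion of data access complexity concrete, the toy experiment below is entirely my own illustration, not the paper's lower-bound machinery: it simulates a small fully associative LRU cache of S words and counts the slow-memory transfers incurred by naive versus tiled matrix multiplication, whose traffic behaves roughly as the classical Hong & Kung Ω(n³/√S) communication lower bound suggests.

```python
# Toy data-movement measurement: count misses of an idealized LRU cache
# for naive vs. tiled matrix multiplication (illustrative only).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity        # number of words the fast memory can hold
        self.lines = OrderedDict()
        self.misses = 0                 # each miss models one slow-memory transfer

    def access(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)
        else:
            self.misses += 1
            self.lines[addr] = True
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)      # evict the least recently used word

def naive_traffic(n, cache):
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for addr in (("A", i, k), ("B", k, j), ("C", i, j)):
                    cache.access(addr)

def tiled_traffic(n, b, cache):
    for ii in range(0, n, b):
        for jj in range(0, n, b):
            for kk in range(0, n, b):
                for i in range(ii, min(ii + b, n)):
                    for j in range(jj, min(jj + b, n)):
                        for k in range(kk, min(kk + b, n)):
                            for addr in (("A", i, k), ("B", k, j), ("C", i, j)):
                                cache.access(addr)

if __name__ == "__main__":
    n, S = 32, 256
    naive, tiled = LRUCache(S), LRUCache(S)
    naive_traffic(n, naive)
    tiled_traffic(n, 8, tiled)          # three 8x8 tiles = 192 words fit in S = 256
    print("naive misses:", naive.misses, "tiled misses:", tiled.misses)
```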