30 research outputs found

    Approximating Maximum Integral Multiflows on Bounded Genus Graphs

    We devise the first constant-factor approximation algorithm for finding an integral multi-commodity flow of maximum total value for instances where the supply graph together with the demand edges can be embedded on an orientable surface of bounded genus. This extends recent results for planar instances. Our techniques include an uncrossing algorithm, which is significantly more difficult than in the planar case; a partition of the cycles in the support of an LP solution into free homotopy classes; and a new rounding procedure for freely homotopic non-separating cycles.
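    For context, such rounding procedures start from a fractional solution of a multiflow LP; a minimal sketch of the standard path-based relaxation of maximum multicommodity flow is given below (notation assumed here rather than taken from the paper: \mathcal{P}_i is the set of supply-graph paths joining the endpoints of demand edge i, and c(e) is the capacity of supply edge e).

        \max \ \sum_i \sum_{P \in \mathcal{P}_i} x_P
        \text{s.t.} \ \sum_i \sum_{P \in \mathcal{P}_i :\ e \in P} x_P \le c(e) \quad \forall e \in E(G)
        x_P \ge 0 \quad \forall i,\ P \in \mathcal{P}_i \quad (x_P \in \mathbb{Z}_{\ge 0} \text{ in the integral problem})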

    Combinatorial Optimization

    Combinatorial Optimization is a very active field that benefits from bringing together ideas from different areas, e.g., graph theory and combinatorics, matroids and submodularity, connectivity and network flows, approximation algorithms and mathematical programming, discrete and computational geometry, discrete and continuous problems, algebraic and geometric methods, and applications. We continued the long tradition of triennial Oberwolfach workshops, bringing together the best researchers from the above areas, discovering new connections, and establishing new and deepening existing international collaborations.

    Vertex sparsification and universal rounding algorithms

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 125-129). Suppose we are given a gigantic communication network, but are only interested in a small number of nodes (clients). There are many routing problems we could be asked to solve for our clients. Is there a much smaller network - that we could write down on a sheet of paper and put in our pocket - that approximately preserves all the relevant communication properties of the original network? As we will demonstrate, the answer to this question is YES, and we call this smaller network a vertex sparsifier. In fact, if we are asked to solve a sequence of optimization problems characterized by cuts or flows, we can compute a good vertex sparsifier ONCE and discard the original network. We can run our algorithms (or approximation algorithms) on the vertex sparsifier as a proxy - and still recover approximately optimal solutions in the original network. This novel pattern saves both space (because the network we store is much smaller) and time (because our algorithms run on a much smaller graph). Additionally, we apply these ideas to obtain a master theorem for graph partitioning problems: as long as the integrality gap of a standard linear programming relaxation is bounded on trees, the integrality gap is at most a logarithmic factor larger for general networks. This result implies optimal bounds for many well-studied graph partitioning problems as a special case, and even yields optimal bounds for more challenging problems that had not been studied before. Morally, these results are all based on the idea that even though the structure of optimal solutions can be quite complicated, these solution values can be approximated by crude (even linear) functions. by Ankur Moitra. Ph.D.
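    To make the cut-preservation guarantee concrete, here is a minimal brute-force sketch (not the thesis's construction; networkx-based, all names illustrative) that measures how well a candidate sparsifier H preserves the terminal cut values of the original network G: for every bipartition of the terminal set, it compares the minimum cut separating the two sides in G and in H.

    from itertools import combinations
    import networkx as nx

    def terminal_min_cut(graph, side_a, side_b, cap="capacity"):
        """Minimum cut separating terminal set side_a from side_b (super source/sink trick)."""
        g = graph.copy()
        for u in side_a:
            g.add_edge("_s", u, **{cap: float("inf")})
        for v in side_b:
            g.add_edge(v, "_t", **{cap: float("inf")})
        value, _ = nx.minimum_cut(g, "_s", "_t", capacity=cap)
        return value

    def sparsifier_quality(G, H, terminals):
        """Worst ratio between corresponding terminal cut values of G and H."""
        worst = 1.0
        terms = list(terminals)
        for r in range(1, len(terms)):
            for S in combinations(terms, r):
                rest = [t for t in terms if t not in S]
                cut_G = terminal_min_cut(G, S, rest)
                cut_H = terminal_min_cut(H, S, rest)
                if min(cut_G, cut_H) > 0:
                    worst = max(worst, cut_G / cut_H, cut_H / cut_G)
        return worst

    A return value close to 1 on such a check means H preserves G's terminal cuts well.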

    Hardness results and approximation algorithms for some problems on graphs

    This thesis has two parts. In the first part, we study some graph covering problems with a non-local covering rule that allows a "remote" node to be covered by repeatedly applying the covering rule. In the second part, we provide some results on the packing of Steiner trees. In the Propagation problem we are given a graph G and the goal is to find a minimum-sized set of nodes S that covers all of the nodes, where a node v is covered if (1) v is in S, or (2) v has a neighbor u such that u and all of its neighbors except v are covered. Rule (2) is called the propagation rule, and it is applied iteratively. Throughout, we use n to denote the number of nodes in the input graph. We prove that the path-width parameter is a lower bound for the optimal value. We show that the Propagation problem is NP-hard in planar weighted graphs. We prove that it is NP-hard to approximate the optimal value to within a factor of 2^{\log^{1-\epsilon} n} in weighted (general) graphs. The second problem that we study is the Power Dominating Set problem. This problem has two covering rules. The first rule is the same as the domination rule in the Dominating Set problem, and the second rule is the same propagation rule as in the Propagation problem. We show that it is hard to approximate the optimal value to within a factor of 2^{\log^{1-\epsilon} n} in general graphs. We design and analyze an approximation algorithm with a performance guarantee of O(\sqrt{n}) on planar graphs. We formulate a common generalization of the above two problems called the General Propagation problem. We reformulate this general problem as an orientation problem, and based on this reformulation we design a dynamic programming algorithm. The algorithm runs in linear time when the graph has tree-width O(1). Motivated by applications, we introduce a restricted version of the problem that we call the \ell-round General Propagation problem. We give a PTAS for the \ell-round General Propagation problem on planar graphs, for small values of \ell. Our dynamic programming algorithms and the PTAS can be extended to other problems in networks with similar propagation rules. As an example we discuss the extension of our results to the Target Set Selection problem in the threshold model of diffusion processes.

    In the second part of the thesis, we focus on the Steiner Tree Packing problem. In this problem, we are given a graph G and a subset of terminal nodes R \subseteq V(G). The goal is to find a maximum-cardinality set of disjoint trees that each span R, that is, each of the trees should contain all terminal nodes. In the edge-disjoint version of this problem, the trees have to be edge-disjoint. In the element-disjoint version, the trees have to be node-disjoint on non-terminal nodes and edge-disjoint on edges adjacent to terminals. We show that both problems are NP-hard when there are only 3 terminals. Our main focus is on planar instances of these problems. We show that the edge-disjoint version of the problem is NP-hard even in planar graphs with 3 terminals on the same face of the embedding. Next, we design an algorithm that achieves an approximation guarantee of \frac{1}{2} - \frac{1}{k}, given a planar graph that is k element-connected on the terminals; in fact, given such a graph the algorithm returns k/2 - 1 element-disjoint Steiner trees. Using this algorithm we get an approximation algorithm with a guarantee of (almost) 4 for the edge-disjoint version of the problem in planar graphs. We also show that the natural LP relaxation of the edge-disjoint Steiner Tree Packing problem has an integrality ratio of 2 - \epsilon in planar graphs.
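    The propagation rule above is easy to state operationally; as a small illustration (not the thesis's algorithm, names are mine), the following sketch computes the set of nodes covered by a given seed set by applying rule (2) until a fixed point is reached.

    import networkx as nx

    def propagate(G: nx.Graph, seeds) -> set:
        """Return the set of nodes covered by `seeds` under the propagation rule."""
        covered = set(seeds)
        changed = True
        while changed:
            changed = False
            for u in G.nodes:
                if u not in covered:
                    continue
                uncovered = [v for v in G.neighbors(u) if v not in covered]
                # Rule (2): if u is covered and all of its neighbors except one
                # are covered, the remaining neighbor becomes covered as well.
                if len(uncovered) == 1:
                    covered.add(uncovered[0])
                    changed = True
        return covered

    # Example: on the path a-b-c-d, the single seed {a} eventually covers every node.
    P = nx.path_graph(["a", "b", "c", "d"])
    assert propagate(P, {"a"}) == {"a", "b", "c", "d"}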

    Learning-based Segmentation for Connectomics

    Recent advances in electron microscopy techniques make it possible to acquire high-resolution, isotropic volume images of neural circuitry. In connectomics, neuroscientists seek to obtain the circuit diagram involving all neurons and synapses in such a volume image. Mapping neuron connectivity requires tracing each and every neural process through terabytes of image data. Due to the size and complexity of these volume images, fully automated analysis methods are desperately needed. In this thesis, I consider automated, machine learning-based neurite segmentation approaches based on a simultaneous merge decision of adjacent supervoxels.

    Given a learned likelihood of merging adjacent supervoxels, Chapter 4 adapts a probabilistic graphical model which ensures that merge decisions are consistent and the surfaces of final segments are closed. This model can be posed as a multicut optimization problem and is solved with the cutting-plane method. In order to scale to large datasets, a fast search for (and good choice of) violated cycle constraints is crucial. Quantitative experiments show that the proposed closed-surface regularization significantly improves segmentation performance.

    In Chapter 5, I investigate whether the edge weights of the previous model can be chosen to minimize the loss with respect to non-local segmentation quality measures (e.g. Rand Index). Suitable weights are obtained from a structured learning approach. In the Structured Support Vector Machine formulation, a novel fast enumeration scheme is used to find the most violated constraint. Quantitative experiments show that structured learning can improve upon unstructured methods. Furthermore, I introduce a new approximate, hierarchical and blockwise optimization approach for large-scale multicut segmentation. Using this method, high-quality approximate solutions for large problem instances are found quickly.

    Chapter 6 introduces another novel approximate scheme for multicut segmentation, Cut, Glue&Cut, which is based on the move-making paradigm. First, the graph is recursively partitioned into small regions (cut phase). Then, for any two adjacent regions, alternative cuts of these two regions define possible moves (glue&cut phase). The proposed algorithm finds segmentations that are - as measured by a loss function - as close to the ground truth as the global optimum found by exact solvers, while being significantly faster than existing methods.

    In order to jointly label the resulting segments as well as the boundaries between them, Chapter 7 proposes the Asymmetric Multi-way Cut model, a variant of Multi-way Cut. In this new model, within-class cuts are allowed for some labels, while being forbidden for other labels. Qualitative experiments show when such a formulation can be beneficial. In particular, an application to joint neurite and cell organelle labeling in EM volume images is discussed.

    Custom software tools that can cope with the large data volumes common in the field of connectomics are a prerequisite for the implementation and evaluation of novel segmentation techniques. Chapter 3 presents version 1.0 of ilastik, a joint effort of multiple researchers; I have co-written its volume viewing component, volumina. ilastik provides an interactive pixel classification workflow on larger-than-RAM datasets as well as a semi-automated segmentation module useful for acquiring gold-standard segmentations. Furthermore, I describe new software for dealing with hierarchies of cell complexes as well as for blockwise image processing operations on large datasets. The different segmentation methods presented in this thesis provide a promising direction towards reaching the reliability and data throughput required for connectomics applications.
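    As a small illustration of the cycle constraints mentioned for the multicut model in Chapter 4 (a generic sketch, not the thesis's implementation): an edge labeling (1 = cut, 0 = merge) is consistent exactly when no cycle contains a single cut edge, so a violated constraint can be found by connecting the endpoints of a cut edge through merged edges only.

    import networkx as nx

    def _label(cut, u, v):
        """Look up an undirected edge label regardless of key orientation."""
        return cut[(u, v)] if (u, v) in cut else cut[(v, u)]

    def violated_cycles(G: nx.Graph, cut):
        """Yield violated cycle constraints: each is one cut edge plus a path of merged edges."""
        merged = nx.Graph()
        merged.add_nodes_from(G.nodes)
        merged.add_edges_from((u, v) for u, v in G.edges if not _label(cut, u, v))
        for u, v in G.edges:
            if _label(cut, u, v) and nx.has_path(merged, u, v):
                path = nx.shortest_path(merged, u, v)       # shortest violated cycle through (u, v)
                yield [(u, v)] + list(zip(path, path[1:]))  # cut edge + merged-edge path

    # Example: a triangle with exactly one cut edge violates the cycle constraint.
    T = nx.cycle_graph(3)
    labels = {(0, 1): 1, (1, 2): 0, (0, 2): 0}
    print(list(violated_cycles(T, labels)))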