
    5/4-Approximation of Minimum 2-Edge-Connected Spanning Subgraph

    We provide a 5/4-approximation algorithm for the minimum 2-edge-connected spanning subgraph problem. This improves upon the previous best ratio of 4/3. The algorithm is based on applying local improvement steps to a starting solution provided by a standard ear decomposition, together with the idea of running several iterations on residual graphs obtained by excluding certain edges that do not belong to an optimum solution. The latter idea is novel and allows us to bypass 3-ears with no loss in the approximation ratio; such ears were the bottleneck for obtaining a performance guarantee below 3/2. Our algorithm also implies a simpler 7/4-approximation algorithm for the matching augmentation problem, which has recently been studied.
    Comment: The modification of 5-ears, which was both erroneous and unnecessary, is omitted.
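
    As a concrete illustration of the standard starting point mentioned in the abstract (not of the paper's improvement steps or residual-graph iterations), the sketch below computes a Schmidt-style ear/chain decomposition of a 2-edge-connected simple graph and keeps only the non-trivial ears; their union is a 2-edge-connected spanning subgraph. The adjacency-dict representation and function names are illustrative assumptions.

```python
# Hedged sketch: ear (chain) decomposition of a 2-edge-connected simple graph,
# represented as an adjacency dict {vertex: iterable_of_neighbours}.
def ear_decomposition(adj):
    """Return a list of chains (vertex lists); the first chain is a cycle."""
    root = next(iter(adj))
    parent, order = {root: None}, [root]
    stack = [(root, iter(adj[root]))]
    while stack:                                # iterative DFS yielding a proper DFS tree
        u, it = stack[-1]
        for v in it:
            if v not in parent:
                parent[v] = u
                order.append(v)
                stack.append((v, iter(adj[v])))
                break
        else:
            stack.pop()
    num = {v: i for i, v in enumerate(order)}   # DFS discovery numbers

    chains, covered = [], set()
    for u in order:                             # vertices in DFS order
        for v in adj[u]:
            if parent.get(v) == u or parent.get(u) == v:
                continue                        # skip tree edges
            if num[u] > num[v]:
                continue                        # handle each back edge at its upper endpoint
            covered.add(u)
            chain, w = [u, v], v
            while w not in covered:             # climb tree edges up to a covered vertex
                covered.add(w)
                w = parent[w]
                chain.append(w)
            chains.append(chain)
    return chains

def starting_subgraph(adj):
    """Edges (as (u, v) pairs along each ear) of the non-trivial ears."""
    edges = set()
    for chain in ear_decomposition(adj):
        if len(chain) > 2:                      # drop trivial ears (single back edges)
            edges.update(zip(chain, chain[1:]))
    return edges
```

    On a 2-edge-connected input every vertex is covered by some non-trivial ear, so the returned edge set is a 2-edge-connected spanning subgraph; the paper's contribution lies in what is done on top of such a start.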

    Scheme-theoretic Approach to Computational Complexity I. The Separation of P and NP

    We lay the foundations of a new theory for algorithms and computational complexity by parameterizing the instances of a computational problem as a moduli scheme. Considering the geometry of the scheme associated to 3-SAT, we separate P and NP.
    Comment: 11 pages, corrections upon the referee report

    4/3-Approximation of Graphic TSP

    We describe a 4/3-approximation algorithm for the traveling salesman problem in which the distances between points are induced by graph-theoretical distances in an unweighted graph. The algorithm is based on finding a minimum-cost perfect matching on the odd-degree vertices of a carefully computed 2-edge-connected spanning subgraph.
    Comment: 10 pages, decomposition specified more carefully, Lemma 3 (now Lemma 2) corrected
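
    To make the matching step concrete, here is a hedged sketch of the classical Christofides-style skeleton on the graphic metric, with an ordinary spanning tree standing in for the paper's carefully computed 2-edge-connected spanning subgraph; it illustrates the "match the odd-degree vertices, then shortcut an Euler tour" pattern rather than the 4/3 algorithm itself. It assumes networkx and a connected input with at least three vertices.

```python
# Hedged sketch (Christofides-style skeleton, not the paper's 4/3 algorithm).
import networkx as nx

def graphic_tsp_tour(G):
    """G: connected, unweighted, undirected graph; returns (tour, length) under the graphic metric."""
    dist = dict(nx.all_pairs_shortest_path_length(G))    # shortest-path (graphic) distances

    T = nx.minimum_spanning_tree(G)                       # any spanning tree is minimum here
    odd = [v for v in T if T.degree(v) % 2 == 1]          # always an even number of vertices

    # minimum-weight perfect matching on the odd-degree vertices in the metric closure
    K = nx.Graph()
    K.add_weighted_edges_from((u, v, dist[u][v]) for u in odd for v in odd if u != v)
    M = nx.min_weight_matching(K)

    # tree plus matching: every degree is even, so an Euler circuit exists
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    walk = [u for u, _ in nx.eulerian_circuit(H)]

    seen, tour = set(), []
    for v in walk:                                        # shortcut repeated vertices
        if v not in seen:
            seen.add(v)
            tour.append(v)
    tour.append(tour[0])
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length
```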

    Dual Growth with Variable Rates: An Improved Integrality Gap for Steiner Tree

    A promising approach for obtaining improved approximation algorithms for Steiner tree is to use the bidirected cut relaxation (BCR). The integrality gap of this relaxation is at least 36/31, and it has long been conjectured that its true value is very close to this lower bound. However, the best upper bound for general graphs is still 2. With the aim of circumventing the asymmetric nature of BCR, Chakrabarty, Devanur and Vazirani [Math. Program., 130 (2011), pp. 1--32] introduced the simplex-embedding LP, which is equivalent to it. Using this, they gave a √2-approximation algorithm for quasi-bipartite graphs and showed that the integrality gap of the relaxation is at most 4/3 for this class of graphs. In this paper, we extend the approach of these authors and show that the integrality gap of BCR is at most 7/6 on quasi-bipartite graphs, via a fast combinatorial algorithm. In doing so, we introduce a general technique, in particular a potentially widely applicable extension of the primal-dual schema. Roughly speaking, we apply the schema twice, with variable rates of growth for the duals in the second phase, where the rates depend on the degrees of the duals computed in the first phase. This technique overcomes the drawback of growing dual variables monotonically and yields a larger total dual value, thereby presumably attaining the true integrality gap.
    Comment: A completely rewritten version of a previously retracted manuscript, using the simplex-embedding LP. The idea of growing duals with variable rates is still there. 23 pages, 7 figures

    Scheme-theoretic Approach to Computational Complexity II. The Separation of P and NP over ℂ, ℝ, and ℤ

    We show that the problem of determining the feasibility of quadratic systems over ℂ, ℝ, and ℤ requires exponential time. This separates P and NP over these fields/rings in the BCSS model of computation.
    Comment: 4 pages. arXiv admin note: text overlap with arXiv:2107.0738

    Quadrature Strategies for Constructing Polynomial Approximations

    Finding suitable points for multivariate polynomial interpolation and approximation is a challenging task. Yet, despite this challenge, a tremendous amount of research has been dedicated to it. In this paper, we begin by reviewing classical methods for finding suitable quadrature points for polynomial approximation in both the univariate and multivariate settings. Then, we categorize recent advances into those that propose a new sampling approach and those centered on an optimization strategy. The sampling approaches yield a favorable discretization of the domain, while the optimization methods pick a subset of the discretized samples that minimizes certain objectives. While not all strategies follow this two-stage approach, most do. Sampling techniques covered include subsampling of quadrature rules, Christoffel sampling, induced sampling, and Monte Carlo methods. Optimization methods discussed range from linear programming ideas and Newton's method to greedy procedures from numerical linear algebra. Our exposition is aided by examples that implement some of the aforementioned strategies.
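
    As one illustration of the two-stage pattern described above, the sketch below discretizes the domain with a tensor-product Gauss-Legendre grid and then uses column-pivoted QR, one of the greedy numerical-linear-algebra procedures alluded to, to pick a subset of the candidate points. The basis choice, grid size, and function name are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: subselecting quadrature/interpolation points via column-pivoted QR.
import numpy as np
from numpy.polynomial import legendre
from scipy.linalg import qr
from itertools import product

def subsample_points(degree, n_grid=20, dim=2):
    # Stage 1 (sampling): dense candidate set, a tensor grid of Gauss-Legendre nodes on [-1, 1]^dim.
    nodes, _ = legendre.leggauss(n_grid)
    pts = np.array(list(product(nodes, repeat=dim)))             # (n_grid**dim, dim)

    # Total-degree Legendre basis evaluated at the candidates.
    multi_idx = [a for a in product(range(degree + 1), repeat=dim) if sum(a) <= degree]
    A = np.ones((len(pts), len(multi_idx)))
    for j, alpha in enumerate(multi_idx):
        for d, a in enumerate(alpha):
            A[:, j] *= legendre.legval(pts[:, d], np.eye(a + 1)[a])   # P_a along dimension d

    # Stage 2 (optimization): QR with column pivoting on A^T greedily picks the rows
    # (i.e. points) that keep the basis matrix well conditioned.
    _, _, piv = qr(A.T, pivoting=True)
    keep = piv[:len(multi_idx)]                                   # one point per basis function
    return pts[keep], A[keep]
```

    Solving the resulting square system against function values at the selected points then yields coefficients of a total-degree polynomial approximant.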

    Scheme-Theoretic Approach to Computational Complexity. IV. A New Perspective on Hardness of Approximation

    We provide a new approach for establishing hardness of approximation results, based on the theory recently introduced by the author. It allows one to directly show that approximating a problem beyond a certain threshold requires super-polynomial time. To exhibit the framework, we revisit two famous problems in this paper. The particular results we prove are: MAX-3-SAT(1, 7/8 + ε) requires exponential time for any constant ε satisfying 1/8 ≥ ε > 0. In particular, the gap exponential time hypothesis (Gap-ETH) holds. MAX-3-LIN-2(1 − ε, 1/2 + ε) requires exponential time for any constant ε satisfying 1/4 ≥ ε > 0.
    Comment: 6 pages. arXiv admin note: text overlap with arXiv:2107.07387, arXiv:2305.0541

    On selecting a maximum volume sub-matrix of a matrix and related problems

    Given a matrix A ∈ R^{m×n} (n vectors in m dimensions), we consider the problem of selecting a subset of its columns such that its elements are as linearly independent as possible. This notion turned out to be important in low-rank approximations to matrices and rank-revealing QR factorizations, which have been investigated in the linear algebra community, and it can be quantified in a few different ways. In this paper, from a complexity-theoretic point of view, we propose four related problems in which we try to find a sub-matrix C ∈ R^{m×k} of a given matrix A ∈ R^{m×n} such that (i) σ_max(C) (the largest singular value of C) is minimum, (ii) σ_min(C) (the smallest singular value of C) is maximum, (iii) κ(C) = σ_max(C)/σ_min(C) (the condition number of C) is minimum, and (iv) the volume of the parallelepiped defined by the column vectors of C is maximum. We establish the NP-hardness of these problems and further show that they do not admit a PTAS. We then study a natural Greedy heuristic for the maximum volume problem and show that it has approximation ratio 2^{−O(k log k)}. Our analysis of the Greedy heuristic is tight to within a logarithmic factor in the exponent, which we show by explicitly constructing an instance for which the Greedy heuristic is 2^{−Ω(k)} from optimal. When A has unit-norm columns, a related problem is to select the maximum number of vectors with a given volume. We show that if the optimal solution selects k columns, then Greedy will select Ω(k/log k) columns, providing a log k approximation.
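
    A hedged sketch of the Greedy heuristic as I read the abstract: repeatedly add the column whose component orthogonal to the span of the columns chosen so far has the largest norm, so each step multiplies the spanned volume by that norm. The function name and the rank assumption are mine.

```python
# Hedged sketch of the Greedy column-selection heuristic for large volume.
import numpy as np

def greedy_max_volume(A, k):
    """Indices of k columns of A (assumed to have rank >= k), chosen greedily."""
    R = np.array(A, dtype=float)        # residuals: columns projected off the chosen span
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        norms[chosen] = -1.0            # never re-pick a column
        j = int(np.argmax(norms))
        chosen.append(j)
        q = R[:, j] / norms[j]          # unit vector in the newly added direction
        R = R - np.outer(q, q @ R)      # one Gram-Schmidt step against all columns
    return chosen

# On small instances the achieved volume can be checked against brute force via
# np.sqrt(abs(np.linalg.det(A[:, idx].T @ A[:, idx]))) for a chosen index set idx.
```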

    SSDE: Fast Graph Drawing Using Sampled Spectral Distance Embedding

    We present a fast spectral graph drawing algorithm for drawing undirected connected graphs. Multi-Dimensional Scaling is a spectral algorithm with quadratic time and space requirements; it seeks a drawing in which the actual distances between nodes approximate their graph-theoretical distances. We build on this idea to develop a linear-time spectral graph drawing algorithm, SSDE. We reduce the space and time complexity of the spectral decomposition by approximating the distance matrix with the product of three smaller matrices, which are formed by sampling rows and columns of the distance matrix. The main advantages of our algorithm are that it is very fast and that it gives aesthetically pleasing results when compared to other spectral graph drawing algorithms. The runtime is about one second for typical 10^5-node graphs and about ten seconds for 10^6-node graphs.
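
    The three-factor sampling idea can be illustrated with a Nyström-style stand-in (a sketch of the general technique, not the authors' exact construction; the function name, pivot count, and centering choices are assumptions): squared graph distances to a few sampled pivot nodes give a thin factor C and a small pivot-pivot block W, the full matrix is approximated by C·pinv(W)·Cᵀ, and only this implicit product is handed to a sparse eigensolver to obtain 2-D coordinates.

```python
# Hedged Nystrom/CUR-style sketch of a sampled spectral embedding (assumes networkx, numpy, scipy).
import numpy as np
import networkx as nx
from scipy.sparse.linalg import LinearOperator, eigsh

def sampled_spectral_layout(G, pivots=50, dim=2, seed=0):
    """G: connected undirected graph; returns an n x dim coordinate array."""
    nodes = list(G)
    n = len(nodes)
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=min(pivots, n), replace=False)

    # Thin factor: squared graph distances from every node to each sampled pivot
    # (one BFS per pivot), plus the small pivot-by-pivot block W.
    C = np.empty((n, len(idx)))
    for j, p in enumerate(idx):
        dist = nx.single_source_shortest_path_length(G, nodes[p])
        C[:, j] = [dist[v] ** 2 for v in nodes]
    W = C[idx, :]
    Wp = np.linalg.pinv(W)

    # The squared-distance matrix is approximated by C Wp C^T and double-centered as in
    # classical MDS; the n x n matrix is never formed, only its action on vectors.
    def matvec(x):
        y = x - x.mean()
        y = -0.5 * (C @ (Wp @ (C.T @ y)))
        return y - y.mean()

    B = LinearOperator((n, n), matvec=matvec, dtype=float)
    vals, vecs = eigsh(B, k=dim, which="LA")              # leading eigenpairs only
    return vecs * np.sqrt(np.maximum(vals, 0.0))
```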