
    On largest volume simplices and sub-determinants

    We show that the problem of finding the simplex of largest volume in the convex hull of $n$ points in $\mathbb{Q}^d$ can be approximated with a factor of $O(\log d)^{d/2}$ in polynomial time. This improves upon the previously best known approximation guarantee of $d^{(d-1)/2}$ by Khachiyan. On the other hand, we show that there exists a constant $c > 1$ such that this problem cannot be approximated with a factor of $c^d$, unless $P = NP$. This improves over the $1.09$ inapproximability that was previously known. Our hardness result holds even if $n = O(d)$, in which case there exists a $\bar{c}^{\,d}$-approximation algorithm that relies on recent sampling techniques, where $\bar{c}$ is again a constant. We show that similar results hold for the problem of finding the largest absolute value of a subdeterminant of a $d \times n$ matrix.
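
    The link between the two problems in this abstract is the classical determinant formula for simplex volume; the minimal sketch below (standard material, not taken from the paper) computes the volume of a simplex spanned by $d+1$ of the points, which is a $d \times d$ subdeterminant up to a factor of $1/d!$.

```python
import numpy as np
from math import factorial

def simplex_volume(points):
    """Volume of the simplex on d+1 points in R^d:
    vol = |det(p_1 - p_0, ..., p_d - p_0)| / d!  (a d x d determinant)."""
    p = np.asarray(points, dtype=float)
    d = p.shape[1]
    assert p.shape[0] == d + 1, "a d-simplex needs d+1 vertices"
    return abs(np.linalg.det(p[1:] - p[0])) / factorial(d)

# The standard simplex in R^3 (origin plus the unit vectors) has volume 1/3! = 1/6.
pts = np.vstack([np.zeros(3), np.eye(3)])
print(simplex_volume(pts))  # 0.1666...
```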

    Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs

    The study of graph products is a major research topic and typically concerns the term $f(G*H)$, e.g., to show that $f(G*H) = f(G)f(H)$. In this paper, we study graph products in a non-standard form $f(R[G*H])$ where $R$ is a "reduction", a transformation of any graph into an instance of an intended optimization problem. We resolve some open problems as applications. (1) A tight $n^{1-\epsilon}$-approximation hardness for the minimum consistent deterministic finite automaton (DFA) problem, where $n$ is the sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming $NP \neq RP$ (the weakest possible assumption). (2) A tight $n^{1/2-\epsilon}$ hardness for the edge-disjoint paths (EDP) problem on directed acyclic graphs (DAGs), where $n$ denotes the number of vertices. (3) A tight hardness of packing vertex-disjoint $k$-cycles for large $k$. (4) An alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersections of halfspaces [Alekhnovich et al., FOCS 2004 and J. Comput. Syst. Sci. 2008].
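
    As a toy illustration of the multiplicative behavior $f(G*H) = f(G)f(H)$ that the abstract contrasts against, the sketch below (an illustrative example, not from the paper) uses the tensor product, whose adjacency matrix is the Kronecker product $A_G \otimes A_H$, so the number of closed walks of a fixed length multiplies.

```python
import networkx as nx
import numpy as np

def closed_walks(graph, length):
    """f(G) = number of closed walks of a given length = tr(A^length)."""
    A = nx.to_numpy_array(graph)
    return round(np.trace(np.linalg.matrix_power(A, length)))

G, H = nx.cycle_graph(5), nx.path_graph(4)
P = nx.tensor_product(G, H)  # categorical (tensor) product: A_P = A_G (x) A_H

# tr((A_G (x) A_H)^L) = tr(A_G^L) * tr(A_H^L), so this f is multiplicative.
L = 4
assert closed_walks(P, L) == closed_walks(G, L) * closed_walks(H, L)
```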

    Improved approximation for 3-dimensional matching via bounded pathwidth local search

    One of the most natural optimization problems is the k-Set Packing problem, where given a family of sets of size at most k, one should select a maximum-size subfamily of pairwise disjoint sets. A special case of 3-Set Packing is the well-known 3-Dimensional Matching problem. Both problems belong to Karp's list of 21 NP-complete problems. The best known polynomial-time approximation ratio for k-Set Packing is (k + eps)/2 and goes back to the work of Hurkens and Schrijver [SIDMA '89], which gives a (1.5 + eps)-approximation for 3-Dimensional Matching. Those results are obtained by a simple local search algorithm that uses constant-size swaps. The main result of the paper is a new approach to local search for k-Set Packing where only a special type of swap is considered, which we call swaps of bounded pathwidth. We show that for a fixed value of k one can search the space of r-size swaps of constant pathwidth in c^r poly(|F|) time. Moreover, we present an analysis proving that a local search maximum with respect to O(log |F|)-size swaps of constant pathwidth yields a polynomial-time (k + 1 + eps)/3-approximation algorithm, improving the best known approximation ratio for k-Set Packing. In particular, we improve the approximation ratio for 3-Dimensional Matching from 3/2 + eps to 4/3 + eps. Comment: To appear in proceedings of FOCS 2013.
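
    For intuition, here is a hedged sketch of the classic constant-size-swap local search on which the Hurkens-Schrijver ratio is based; the paper's bounded-pathwidth swaps are a more powerful swap class and are not implemented here. The set family and swap size are illustrative.

```python
from itertools import combinations

def improve(sets, packing, swap_size):
    """Try to replace t sets of the packing by t+1 pairwise disjoint sets."""
    for t in range(swap_size):
        for out in combinations(packing, t):
            rest = [p for p in packing if p not in out]
            cand = [s for s in sets
                    if s not in rest and all(s.isdisjoint(p) for p in rest)]
            for incoming in combinations(cand, t + 1):
                if all(a.isdisjoint(b) for a, b in combinations(incoming, 2)):
                    return rest + list(incoming)
    return None

def local_search_packing(sets, swap_size=2):
    sets = [frozenset(s) for s in sets]
    packing = []
    for s in sets:                       # greedy initial solution
        if all(s.isdisjoint(p) for p in packing):
            packing.append(s)
    while True:                          # apply improving swaps until stuck
        better = improve(sets, packing, swap_size)
        if better is None:
            return packing
        packing = better

triples = [{1, 2, 3}, {1, 4, 5}, {2, 6, 7}]
print(local_search_packing(triples))  # greedy picks {1,2,3}; a 2-for-1 swap
                                      # then yields {1,4,5}, {2,6,7}
```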

    On the Combinatorial Complexity of Approximating Polytopes

    Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body $K$ of diameter $\mathrm{diam}(K)$ is given in Euclidean $d$-dimensional space, where $d$ is a constant. Given an error parameter $\varepsilon > 0$, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from $K$ is at most $\varepsilon \cdot \mathrm{diam}(K)$. By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that $O(1/\varepsilon^{(d-1)/2})$ facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is $\tilde{O}(1/\varepsilon^{(d-1)/2})$, where $\tilde{O}$ conceals a polylogarithmic factor in $1/\varepsilon$. This is a significant improvement upon the best known bound, which is roughly $O(1/\varepsilon^{d-2})$. Our result is based on a novel combination of both old and new ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of Bárány and Larman's economical cap covering. Finally, we use a deterministic adaptation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction. Comment: In Proceedings of the 32nd International Symposium on Computational Geometry (SoCG 2016); accepted to the SoCG 2016 special issue of Discrete and Computational Geometry.
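
    To make the vertex bound concrete in the simplest case, here is a small illustrative sketch (d = 2, unit disk, absolute rather than diameter-relative error) showing that $O(1/\varepsilon^{(d-1)/2}) = O(1/\sqrt{\varepsilon})$ vertices already achieve Hausdorff error $\varepsilon$, matching the Bronshteyn-Ivanov side of the trade-off; the paper's contribution is bounding faces of all dimensions at once in higher dimension.

```python
import numpy as np

def approx_circle(eps):
    """Regular m-gon within Hausdorff distance eps of the unit disk.
    Since 1 - cos(pi/m) <= (pi/m)^2 / 2, taking m = O(1/sqrt(eps))
    vertices suffices, i.e. O(1/eps^((d-1)/2)) for d = 2."""
    m = int(np.ceil(np.pi / np.sqrt(2 * eps)))
    theta = 2 * np.pi * np.arange(m) / m
    return np.column_stack([np.cos(theta), np.sin(theta)])

for eps in (1e-2, 1e-4):
    V = approx_circle(eps)
    err = 1 - np.cos(np.pi / len(V))  # exact Hausdorff distance for an m-gon
    print(len(V), err <= eps)         # vertex count grows like 1/sqrt(eps)
```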

    Learning to Approximate a Bregman Divergence

    Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning. In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations. We develop a formulation and algorithm for learning arbitrary Bregman divergences based on approximating their underlying convex generating function via a piecewise linear function. We provide theoretical approximation bounds using our parameterization and show that the generalization error $O_p(m^{-1/2})$ for metric learning using our framework matches the known generalization error in the strictly less general Mahalanobis metric learning setting. We further demonstrate empirically that our method performs well in comparison to existing metric learning methods, particularly for clustering and ranking problems. Comment: 19 pages, 4 figures.
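
    For readers new to Bregman divergences, the definition the paper builds on is $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y), x - y \rangle$ for a convex generator $\phi$. The sketch below uses only this standard definition (not the paper's learned model) to recover the two examples named in the abstract; the paper instead learns $\phi$ as a piecewise linear (max-of-affine) function.

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# phi(x) = ||x||^2 recovers the squared Euclidean distance ...
sq = lambda x: np.dot(x, x)
sq_grad = lambda x: 2 * x
# ... and negative entropy recovers the KL divergence (on distributions).
negent = lambda x: np.sum(x * np.log(x))
negent_grad = lambda x: np.log(x) + 1

x, y = np.array([0.2, 0.8]), np.array([0.5, 0.5])
print(bregman(sq, sq_grad, x, y))          # 0.18 == ||x - y||^2
print(bregman(negent, negent_grad, x, y))  # == sum_i x_i log(x_i / y_i)
```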

    Fast and Deterministic Approximations for k-Cut

    In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an Õ(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization: is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of Õ(km)? The second question qualitatively compares minimum cut to 2-approximate minimum k-cut: can 2-approximate k-cuts be computed as fast as the minimum cut, in Õ(m) randomized time? We give a deterministic approximation algorithm that computes (2 + eps)-minimum k-cuts in O(m log^3 n / eps^2) time, via a (1 + eps)-approximation for an LP relaxation of k-cut.
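
    As background for the running times under discussion, here is a hedged sketch of the Saran-Vazirani splitting scheme cited above, using off-the-shelf Stoer-Wagner minimum cuts from networkx; it is the (2 - 2/k)-approximation baseline, not the paper's deterministic (2 + eps) algorithm.

```python
import networkx as nx

def greedy_k_cut(G, k, weight="weight"):
    """Saran-Vazirani style (2 - 2/k)-approximate minimum k-cut: while
    fewer than k components remain, take the cheapest minimum cut within
    any component and remove its crossing edges. Assumes k <= |V(G)|."""
    G = G.copy()
    removed = []
    while nx.number_connected_components(G) < k:
        best = None
        for comp in nx.connected_components(G):
            if len(comp) < 2:
                continue
            value, (A, B) = nx.stoer_wagner(G.subgraph(comp), weight=weight)
            if best is None or value < best[0]:
                best = (value, A, B)
        _, A, B = best
        B = set(B)
        crossing = [(u, v) for u in A for v in B if G.has_edge(u, v)]
        removed.extend(crossing)
        G.remove_edges_from(crossing)
    return removed

G = nx.karate_club_graph()      # unweighted: every edge counts as weight 1
print(len(greedy_k_cut(G, 3)))  # number of edges removed for a 3-cut
```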