
    Improved Bounds for 3SUM, $k$-SUM, and Linear Degeneracy

    Given a set of $n$ real numbers, the 3SUM problem is to decide whether there are three of them that sum to zero. Until a recent breakthrough by Grønlund and Pettie [FOCS'14], a simple $\Theta(n^2)$-time deterministic algorithm for this problem was conjectured to be optimal. Over the years many algorithmic problems have been shown to be reducible from the 3SUM problem or its variants, including more generalized forms of the problem such as $k$-SUM and $k$-variate linear degeneracy testing ($k$-LDT). The conjectured hardness of these problems has become extremely popular for basing conditional lower bounds for numerous algorithmic problems in P. In this paper, we show that the randomized $4$-linear decision tree complexity of 3SUM is $O(n^{3/2})$, and that the randomized $(2k-2)$-linear decision tree complexity of $k$-SUM and $k$-LDT is $O(n^{k/2})$, for any odd $k \ge 3$. These bounds improve (albeit randomized) the corresponding $O(n^{3/2}\sqrt{\log n})$ and $O(n^{k/2}\sqrt{\log n})$ decision tree bounds obtained by Grønlund and Pettie. Our technique includes a specialized randomized variant of the fractional cascading data structure. Additionally, we give another deterministic algorithm for 3SUM that runs in $O(n^2 \log\log n / \log n)$ time. The latter bound matches a recent independent bound by Freund [Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use of the word-RAM model.
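
    For reference, the conjectured-optimal quadratic algorithm discussed above fits in a few lines. Below is a minimal sketch (sort once, then a two-pointer scan per element); the function name and the example are illustrative, not from the paper.

        # Classic O(n^2) 3SUM: sort, then for each a[i] scan the rest
        # with two pointers moving inward.
        def three_sum(nums):
            """Return a triple (x, y, z) with x + y + z == 0, or None."""
            a = sorted(nums)
            n = len(a)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = a[i] + a[lo] + a[hi]
                    if s == 0:
                        return (a[i], a[lo], a[hi])
                    if s < 0:
                        lo += 1   # sum too small: advance left pointer
                    else:
                        hi -= 1   # sum too large: retreat right pointer
            return None

        # Example: three_sum([-5, 1, 4, -2, 7]) -> (-5, -2, 7)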

    Threesomes, Degenerates, and Love Triangles

    The 3SUM problem is to decide, given a set of $n$ real numbers, whether any three sum to zero. It is widely conjectured that a trivial $O(n^2)$-time algorithm is optimal and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies $\Omega(n^2)$ lower bounds on numerous problems in computational geometry and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures. In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is $O(n^{3/2}\sqrt{\log n})$ and give two subquadratic 3SUM algorithms, a deterministic one running in $O(n^2 / (\log n/\log\log n)^{2/3})$ time and a randomized one running in $O(n^2 (\log\log n)^2 / \log n)$ time with high probability. Our results lead directly to improved bounds for $k$-variate linear degeneracy testing for all odd $k \ge 3$. The problem is to decide, given a linear function $f(x_1,\ldots,x_k) = \alpha_0 + \sum_{1\le i\le k} \alpha_i x_i$ and a set $A \subset \mathbb{R}$, whether $0 \in f(A^k)$. We show the decision tree complexity of this problem is $O(n^{k/2}\sqrt{\log n})$. Finally, we give a subcubic algorithm for a generalization of the $(\min,+)$-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-$O(n^{5/2}\sqrt{\log n})$ decision tree for this problem, as well as an algorithm running in time $O(n^3 (\log\log n)^2/\log n)$.
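
    To make the $k$-LDT definition above concrete, here is a minimal brute-force sketch; it runs in $O(n^k)$ time and is only a reference point for the problem statement, not the paper's decision-tree construction (names are illustrative).

        # Brute-force k-variate linear degeneracy testing: decide whether
        # 0 is in f(A^k) for f(x_1,...,x_k) = alpha_0 + sum_i alpha_i*x_i.
        from itertools import product

        def k_ldt(alphas, A):
            """alphas = [alpha_0, alpha_1, ..., alpha_k]; A = list of reals."""
            alpha0, coeffs = alphas[0], alphas[1:]
            for xs in product(A, repeat=len(coeffs)):
                if alpha0 + sum(c * x for c, x in zip(coeffs, xs)) == 0:
                    return True
            return False

        # 3SUM is the special case f(x_1, x_2, x_3) = x_1 + x_2 + x_3:
        # k_ldt([0, 1, 1, 1], [-5, -2, 1, 4, 7]) -> True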

    Subquadratic Weighted Matroid Intersection Under Rank Oracles

    Given two matroids $\mathcal{M}_1 = (V, \mathcal{I}_1)$ and $\mathcal{M}_2 = (V, \mathcal{I}_2)$ over an $n$-element integer-weighted ground set $V$, the weighted matroid intersection problem aims to find a common independent set $S^* \in \mathcal{I}_1 \cap \mathcal{I}_2$ maximizing the weight of $S^*$. In this paper, we present a simple deterministic algorithm for weighted matroid intersection using $\tilde{O}(nr^{3/4} \log W)$ rank queries, where $r$ is the size of the largest intersection of $\mathcal{M}_1$ and $\mathcal{M}_2$ and $W$ is the maximum weight. This improves upon the best previously known $\tilde{O}(nr \log W)$ algorithm given by Lee, Sidford, and Wong [FOCS'15], and is the first subquadratic algorithm for polynomially-bounded weights under the standard independence or rank oracle models. The main contribution of this paper is an efficient algorithm that computes shortest-path trees in weighted exchange graphs.
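
    The exchange-graph machinery behind such algorithms can be illustrated in the simpler unweighted setting. The sketch below performs one augmentation step using plain independence oracles; it is a textbook-style baseline under those assumptions, not the paper's rank-oracle algorithm, and all names are illustrative.

        # One augmenting step of unweighted matroid intersection. V is a
        # set of ground elements; indep1/indep2 take a frozenset and say
        # whether it is independent in M1/M2.
        from collections import deque

        def augment(V, S, indep1, indep2):
            """Return a larger common independent set, or None if S is maximum."""
            S = frozenset(S)
            sources = [y for y in V - S if indep1(S | {y})]
            sinks = {y for y in V - S if indep2(S | {y})}
            parent = {y: None for y in sources}
            queue = deque(sources)
            while queue:
                u = queue.popleft()
                if u not in S and u in sinks:
                    path = []                 # recover source-to-sink path
                    while u is not None:
                        path.append(u)
                        u = parent[u]
                    return S ^ set(path)      # symmetric difference grows S by 1
                if u not in S:
                    # edge y -> x if S - x + y is independent in M2
                    candidates = (x for x in S if indep2((S - {x}) | {u}))
                else:
                    # edge x -> y if S - x + y is independent in M1
                    candidates = (y for y in V - S if indep1((S - {u}) | {y}))
                for v in candidates:
                    if v not in parent:       # BFS: shortest augmenting path
                        parent[v] = u
                        queue.append(v)
            return None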

    Improved Algebraic Degeneracy Testing


    Obstructions to Faster Diameter Computation: Asteroidal Sets

    Full version of an IPEC'22 paper. An extremity is a vertex such that the removal of its closed neighbourhood does not increase the number of connected components. Let $Ext_{\alpha}$ be the class of all connected graphs whose quotient graph obtained from modular decomposition contains no more than $\alpha$ pairwise nonadjacent extremities. Our main contributions are as follows. First, we prove that the diameter of every $m$-edge graph in $Ext_{\alpha}$ can be computed in deterministic ${\cal O}(\alpha^3 m^{3/2})$ time. We then improve the runtime to linear for all graphs with bounded clique-number. Furthermore, we can compute an additive $+1$-approximation of all vertex eccentricities in deterministic ${\cal O}(\alpha^2 m)$ time. This is in sharp contrast with general $m$-edge graphs for which, under the Strong Exponential Time Hypothesis (SETH), one cannot compute the diameter in ${\cal O}(m^{2-\epsilon})$ time for any $\epsilon > 0$. As important special cases of our main result, we derive an ${\cal O}(m^{3/2})$-time algorithm for exact diameter computation within dominating pair graphs of diameter at least six, and an ${\cal O}(k^3 m^{3/2})$-time algorithm for this problem on graphs of asteroidal number at most $k$. We end by presenting an improved algorithm for chordal graphs of bounded asteroidal number, and a partial extension of our results to the larger class of all graphs with a dominating target of bounded cardinality. Our time upper bounds are shown to be essentially optimal under plausible complexity assumptions.
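
    The extremity definition at the start of this abstract is easy to state as a test. Below is a minimal sketch over plain adjacency dicts; it is a direct transcription of the definition (illustrative names), not one of the paper's algorithms.

        # v is an extremity if deleting its closed neighbourhood N[v]
        # does not increase the number of connected components.
        def count_components(adj, vertices):
            seen, count = set(), 0
            for s in vertices:
                if s in seen:
                    continue
                count += 1
                stack = [s]
                while stack:
                    u = stack.pop()
                    if u in seen:
                        continue
                    seen.add(u)
                    stack.extend(w for w in adj[u] if w in vertices)
            return count

        def is_extremity(adj, v):
            everything = set(adj)
            closed_nbhd = {v} | set(adj[v])
            before = count_components(adj, everything)
            after = count_components(adj, everything - closed_nbhd)
            return after <= before

        # On the path a-b-c-d-e: a is an extremity (removing {a, b} leaves
        # the connected path c-d-e), but the centre c is not (removing
        # {b, c, d} leaves two components, {a} and {e}).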

    Hardness of Approximate Nearest Neighbor Search

    We prove conditional near-quadratic running time lower bounds for approximate Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance. Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false, for every $\delta > 0$ there exists a constant $\epsilon > 0$ such that computing a $(1+\epsilon)$-approximation to the Bichromatic Closest Pair requires $n^{2-\delta}$ time. In particular, this implies a near-linear query time lower bound for Approximate Nearest Neighbor search with polynomial preprocessing time. Our reduction uses the Distributed PCP framework of [ARW'17], but obtains improved efficiency using Algebraic Geometry (AG) codes. Efficient PCPs from AG codes have been constructed in other settings before [BKKMS'16, BCGRS'17], but our construction is the first to yield new hardness results.
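
    As a baseline for the bound above, exact Bichromatic Closest Pair has a trivial quadratic scan; the result says that under SETH even $(1+\epsilon)$-approximating it cannot be done in truly subquadratic time. A minimal sketch for Hamming distance (illustrative names):

        # Exact bichromatic closest pair under Hamming distance in
        # O(|reds| * |blues| * d) time.
        def bichromatic_closest_pair(reds, blues):
            """reds, blues: lists of equal-length 0/1 tuples."""
            best = None
            for r in reds:
                for b in blues:
                    d = sum(x != y for x, y in zip(r, b))
                    if best is None or d < best[0]:
                        best = (d, r, b)
            return best

        # Example:
        # bichromatic_closest_pair([(0, 0, 1), (1, 1, 1)],
        #                          [(0, 1, 1), (1, 0, 0)])
        # -> (1, (0, 0, 1), (0, 1, 1))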

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
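
    As one concrete instance of the proximal methods surveyed, the sketch below implements ISTA for $\ell_1$-regularized least squares (the lasso), where the proximal operator of the penalty is coordinatewise soft-thresholding. This is a minimal sketch assuming numpy; the fixed step size and names are illustrative.

        import numpy as np

        def soft_threshold(z, t):
            # Proximal operator of t * ||.||_1, applied coordinatewise.
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def ista(X, y, lam, n_iters=500):
            """Minimize 0.5*||X w - y||^2 + lam*||w||_1 by proximal gradient."""
            step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L for the smooth part
            w = np.zeros(X.shape[1])
            for _ in range(n_iters):
                grad = X.T @ (X @ w - y)            # gradient of smooth part
                w = soft_threshold(w - step * grad, step * lam)
            return w

        # Larger lam drives more coordinates of w exactly to zero: the
        # sparsity-inducing effect the paper analyzes.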