Improved Bounds for 3SUM, $k$-SUM, and Linear Degeneracy
Given a set of $n$ real numbers, the 3SUM problem is to decide whether there
are three of them that sum to zero. Until a recent breakthrough by Grønlund
and Pettie [FOCS'14], a simple $\Theta(n^2)$-time deterministic algorithm for
this problem was conjectured to be optimal. Over the years many algorithmic
problems have been shown to be reducible from the 3SUM problem or its variants,
including the more generalized forms of the problem, such as $k$-SUM and
$k$-variate linear degeneracy testing ($k$-LDT). The conjectured hardness of
these problems has become extremely popular for basing conditional lower
bounds for numerous algorithmic problems in P.
In this paper, we show that the randomized $4$-linear decision tree
complexity of 3SUM is $O(n^{3/2})$, and that the randomized $(2k-2)$-linear
decision tree complexity of $k$-SUM and $k$-LDT is $O(n^{k/2})$, for any odd
$k \ge 3$. These bounds improve (albeit randomized) the corresponding
$O(n^{3/2}\sqrt{\log n})$ and $O(n^{k/2}\sqrt{\log n})$ decision tree bounds
obtained by Grønlund and Pettie. Our technique includes a specialized
randomized variant of the fractional cascading data structure. Additionally, we
give another deterministic algorithm for 3SUM that runs in
$O(n^2 \log\log n / \log n)$ time. The latter bound matches a recent independent bound by Freund
[Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use
of the word-RAM model.
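The conjectured-optimal quadratic algorithm referred to above is simple enough to state in full. The following is a minimal sketch (function name and test data are illustrative only): sort the input, then for each fixed element run a two-pointer scan over the remaining suffix.

def has_3sum(nums):
    """Return True if some three elements (distinct indices) sum to zero."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1  # sum too small: advance the left pointer
            else:
                hi -= 1  # sum too large: retreat the right pointer
    return False

assert has_3sum([-5, 1, 4, 10])  # -5 + 1 + 4 == 0
assert not has_3sum([1, 2, 3])

Sorting costs $O(n \log n)$ and the double loop performs $\Theta(n^2)$ comparisons in the worst case, which is exactly the baseline the decision-tree bounds above improve on.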
Threesomes, Degenerates, and Love Triangles
The 3SUM problem is to decide, given a set of $n$ real numbers, whether any
three sum to zero. It is widely conjectured that a trivial $O(n^2)$-time
algorithm is optimal and over the years the consequences of this conjecture
have been revealed. This 3SUM conjecture implies $\Omega(n^2)$ lower bounds on
numerous problems in computational geometry and a variant of the conjecture
implies strong lower bounds on triangle enumeration, dynamic graph algorithms,
and string matching data structures.
In this paper we refute the 3SUM conjecture. We prove that the decision tree
complexity of 3SUM is $O(n^{3/2}\sqrt{\log n})$ and give two subquadratic 3SUM
algorithms, a deterministic one running in $O(n^2/(\log n/\log\log n)^{2/3})$
time and a randomized one running in $O(n^2(\log\log n)^2/\log n)$ time with
high probability. Our results lead directly to improved bounds for $k$-variate
linear degeneracy testing for all odd $k \ge 3$. The problem is to decide, given
a linear function $f(x_1,\ldots,x_k) = \alpha_0 + \sum_{i=1}^{k}\alpha_i x_i$ and a set $A \subset \mathbb{R}$, whether $0 \in f(A^k)$. We show the
decision tree complexity of this problem is $O(n^{k/2}\sqrt{\log n})$.
Finally, we give a subcubic algorithm for a generalization of the
$(\min,+)$-product over real-valued matrices and apply it to the problem of
finding zero-weight triangles in weighted graphs. We give a
depth-$O(n^{5/2}\sqrt{\log n})$ decision tree for this problem, as well as an
algorithm running in $O(n^3(\log\log n)^2/\log n)$ time.
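For concreteness, the zero-weight triangle problem just mentioned has an obvious cubic-time algorithm, sketched below under the illustrative assumption that the graph is given as a symmetric weight matrix with math.inf marking absent edges; the subcubic algorithm above improves on this baseline.

import itertools
import math

def has_zero_weight_triangle(w):
    """w[i][j]: weight of edge {i, j}, or math.inf if the edge is absent."""
    n = len(w)
    for i, j, k in itertools.combinations(range(n), 3):
        if w[i][j] + w[j][k] + w[i][k] == 0:
            return True
    return False

INF = math.inf
assert has_zero_weight_triangle([[INF, 2, -3],
                                 [2, INF, 1],
                                 [-3, 1, INF]])  # 2 + 1 - 3 == 0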
Subquadratic Weighted Matroid Intersection Under Rank Oracles
Given two matroids $\mathcal{M}_1 = (V, \mathcal{I}_1)$ and $\mathcal{M}_2 = (V, \mathcal{I}_2)$ over an $n$-element integer-weighted ground set $V$, the weighted matroid intersection problem aims to find a common independent set $S^* \in \mathcal{I}_1 \cap \mathcal{I}_2$ maximizing the weight of $S^*$. In this paper, we present a simple deterministic algorithm for weighted matroid intersection using $\tilde{O}(nr^{3/4}\log W)$ rank queries, where $r$ is the size of the largest intersection of $\mathcal{M}_1$ and $\mathcal{M}_2$ and $W$ is the maximum weight. This improves upon the best previously known $\tilde{O}(nr\log W)$ algorithm given by Lee, Sidford, and Wong [FOCS'15], and is the first subquadratic algorithm for polynomially-bounded weights under the standard independence or rank oracle models. The main contribution of this paper is an efficient algorithm that computes shortest-path trees in weighted exchange graphs.
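For background, the exchange graphs mentioned above generalize the following textbook construction for unweighted matroid intersection. The sketch below is a minimal illustration, assuming two independence oracles indep1 and indep2 (hypothetical callables) rather than the rank oracles and weighted exchange graphs the paper actually uses, and it performs a single BFS augmentation step.

from collections import deque

def augment(V, S, indep1, indep2):
    """One exchange-graph augmentation over ground set V (a Python set):
    return a common independent set larger than S, or None if S is maximum."""
    S = set(S)
    sources = [y for y in V - S if indep1(S | {y})]  # free to add in M1
    sinks = {y for y in V - S if indep2(S | {y})}    # free to add in M2
    def neighbors(u):
        if u in S:    # arcs u -> y with S - u + y independent in M1
            return [y for y in V - S if indep1((S - {u}) | {y})]
        else:         # arcs u -> x with S - x + u independent in M2
            return [x for x in S if indep2((S - {x}) | {u})]
    # BFS finds a *shortest* source-to-sink path, which is what correctness
    # of the augmentation requires; flipping the path grows S by one element.
    parent, queue = {s: None for s in sources}, deque(sources)
    while queue:
        u = queue.popleft()
        if u in sinks:
            path = set()
            while u is not None:
                path.add(u)
                u = parent[u]
            return S ^ path  # symmetric difference: adds k+1, removes k
        for v in neighbors(u):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None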
Obstructions to Faster Diameter Computation: Asteroidal Sets
Full version of an IPEC'22 paper. An extremity is a vertex such that the removal of its closed neighbourhood does not increase the number of connected components. Let $\mathcal{G}_k$ be the class of all connected graphs whose quotient graph obtained from modular decomposition contains no more than $k$ pairwise nonadjacent extremities. Our main contributions are as follows. First, we prove that the diameter of every $m$-edge graph in $\mathcal{G}_k$ can be computed in deterministic $O(k^3 m^{3/2})$ time. We then improve the runtime to linear for all graphs with bounded clique-number. Furthermore, we can compute an additive $+1$-approximation of all vertex eccentricities in deterministic $O(k^2 m)$ time. This is in sharp contrast with general $m$-edge graphs for which, under the Strong Exponential Time Hypothesis (SETH), one cannot compute the diameter in $O(m^{2-\epsilon})$ time for any $\epsilon > 0$. As important special cases of our main result, we derive an $O(m^{3/2})$-time algorithm for exact diameter computation within dominating pair graphs of diameter at least six, and an $O(k^3 m^{3/2})$-time algorithm for this problem on graphs of asteroidal number at most $k$. We end up presenting an improved algorithm for chordal graphs of bounded asteroidal number, and a partial extension of our results to the larger class of all graphs with a dominating target of bounded cardinality. Our time upper bounds in the paper are shown to be essentially optimal under plausible complexity assumptions.
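For contrast, the general-graph baseline the SETH bound above refers to is the folklore exact algorithm that runs one BFS per vertex, for $O(nm)$ total time on an $n$-vertex, $m$-edge graph. A minimal sketch follows; the adjacency-list input format is an assumption for illustration.

from collections import deque

def diameter(adj):
    """adj: dict mapping each vertex of a connected graph to its neighbours."""
    def eccentricity(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

assert diameter({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}) == 3  # path on 4 vertices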
Hardness of Approximate Nearest Neighbor Search
We prove conditional near-quadratic running time lower bounds for approximate
Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance.
Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false,
for every $\delta > 0$ there exists a constant $\epsilon > 0$ such that computing a
$(1+\epsilon)$-approximation to the Bichromatic Closest Pair requires
$n^{2-\delta}$ time. In particular, this implies a near-linear query time for
Approximate Nearest Neighbor search with polynomial preprocessing time.
Our reduction uses the Distributed PCP framework of [ARW'17], but obtains
improved efficiency using Algebraic Geometry (AG) codes. Efficient PCPs from AG
codes have been constructed in other settings before [BKKMS'16, BCGRS'17], but
our construction is the first to yield new hardness results.
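To fix notation, the sketch below is the trivial quadratic-time exact algorithm for Bichromatic Closest Pair, instantiated with Hamming distance; the theorem above says that under SETH even a $(1+\epsilon)$-approximation cannot run substantially faster. Names and data are illustrative.

def bichromatic_closest_pair(reds, blues):
    """Minimum Hamming distance between any red vector and any blue vector."""
    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))
    return min(hamming(r, b) for r in reds for b in blues)

reds = [(0, 0, 1), (1, 1, 1)]
blues = [(0, 1, 1), (0, 0, 0)]
assert bichromatic_closest_pair(reds, blues) == 1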
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-$\ell_2$ penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
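As a small concrete instance of the proximal methods covered above, the sketch below implements the proximal operator of the $\ell_1$ norm (soft-thresholding) and one ISTA step for the lasso objective $\frac{1}{2}\|y - Xw\|_2^2 + \lambda\|w\|_1$; the step size and synthetic data are illustrative assumptions, not taken from the paper.

import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1: shrink every coordinate of v toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(w, X, y, lam, step):
    """One proximal-gradient step on the lasso objective."""
    grad = X.T @ (X @ w - y)  # gradient of the smooth squared-loss term
    return soft_threshold(w - step * grad, step * lam)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = 1.0
y = X @ w_true
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, with L the gradient's Lipschitz constant
w = np.zeros(10)
for _ in range(200):
    w = ista_step(w, X, y, lam=0.1, step=step)
# w is now sparse: coordinates outside the true support are (near) zero.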
- …