A subquadratic algorithm for 3XOR
Given a set X of n binary words of equal length w, the 3XOR problem
asks for three elements a, b, c ∈ X such that a ⊕ b = c, where ⊕ denotes the bitwise XOR operation. The problem can be easily solved on
a word RAM with word length w in time O(n^2 log n). Using Han's fast
integer sorting algorithm (2002/2004) this can be reduced to O(n^2 loglog n). With randomization or a sophisticated deterministic dictionary
construction, creating a hash table for X with constant lookup time leads to
an algorithm with (expected) running time O(n^2). At present, seemingly no
faster algorithms are known. We present a surprisingly simple deterministic,
quadratic time algorithm for 3XOR. Its core is a version of the Patricia trie
for X, which makes it possible to traverse the set a ⊕ X in ascending
order for arbitrary a ∈ {0,1}^w in linear time.
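The role of the ordered traversal can be illustrated with a short sketch: for each a ∈ X, scan a ⊕ X and X simultaneously in ascending order and look for a common element c, which witnesses a ⊕ b = c. In the sketch below, plain sorting stands in for the paper's Patricia-trie traversal (the trie is what makes each traversal linear-time); the function name and encoding of words as Python ints are illustrative choices, not from the paper.

```python
def three_xor(words):
    """Quadratic 3XOR sketch: for each a, merge the ascending sequences
    a ^ X and X, looking for a common element c (then a ^ b = c for some b).
    Plain sorting stands in for the paper's Patricia-trie traversal."""
    xs = sorted(set(words))
    for a in xs:
        shifted = sorted(a ^ b for b in xs)  # the trie yields this order in O(n)
        i = j = 0
        while i < len(shifted) and j < len(xs):
            if shifted[i] == xs[j]:
                return a  # some b in X satisfies a ^ b in X
            if shifted[i] < xs[j]:
                i += 1
            else:
                j += 1
    return None
```

Each of the n merges costs O(n) comparisons, giving the quadratic total claimed above.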
Furthermore, we describe a randomized algorithm for 3XOR with expected
running time O(n^2 · min{log^3(w)/w, (loglog n)^2 / log^2 n}). The
algorithm transfers techniques to our setting that were used by Baran, Demaine,
and Pătraşcu (2005/2008) for solving the related int3SUM problem (the
same problem with integer addition in place of binary XOR) in expected time
O(n^2 (loglog n)^2 / log^2 n). As suggested by Jafargholi and Viola (2016), linear hash functions
are employed. The latter authors also showed that, assuming 3XOR needs expected
running time n^{2-o(1)}, one can prove conditional lower bounds for triangle
enumeration just as with 3SUM. We demonstrate that 3XOR can be reduced to other
problems as well, treating the examples offline SetDisjointness and offline
SetIntersection, which were studied for 3SUM by Kopelowitz, Pettie, and Porat
(2016).
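The appeal of the linear hash functions mentioned above is that they commute with XOR: if h(x) = Mx over GF(2), then h(a) ⊕ h(b) = h(a ⊕ b), so the bucket of a potential c is determined by the buckets of a and b. A minimal sketch, with words encoded as Python ints (all names and parameters here are illustrative):

```python
import random

def random_linear_hash(w, k, seed=0):
    """Random GF(2)-linear map h: {0,1}^w -> {0,1}^k.
    Each output bit is the parity of a random subset of input bits,
    hence h(a) ^ h(b) == h(a ^ b) for all a, b."""
    rng = random.Random(seed)
    rows = [rng.getrandbits(w) for _ in range(k)]  # rows of the matrix M
    def h(x):
        out = 0
        for r in rows:
            out = (out << 1) | (bin(r & x).count("1") & 1)  # parity of r & x
        return out
    return h
```

The homomorphic property is exactly what lets a 3XOR algorithm confine its search to compatible bucket triples.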
On Multidimensional and Monotone k-SUM
The well-known k-SUM conjecture is that integer k-SUM requires time Ω(n^{⌈k/2⌉-o(1)}). Recent work has studied multidimensional k-SUM in F_p^d, where the best known algorithm takes time Õ(n^{⌈k/2⌉}). Bhattacharyya et al. [ICS 2011] proved a min(2^{Ω(d)}, n^{Ω(k)}) lower bound for k-SUM in F_p^d under the Exponential Time Hypothesis. We give a more refined lower bound under the standard k-SUM conjecture: for sufficiently large p, k-SUM in F_p^d requires time Ω(n^{k/2-o(1)}) if k is even, and Ω(n^{⌈k/2⌉-2k(log k)/(log p)-o(1)}) if k is odd.
For a special case of the multidimensional problem, bounded monotone d-dimensional 3SUM, Chan and Lewenstein [STOC 2015] gave a surprising Õ(n^{2-2/(d+13)}) algorithm using additive combinatorics. We show this algorithm is essentially optimal. To be more precise, bounded monotone d-dimensional 3SUM requires time Ω(n^{2-4/d-o(1)}) under the standard 3SUM conjecture, and time Ω(n^{2-2/d-o(1)}) under the so-called strong 3SUM conjecture. Thus, even though one might hope to further exploit the structural advantage of monotonicity, no substantial improvements beyond those obtained by Chan and Lewenstein are possible for bounded monotone d-dimensional 3SUM.
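For intuition, the Õ(n^{⌈k/2⌉}) upper bound quoted above comes from a meet-in-the-middle argument: hash the sums of all ⌈k/2⌉-subsets, then look up the complement of each ⌊k/2⌋-subset sum. A one-dimensional sketch over Z_p (the d-dimensional case replaces numbers with coordinate vectors; the function name is my own):

```python
from itertools import combinations

def ksum_zero_mod_p(a, k, p):
    """Meet-in-the-middle sketch: does some k-subset of a sum to 0 (mod p)?
    Stores sums of all ceil(k/2)-subsets, then looks up complements,
    for roughly n^ceil(k/2) work instead of n^k."""
    big = k - k // 2  # ceil(k/2)
    table = {}
    for idx in combinations(range(len(a)), big):
        s = sum(a[i] for i in idx) % p
        table.setdefault(s, []).append(frozenset(idx))
    for idx in combinations(range(len(a)), k // 2):
        need = (-sum(a[i] for i in idx)) % p
        for other in table.get(need, []):
            if other.isdisjoint(idx):  # the k chosen positions must be distinct
                return True
    return False
```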
Faster Algorithms for the Sparse Random 3XOR Problem
We present two new algorithms for a variant of the 3XOR problem with lists consisting of N n-bit vectors whose coefficients are drawn randomly according to a Bernoulli distribution of parameter p. The analysis of these algorithms reveals a "phase change" at a certain threshold value of p.
2012 ACM Subject Classification: Theory of computation → Computational complexity and cryptography; Theory of computation.
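For concreteness, a sparse random instance as described above can be sampled as follows (a hedged sketch; the function name and the encoding of vectors as Python ints are my own choices):

```python
import random

def sparse_3xor_instance(N, n, p, seed=0):
    """Sample N n-bit vectors with i.i.d. Bernoulli(p) coordinates,
    each vector encoded as an int with bit i set with probability p."""
    rng = random.Random(seed)
    def vec():
        x = 0
        for i in range(n):
            if rng.random() < p:
                x |= 1 << i
        return x
    return [vec() for _ in range(N)]
```

Small p makes the vectors sparse, which is exactly the regime where the phase-change behavior is analyzed.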
Data Structures Meet Cryptography: 3SUM with Preprocessing
This paper shows several connections between data structure problems and
cryptography against preprocessing attacks. Our results span data structure
upper bounds, cryptographic applications, and data structure lower bounds, as
summarized next.
First, we apply Fiat–Naor inversion, a technique with cryptographic origins,
to obtain a data structure upper bound. In particular, our technique yields a
suite of algorithms with space S and (online) time T for a preprocessing
version of the N-input 3SUM problem where S^3 · T = Õ(N^6).
This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is
no data structure that solves this problem for S = N^{2-δ} and T = N^{1-δ} for any constant δ > 0.
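The preprocessing variant of 3SUM can be made concrete with the trivial endpoint of the space/time tradeoff: store all pairwise sums in a hash set (about N^2 space), after which an online query costs one lookup per element. This is only a baseline sketch to fix the problem statement; the paper's Fiat–Naor-based structures achieve far better combinations. Class and method names are illustrative:

```python
class ThreeSumPrep:
    """Baseline preprocessing 3SUM: precompute all pairwise sums of A and B
    (about N^2 space) so an online query over C costs one lookup per element."""
    def __init__(self, A, B):
        self.pair_sums = {a + b for a in A for b in B}
    def query(self, C):
        # a + b + c = 0 iff -c is one of the stored pairwise sums
        return any(-c in self.pair_sums for c in C)
```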
Secondly, we show equivalence between lower bounds for a broad class of
(static) data structure problems and one-way functions in the random oracle
model that resist a very strong form of preprocessing attack. Concretely, given
a random function F: [N] → [N] (accessed as an oracle) we show how to
compile it into a function G^F: [N^2] → [N^2] which resists S-bit
preprocessing attacks that run in query time T where S·T = O(N^{2-ε})
(assuming a corresponding data structure lower bound
on 3SUM). In contrast, a classical result of Hellman tells us that F itself
can be more easily inverted, say with N^{2/3}-bit preprocessing in N^{2/3}
time. We also show that much stronger lower bounds follow from the hardness of
kSUM. Our results can be equivalently interpreted as security against
adversaries that are very non-uniform, or have large auxiliary input, or as
security in the face of a powerfully backdoored random oracle.
Thirdly, we give non-adaptive lower bounds for 3SUM and a range of geometric
problems which match the best known lower bounds for static data structure
problems.
On the Fine-Grained Complexity of Parity Problems
We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e. if the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack.
A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic-equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly-harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
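As a concrete instance of the exact-versus-parity pattern, here is Min-Plus Convolution next to its parity variant: the parity version reveals only the low bit of each output entry, yet the paper shows computing it is as hard as computing the entries themselves. A naive quadratic sketch (function names are mine):

```python
def min_plus_convolution(a, b):
    """c[k] = min over i + j = k of a[i] + b[j]."""
    n, m = len(a), len(b)
    return [min(a[i] + b[k - i]
                for i in range(max(0, k - m + 1), min(k + 1, n)))
            for k in range(n + m - 1)]

def min_plus_parity(a, b):
    """The parity variant: only whether each entry is even or odd."""
    return [v & 1 for v in min_plus_convolution(a, b)]
```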
Threesomes, Degenerates, and Love Triangles
The 3SUM problem is to decide, given a set of n real numbers, whether any
three sum to zero. It is widely conjectured that a trivial O(n^2)-time
algorithm is optimal, and over the years the consequences of this conjecture
have been revealed. This 3SUM conjecture implies lower bounds on
numerous problems in computational geometry, and a variant of the conjecture
implies strong lower bounds on triangle enumeration, dynamic graph algorithms,
and string matching data structures.
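The conjectured-optimal trivial algorithm is the classic sort-plus-two-pointers routine, quadratic after an O(n log n) sort; this is the baseline the paper's subquadratic results improve on (a standard textbook sketch, not the paper's algorithm):

```python
def has_3sum(nums):
    """Classic O(n^2) 3SUM: sort, then for each i sweep two pointers
    inward over the remaining suffix."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # total too small: advance the low pointer
            else:
                hi -= 1   # total too large: retreat the high pointer
    return False
```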
In this paper we refute the 3SUM conjecture. We prove that the decision tree
complexity of 3SUM is O(n^{3/2} √(log n)) and give two subquadratic 3SUM
algorithms, a deterministic one running in O(n^2 / (log n / loglog n)^{2/3})
time and a randomized one running in O(n^2 (loglog n)^2 / log n) time with
high probability. Our results lead directly to improved bounds for k-variate
linear degeneracy testing for all odd k ≥ 3. The problem is to decide, given
a linear function f and a set A of n real numbers, whether 0 ∈ f(A^k). We show the
decision tree complexity of this problem is O(n^{k/2} √(log n)).
Finally, we give a subcubic algorithm for a generalization of the
(min,+)-product over real-valued matrices and apply it to the problem of
finding zero-weight triangles in weighted graphs. We give a
depth-O(n^{5/2} √(log n)) decision tree for this problem, as well as an
algorithm running in time O(n^3 (loglog n)^2 / log n).
Clustered Integer 3SUM via Additive Combinatorics
We present a collection of new results on problems related to 3SUM,
including:
1. The first truly subquadratic algorithm for
1a. computing the (min,+) convolution for monotone increasing
sequences with integer values bounded by O(n),
1b. solving 3SUM for monotone sets in 2D with integer coordinates
bounded by O(n), and
1c. preprocessing a binary string for histogram indexing (also
called jumbled indexing).
The running time is O(n^{1.859}) with
randomization, or O(n^{1.864}) deterministically. This greatly improves the
previous n^2 / 2^{Ω(√(log n))} time bound obtained from Williams'
recent result on all-pairs shortest paths [STOC'14], and answers an open
question raised by several researchers studying the histogram indexing problem.
2. The first algorithm for histogram indexing for any constant alphabet size
that achieves truly subquadratic preprocessing time and truly sublinear query
time.
3. A truly subquadratic algorithm for integer 3SUM in the case when the given
set can be partitioned into n^{1-δ} clusters each covered by an interval
of length n, for any constant δ > 0.
4. An algorithm to preprocess any set of n integers so that subsequently
3SUM on any given subset can be solved in O(n^{13/7} polylog(n))
time.
All these results are obtained by a surprising new technique, based on the
Balog–Szemerédi–Gowers Theorem from additive combinatorics.
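To make the histogram (jumbled) indexing problem from item 1c concrete: preprocess a binary string so that a query "(number of 0s, number of 1s)" asks whether some substring has exactly that histogram. A naive O(n^2)-preprocessing sketch exploits the fact that, for a fixed window length, the 1-counts of sliding windows change by at most 1 per step and therefore fill an interval (names are my own; the paper's point is beating this quadratic preprocessing):

```python
def jumbled_index(s):
    """Naive histogram indexing for a binary string: store (min, max)
    1-count per window length; since window counts form an interval,
    queries then take O(1)."""
    pre = [0]
    for ch in s:
        pre.append(pre[-1] + (ch == "1"))
    n = len(s)
    span = {}
    for length in range(1, n + 1):
        counts = [pre[i + length] - pre[i] for i in range(n - length + 1)]
        span[length] = (min(counts), max(counts))
    def query(zeros, ones):
        length = zeros + ones
        if length == 0 or length > n:
            return length == 0
        lo, hi = span[length]
        return lo <= ones <= hi
    return query
```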
On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress
Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two n x n matrices can be performed in near-optimal nondeterministic time O~(n^2). Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time O(n^2), our question is a relaxation of the open problem of derandomizing Freivalds' algorithm.
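For reference, Freivalds' verifier mentioned above works as follows: instead of recomputing A·B, multiply both sides by a random 0/1 vector, which costs O(n^2) per round and errs (accepts a wrong product) with probability at most 2^-rounds. A textbook sketch:

```python
import random

def freivalds(A, B, C, rounds=20):
    """Probabilistic check that A @ B == C for square integer matrices.
    Each round compares A(Bx) with Cx for a random 0/1 vector x; a wrong
    product survives one round with probability at most 1/2."""
    n = len(A)
    for _ in range(rounds):
        x = [random.randrange(2) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False  # certificate of an incorrect product
    return True
```

Derandomizing this check, i.e. achieving the same O~(n^2) guarantee deterministically, is exactly the open problem the paper relaxes.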
We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and n erroneous entries can be performed in time O~(n^2) - interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather cancellation effects in the presence of many errors.
Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most t errors in time O~(sqrt{t} n^2 + t^2). To obtain this result, we show how to compute an integer matrix product with at most t nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for t = Omega(n^{2/3}) nonzeroes, which is of independent interest.