29 research outputs found

    A subquadratic algorithm for 3XOR

    Get PDF
    Given a set $X$ of $n$ binary words of equal length $w$, the 3XOR problem asks for three elements $a, b, c \in X$ such that $a \oplus b = c$, where $\oplus$ denotes the bitwise XOR operation. The problem can be easily solved on a word RAM with word length $w$ in time $O(n^2 \log n)$. Using Han's fast integer sorting algorithm (2002/2004) this can be reduced to $O(n^2 \log\log n)$. With randomization or a sophisticated deterministic dictionary construction, creating a hash table for $X$ with constant lookup time leads to an algorithm with (expected) running time $O(n^2)$. At present, seemingly no faster algorithms are known. We present a surprisingly simple deterministic, quadratic time algorithm for 3XOR. Its core is a version of the Patricia trie for $X$, which makes it possible to traverse the set $a \oplus X$ in ascending order for arbitrary $a \in \{0, 1\}^{w}$ in linear time. Furthermore, we describe a randomized algorithm for 3XOR with expected running time $O(n^2 \cdot \min\{\log^3 w / w,\ (\log\log n)^2 / \log^2 n\})$. The algorithm transfers techniques to our setting that were used by Baran, Demaine, and Pătraşcu (2005/2008) for solving the related int3SUM problem (the same problem with integer addition in place of binary XOR) in expected time $o(n^2)$. As suggested by Jafargholi and Viola (2016), linear hash functions are employed. The latter authors also showed that, assuming 3XOR needs expected running time $n^{2-o(1)}$, one can prove conditional lower bounds for triangle enumeration just as with 3SUM. We demonstrate that 3XOR can be reduced to other problems as well, treating the examples offline SetDisjointness and offline SetIntersection, which were studied for 3SUM by Kopelowitz, Pettie, and Porat (2016).
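
    The hash-table baseline mentioned in the abstract is easy to make concrete. The following is a minimal sketch of that expected $O(n^2)$ approach only, not of the paper's Patricia-trie or randomized algorithms; the $w$-bit words are modeled as Python integers of equal bit length.

```python
def three_xor(words):
    """Expected O(n^2) 3XOR via hashing: return (a, b, c) with a ^ b == c,
    all three drawn from the input, or None if no such triple exists."""
    table = set(words)          # constant expected-time lookups
    xs = list(table)
    for i, a in enumerate(xs):
        for b in xs[i:]:        # all unordered pairs (allowing a == b)
            if a ^ b in table:
                return (a, b, a ^ b)
    return None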

    On Multidimensional and Monotone k-SUM

    Get PDF
    The well-known k-SUM conjecture is that integer k-SUM requires time $\Omega(n^{\lceil k/2 \rceil - o(1)})$. Recent work has studied multidimensional k-SUM in $\mathbb{F}_p^d$, where the best known algorithm takes time $\widetilde{O}(n^{\lceil k/2 \rceil})$. Bhattacharyya et al. [ICS 2011] proved a $\min(2^{\Omega(d)}, n^{\Omega(k)})$ lower bound for k-SUM in $\mathbb{F}_p^d$ under the Exponential Time Hypothesis. We give a more refined lower bound under the standard k-SUM conjecture: for sufficiently large $p$, k-SUM in $\mathbb{F}_p^d$ requires time $\Omega(n^{k/2 - o(1)})$ if $k$ is even, and $\Omega(n^{\lceil k/2 \rceil - 2k(\log k)/(\log p) - o(1)})$ if $k$ is odd. For a special case of the multidimensional problem, bounded monotone d-dimensional 3SUM, Chan and Lewenstein [STOC 2015] gave a surprising $\widetilde{O}(n^{2-2/(d+13)})$ algorithm using additive combinatorics. We show this algorithm is essentially optimal. To be more precise, bounded monotone d-dimensional 3SUM requires time $\Omega(n^{2-\frac{4}{d}-o(1)})$ under the standard 3SUM conjecture, and time $\Omega(n^{2-\frac{2}{d}-o(1)})$ under the so-called strong 3SUM conjecture. Thus, even though one might hope to further exploit the structural advantage of monotonicity, no substantial improvements beyond those obtained by Chan and Lewenstein are possible for bounded monotone d-dimensional 3SUM.
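
    For reference, the $\widetilde{O}(n^{\lceil k/2 \rceil})$ upper bound cited above has the meet-in-the-middle flavor. The sketch below is a plain brute-force illustration under assumed conventions (a single list of $d$-dimensional vectors, target equal to the zero vector, $k$ even); it is not the paper's construction and is not optimized.

```python
from itertools import combinations

def ksum_mod_p(vectors, k, p):
    """Meet-in-the-middle k-SUM over F_p^d (k even):
    do some k distinct input vectors sum to the zero vector mod p?"""
    assert k % 2 == 0 and k >= 2
    half = k // 2
    d = len(vectors[0])
    # Index all sums of half-sized index sets: about C(n, k/2) entries.
    table = {}
    for idx in combinations(range(len(vectors)), half):
        s = tuple(sum(vectors[i][j] for i in idx) % p for j in range(d))
        table.setdefault(s, []).append(frozenset(idx))
    # Search for a complementary, index-disjoint half.
    for idx in combinations(range(len(vectors)), half):
        s = tuple((-sum(vectors[i][j] for i in idx)) % p for j in range(d))
        for other in table.get(s, []):
            if other.isdisjoint(idx):
                return True
    return False
```

    The total work is dominated by the roughly $\binom{n}{k/2}$ dictionary operations, matching the $n^{\lceil k/2 \rceil}$ shape of the bound for even $k$.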

    Faster Algorithms for the Sparse Random 3XOR Problem

    Get PDF
    We present two new algorithms for a variant of the 3XOR problem with lists consisting of N n-bit vectors whose coefficients are drawn randomly according to a Bernoulli distribution of parameter p < 0.13. The analysis of these algorithms reveals a "phase change" for a certain threshold p.
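
    For concreteness, the input model described above can be sampled as follows; this illustrates only the instance distribution (N vectors of n bits, each bit set independently with probability p), not the paper's algorithms. Combined with any 3XOR solver, such as the hash-based sketch given earlier, it is enough to experiment in the sparse regime.

```python
import random

def sparse_random_3xor_instance(N, n, p, seed=0):
    """Sample N n-bit vectors (as Python ints) whose bits are i.i.d. Bernoulli(p)."""
    rng = random.Random(seed)
    return [sum(1 << i for i in range(n) if rng.random() < p) for _ in range(N)]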

    Data Structures Meet Cryptography: 3SUM with Preprocessing

    Full text link
    This paper shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next. First, we apply Fiat–Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space $S$ and (online) time $T$ for a preprocessing version of the $N$-input 3SUM problem where $S^3 \cdot T = \widetilde{O}(N^6)$. This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for $S = N^{2-\delta}$ and $T = N^{1-\delta}$ for any constant $\delta > 0$. Secondly, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function $F: [N] \to [N]$ (accessed as an oracle) we show how to compile it into a function $G^F: [N^2] \to [N^2]$ which resists $S$-bit preprocessing attacks that run in query time $T$ where $ST = O(N^{2-\varepsilon})$ (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that $F$ itself can be more easily inverted, say with $N^{2/3}$-bit preprocessing in $N^{2/3}$ time. We also show that much stronger lower bounds follow from the hardness of kSUM. Our results can be equivalently interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle. Thirdly, we give non-adaptive lower bounds for 3SUM and a range of geometric problems which match the best known lower bounds for static data structure problems.
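
    As a point of reference for the space/time tradeoff above, here is a minimal sketch of the trivial corner of the preprocessing problem, under the assumption that a query asks whether a given target equals the sum of two preprocessed inputs (a 3SUM-indexing-style formulation): storing all pairwise sums gives $S = \widetilde{O}(N^2)$ space and $T = \widetilde{O}(1)$ query time, whereas the paper's Fiat–Naor-based suite trades space for time along $S^3 \cdot T = \widetilde{O}(N^6)$.

```python
class PairSumIndex:
    """Trivial preprocessing-3SUM data structure (assumed query model:
    'is the target a sum of two preprocessed numbers?')."""

    def __init__(self, numbers):
        # Preprocessing: store all pairwise sums -> roughly N^2/2 entries of space.
        nums = list(numbers)
        self.pair_sums = {a + b for i, a in enumerate(nums) for b in nums[i:]}

    def query(self, target):
        # Online time: one expected O(1) hash lookup.
        return target in self.pair_sums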

    On the Fine-Grained Complexity of Parity Problems

    Get PDF
    We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e., whether the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack. A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or from a non-standard parity variant of it. We are not able to establish a subcubic equivalence with the more natural parity counting variant of NWT, in which we ask whether the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
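
    To make the two NWT variants above concrete, here is a minimal brute-force sketch, assuming the graph is complete and given as a symmetric $n \times n$ weight matrix: the decision version asks whether any negative-weight triangle exists, while the parity-counting variant asks only for the parity of the number of such triangles.

```python
def has_negative_triangle(w):
    """Decision NWT: is there i < j < k with w[i][j] + w[j][k] + w[i][k] < 0?"""
    n = len(w)
    return any(w[i][j] + w[j][k] + w[i][k] < 0
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

def negative_triangle_parity(w):
    """Parity-counting NWT: is the number of negative-weight triangles odd?"""
    n = len(w)
    count = sum(1
                for i in range(n)
                for j in range(i + 1, n)
                for k in range(j + 1, n)
                if w[i][j] + w[j][k] + w[i][k] < 0)
    return count % 2 == 1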

    Threesomes, Degenerates, and Love Triangles

    Full text link
    The 3SUM problem is to decide, given a set of $n$ real numbers, whether any three sum to zero. It is widely conjectured that a trivial $O(n^2)$-time algorithm is optimal and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies $\Omega(n^2)$ lower bounds on numerous problems in computational geometry, and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures. In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is $O(n^{3/2}\sqrt{\log n})$ and give two subquadratic 3SUM algorithms, a deterministic one running in $O(n^2 / (\log n/\log\log n)^{2/3})$ time and a randomized one running in $O(n^2 (\log\log n)^2 / \log n)$ time with high probability. Our results lead directly to improved bounds for $k$-variate linear degeneracy testing for all odd $k \ge 3$. The problem is to decide, given a linear function $f(x_1,\ldots,x_k) = \alpha_0 + \sum_{1\le i\le k} \alpha_i x_i$ and a set $A \subset \mathbb{R}$, whether $0 \in f(A^k)$. We show the decision tree complexity of this problem is $O(n^{k/2}\sqrt{\log n})$. Finally, we give a subcubic algorithm for a generalization of the $(\min,+)$-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-$O(n^{5/2}\sqrt{\log n})$ decision tree for this problem, as well as an algorithm running in time $O(n^3 (\log\log n)^2/\log n)$.
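
    The "trivial $O(n^2)$-time algorithm" referred to above is the classic sort-and-scan routine; a minimal sketch is included here for orientation only, since the paper's contribution is precisely to beat this quadratic bound.

```python
def three_sum(nums):
    """Classic O(n^2) 3SUM: sort, then for each element scan the remainder
    with two pointers looking for a pair that completes a zero sum."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return None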

    Clustered Integer 3SUM via Additive Combinatorics

    Full text link
    We present a collection of new results on problems related to 3SUM, including: 1. The first truly subquadratic algorithm for (1a) computing the $(\min,+)$ convolution for monotone increasing sequences with integer values bounded by $O(n)$, (1b) solving 3SUM for monotone sets in 2D with integer coordinates bounded by $O(n)$, and (1c) preprocessing a binary string for histogram indexing (also called jumbled indexing). The running time is $O(n^{(9+\sqrt{177})/12}\,\mathrm{polylog}\,n) = O(n^{1.859})$ with randomization, or $O(n^{1.864})$ deterministically. This greatly improves the previous $n^2/2^{\Omega(\sqrt{\log n})}$ time bound obtained from Williams' recent result on all-pairs shortest paths [STOC'14], and answers an open question raised by several researchers studying the histogram indexing problem. 2. The first algorithm for histogram indexing for any constant alphabet size that achieves truly subquadratic preprocessing time and truly sublinear query time. 3. A truly subquadratic algorithm for integer 3SUM in the case when the given set can be partitioned into $n^{1-\delta}$ clusters each covered by an interval of length $n$, for any constant $\delta > 0$. 4. An algorithm to preprocess any set of $n$ integers so that subsequently 3SUM on any given subset can be solved in $O(n^{13/7}\,\mathrm{polylog}\,n)$ time. All these results are obtained by a surprising new technique, based on the Balog–Szemerédi–Gowers Theorem from additive combinatorics.
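
    Result 1a concerns $(\min,+)$ convolution, whose straightforward algorithm is quadratic; the sketch below states that baseline under the usual convention $c[k] = \min_{i+j=k}(a[i] + b[j])$ purely to fix notation, since the paper's contribution is the truly subquadratic bound in the monotone, bounded-integer case.

```python
def min_plus_convolution(a, b):
    """Naive O(n*m) (min,+) convolution: c[k] = min over i + j = k of a[i] + b[j]."""
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            if a[i] + b[j] < c[i + j]:
                c[i + j] = a[i] + b[j]
    return c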

    On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress

    Get PDF
    Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two $n \times n$ matrices can be performed in near-optimal nondeterministic time $\widetilde{O}(n^2)$. Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time $O(n^2)$, our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. In particular, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present partial algorithmic progress: distinguishing whether an integer matrix product is correct or contains between 1 and $n$ erroneous entries can be done in time $\widetilde{O}(n^2)$. Interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather one of cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most $t$ errors in time $\widetilde{O}(\sqrt{t}\, n^2 + t^2)$. To obtain this result, we show how to compute an integer matrix product with at most $t$ nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for $t = \Omega(n^{2/3})$ nonzeroes, which is of independent interest.
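
    For context, Freivalds' verifier mentioned above can be sketched in a few lines: multiply by a random 0/1 vector on both sides and compare, which costs $O(n^2)$ per round and errs one-sidedly with probability at most $1/2$ per round. This is the randomized baseline whose derandomization the paper studies, not the paper's deterministic correction algorithm.

```python
import random

def freivalds_verify(A, B, C, rounds=20):
    """Probabilistic check that A @ B == C for n x n integer matrices.
    Each round costs O(n^2); a wrong product is accepted with
    probability at most 2^(-rounds)."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # certainly A @ B != C
    return True  # A @ B == C with high probability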