On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress
Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two $n \times n$ matrices can be performed in near-optimal nondeterministic time $\tilde{O}(n^2)$. Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time $O(n^2)$, our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and $n$ erroneous entries can be performed in time $\tilde{O}(n^2)$ -- interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather one of cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most $t$ errors in time $\tilde{O}(\sqrt{t} \cdot n^2 + t^2)$. To obtain this result, we show how to compute an integer matrix product with at most $t$ nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for a range of sparsities, which is of independent interest
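For reference, here is a minimal sketch of the classic Freivalds verifier referenced above (not the paper's deterministic algorithms); the function name, the choice of random 0/1 vectors, and the repetition count are illustrative assumptions.

```python
import random

def freivalds_verify(A, B, C, reps=20):
    """Probabilistically check whether A * B == C for n x n integer matrices.

    Each round multiplies by a random 0/1 vector r and compares A(Br) with Cr,
    costing O(n^2) arithmetic operations; an incorrect product is accepted in a
    single round with probability at most 1/2, so `reps` rounds err with
    probability at most 2^-reps.
    """
    n = len(A)
    for _ in range(reps):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # witness that the product is incorrect
    return True  # correct with high probability
```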
Polygon Placement Revisited: (Degree of Freedom + 1)-SUM Hardness and an Improvement via Offline Dynamic Rectangle Union
We revisit the classical problem of determining the largest copy of a simple polygon $P$ that can be placed into a simple polygon $Q$. Despite significant effort, known algorithms require high polynomial running times. (Barequet and Har-Peled, 2001) give a lower bound of $n^{2-o(1)}$ under the 3SUM conjecture when $P$ and $Q$ are (convex) polygons with $\Theta(n)$ vertices each. This leaves open whether we can establish (1) hardness beyond quadratic time and (2) any superlinear bound for constant-sized $P$ or $Q$. In this paper, we affirmatively answer these questions under the $k$SUM conjecture, proving natural hardness results that increase with each degree of freedom (scaling, $x$-translation, $y$-translation, rotation): (1) Finding the largest copy of $P$ that can be $x$-translated into $Q$ requires time $n^{2-o(1)}$ under the 3SUM conjecture. (2) Finding the largest copy of $P$ that can be arbitrarily translated into $Q$ requires time $n^{2-o(1)}$ under the 4SUM conjecture. (3) The above lower bounds are almost tight when one of the polygons is of constant size: we obtain an algorithm, based on offline dynamic rectangle union, for orthogonal polygons $P$ and $Q$ with $p$ and $q$ vertices, respectively. (4) Finding the largest copy of $P$ that can be arbitrarily rotated and translated into $Q$ requires super-quadratic time under the 5SUM conjecture. We are not aware of any other such natural (degree of freedom + 1)-SUM hardness for a geometric optimization problem
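For context on the hardness assumptions invoked above, a minimal sketch of the 3SUM problem and its standard quadratic-time algorithm (the 3SUM conjecture asserts that no substantially faster algorithm exists); the function name and the zero-target convention are illustrative assumptions, and this is unrelated to the paper's rectangle-union algorithm.

```python
def has_3sum(nums):
    """Decide whether three elements of `nums` sum to zero.

    Classic O(n^2) approach: sort, then for each fixed element run a
    two-pointer scan over the remaining suffix. The 3SUM conjecture states
    that no O(n^{2-eps})-time algorithm exists for any eps > 0.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False
```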
Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-grained Perspective into Boolean Constraint Satisfaction
To study the question under which circumstances small solutions can be found faster than by exhaustive search (and by how much), we study the fine-grained complexity of Boolean constraint satisfaction with size constraint exactly $k$. More precisely, we aim to determine, for any finite constraint family, the optimal running time required to find satisfying assignments that set precisely $k$ of the $n$ variables to $1$. Under central hardness assumptions on detecting cliques in graphs and 3-uniform hypergraphs, we give an almost tight characterization of this optimal running time into four regimes: (1) brute force is essentially best-possible, i.e., the optimal running time is $n^{(1 \pm o(1))k}$, (2) the best algorithms are as fast as current $k$-clique algorithms, which run in time roughly $n^{\omega k/3}$ where $\omega$ is the matrix multiplication exponent, (3) the exponent of $n$ has sublinear dependence on $k$, or (4) the problem is fixed-parameter tractable, i.e., solvable in time $f(k) \cdot n^{O(1)}$. This yields a more fine-grained perspective than a previous FPT/W[1]-hardness dichotomy (Marx, Computational Complexity 2005). Our most interesting technical contribution is a faster-than-brute-force algorithm for SubsetSum with precedence constraints, parameterized by the target -- particularly, the approach, based on generalizing a bound on the Frobenius coin problem to a setting with precedence constraints, might be of independent interest
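To make the exhaustive-search baseline of regime (1) concrete, a small sketch that enumerates all assignments of Hamming weight exactly k and checks every constraint; the constraint representation (scope/allowed-tuples pairs) and all names are illustrative assumptions, not the paper's formalism.

```python
from itertools import combinations

def exists_weight_k_solution(n, k, constraints):
    """Brute-force search for a satisfying assignment setting exactly k of n
    Boolean variables to 1.

    `constraints` is a list of (scope, allowed) pairs, where `scope` is a tuple
    of variable indices and `allowed` is a set of accepted 0/1 tuples for that
    scope. Trying all C(n, k) = O(n^k) candidate supports is the brute-force
    regime (1) from the characterization above.
    """
    for support in combinations(range(n), k):
        assignment = [0] * n
        for v in support:
            assignment[v] = 1
        if all(tuple(assignment[v] for v in scope) in allowed
               for scope, allowed in constraints):
            return True
    return False
```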
Multivariate Fine-Grained Complexity of Longest Common Subsequence
We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no strongly subquadratic ($O(n^{2-\varepsilon})$-time) algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams FOCS'15; Bringmann, Künnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n$, the length $m$ of the shorter string, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size $|\Sigma|$, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n + \min\{d, \delta M, \delta m\})^{1 \pm o(1)}$. [...]
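For reference, a minimal sketch of the textbook quadratic-time dynamic program mentioned above; the function and variable names are illustrative.

```python
def lcs_length(x, y):
    """Textbook dynamic program: L[i][j] is the LCS length of x[:i] and y[:j].

    Runs in time O(|x| * |y|), i.e., quadratic for strings of length n; the
    SETH-based lower bounds discussed above indicate this is essentially
    optimal in the worst case.
    """
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]
```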
Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve
Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, has established this as an influential notion for algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:
- The $O(nN\sqrt{\log(N/n)})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.)
- Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture.
- We give an algorithm showing that decompress-and-solve is not optimal for Disjointness
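To illustrate the notion of grammar compression underlying these results, a toy straight-line program and its expansion; the rule format and names are illustrative assumptions.

```python
def expand(grammar, symbol):
    """Expand a grammar-compressed string (straight-line program).

    `grammar` maps a nonterminal to a sequence of symbols; anything not in the
    map is a terminal character. The compressed size n is the total size of the
    rules, while the decompressed length N can be exponentially larger -- the
    gap that "decompress-and-solve" pays for.
    """
    if symbol not in grammar:
        return symbol
    return "".join(expand(grammar, s) for s in grammar[symbol])

# Example: 4 rules expand to "ab" repeated 8 times (length 16).
slp = {"S": ["A", "A"], "A": ["B", "B"], "B": ["C", "C"], "C": ["a", "b"]}
print(expand(slp, "S"))  # abababababababab
```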
Fine-Grained Completeness for Optimization in P
We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search and optimization variants of the $k$-XOR problem. Specifically, we define MaxSP as the class of problems definable as $\max_{x_1,\dots,x_k} \#\{(y_1,\dots,y_\ell) : \phi(x_1,\dots,x_k,y_1,\dots,y_\ell)\}$, where $\phi$ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On $m$-sized structures, we can solve each such problem in time $O(m^{k+\ell-1})$. Our results are:
- We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under *deterministic* fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of $O(m^{k+\ell-1})$ for *all* problems in MaxSP/MinSP by a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic $c$-approximation would give a $(c+\varepsilon)$-approximation for all MaxSP/MinSP problems in polynomially faster than baseline time, where $\varepsilon > 0$ can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is *equivalent* to giving a non-trivial approximation for all MinSP problems in faster-than-baseline time
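As a concrete MaxSP-style example, a brute-force baseline for Maximum Inner Product on Boolean vectors (maximize over pairs, count the coordinates where both vectors are 1); the names and the quadratic double loop are an illustrative sketch of the exhaustive-search baseline, not an improved algorithm from the paper.

```python
def max_inner_product(A, B):
    """Exhaustive-search baseline for Maximum Inner Product over 0/1 vectors.

    Given two collections A, B of d-dimensional Boolean vectors, return the
    maximum, over pairs (a, b), of the number of coordinates where both are 1.
    This is the kind of "maximize over choices, count witnesses" problem that
    MaxSP captures; the double loop mirrors the baseline running time that the
    completeness results measure improvements against.
    """
    best = 0
    for a in A:
        for b in B:
            best = max(best, sum(x & y for x, y in zip(a, b)))
    return best

# Example usage with tiny hypothetical data.
A = [(1, 0, 1, 1), (0, 1, 1, 0)]
B = [(1, 1, 1, 0), (0, 0, 1, 1)]
print(max_inner_product(A, B))  # 2
```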
- …