On the Fine-Grained Complexity of One-Dimensional Dynamic Programming
In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of n data items together with weights for every pair of the items, the task is to determine a subsequence S minimizing the total weight of the pairs adjacent in S. A large number of natural problems can be formulated as LWS problems, yielding obvious O(n^2)-time solutions.
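The obvious O(n^2)-time solution mentioned above is a one-dimensional dynamic program over predecessors. A minimal sketch, assuming an arbitrary pairwise weight function `w(i, j)` (the function name and the convention that subsequences start at item 0 are illustrative assumptions, not from the paper):

```python
def lws(n, w):
    """O(n^2) dynamic program for Least-Weight Subsequence.

    f[j] = least total weight of a subsequence ending at item j,
    where items are 0..n-1 and every subsequence starts at item 0.
    w(i, j) is the weight of making i and j adjacent in the subsequence.
    """
    INF = float("inf")
    f = [INF] * n
    f[0] = 0
    for j in range(1, n):
        # try every possible predecessor i of item j
        f[j] = min(f[i] + w(i, j) for i in range(j))
    return f[n - 1]
```

For example, with w(i, j) = (j - i)^2 the optimum takes unit steps, so `lws(5, lambda i, j: (j - i) ** 2)` returns 4.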
In many interesting instances, the O(n^2)-many weights can be succinctly represented. Yet except for near-linear time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations have been known before. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem: Given an LWS instantiation we identify a highly parallel core problem that is subquadratically equivalent. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms as the case may be.
More specifically, we prove subquadratic equivalences between the following pairs (an LWS instantiation and the corresponding core problem) of problems: a low-rank version of LWS and minimum inner product, finding the longest chain of nested boxes and vector domination, and a coin change problem which is closely related to the knapsack problem and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem. Furthermore, we revisit some of the LWS instantiations which are known to be solvable in near-linear time and explain their easiness in terms of the easiness of the corresponding core problems.
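One of the core problems above, (min,+)-convolution, can be stated concretely. A minimal brute-force sketch of the O(n^2)-time definition (the hypothesis mentioned in the abstract asserts that no substantially faster algorithm exists):

```python
def min_plus_convolution(a, b):
    """(min,+)-convolution of sequences a and b:
    c[k] = min over all i + j = k of a[i] + b[j]."""
    INF = float("inf")
    c = [INF] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = min(c[i + j], x + y)
    return c
```

For instance, `min_plus_convolution([0, 1], [0, 2])` returns `[0, 1, 3]`.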
Bellman-Ford is optimal for shortest hop-bounded paths
This paper is about the problem of finding a shortest s-t path using at most k edges in edge-weighted graphs. The Bellman–Ford algorithm solves this problem in O(mk) time, where m is the number of edges. We show that this running time is optimal, up to subpolynomial factors, under popular fine-grained complexity assumptions.
More specifically, we show that under the APSP Hypothesis the problem cannot be solved faster already in undirected graphs with non-negative edge weights. This lower bound holds even restricted to graphs of arbitrary density and for arbitrary k ≤ O(√m). Moreover, under a stronger assumption, namely the Min-Plus Convolution Hypothesis, we can eliminate the restriction k ≤ O(√m). In other words, the O(mk) bound is tight for the entire space of parameters m, k, and n, where n is the number of nodes.
Our lower bounds can be contrasted with the recent near-linear time algorithm for the negative-weight Single-Source Shortest Paths problem, which is the textbook application of the Bellman–Ford algorithm.
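The O(mk)-time algorithm whose optimality the paper establishes is the standard hop-bounded variant of Bellman–Ford. A minimal sketch for illustration (the input representation as an edge list is an assumption):

```python
def hop_bounded_shortest_path(n, edges, s, t, k):
    """Shortest s-t path using at most k edges, in O(mk) time.

    n: number of nodes, labeled 0..n-1
    edges: list of (u, v, weight) directed edges
    """
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(k):
        # relax every edge once per round; copying the array keeps the
        # rounds separate, which enforces the hop bound exactly
        new = dist[:]
        for u, v, w in edges:
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
        dist = new
    return dist[t]
```

For example, with edges 0→1 and 1→2 of weight 1 and a direct edge 0→2 of weight 5, the answer from 0 to 2 is 5 with k = 1 but 2 with k = 2.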
The Fine-Grained Complexity of Problems Expressible by First-Order Logic and Its Extensions
This dissertation studies the fine-grained complexity of model checking problems for fixed logical formulas on sparse input structures. The Orthogonal Vectors problem is an important and well-studied problem in fine-grained complexity: its hardness is implied by the Strong Exponential Time Hypothesis, and its hardness implies the hardness of many other interesting problems. We show that the Orthogonal Vectors problem is complete in the class of first-order model checking on sparse structures, under fine-grained reductions. In other words, the hardness of Orthogonal Vectors and the hardness of first-order model checking imply each other. This also gives us an improved algorithm for first-order model checking problems.
Among all first-order logic formulas in prenex normal form, we have reasons to believe that two particular quantifier structures may be the hardest in computational complexity: if the Nondeterministic version of the Strong Exponential Time Hypothesis is true, formulas of these forms are the only hard ones under the Strong Exponential Time Hypothesis.
We can add extensions to first-order logic to strengthen its expressive power. This work also studies the fine-grained complexity of first-order formulas with comparison on structures with a total order, first-order formulas with transitive closure operations, first-order formulas of fixed quantifier rank, and first-order formulas of fixed variable complexity. We also introduce a technique for reducing from sequential problems on graphs to parallel problems on sets, which can be applied to extend the Least Weight Subsequence problem from linear structures to some special classes of graphs.
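The Orthogonal Vectors problem mentioned above asks, given two sets of d-dimensional 0/1 vectors, whether some pair is orthogonal; the fine-grained question is whether the obvious quadratic algorithm can be meaningfully beaten. A minimal brute-force sketch:

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Orthogonal Vectors: do sets A and B of d-dimensional 0/1 vectors
    contain a in A and b in B with inner product a . b = 0?
    This brute force runs in O(n^2 * d) time."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a, b in product(A, B))
```

For example, `has_orthogonal_pair([(1, 0)], [(0, 1)])` is `True`, while `has_orthogonal_pair([(1, 1)], [(1, 0)])` is `False`.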