
    On the Fine-Grained Complexity of One-Dimensional Dynamic Programming

    In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of n data items together with weights for every pair of the items, the task is to determine a subsequence S minimizing the total weight of the pairs adjacent in S. A large number of natural problems can be formulated as LWS problems, yielding obvious O(n^2)-time solutions. In many interesting instances, the O(n^2)-many weights can be succinctly represented. Yet except for near-linear time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations have been known before. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem: Given an LWS instantiation, we identify a highly parallel core problem that is subquadratically equivalent. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms, as the case may be. More specifically, we prove subquadratic equivalences between the following pairs (an LWS instantiation and the corresponding core problem) of problems: a low-rank version of LWS and minimum inner product, finding the longest chain of nested boxes and vector domination, and a coin change problem which is closely related to the knapsack problem and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem. Furthermore, we revisit some of the LWS instantiations which are known to be solvable in near-linear time and explain their easiness in terms of the easiness of the corresponding core problems.
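    The quadratic-time baseline mentioned in the abstract comes from the standard LWS recurrence f[0] = 0, f[j] = min over i < j of (f[i] + w(i, j)). The following is a minimal sketch of that recurrence, not taken from the paper; the weight function w and the squared-distance example instantiation are purely illustrative stand-ins for a succinctly represented weight matrix.

    # Quadratic-time sketch of the Least-Weight Subsequence (LWS) recurrence.
    # `w(i, j)` is a placeholder for whatever succinct weight representation
    # a concrete LWS instantiation provides.

    def lws(n, w):
        """Minimum total weight of a chain 0 = i_0 < i_1 < ... < i_k = n-1,
        paying w(i_t, i_{t+1}) for every adjacent pair in the chain."""
        INF = float("inf")
        f = [INF] * n
        f[0] = 0
        for j in range(1, n):
            for i in range(j):          # try every possible predecessor of j
                f[j] = min(f[j], f[i] + w(i, j))
        return f[n - 1]

    # Illustrative instantiation: items sit at positions p and a hop costs
    # the squared distance between its endpoints.
    p = [0, 2, 3, 7, 8, 10]
    print(lws(len(p), lambda i, j: (p[j] - p[i]) ** 2))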

    Bellman-Ford is optimal for shortest hop-bounded paths

    This paper is about the problem of finding a shortest s-t path using at most h edges in edge-weighted graphs. The Bellman-Ford algorithm solves this problem in O(hm) time, where m is the number of edges. We show that this running time is optimal, up to subpolynomial factors, under popular fine-grained complexity assumptions. More specifically, we show that under the APSP Hypothesis the problem cannot be solved faster already in undirected graphs with non-negative edge weights. This lower bound holds even restricted to graphs of arbitrary density and for arbitrary h ∈ O(√m). Moreover, under a stronger assumption, namely the Min-Plus Convolution Hypothesis, we can eliminate the restriction h ∈ O(√m). In other words, the O(hm) bound is tight for the entire space of parameters h, m, and n, where n is the number of nodes. Our lower bounds can be contrasted with the recent near-linear time algorithm for the negative-weight Single-Source Shortest Paths problem, which is the textbook application of the Bellman-Ford algorithm.
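    For reference, the O(hm) upper bound discussed here is achieved by running h rounds of Bellman-Ford relaxation, where after round k the tentative distance to each vertex is the weight of a shortest walk from s using at most k edges. The sketch below is an illustration of that classical procedure, not code from the paper; the function name and the (u, v, weight) edge-list format are assumptions made for the example.

    # O(hm) hop-bounded Bellman-Ford: h rounds, each relaxing all m edges
    # against the distances from the previous round.

    def hop_bounded_shortest_path(n, edges, s, t, h):
        """Shortest s-t path weight using at most h edges.
        `edges` is a list of (u, v, weight) triples over vertices 0..n-1."""
        INF = float("inf")
        dist = [INF] * n
        dist[s] = 0
        for _ in range(h):
            new_dist = dist[:]              # distances using at most one more edge
            for u, v, weight in edges:
                if dist[u] + weight < new_dist[v]:
                    new_dist[v] = dist[u] + weight
            dist = new_dist
        return dist[t]

    # Small undirected example (each edge listed in both directions):
    # with h = 2 the best s-t path is 0 -> 1 -> 2 of weight 5,
    # while h = 1 only allows the direct edge of weight 7.
    edges = [(0, 1, 4), (1, 0, 4), (1, 2, 1), (2, 1, 1), (0, 2, 7), (2, 0, 7)]
    print(hop_bounded_shortest_path(3, edges, 0, 2, 2))  # prints 5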