20 research outputs found
A Linear-Time Algorithm for Concave One-Dimensional Dynamic Programming
The least weight subsequence problem is a special case of the one-dimensional dynamic programming problem where D[i] = E[i]. The modified edit distance problem, which arises in molecular biology, geology, and speech recognition, can be decomposed into 2n copies of the problem.
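As a point of reference, the underlying recurrence admits an obvious quadratic-time solution. A minimal Python sketch (the weight function `w` is a stand-in for whatever cost arises in the application):

```python
def lws(n, w):
    """Naive O(n^2) least-weight-subsequence DP: E[j] = min over 0 <= k < j
    of E[k] + w(k, j), with E[0] = 0 and D[i] = E[i]."""
    E = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        E[j] = min(E[k] + w(k, j) for k in range(j))
    return E
```

For example, with the gap-style weight w(k, j) = (j - k)^2, the cheapest way to reach j is by unit steps, so E[j] = j.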
On three soft rectangle packing problems with guillotine constraints
We investigate how to partition a rectangular region of given length and height into rectangles of given areas using two-stage guillotine cuts, so as to minimize either (i) the sum of the perimeters, (ii) the largest perimeter, or (iii) the maximum aspect ratio of the rectangles. These problems play an important role in the ongoing Vietnamese land-allocation reform, as well as in the optimization of matrix multiplication algorithms. We show that the first problem can be solved to optimality, while the other two are NP-hard. We propose mixed integer programming (MIP) formulations and a binary-search-based approach for solving the NP-hard problems. Experimental analyses are conducted to compare the solution approaches in terms of computational efficiency and solution quality, for different objectives.
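To make the objectives concrete, here is a small illustrative sketch (not the paper's method): a two-stage guillotine layout first cuts the region into vertical strips, then cuts each strip horizontally, and the three objectives are evaluated on the resulting rectangles. It assumes the given areas sum exactly to the region's area.

```python
def two_stage_layout(W, H, columns):
    """Two-stage guillotine layout: first-stage vertical cuts create one strip
    per column (width proportional to its total area); second-stage horizontal
    cuts split each strip (heights proportional to the areas). Assumes the
    areas sum to W * H, so each rectangle gets exactly its prescribed area."""
    total = sum(sum(col) for col in columns)
    rects = []
    for col in columns:
        w = W * sum(col) / total
        for a in col:
            rects.append((w, H * a / sum(col)))
    return rects

def objectives(rects):
    """The three objectives from the abstract, evaluated on one layout."""
    perims = [2 * (w + h) for w, h in rects]
    aspects = [max(w / h, h / w) for w, h in rects]
    return sum(perims), max(perims), max(aspects)
```

Optimizing then amounts to searching over the assignments of areas to strips, which is what the MIP formulations encode.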
Speeding up dynamic programming with applications to molecular biology
Consider the problem of computing E[j] = min_{0 <= k <= j-1} {D[k] + w(k, j)}, for j = 1, ..., n, where w is a given weight function, D[0] is given, and for every k = 1, ..., n, D[k] is easily computable from E[k]. This problem appears as a subproblem in dynamic programming solutions to various problems. Obviously, it can be solved in time O(n^2), and for a general weight function no better algorithm is possible. We consider two dual cases that arise in applications. In the concave case, the weight function satisfies the quadrangle inequality: w(k, j) + w(l, j') <= w(l, j) + w(k, j'), for all k <= l <= j <= j'. In the convex case, the weight function satisfies the inverse quadrangle inequality. In both cases we show how to use the assumed property of w to derive an O(n log n) algorithm. Even better, linear-time algorithms are obtained if w satisfies the following additional closest zero property: for every two integers l and k, l < k, and real number a, the smallest zero of f(x) = w(l, x) - w(k, x) - a which is larger than l can be found in constant time.
Surprisingly, the two algorithms are also dual in the following sense. Both work in stages; in the j-th stage they compute E[j]. They maintain a set of candidates which satisfies the property that E[j] depends only on D[k] + w(k, j) for k's in the set. Moreover, each algorithm discards candidates from the set, and discarded candidates never rejoin it. To maintain such a set of candidates efficiently one uses the following "dual" data structures: a queue in the concave case and a stack in the convex case. The two algorithms speed up several dynamic programming routines that solve the problem above as a subproblem; the speed-up is from O(n^3) to O(n^2 log n) or O(n^2). Applications include algorithms for comparing DNA sequences, algorithms for determining the secondary structure of RNA, and algorithms used in speech recognition and geology.
One typical problem is the following: given the cost of substituting any pair of symbols and a convex cost function g for gaps (where g(r) is the cost of a gap of size r), compute the modified edit distance between the two given sequences.
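The concave case can be sketched as follows. This is a hedged reconstruction, not the authors' exact pseudocode: it assumes D[k] = E[k] and a weight satisfying the quadrangle inequality above, maintains a queue of candidates each owning an interval of future positions, and replaces the constant-time closest-zero step with a binary search, giving O(n log n).

```python
from collections import deque

def concave_lws(n, w):
    """Solve E[j] = min_{0<=k<j} (E[k] + w(k, j)) with E[0] = 0 in
    O(n log n), for w satisfying the quadrangle inequality (and D = E)."""
    E = [0.0] * (n + 1)
    val = lambda k, x: E[k] + w(k, x)
    # Queue of (candidate k, lo, hi): k is currently best for targets lo..hi.
    dq = deque([(0, 1, n)])
    for j in range(1, n + 1):
        while dq[0][2] < j:              # candidate's interval is exhausted
            dq.popleft()
        E[j] = val(dq[0][0], j)
        if j == n:
            break
        # Insert j as a candidate: by the quadrangle inequality, once j beats
        # an older candidate it beats it at every later position, so j's
        # winning region is a suffix; trim intervals from the back.
        while dq:
            k2, lo, hi = dq[-1]
            p = max(lo, j + 1)
            if p <= hi and val(j, p) <= val(k2, p):
                dq.pop()                 # j dominates k2's whole interval
            else:
                break
        if not dq or dq[-1][2] <= j:
            dq.append((j, j + 1, n))     # j owns everything ahead
        else:
            k2, lo, hi = dq[-1]
            p = max(lo, j + 1)
            if val(j, hi) <= val(k2, hi):
                a, b = p, hi             # binary search: first spot j wins
                while a < b:
                    m = (a + b) // 2
                    if val(j, m) <= val(k2, m):
                        b = m
                    else:
                        a = m + 1
                dq[-1] = (k2, lo, a - 1)
                dq.append((j, a, n))
            elif hi < n:                 # j covers the suffix freed by pops
                dq.append((j, hi + 1, n))
    return E
```

Discarded candidates never rejoin, so the queue operations total O(n), and each insertion costs at most one O(log n) binary search.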
Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms
One of the most fundamental problems in Computer Science is the Knapsack problem. Given a set of n items with different weights and values, it asks to pick the most valuable subset whose total weight is below a capacity threshold T. Despite its wide applicability in various areas in Computer Science, Operations Research, and Finance, the best known running time for the problem is O(Tn). The main result of our work is an improved algorithm running in time O(TD), where D is the number of distinct weights. Previously, faster runtimes for Knapsack were only possible when both weights and values are bounded by M and V respectively, running in time O(nMV) [Pisinger'99]. In comparison, our algorithm implies a bound of O(nM^2) without any dependence on V, or O(nV^2) without any dependence on M. Additionally, for the unbounded Knapsack problem, we provide an algorithm running in time O(M^2) or O(V^2). Both our algorithms match recent conditional lower bounds shown for the Knapsack problem [Cygan et al.'17, Künnemann et al.'17].
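For contrast with the O(TD) result, the classic O(Tn) dynamic program the abstract refers to looks like this (a standard textbook sketch, not the paper's algorithm):

```python
def knapsack(weights, values, T):
    """Classic O(T * n) 0/1 knapsack DP: best[c] is the maximum value
    achievable with total weight at most c."""
    best = [0] * (T + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(T, w - 1, -1):
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
    return best[T]
```

The O(TD) algorithm improves on this precisely when many items share a weight, since n can far exceed the number of distinct weights D.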
We also initiate a systematic study of general capacitated dynamic programming, of which Knapsack is a core problem. This problem asks to compute the maximum-weight path of length k in an edge- or node-weighted directed acyclic graph. In a graph with m edges, these problems are solvable by dynamic programming in time O(km), and we explore under which conditions the dependence on k can be eliminated. We identify large classes of graphs where this is possible and apply our results to obtain linear-time algorithms for the problem of k-sparse Delta-separated sequences. The main technical innovation behind our results is identifying and exploiting concavity that appears in relaxations and subproblems of the tasks we consider.
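The capacitated graph problem above has a straightforward O(km) dynamic program; a hedged sketch of the edge-weighted variant, assuming nodes are topologically ordered:

```python
def max_weight_k_edge_path(n, edges, k):
    """best[v] after t rounds = max weight of a path with exactly t edges
    ending at node v (-inf if none exists). Total work is O(k * m)."""
    NEG = float("-inf")
    best = [0.0] * n                     # every node starts a 0-edge path
    for _ in range(k):
        nxt = [NEG] * n
        for u, v, w in edges:            # assumes u precedes v topologically
            if best[u] != NEG and best[u] + w > nxt[v]:
                nxt[v] = best[u] + w
        best = nxt
    return max(best)
```

The paper's question is when the factor k in this running time can be removed, which is where the concavity structure enters.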
On the Fine-Grained Complexity of One-Dimensional Dynamic Programming
In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of n data items together with weights for every pair of the items, the task is to determine a subsequence S minimizing the total weight of the pairs adjacent in S. A large number of natural problems can be formulated as LWS problems, yielding obvious O(n^2)-time solutions.
In many interesting instances, the O(n^2)-many weights can be succinctly represented. Yet except for near-linear time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations have been known before. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem: Given an LWS instantiation we identify a highly parallel core problem that is subquadratically equivalent. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms as the case may be.
More specifically, we prove subquadratic equivalences between the following pairs of problems (an LWS instantiation and the corresponding core problem): a low-rank version of LWS and minimum inner product, finding the longest chain of nested boxes and vector domination, and a coin change problem which is closely related to the knapsack problem and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem. Furthermore, we revisit some of the LWS instantiations which are known to be solvable in near-linear time and explain their easiness in terms of the easiness of the corresponding core problems.
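Since (min,+)-convolution is the core problem behind the coin-change instantiation, here is the naive quadratic computation that the conditional lower bounds suggest cannot be beaten substantially (a plain illustration, not taken from the paper):

```python
def min_plus_convolution(a, b):
    """Naive O(len(a) * len(b)) (min,+)-convolution:
    c[k] = min over all i + j == k of a[i] + b[j]."""
    c = [float("inf")] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai + bj < c[i + j]:
                c[i + j] = ai + bj
    return c
```

Interpreting a[i] and b[j] as cheapest costs to reach amounts i and j, c[k] is the cheapest cost to reach amount k by combining the two, which is exactly the coin-change composition step.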
Distribution-aware compressed full-text indexes
In this paper we address the problem of building a compressed self-index that, given a distribution for the pattern queries and a bound on the space occupancy, minimizes the expected query time within that index space bound. We solve this problem by exploiting a reduction to the problem of finding a minimum-weight K-link path in a properly designed directed acyclic graph. Interestingly enough, our solution can be used with any compressed index based on the Burrows-Wheeler transform. Our experiments compare this optimal strategy with several other known approaches, showing its effectiveness in practice.
Sparse Dynamic Programming II: Convex and Concave Cost Functions
We consider dynamic programming solutions to a number of different recurrences for sequence comparison and for RNA secondary structure prediction. These recurrences are defined over a number of points that is quadratic in the input size; however, only a sparse set matters for the result. We give efficient algorithms for these problems, when the weight functions used in the recurrences are taken to be linear. Our algorithms reduce the best known bounds by a factor almost linear in the density of the problems: when the problems are sparse, this results in a substantial speed-up.