Lagrangian Data-Driven Reduced Order Modeling of Finite Time Lyapunov Exponents
There are two main strategies for improving the projection-based reduced
order model (ROM) accuracy: (i) improving the ROM, i.e., adding new terms to
the standard ROM; and (ii) improving the ROM basis, i.e., constructing ROM
bases that yield more accurate ROMs. In this paper, we use the latter. We
propose new Lagrangian inner products that we use together with Eulerian and
Lagrangian data to construct new Lagrangian ROMs. We show that the new
Lagrangian ROMs are orders of magnitude more accurate than the standard
Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to
construct the ROM basis. Specifically, for the quasi-geostrophic equations, we
show that the new Lagrangian ROMs are more accurate than the standard Eulerian
ROMs in approximating not only Lagrangian fields (e.g., the finite time
Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction).
We emphasize that the new Lagrangian ROMs do not employ any closure modeling to
model the effect of discarded modes (which is standard procedure for
low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase
in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian
inner products used to build the Lagrangian ROM basis.
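The abstract does not spell out the construction, but the underlying mechanism, building a POD/ROM basis with respect to a chosen inner product rather than the standard Eulerian one, can be sketched generically. In the sketch below the snapshot matrix, the weight matrix W encoding the inner product, and all function names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def pod_basis(snapshots, W, r):
        """Method-of-snapshots POD with respect to <u, v>_W = u^T W v.

        snapshots : (N, k) array whose columns are flow snapshots
        W         : (N, N) symmetric positive-definite weight matrix encoding
                    the chosen inner product (Eulerian or Lagrangian-data-based)
        r         : number of retained modes
        """
        S = np.asarray(snapshots, dtype=float)
        K = S.T @ (W @ S)                  # weighted k x k Gram matrix
        lam, V = np.linalg.eigh(K)         # eigenvalues in ascending order
        idx = np.argsort(lam)[::-1][:r]    # keep the r largest
        lam, V = lam[idx], V[:, idx]
        Phi = S @ V / np.sqrt(lam)         # modes, W-orthonormal: Phi^T W Phi = I
        return Phi, lam

    # Toy usage with the standard (identity-weighted) inner product.
    rng = np.random.default_rng(0)
    S = rng.standard_normal((200, 30))
    Phi, lam = pod_basis(S, np.eye(200), r=5)
    print(np.allclose(Phi.T @ Phi, np.eye(5)))  # True when W = I

Swapping in a different weight matrix W, for instance one assembled from Lagrangian data, changes only the basis; the Galerkin ROM built on top is otherwise unchanged, which is where the abstract attributes the accuracy gain.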
Longest Common Subsequence on Weighted Sequences
We consider the general problem of the Longest Common Subsequence (LCS) on weighted sequences. Weighted sequences are an extension of classical strings in which, at each position, every letter of the alphabet may occur with some probability. Previous work presented a PTAS and observed that no FPTAS is possible unless P=NP. In this paper we essentially close the gap between upper and lower bounds by improving both. First, we provide an EPTAS for bounded alphabets (the most natural case), and prove that no EPTAS exists for unbounded alphabets unless FPT=W[1]. Furthermore, under the Exponential Time Hypothesis, we provide a lower bound which shows that no significantly better PTAS can exist for unbounded alphabets. As a side note, we prove that it is sufficient to work with only one threshold in the general variant of the problem.
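To make the model concrete: a weighted sequence assigns each position a probability distribution over the alphabet, and a plain string is typically said to occur in it (here as a subsequence) when the product of the probabilities at the chosen positions meets a threshold. The sketch below only illustrates this occurrence test on an assumed list-of-dicts representation; it is not the PTAS/EPTAS of the paper.

    def max_subsequence_prob(weighted, pattern):
        """Maximum probability with which `pattern` occurs as a subsequence of
        the weighted sequence `weighted` (one letter -> probability dict per
        position). dp[j] = best product after matching the first j letters."""
        m = len(pattern)
        dp = [0.0] * (m + 1)
        dp[0] = 1.0
        for dist in weighted:
            # Go right-to-left so each position is used for at most one letter.
            for j in range(m, 0, -1):
                p = dist.get(pattern[j - 1], 0.0)
                if p > 0.0:
                    dp[j] = max(dp[j], dp[j - 1] * p)
        return dp[m]

    # A length-3 weighted sequence and a single threshold, as in the
    # one-threshold variant the abstract mentions.
    X = [{"a": 0.9, "b": 0.1}, {"a": 0.5, "b": 0.5}, {"b": 1.0}]
    print(max_subsequence_prob(X, "ab") >= 0.25)  # True: best product is 0.9 * 1.0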
Multivariate Fine-Grained Complexity of Longest Common Subsequence
We revisit the classic combinatorial pattern matching problem of finding a
longest common subsequence (LCS). For strings x and y of length n, a
textbook algorithm solves LCS in time O(n^2), but although much effort has
been spent, no O(n^{2-eps})-time algorithm is known. Recent work
indeed shows that such an algorithm would refute the Strong Exponential Time
Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann,
K\"unnemann FOCS'15].
Despite the quadratic-time barrier, for over 40 years an enduring scientific
interest continued to produce fast algorithms for LCS and its variations.
Particular attention was put into identifying and exploiting input parameters
that yield strongly subquadratic time algorithms for special cases of interest,
e.g., differential file comparison. This line of research was successfully
pursued until 1990, at which time significant improvements came to a halt. In
this paper, using the lens of fine-grained complexity, our goal is to (1)
justify the lack of further improvements and (2) determine whether some special
cases of LCS admit faster algorithms than currently known.
To this end, we provide a systematic study of the multivariate complexity of
LCS, taking into account all parameters previously discussed in the literature:
the input size n := max{|x|, |y|}, the length of the shorter string
m := min{|x|, |y|}, the length L of an LCS of x and y, the numbers of
deletions delta := m - L and Delta := n - L, the alphabet size, as well as
the numbers of matching pairs M and dominant pairs d. For any class of
instances defined by fixing each parameter individually to a polynomial in
terms of the input size, we prove a SETH-based lower bound matching one of
three known algorithms. Specifically, we determine the optimal running time for
LCS under SETH as (n + min{d, delta*Delta, delta*m})^{1 ± o(1)}.
[...] Comment: Presented at SODA'18. Full version, 66 pages.
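For reference, the textbook quadratic-time dynamic program mentioned in the abstract can be sketched in a few lines (Python is used here purely for illustration):

    def lcs_length(x, y):
        """Textbook O(|x| * |y|) dynamic program for the LCS length."""
        n, m = len(x), len(y)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if x[i - 1] == y[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    print(lcs_length("differential", "difference"))  # 8, i.e. "differen"

The parameters listed in the abstract (m, L, delta, Delta, M, d) are exactly the quantities that the known special-case algorithms exploit to beat these two nested loops.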
Towards Hardness of Approximation for Polynomial Time Problems
Proving hardness of approximation is a major challenge in the field of fine-grained complexity and conditional lower bounds in P.
How well can the Longest Common Subsequence (LCS) or the Edit Distance be approximated by an algorithm that runs in near-linear time?
In this paper, we make progress towards answering these questions.
We introduce a framework that exhibits barriers for truly subquadratic and deterministic algorithms with good approximation guarantees.
Our framework highlights a novel connection between deterministic approximation algorithms for natural problems in P and circuit lower bounds.
In particular, we discover a curious connection of the following form:
if there exists a delta>0 such that for all eps>0 there is a deterministic (1+eps)-approximation algorithm for LCS on two sequences of length n over an alphabet of size n^{o(1)} that runs in O(n^{2-delta}) time, then a certain plausible hypothesis is refuted, and the class E^NP does not have non-uniform linear size Valiant Series-Parallel circuits.
Thus, designing a "truly subquadratic PTAS" for LCS is as hard as resolving an old open question in complexity theory.