Parallel Longest Common SubSequence Analysis In Chapel
One of the most critical problems in the field of string algorithms is the
longest common subsequence (LCS) problem. The problem is NP-hard for an
arbitrary number of strings but can be solved in polynomial time for any fixed
number of strings. In this paper, we select a typical parallel LCS algorithm
and integrate it into our large-scale string analysis algorithm library to
support different types of large string analysis. Specifically, we take
advantage of the high-level parallel language, Chapel, to integrate Lu and
Liu's parallel LCS algorithm into Arkouda, an open-source framework. Through
Arkouda, data scientists can easily handle large string analytics on the
back-end high-performance computing resources from the front-end Python
interface. The Chapel-enabled parallel LCS algorithm can identify the longest
common subsequences of two strings, and experimental results are given to show
how the number of parallel resources and the length of input strings can affect
the algorithm's performance.Comment: The 27th Annual IEEE High Performance Extreme Computing Conference
(HPEC), Virtual, September 25-29, 202
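The textbook serial dynamic program that parallel LCS algorithms such as Lu and Liu's build on can be sketched in Python. This is a hypothetical illustration of the standard LCS recurrence, not the paper's Chapel/Arkouda code:

```python
# Textbook O(|x| * |y|) dynamic-programming LCS for two strings.
# dp[i][j] holds the LCS length of the prefixes x[:i] and y[:j].
def lcs(x: str, y: str) -> str:
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack through the table to recover one (of possibly many) LCS.
    out, i, j = [], len(x), len(y)
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # a length-4 LCS (one of several)
```

The parallel versions discussed in the paper compute the same table, typically processing its anti-diagonals concurrently, since all cells on one anti-diagonal depend only on the previous two.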
Multivariate Fine-Grained Complexity of Longest Common Subsequence
We revisit the classic combinatorial pattern matching problem of finding a
longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a
textbook algorithm solves LCS in time $O(n^2)$, but although much effort has
been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work
indeed shows that such an algorithm would refute the Strong Exponential Time
Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams FOCS'15; Bringmann,
K\"unnemann FOCS'15].
Despite the quadratic-time barrier, for over 40 years an enduring scientific
interest continued to produce fast algorithms for LCS and its variations.
Particular attention was put into identifying and exploiting input parameters
that yield strongly subquadratic time algorithms for special cases of interest,
e.g., differential file comparison. This line of research was successfully
pursued until 1990, at which time significant improvements came to a halt. In
this paper, using the lens of fine-grained complexity, our goal is to (1)
justify the lack of further improvements and (2) determine whether some special
cases of LCS admit faster algorithms than currently known.
To this end, we provide a systematic study of the multivariate complexity of
LCS, taking into account all parameters previously discussed in the literature:
the input size $n := |x| + |y|$, the length of the shorter string
$m := \min\{|x|, |y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of
deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size, as well as
the numbers of matching pairs $M$ and dominant pairs $d$. For any class of
instances defined by fixing each parameter individually to a polynomial in
terms of the input size, we prove a SETH-based lower bound matching one of
three known algorithms. Specifically, we determine the optimal running time for
LCS under SETH as $(n + \min\{d, \delta\Delta, \delta m\})^{1 \pm o(1)}$.
[...]
Comment: Presented at SODA'18. Full version. 66 pages.
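To make the abstract's parameters concrete, the following hypothetical Python helper (the name `lcs_parameters` is illustrative, not from the paper) computes $n$, $m$, $L$, $\delta$, $\Delta$, and the matching-pair count $M$ for small inputs; the dominant-pair count $d$ is omitted for brevity:

```python
# Illustrative sketch of the parameters studied in the paper, assuming the
# definitions n = |x| + |y|, m = min(|x|, |y|), L = LCS length,
# delta = m - L, Delta = n - L, and M = #{(i, j) : x[i] == y[j]}.
def lcs_parameters(x: str, y: str) -> dict:
    # LCS length via the textbook DP table.
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    L = dp[len(x)][len(y)]
    n = len(x) + len(y)
    m = min(len(x), len(y))
    M = sum(1 for a in x for b in y if a == b)  # matching pairs
    return {"n": n, "m": m, "L": L,
            "delta": m - L, "Delta": n - L, "M": M}

print(lcs_parameters("ABCBDAB", "BDCABA"))
```

For similar strings, $\delta$ and $\Delta$ are small while $M$ can be large, which is why the paper's optimal bound $(n + \min\{d, \delta\Delta, \delta m\})^{1 \pm o(1)}$ can be strongly subquadratic on such instances.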