Pattern Matching and Consensus Problems on Weighted Sequences and Profiles
We study pattern matching problems on two major representations of uncertain
sequences used in molecular biology: weighted sequences (also known as position
weight matrices, PWM) and profiles (i.e., scoring matrices). In the simple
version, in which only the pattern or only the text is uncertain, we obtain
efficient algorithms with theoretically-provable running times using a
variation of the lookahead scoring technique. We also consider a general
variant of the pattern matching problems in which both the pattern and the text
are uncertain. Central to our solution is a special case where the sequences
have equal length, called the consensus problem. We propose algorithms for the
consensus problem parameterized by the number of strings that match one of the
sequences. Our basic approach is a careful adaptation of the classic
meet-in-the-middle algorithm for the knapsack problem. On the lower
bound side, we prove that our dependence on the parameter is optimal up to
lower-order terms conditioned on the optimality of the original algorithm for
the knapsack problem.
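The meet-in-the-middle technique that the authors adapt can be illustrated on the classic subset-sum formulation of the knapsack problem. The sketch below is a generic illustration of that technique, not the paper's algorithm; the function names are ours:

```python
# Meet-in-the-middle for subset sum: split the items into two halves,
# enumerate all subset sums of each half, then sort one side and
# binary-search it for complements of the other side's sums.
from bisect import bisect_left

def subset_sums(items):
    """All subset sums of `items` (2^len(items) values)."""
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

def has_subset_with_sum(items, target):
    """True iff some subset of `items` sums to `target`.

    Runs in roughly O(2^(n/2) * n) time instead of O(2^n)."""
    half = len(items) // 2
    left = subset_sums(items[:half])
    right = sorted(subset_sums(items[half:]))
    for s in left:
        i = bisect_left(right, target - s)
        if i < len(right) and right[i] == target - s:
            return True
    return False
```

The halving is what makes the exponent drop: each half contributes only 2^(n/2) candidate sums, and the sorted side is searched in logarithmic time per candidate.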
Linear-time Computation of Minimal Absent Words Using Suffix Array
An absent word of a word y of length n is a word that does not occur in y. It
is a minimal absent word if all its proper factors occur in y. Minimal absent
words have been computed in genomes of organisms from all domains of life;
their computation provides a fast alternative for measuring approximation in
sequence comparison. There exists an O(n)-time and O(n)-space algorithm for
computing all minimal absent words on a fixed-sized alphabet based on the
construction of suffix automata (Crochemore et al., 1998). No implementation of
this algorithm is publicly available. There also exists an O(n^2)-time and
O(n)-space algorithm for the same problem based on the construction of suffix
arrays (Pinho et al., 2009). An implementation of this algorithm was also
provided by the authors and is currently the fastest available. In this
article, we bridge this unpleasant gap by presenting an O(n)-time and
O(n)-space algorithm for computing all minimal absent words based on the
construction of suffix arrays. Experimental results using real and synthetic
data show that the respective implementation outperforms the one by Pinho et
al.
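The definition of a minimal absent word can be made concrete with a brute-force sketch: since both the longest proper prefix and the longest proper suffix of a minimal absent word must occur in y, every candidate arises by extending an occurring factor by one letter on each side. This quadratic-space illustration is ours; the article's algorithm achieves O(n) time via suffix arrays:

```python
# Brute-force minimal absent words: w is a minimal absent word of y if
# w does not occur in y but every proper factor of w does. It suffices
# to check that w is absent while w[1:] and w[:-1] both occur.
def minimal_absent_words(y, alphabet=None):
    if alphabet is None:
        alphabet = sorted(set(y))
    # Collect every factor of y (Theta(n^2) of them), plus the empty word.
    factors = {y[i:j] for i in range(len(y)) for j in range(i + 1, len(y) + 1)}
    factors.add("")
    maws = set()
    for f in factors:
        for a in alphabet:
            for b in alphabet:
                w = a + f + b
                # w = a f b is minimal absent iff w is absent
                # but its proper factors a+f and f+b both occur.
                if w not in factors and a + f in factors and f + b in factors:
                    maws.add(w)
    return sorted(maws)
```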
Towards Distance-Based Phylogenetic Inference in Average-Case Linear-Time
Computing genetic evolution distances among a set of taxa dominates the running time of many phylogenetic inference methods. Most genetic evolution distance definitions rely, even if indirectly, on computing the pairwise Hamming distance among sequences or profiles. We propose here an average-case linear-time algorithm to compute pairwise Hamming distances among a set of taxa under a given Hamming distance threshold. This article includes both a theoretical analysis and extensive experimental results concerning the proposed algorithm. We further show how this algorithm can be successfully integrated into a well-known phylogenetic inference method.
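The benefit of a distance threshold is that a pairwise comparison can stop as soon as the mismatch count exceeds it. The sketch below shows only this early-exit idea under our own naming; it is not the paper's average-case linear-time algorithm, which avoids comparing most pairs position by position:

```python
# Pairwise Hamming distances under a threshold t: abandon a comparison
# as soon as the running mismatch count exceeds t, since the exact
# distance of a "far" pair no longer matters.
def hamming_bounded(s, t_seq, t):
    """Hamming distance of equal-length s and t_seq, capped at t + 1."""
    d = 0
    for a, b in zip(s, t_seq):
        if a != b:
            d += 1
            if d > t:
                return d  # early exit: the pair is farther than t
    return d

def close_pairs(seqs, t):
    """All index pairs (i, j), i < j, with Hamming distance <= t."""
    pairs = []
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if hamming_bounded(seqs[i], seqs[j], t) <= t:
                pairs.append((i, j))
    return pairs
```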
Reverse-Safe Data Structures for Text Indexing
We introduce the notion of reverse-safe data structures. These are data structures that prevent the reconstruction of the data they encode (i.e., they cannot be easily reversed). A data structure D is called z-reverse-safe when there exist at least z datasets with the same set of answers as the ones stored by D. The main challenge is to ensure that D stores as many answers to useful queries as possible, is constructed efficiently, and has size close to the size of the original dataset it encodes. Given a text of length n and an integer z, we propose an algorithm which constructs a z-reverse-safe data structure that has size O(n) and answers pattern matching queries of length at most d optimally, where d is maximal for any such z-reverse-safe data structure. The construction algorithm takes O(n^ω log d) time, where ω is the matrix multiplication exponent. We show that, despite the n^ω factor, our engineered implementation takes only a few minutes to finish for million-letter texts. We further show that plugging our method in data analysis applications gives insignificant or no data utility loss. Finally, we show how our technique can be extended to support applications under a realistic adversary model.
Size-constrained Weighted Ancestors with Applications
The weighted ancestor problem on a rooted node-weighted tree T is a
generalization of the classic predecessor problem: construct a data structure
for a set of integers that supports fast predecessor queries. Both problems are
known to require Ω(log log n) time for queries provided O(n polylog n)
space is available, where n is the input
size. The weighted ancestor problem has attracted a lot of attention from the
combinatorial pattern matching community due to its direct application to
suffix trees. In this formulation of the problem, the nodes are weighted by
string depth. This attention has culminated in a data structure for weighted
ancestors in suffix trees with O(1) query time and an
O(n)-time construction algorithm [Belazzougui et al., CPM 2021]. In
this paper, we consider a different version of the weighted ancestor problem,
where the nodes are weighted by any function w that maps the
nodes of T to positive integers, such that w(u) ≤ size(u) for any node u and
w(v) ≤ w(u) if node v is a descendant of node u, where
size(u) is the number of nodes in the subtree rooted at u. In the
size-constrained weighted ancestor (SWAQ) problem, for any node u of T and
any integer k, we are asked to return the lowest ancestor of u with
weight at least k. We show that for any rooted tree T with n nodes, we can
locate the sought node in O(1) time after O(n)-time
preprocessing. In particular, this implies a data structure for the SWAQ
problem in suffix trees with O(1) query time and
O(n)-time preprocessing, when the nodes are weighted by
the sizes of their subtrees. We also show several string-processing applications of this
result.
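The query itself is easy to state procedurally: because the weights are non-decreasing towards the root, the first ancestor met on the way up whose weight reaches k is the lowest one. The following O(depth)-time walk is our own naive baseline for illustration (it includes the queried node itself as a candidate, an assumption on our part); the paper's point is to answer such queries in constant time after linear preprocessing:

```python
# Naive size-constrained weighted ancestor query on a parent-array tree.
# weight[v] <= subtree size of v, and weights never decrease towards the
# root, so the first node with weight >= k on the root-to-u path (seen
# bottom-up) is the lowest such ancestor.
def swaq(parent, weight, u, k):
    """Lowest node on the root-to-u path with weight >= k, or None.

    parent[v] is v's parent; parent[root] == -1.
    """
    v = u
    while v != -1:
        if weight[v] >= k:
            return v
        v = parent[v]
    return None
```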
Linear-Time Superbubble Identification Algorithm for Genome Assembly
DNA sequencing is the process of determining the exact order of the
nucleotide bases of an individual's genome in order to catalogue sequence
variation and understand its biological implications. Whole-genome sequencing
techniques produce masses of data in the form of short sequences known as
reads. Assembling these reads into a whole genome constitutes a major
algorithmic challenge. Most assembly algorithms utilize de Bruijn graphs
constructed from reads for this purpose. A critical step of these algorithms is
to detect typical motif structures in the graph caused by sequencing errors and
genome repeats, and filter them out; one such complex subgraph class is a
so-called superbubble. In this paper, we propose an O(n+m)-time algorithm to
detect all superbubbles in a directed acyclic graph with n nodes and m
(directed) edges, improving the best-known O(m log m)-time algorithm by Sung et
al.
Substring Complexity in Sublinear Space
Shannon's entropy is a definitive lower bound for statistical compression.
Unfortunately, no such clear measure exists for the compressibility of
repetitive strings. Thus, ad-hoc measures are employed to estimate the
repetitiveness of strings, e.g., the size of the Lempel-Ziv parse or the
number of equal-letter runs of the Burrows-Wheeler transform. A more recent
one is the size of a smallest string attractor. Unfortunately, Kempa
and Prezza [STOC 2018] showed that computing is NP-hard. Kociumaka et
al. [LATIN 2020] considered a new measure that is based on the function
counting the cardinalities of the sets of substrings of each length of ,
also known as the substring complexity. This new measure is defined as and lower bounds all the measures previously
considered. In particular, always holds and can be
computed in time using working space. Kociumaka et
al. showed that if is given, one can construct an -sized representation of supporting efficient direct
access and efficient pattern matching queries on . Given that for highly
compressible strings, is significantly smaller than , it is natural
to pose the following question: Can we compute efficiently using
sublinear working space?
It is straightforward to show that any algorithm computing using
space requires time through a reduction
from the element distinctness problem [Yao, SIAM J. Comput. 1994]. We present
the following results: an -time and
-space algorithm to compute , for any ; and
an -time and -space algorithm to
compute , for any
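The measure δ itself is simple to compute when space is no object: count the distinct substrings of each length k and take the maximum of d_k/k. This naive sketch of ours uses Θ(n^2) space in its substring sets, which is exactly what the paper's sublinear-space algorithms avoid:

```python
# Substring complexity delta = max over k of d_k / k, where d_k is the
# number of distinct substrings of length k. Naive version: materialize
# the substring set for every length (Theta(n^2) space overall).
def delta(text):
    n = len(text)
    best = 0.0
    for k in range(1, n + 1):
        d_k = len({text[i:i + k] for i in range(n - k + 1)})
        best = max(best, d_k / k)
    return best
```

For example, "abaab" has two distinct letters, so d_1/1 = 2 already dominates the maximum, while a unary string like "aaaa" has d_k = 1 for every k and thus δ = 1.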
Longest Unbordered Factor in Quasilinear Time
A border u of a word w is a proper factor of w occurring both as a prefix and as a suffix of w. The maximal unbordered factor of w is the longest factor of w which does not have a border. Here we present an algorithm that computes the Longest Unbordered Factor Array of w for general alphabets in O(n log n) time with high probability (or in O(n log n log^2 log n) deterministic time), where n is the length of w. This array specifies the length of the maximal unbordered factor starting at each position of w. This is a major improvement on the running time of the previously best worst-case algorithm, which works in O(n^{1.5}) time for integer alphabets [Gawrychowski et al., 2015].
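The definitions above can be checked directly with a brute-force baseline: a factor is unbordered when no non-empty proper prefix equals the suffix of the same length. This cubic sketch (our own, assuming borders are non-empty) finds one maximal unbordered factor, in contrast to the paper's quasilinear computation of the whole array:

```python
# Brute-force maximal unbordered factor. A border is a non-empty proper
# factor that is both a prefix and a suffix; single letters are
# unbordered by definition.
def is_unbordered(w):
    return all(w[:k] != w[-k:] for k in range(1, len(w)))

def maximal_unbordered_factor(w):
    """A longest factor of w that has no border."""
    n = len(w)
    for length in range(n, 0, -1):          # try longer factors first
        for i in range(n - length + 1):
            f = w[i:i + length]
            if is_unbordered(f):
                return f
    return ""
```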