RLZAP: Relative Lempel-Ziv with Adaptive Pointers
Relative Lempel-Ziv (RLZ) is a popular algorithm for compressing databases of
genomes from individuals of the same species when fast random access is
desired. With Kuruppu et al.'s (SPIRE 2010) original implementation, a
reference genome is selected and then the other genomes are greedily parsed
into phrases exactly matching substrings of the reference. Deorowicz and
Grabowski (Bioinformatics, 2011) pointed out that letting each phrase end with
a mismatch character usually gives better compression because many of the
differences between individuals' genomes are single-nucleotide substitutions.
Ferrada et al. (SPIRE 2014) then pointed out that also using relative pointers
and run-length compressing them usually gives even better compression. In this
paper we generalize Ferrada et al.'s idea to also handle short insertions,
deletions and multi-character substitutions. We show experimentally that our
generalization achieves better compression than Ferrada et al.'s
implementation, with comparable random-access times.
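A minimal sketch of the greedy RLZ parsing described above, in the mismatch-terminated variant of Deorowicz and Grabowski, might look as follows. The function name is illustrative, and the naive `str.find` scan stands in for the suffix-array index a real implementation would use:

```python
def rlz_parse(reference: str, target: str):
    """Greedily parse `target` into phrases, each an exact match to a
    substring of `reference` followed by one literal mismatch character
    (the Deorowicz-Grabowski variant).  Returns (pos, length, char) triples."""
    phrases = []
    i = 0
    while i < len(target):
        # Find the longest prefix of target[i:] occurring in reference.
        # (A real implementation uses a suffix array; this linear scan
        # is only for illustration.)
        best_len, best_pos = 0, 0
        length = 1
        while i + length <= len(target):
            pos = reference.find(target[i:i + length])
            if pos < 0:
                break
            best_len, best_pos = length, pos
            length += 1
        mismatch = target[i + best_len] if i + best_len < len(target) else ''
        phrases.append((best_pos, best_len, mismatch))
        i += best_len + 1
    return phrases
```

Each phrase `(pos, length, char)` copies `length` characters of the reference starting at `pos` and then emits the literal mismatch character, so the target can be reconstructed from the reference and the phrase list alone.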
Which Regular Expression Patterns are Hard to Match?
Regular expressions constitute a fundamental notion in formal language theory
and are frequently used in computer science to define search patterns. A
classic algorithm for these problems constructs and simulates a
non-deterministic finite automaton corresponding to the expression, resulting
in an O(mn) running time (where m is the length of the pattern and n is
the length of the text). This running time can be improved slightly (by a
the length of the text). This running time can be improved slightly (by a
polylogarithmic factor), but no significantly faster solutions are known. At
the same time, much faster algorithms exist for various special cases of
regular expressions, including dictionary matching, wildcard matching, subset
matching and the word break problem.
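To make the quadratic behaviour concrete, here is a textbook O(mn) dynamic program for a restricted pattern language (literals, `.` wildcards, and single-character Kleene stars); the full NFA-simulation algorithm for general regular expressions has the same O(mn) shape. This is an illustrative baseline, not an algorithm from the paper:

```python
def matches(text: str, pattern: str) -> bool:
    """O(m*n) membership test for a restricted pattern language:
    literals, '.' wildcards, and 'c*' (zero or more repetitions of c).
    dp[i][j] answers: does text[i:] match pattern[j:]?"""
    n, m = len(text), len(pattern)
    dp = [[False] * (m + 1) for _ in range(n + 1)]
    dp[n][m] = True  # empty text matches empty pattern
    for i in range(n, -1, -1):
        for j in range(m - 1, -1, -1):
            first = i < n and pattern[j] in (text[i], '.')
            if j + 1 < m and pattern[j + 1] == '*':
                # Either skip 'c*' entirely, or consume one matching char.
                dp[i][j] = dp[i][j + 2] or (first and dp[i + 1][j])
            else:
                dp[i][j] = first and dp[i + 1][j + 1]
    return dp[0][0]
```

The table has (n+1)(m+1) cells, each filled in constant time, which is exactly the O(mn) cost the abstract refers to.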
In this paper, we show that the complexity of regular expression matching can
be characterized based on its {\em depth} (when interpreted as a formula). Our
results hold for expressions involving concatenation, OR, Kleene star and
Kleene plus. For regular expressions of depth two (involving any combination of
the above operators), we show the following dichotomy: matching and membership
testing can be solved in near-linear time, except for "concatenations of
stars", which cannot be solved in strongly sub-quadratic time assuming the
Strong Exponential Time Hypothesis (SETH). For regular expressions of depth
three the picture is more complex. Nevertheless, we show that all problems can
either be solved in strongly sub-quadratic time, or cannot be solved in
strongly sub-quadratic time assuming SETH.
An intriguing special case of membership testing involves regular expressions
of the form "a star of an OR of concatenations". This corresponds to the
so-called {\em word break} problem, for which a dynamic programming algorithm
with a runtime of (roughly) O(nm^{1/2}) is known. We show that the latter
bound is not tight and improve the runtime to O(nm^{0.44...}).
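The baseline dynamic program for the word break problem can be sketched as follows; this is the textbook formulation with the dictionary in a hash set, not the improved algorithm of this paper:

```python
def word_break(text: str, words: set) -> bool:
    """Can `text` be segmented into a concatenation of strings drawn
    (with repetition) from `words`?  reachable[i] is True iff text[:i]
    is a valid concatenation of dictionary words."""
    max_len = max(map(len, words), default=0)
    reachable = [False] * (len(text) + 1)
    reachable[0] = True  # the empty prefix is trivially segmentable
    for i in range(1, len(text) + 1):
        # Only the last max_len characters can form the final word.
        for j in range(max(0, i - max_len), i):
            if reachable[j] and text[j:i] in words:
                reachable[i] = True
                break
    return reachable[len(text)]
```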
Searching, clustering and evaluating biological sequences
The latest generation of biological sequencing technologies has made
it possible to generate sequence data faster and more cheaply than ever
before. The growth of sequence data has been exponential and has so far
outpaced the rate of improvement in computer speed and capacity. This
rate of growth, however, makes analysis of new datasets increasingly
difficult, and highlights the need for efficient, scalable and modular
software tools.
Fortunately, most types of analysis of sequence data involve a few
fundamental operations. Here we study three such problems, namely
searching for local alignments between two sets of sequences,
clustering sequences, and evaluating the assemblies made from sequence
fragments. We present simple and efficient heuristic algorithms for
these problems, as well as open source software tools which implement
these algorithms.
First, we present approximate seeds, a new type of seed for local
alignment search. Approximate seeds are a generalization of exact
seeds and spaced seeds, in that they allow for insertions and
deletions within the seed. We prove that approximate seeds are
completely sensitive. We also show how to efficiently find approximate
seeds using a suffix array index of the sequences.
Next, we present DNACLUST, a tool for clustering millions of DNA
sequence fragments. Although DNACLUST was designed primarily for
clustering 16S ribosomal RNA sequences, it can be used for other
tasks, such as removing duplicate or near duplicate sequences from a
dataset.
Finally, we present a framework for comparing (two or more) assemblies
built from the same set of reads. Our evaluation requires the set of
reads and the assemblies only, and does not require the true genome
sequence. Therefore our method can be used in de novo assembly
projects, where the true genome is not known. Our score is based on
probability theory, and the true genome is expected to obtain the
maximum score.
RasBhari: optimizing spaced seeds for database searching, read mapping and alignment-free sequence comparison
Many algorithms for sequence analysis rely on word matching or word
statistics. Often, these approaches can be improved if binary patterns
representing match and don't-care positions are used as a filter, such that
only those positions of words are considered that correspond to the match
positions of the patterns. The performance of these approaches, however,
depends on the underlying patterns. Herein, we show that the overlap complexity
of a pattern set that was introduced by Ilie and Ilie is closely related to the
variance of the number of matches between two evolutionarily related sequences
with respect to this pattern set. We propose a modified hill-climbing algorithm
to optimize pattern sets for database searching, read mapping and
alignment-free sequence comparison of nucleic-acid sequences; our
implementation of this algorithm is called rasbhari. Depending on the
application at hand, rasbhari can either minimize the overlap complexity of
pattern sets, maximize their sensitivity in database searching or minimize the
variance of the number of pattern-based matches in alignment-free sequence
comparison. We show that, for database searching, rasbhari generates pattern
sets with slightly higher sensitivity than existing approaches. In our Spaced
Words approach to alignment-free sequence comparison, pattern sets calculated
with rasbhari led to more accurate estimates of phylogenetic distances than the
randomly generated pattern sets that we previously used. Finally, we used
rasbhari to generate patterns for short read classification with CLARK-S. Here
too, the sensitivity of the results could be improved, compared to the default
patterns of the program. We integrated rasbhari into Spaced Words; the source
code of rasbhari is freely available at http://rasbhari.gobics.de.
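The basic filtering operation that such pattern sets feed into, extracting spaced words under a binary match/don't-care pattern, can be sketched as follows. This is an illustrative fragment, not rasbhari's own code, and the example pattern in the usage note is arbitrary:

```python
def spaced_words(sequence: str, pattern: str):
    """Extract the spaced words of `sequence` under a binary pattern,
    where '1' marks a match position and '0' a don't-care position.
    Two sequence windows match under the pattern iff their spaced
    words are equal."""
    match_positions = [i for i, b in enumerate(pattern) if b == '1']
    words = []
    for start in range(len(sequence) - len(pattern) + 1):
        # Keep only the characters at match positions; don't-care
        # positions are simply dropped from the word.
        words.append(''.join(sequence[start + i] for i in match_positions))
    return words
```

For example, under the pattern `"101"` the window `"ACG"` yields the spaced word `"AG"`: the middle position is a don't-care, so a substitution there does not break the match.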
A two-phase algorithm for mining sequential patterns with differential privacy
Frequent sequential pattern mining is a central task in many fields such as biology and finance. However, the release of these patterns raises increasing concerns about individual privacy. In this paper, we study the sequential pattern mining problem under the differential privacy framework, which provides formal and provable guarantees of privacy. Due to the nature of the differential privacy mechanism, which perturbs the frequency results with noise, and the high dimensionality of the pattern space, this mining problem is particularly challenging. In this work, we propose a novel two-phase algorithm for mining both prefixes and substring patterns. In the first phase, our approach takes advantage of the statistical properties of the data to construct a model-based prefix tree which is used to mine prefixes and a candidate set of substring patterns. The frequency of the substring patterns is further refined in the successive phase, where we employ a novel transformation of the original data to reduce the perturbation noise. Extensive experimental results using real datasets show that our approach is effective for mining both substring and prefix patterns in comparison to state-of-the-art solutions.
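The frequency perturbation referred to above is typically realized with the Laplace mechanism. A minimal sketch for a single count of sensitivity 1 (the standard mechanism, not the paper's two-phase algorithm; the function name is illustrative) might look like:

```python
import random

def laplace_noisy_count(true_count: float, epsilon: float,
                        sensitivity: float = 1.0) -> float:
    """Release a frequency count under epsilon-differential privacy by
    adding Laplace(sensitivity/epsilon) noise.  The difference of two
    independent Exp(1) variates, scaled by b, is Laplace(0, b)."""
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise
```

Smaller values of `epsilon` give stronger privacy but larger noise, which is exactly the tension the paper's second phase tries to mitigate by transforming the data before adding noise.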
Effective similarity measures in electronic testing at programming languages
The purpose of this study is to explore the grammatical properties and features of the generalized n-gram matching technique in electronic testing of programming languages. The n-gram matching technique has been successfully employed in information handling and decision support systems dealing with texts, but its drawback is the size n, which tends to be rather large. Two new methods, odd gram and sumsquare gram, are proposed as improvements of generalized n-gram matching, together with modifications of existing methods. While generalized n-grams are easy to generate and manage, they require quadratic time and space; the proposed and modified methods are likewise quadratic in nature. Experiments were conducted with the two new and modified methods using real-life programming code assignments as pattern and text matches, and the results were compared with existing methods that are among the best in practice. The experimental results are very positive and suggest that the proposed methods can be successfully applied in electronic testing of programming languages.
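As an illustration of the kind of n-gram similarity measure discussed, here is a standard Dice coefficient over n-gram multisets; the paper's odd gram and sumsquare gram variants are not reproduced here:

```python
from collections import Counter

def ngram_similarity(a: str, b: str, n: int = 2) -> float:
    """Dice similarity of two strings over their n-gram multisets:
    2 * |shared n-grams| / (|n-grams of a| + |n-grams of b|).
    Counter's & operator takes the multiset minimum per n-gram."""
    def grams(s: str) -> Counter:
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))
    ga, gb = grams(a), grams(b)
    overlap = sum((ga & gb).values())
    total = sum(ga.values()) + sum(gb.values())
    return 2 * overlap / total if total else 1.0
```

Applied to two code submissions, a score near 1 indicates largely shared n-gram content, which is the signal an electronic test would use to flag similar answers.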