    Fast Computation of Abelian Runs

    Given a word $w$ and a Parikh vector $\mathcal{P}$, an abelian run of period $\mathcal{P}$ in $w$ is a maximal occurrence of a substring of $w$ having abelian period $\mathcal{P}$. Our main result is an online algorithm that, given a word $w$ of length $n$ over an alphabet of cardinality $\sigma$ and a Parikh vector $\mathcal{P}$, returns all the abelian runs of period $\mathcal{P}$ in $w$ in time $O(n)$ and space $O(\sigma+p)$, where $p$ is the norm of $\mathcal{P}$, i.e., the sum of its components. We also present an online algorithm that computes all the abelian runs with periods of norm $p$ in $w$ in time $O(np)$, for any given norm $p$. Finally, we give an $O(n^2)$-time offline randomized algorithm for computing all the abelian runs of $w$. Its deterministic counterpart runs in $O(n^2\log\sigma)$ time. Comment: To appear in Theoretical Computer Science.
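
    To make the definitions concrete, the Python sketch below computes Parikh vectors and naively tests whether a word has abelian period $\mathcal{P}$, allowing head and tail blocks whose Parikh vectors are component-wise contained in $\mathcal{P}$. This is only an illustration with made-up names; it is a quadratic check, not the paper's linear-time online algorithm, and an abelian run is then a maximal occurrence of a substring passing this test.

        from collections import Counter

        def parikh(s, alphabet):
            # Parikh vector of s: number of occurrences of each letter, in alphabet order.
            counts = Counter(s)
            return tuple(counts.get(a, 0) for a in alphabet)

        def contained(q, p):
            # q is component-wise at most p.
            return all(x <= y for x, y in zip(q, p))

        def has_abelian_period(w, p, alphabet):
            # Naive check: w = u0 u1 ... u_{k-1} uk with Parikh(u_i) = p for the middle
            # blocks and Parikh(u0), Parikh(uk) contained in p. Every middle block has
            # length equal to the norm of p, so it suffices to try all head lengths.
            norm = sum(p)
            for head in range(min(norm, len(w) + 1)):
                if not contained(parikh(w[:head], alphabet), p):
                    continue
                i = head
                while i + norm <= len(w) and parikh(w[i:i + norm], alphabet) == p:
                    i += norm
                if contained(parikh(w[i:], alphabet), p):
                    return True
            return False

        print(has_abelian_period("abaab", (2, 1), "ab"))   # True: "aba" followed by tail "ab"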

    Lyndon Arrays in Sublinear Time

    The input is a string of length $n$ over an alphabet $\{1, \dots, \sigma\}$ with $\sigma \le n$. In this case, the string can be stored in $O(n \log \sigma)$ bits (or $O(n / \log_\sigma n)$ words) of memory, and reading it takes only $O(n / \log_\sigma n)$ time. We show that $O(n / \log_\sigma n)$ time and words of space suffice to compute the succinct $2n$-bit version of the Lyndon array. The time is optimal for $w = O(\log n)$ (where $w$ is the machine word size). The algorithm uses precomputed lookup tables to perform significant parts of the computation in constant time. This is possible due to properties of periodic substrings, which we carefully analyze to achieve the desired result. We envision that the algorithm has applications in the computation of runs (maximal periodic substrings), where the Lyndon array plays a central role in both theoretically and practically fast algorithms.
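
    For readers new to the object, the following naive Python reference (quadratic-time string comparisons, nothing like the sublinear-time, lookup-table-based algorithm described above) computes the Lyndon array via the known characterization through the next lexicographically smaller suffix.

        def lyndon_array(s):
            # lam[i] = length of the longest Lyndon word starting at position i, using the
            # characterization lam[i] = nss(i) - i, where nss(i) is the smallest j > i whose
            # suffix s[j:] is lexicographically smaller than s[i:] (or n if no such j exists).
            n = len(s)
            lam = [0] * n
            for i in range(n):
                j = i + 1
                while j < n and s[j:] >= s[i:]:
                    j += 1
                lam[i] = j - i
            return lam

        print(lyndon_array("banana"))   # [1, 2, 1, 2, 1, 1]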

    Algorithms and Lower Bounds for Ordering Problems on Strings

    This dissertation presents novel algorithms and conditional lower bounds for a collection of string and text-compression-related problems. These results are unified under the theme of ordering constraint satisfaction. Utilizing the connections to ordering constraint satisfaction, we provide hardness results and algorithms for the following: recognizing a type of labeled graph amenable to text-indexing known as Wheeler graphs, minimizing the number of maximal unary substrings occurring in the Burrows-Wheeler Transformation of a text, minimizing the number of factors occurring in the Lyndon factorization of a text, and finding an optimal reference string for relative Lempel-Ziv encoding.
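
    One of the objects above, the Lyndon factorization, depends on the chosen alphabet order; the Python sketch below (an illustration, not the dissertation's algorithms) computes it with Duval's classical algorithm, taking the order as a parameter so that the quantity being minimized, the number of factors, can be seen to vary with the ordering.

        def lyndon_factorization(s, order=None):
            # Duval's algorithm: factorizes s into a non-increasing sequence of Lyndon words.
            # `order` optionally lists the letters from smallest to largest, overriding the
            # standard ordering; the dissertation seeks the ordering minimizing the factor count.
            key = (lambda c: order.index(c)) if order else (lambda c: c)
            n, i, factors = len(s), 0, []
            while i < n:
                j, k = i + 1, i
                while j < n and key(s[k]) <= key(s[j]):
                    k = i if key(s[k]) < key(s[j]) else k + 1
                    j += 1
                while i <= k:
                    factors.append(s[i:i + j - k])
                    i += j - k
            return factors

        print(lyndon_factorization("banana"))               # ['b', 'an', 'an', 'a']  (4 factors)
        print(lyndon_factorization("banana", order="nba"))  # ['ba', 'na', 'na']      (3 factors)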

    Processing and indexing large biological datasets using the Burrows-Wheeler Transform of string collections

    In the last few decades, the advent of next-generation sequencing (NGS) technologies has dramatically reduced the cost of DNA sequencing. This has made it possible to sequence many genomes in very little time, paving the way for projects which aim at the creation of large and repetitive collections of genomic sequences. The abundance of biological data is driving the development of new memory-efficient algorithms and data structures that can scale to large datasets, thus tackling the high computational burden related to processing these data. This trend has a strong impact on the text algorithms area.

    In this thesis, we study the Burrows-Wheeler Transform for processing, indexing, and compressing collections of strings. Data compression addresses the problem of encoding the input to reduce the space needed for storing it, while text indexing focuses on finding ways to efficiently process and extract information from the data. In bioinformatics, these two concepts have frequently been used together, since they allow the design of data structures that can efficiently process biological data while keeping the input compressed. The Burrows-Wheeler Transform (BWT) is a reversible transformation on strings, introduced by Michael Burrows and David J. Wheeler in 1994, that plays a central role in this area. It is the key component of several compressed data structures for text processing, like the FM-index [Ferragina and Manzini, SODA, 2000] or the r-index [Gagie et al., SODA, 2018], and of some of the most important software tools in bioinformatics, such as the well-known Bowtie [Langmead et al., Genome Biology, 2009] and BWA [Li and Durbin, Bioinformatics, 2010].

    The BWT was originally defined for individual strings, so when the focus moved from single sequences to string collections, there was the need to extend this transform. Over the years, several different tools and algorithms for computing the BWT of string collections were introduced. However, even though the transforms generated by these tools frequently differ from each other, the problem of characterizing the BWT variants was never addressed properly. In this thesis, we close this gap by presenting the first systematic study of the BWT of string collections. We identified five non-equivalent variants computed by the tools in current use and analyzed their properties to show how exactly they differ. We complete our theoretical analysis by comparing the five BWT variants on several real-life biological datasets. We show that not only can the differences among the resulting transforms be extensive, but they also lead to significant changes in the compressibility of the BWT of the underlying string collection.

    As a further complication, the BWT variants in use often depend on the input order of the sequences. This significantly impacts the number of runs r, which defines the size of BWT-based compressed data structures. In this thesis, we address the problem of reordering the input sequences by providing the first implementation of the algorithm of Bentley et al. [ESA 2020], which computes the order minimizing the number of runs of the BWT. This leads to the first tool for computing the optimal BWT, i.e., the BWT variant which guarantees the minimum number of runs. We show experimentally that the input order can dramatically affect the final result: on our real-life datasets, the optimal BWT had up to 31 times fewer runs than the BWT computed without reordering the input sequences.

    The extended BWT (eBWT) of Mantaci et al. [Theor. Comput. Sci. 2007] is one of the first BWT variants explicitly designed to process string collections. Even though this transform is mathematically sound and has useful properties, its construction has been a problem for more than a decade. In this thesis, we present two linear-time algorithms for computing the eBWT of large string collections. The first is an improvement of the Bijective BWT construction algorithm of Bannai et al. [CPM 2019], while the second uses the Prefix-free parsing (PFP) method [Boucher et al., Algorithms Mol. Biol., 2019] to specifically process large and repetitive genomic sequence collections.

    In the final part of the thesis, we study, for the first time, how to index string collections using the eBWT. We present the extended r-index, an extension of the r-index to the eBWT, which maintains the same performance as the original r-index while inheriting the properties of the eBWT. We implemented this data structure using a variant of the PFP algorithm and tested it on real-life biological datasets containing circular bacterial genomes and plasmids. We show experimentally that our index has competitive query times compared to the r-index on different pattern lengths, while supporting advanced pattern matching functionalities on circular sequences.
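
    To make the role of the input order concrete, here is a small Python sketch (my own illustration, not one of the tools or algorithms discussed above): it builds a naive BWT of a concatenated collection, counts the equal-letter runs r, and brute-forces over input orders, whereas the thesis relies on the efficient algorithm of Bentley et al. The collection-BWT variant used here, with a distinct positional end-marker per sequence, is just one of the several non-equivalent definitions compared in the thesis.

        from itertools import permutations

        def bwt(text):
            # Naive BWT: sort all rotations of `text` and take the last column.
            rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
            return "".join(rot[-1] for rot in rotations)

        def runs(s):
            # Number of maximal equal-letter runs.
            return sum(1 for i in range(len(s)) if i == 0 or s[i] != s[i - 1])

        def collection_bwt(seqs):
            # One naive variant: append a distinct (positional) end-marker to each sequence,
            # concatenate in the given input order, and take the BWT of the result.
            return bwt("".join(s + chr(i + 1) for i, s in enumerate(seqs)))

        seqs = ["GATTACA", "GATTAGA", "CATTACA"]
        by_order = {order: runs(collection_bwt(order)) for order in permutations(seqs)}
        print(runs(collection_bwt(seqs)), min(by_order.values()))  # r for the given order vs. the best order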

    Grammar Boosting: A New Technique for Proving Lower Bounds for Computation over Compressed Data

    Grammar compression is a general compression framework in which a string $T$ of length $N$ is represented as a context-free grammar of size $n$ whose language contains only $T$. In this paper, we focus on studying the limitations of algorithms and data structures operating on strings in grammar-compressed form. Previous work focused on proving lower bounds for grammars constructed using algorithms that achieve the approximation ratio $\rho=\mathcal{O}(\text{polylog } N)$. Unfortunately, for the majority of grammar compressors, $\rho$ is either unknown or satisfies $\rho=\omega(\text{polylog } N)$. In their seminal paper, Charikar et al. [IEEE Trans. Inf. Theory 2005] studied seven popular grammar compression algorithms: RePair, Greedy, LongestMatch, Sequential, Bisection, LZ78, and $\alpha$-Balanced. Only one of them ($\alpha$-Balanced) is known to achieve $\rho=\mathcal{O}(\text{polylog } N)$. We develop the first technique for proving lower bounds for data structures and algorithms on grammars that is fully general and does not depend on the approximation ratio $\rho$ of the used grammar compressor. Using this technique, we first prove that $\Omega(\log N/\log\log N)$ time is required for random access on RePair, Greedy, LongestMatch, Sequential, and Bisection, while $\Omega(\log\log N)$ time is required for random access to LZ78. All these lower bounds hold within space $\mathcal{O}(n\,\text{polylog } N)$ and match the existing upper bounds. We also generalize this technique to prove several conditional lower bounds for compressed computation. For example, we prove that unless the Combinatorial $k$-Clique Conjecture fails, there is no combinatorial algorithm for CFG parsing on Bisection (for which $\rho=\tilde{\Theta}(N^{1/2})$ holds) that runs in $\mathcal{O}(n^c\cdot N^{3-\epsilon})$ time for all constants $c>0$ and $\epsilon>0$. Previously, this was known only for $c<2\epsilon$.
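
    For readers unfamiliar with the model, the following Python sketch illustrates grammar-compressed random access in general terms (it is not the paper's construction nor any of the compressors named above): a straight-line program with cached expansion lengths, where accessing a single character means walking down the derivation, which is exactly the kind of operation the lower bounds above constrain.

        class SLP:
            # A straight-line program: every rule is either a terminal character or an ordered
            # pair of previously defined rules; the string is the expansion of the last rule.
            def __init__(self):
                self.rules = []    # ('T', char) or ('N', left_id, right_id)
                self.length = []   # expansion length of each rule

            def terminal(self, c):
                self.rules.append(('T', c))
                self.length.append(1)
                return len(self.rules) - 1

            def pair(self, left, right):
                self.rules.append(('N', left, right))
                self.length.append(self.length[left] + self.length[right])
                return len(self.rules) - 1

            def access(self, rule, i):
                # Return the i-th character of the expansion of `rule`, walking down the
                # derivation tree; the cost is proportional to the derivation depth.
                node = self.rules[rule]
                while node[0] == 'N':
                    _, left, right = node
                    if i < self.length[left]:
                        node = self.rules[left]
                    else:
                        i -= self.length[left]
                        node = self.rules[right]
                return node[1]

        g = SLP()
        a, b = g.terminal('a'), g.terminal('b')
        x = g.pair(a, b)             # expands to "ab"
        for _ in range(3):
            x = g.pair(x, x)         # each step doubles the expansion
        print(g.length[x], g.access(x, 5))   # 16 characters; the character at index 5 is 'b'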

    Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve

    Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:
    - The $O(nN\sqrt{\log(N/n)})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.)
    - Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture.
    - We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
    Comment: Presented at FOCS'17. Full version. 63 pages.
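
    The n-versus-N gap motivating the question is easy to see on repetitive data. The short Python sketch below is only an illustration: it uses a Lempel-Ziv-family compressor from the standard library rather than the grammar compressions studied in the paper, and the counting query is a hypothetical stand-in for the analysis tasks above; it simply runs the decompress-and-solve baseline.

        import zlib

        T = "ACGT" * 250_000                  # a highly repetitive text of length N = 1,000,000
        C = zlib.compress(T.encode())         # Lempel-Ziv-family (DEFLATE) compression of size n
        print(len(T), len(C))                 # N versus n: the compressed size is far smaller

        # Decompress-and-solve baseline: pay for expanding the text first,
        # then run an ordinary algorithm on the plain string.
        plain = zlib.decompress(C).decode()
        print(plain.count("CGTA"))            # a trivial stand-in for an analysis task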

    Sensitivity of the Burrows-Wheeler Transform to small modifications, and other problems on string compressors in Bioinformatics

    An extensive amount of data is produced in textual form nowadays, especially in bioinformatics. Several algorithms exist to store and process this data efficiently in compressed space. In this thesis, we focus on both combinatorial and practical aspects of two of the most widely used algorithms for compressing text in bioinformatics: the Burrows-Wheeler Transform (BWT) and Lempel-Ziv compression (LZ77).

    In the first part, we focus on combinatorial aspects of the BWT. Given a word v, r = r(v) denotes the number of maximal equal-letter runs in BWT(v). First, we investigate the relationship between r of a word and r of its reverse. We prove that there exist words for which these two values differ by a logarithmic factor in the length of the word. In other words, although the repetitiveness in the two words is preserved, the number of runs can change by a non-constant factor. This suggests that the number of runs may not be an ideal repetitiveness measure.

    The second combinatorial aspect we are interested in is how small alterations in a word may affect its BWT in a relevant way. We prove that the number of runs of the BWT of a word can change (increase or decrease) by up to a logarithmic factor in the length of the word by just adding, removing, or substituting a single character. We then consider the special character $ used in real-life applications to mark the end of a word. We investigate the impact of this character on words with respect to the BWT. We characterize positions in a word where $ can be inserted in order to turn it into the BWT of a $-terminated word over the same alphabet. We show that whether and where $ is allowed depends entirely on the structure of a specific permutation of the indices of the word, which is called the standard permutation of the word.

    The final part of this thesis treats more applied aspects of text compressors. In bioinformatics, BWT-based compressed data structures are widely used for pattern matching. We give an algorithm based on the BWT to find Maximal Unique Matches (MUMs) of a pattern with respect to a reference text in compressed space, extending an existing tool called PHONI [Boucher et al., DCC 2021]. Finally, we study some aspects of the Lempel-Ziv 77 (LZ77) factorization of a word. Modeling short DNA reads, we provide a bound on the compression size of the concatenation of regular samples of a word.
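
    A quick way to see the quantities involved (purely illustrative Python, not the thesis's constructions; the BWT here uses the common $-terminated convention, which is only one of the settings studied above): compute r for a word, for its reverse, and after a single-character edit.

        def bwt(v):
            # Naive BWT of the $-terminated word: sort all rotations of v$, keep the last column.
            t = v + "$"
            return "".join(rot[-1] for rot in sorted(t[i:] + t[:i] for i in range(len(t))))

        def r(v):
            # Number of maximal equal-letter runs in BWT(v).
            b = bwt(v)
            return sum(1 for i in range(len(b)) if i == 0 or b[i] != b[i - 1])

        v = "abaababaabaab"
        print(r(v), r(v[::-1]))              # runs of the word versus runs of its reverse
        print(r(v), r(v[:6] + "b" + v[6:]))  # effect of inserting a single character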

    Multivariate Fine-Grained Complexity of Longest Common Subsequence

    We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, Künnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n:=\max\{|x|,|y|\}$, the length of the shorter string $m:=\min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m-L$ and $\Delta := n-L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n+\min\{d, \delta \Delta, \delta m\})^{1\pm o(1)}$. [...] Comment: Presented at SODA'18. Full Version. 66 pages.
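
    The textbook quadratic algorithm referred to above is the classical dynamic program; the Python sketch below computes $L$ together with some of the parameters studied in the paper (the parameter names are reused only for illustration, and the input strings are arbitrary examples).

        def lcs_length(x, y):
            # Textbook quadratic dynamic program: dp[i][j] = LCS length of x[:i] and y[:j].
            m, n = len(x), len(y)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    dp[i][j] = dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1] else max(dp[i - 1][j], dp[i][j - 1])
            return dp[m][n]

        x, y = "dynamic", "programming"
        L = lcs_length(x, y)
        n, m = max(len(x), len(y)), min(len(x), len(y))
        M = sum(1 for a in x for b in y if a == b)   # number of matching pairs
        delta, Delta = m - L, n - L                  # numbers of deletions
        print(L, n, m, M, delta, Delta)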