
    Prefix-free parsing for building large tunnelled Wheeler graphs

    We propose a new technique for creating a space-efficient index for large repetitive text collections, such as pangenomic databases containing sequences of many individuals from the same species. We combine two recent techniques from this area: Wheeler graphs (Gagie et al., 2017) and prefix-free parsing (PFP, Boucher et al., 2019). Wheeler graphs (WGs) are a general framework encompassing several indexes based on the Burrows-Wheeler transform (BWT), such as the FM-index. Wheeler graphs admit a succinct representation which can be further compacted by employing the idea of tunnelling, which exploits redundancies in the form of parallel, equally-labelled paths called blocks that can be merged into a single path. The problem of finding the optimal set of blocks for tunnelling, i.e., the one that minimizes the size of the resulting WG, is known to be NP-complete and remains the most computationally challenging part of the tunnelling process. To find an adequate set of blocks in less time, we propose a new method based on PFP. The idea of PFP is to divide the input text into phrases of roughly equal sizes that overlap by a fixed number of characters. The original text is represented by a sequence of phrase ranks (the parse) and a list of all used phrases (the dictionary). In repetitive texts, the PFP of the text is generally much shorter than the original. To speed up the block selection for tunnelling, we apply PFP to obtain the parse and the dictionary of the text, tunnel the WG of the parse using existing heuristics, and subsequently use this tunnelled parse to construct a compact WG of the original text. Compared with constructing a WG from the original text without PFP, our method is much faster and uses less memory on collections of pangenomic sequences.
Therefore, our method enables the use of WGs as a pangenomic reference for real-world datasets.

Comment: 12 pages, 3 figures, 2 tables; to be published in the WABI (Workshop on Algorithms in Bioinformatics) 2022 conference proceedings.
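The parsing step described above can be illustrated with a toy sketch. A phrase ends whenever the last w characters form a "trigger" window; consecutive phrases overlap by w characters. The trigger condition below is a naive character-sum hash with illustrative parameters w and p; a real PFP implementation uses a Karp-Rabin rolling hash and carefully chosen parameters, so this is a sketch of the idea, not the authors' code.

```python
def prefix_free_parse(text, w=2, p=5):
    """Toy prefix-free parsing: split `text` into phrases that overlap by
    w characters.  A phrase ends when the current w-length window is a
    trigger (here: naive character-sum hash equal to 0 mod p)."""
    text = text + "$" * w          # sentinel padding ends the last phrase
    phrases, start = [], 0
    for i in range(w, len(text) + 1):
        window = text[i - w:i]
        if window == "$" * w or sum(map(ord, window)) % p == 0:
            phrases.append(text[start:i])
            start = i - w          # consecutive phrases overlap by w chars
    dictionary = sorted(set(phrases))                  # distinct phrases
    parse = [dictionary.index(ph) for ph in phrases]   # phrase ranks
    return dictionary, parse
```

On repetitive input the parse reuses dictionary entries, so the pair (dictionary, parse) is much smaller than the text; this is the representation whose WG is tunnelled in place of the original. The text can be recovered by concatenating the phrases while dropping the w-character overlap of each phrase after the first.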

    Space-efficient conversions from SLPs

    We give algorithms that, given a straight-line program (SLP) with g rules that generates (only) a text T[1..n], build within O(g) space the Lempel-Ziv (LZ) parse of T (of z phrases) in time O(n log^2 n) or in time O(gz log^2(n/z)). We also show how to build a locally consistent grammar (LCG) of optimal size g_lc = O(δ log(n/δ)) from the SLP within O(g + g_lc) space and in O(n log g) time, where δ is the substring complexity measure of T. Finally, we show how to build the LZ parse of T from such an LCG within O(g_lc) space and in time O(z log^2 n log^2(n/z)). All our results hold with high probability.
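To make the object being built concrete, here is a naive LZ parser producing the z phrases mentioned above: each phrase is either a fresh character or the longest prefix of the remaining suffix that also occurs starting at an earlier position. This quadratic-time scan over the plain text is purely illustrative; the point of the paper is to compute the same parse within compressed O(g) space from the SLP, which this sketch does not attempt.

```python
def lz_parse(t):
    """Naive LZ parse: each phrase is a fresh character or the longest
    prefix of t[i:] with an occurrence starting before i
    (self-referential, possibly overlapping occurrences allowed)."""
    phrases, i = [], 0
    while i < len(t):
        ell = 0
        # grow the phrase while it still occurs starting before position i
        while i + ell < len(t) and t.find(t[i:i + ell + 1]) < i:
            ell += 1
        if ell == 0:
            phrases.append(t[i])          # character not seen before
            i += 1
        else:
            phrases.append(t[i:i + ell])
            i += ell
    return phrases
```

For example, lz_parse("abababab") yields the z = 3 phrases ['a', 'b', 'ababab']; the last phrase overlaps its earlier occurrence, which is allowed in this LZ variant.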

    MARIA: Multiple-alignment r-index with aggregation

    There now exist compact indexes that can efficiently list all the occurrences of a pattern in a dataset consisting of thousands of genomes, or even all the occurrences of all the pattern's maximal exact matches (MEMs) with respect to the dataset. Unless we are lucky and the pattern is specific to only a few genomes, however, we could be swamped by hundreds of matches -- or even hundreds per MEM -- only to discover that most or all of the matches are to substrings that occupy the same few columns in a multiple alignment. To address this issue, in this paper we present a simple and compact index, MARIA, that stores a multiple alignment such that, given the position of one match of a pattern (or a MEM or other substring of a pattern) and its length, we can quickly list all the distinct columns of the multiple alignment where matches start.
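The aggregation idea can be sketched as follows: if the multiple alignment is stored as gapped rows, each match position in an ungapped sequence maps to an alignment column, and reporting the distinct starting columns collapses many per-genome matches into a few answers. The function names and the plain-string representation below are illustrative only; MARIA itself uses succinct data structures, not this naive scan.

```python
def pos_to_column(row, pos):
    """0-based alignment column of the pos-th non-gap character in a
    gapped alignment row.  Gaps are written as '-'."""
    seen = -1
    for col, ch in enumerate(row):
        if ch != "-":
            seen += 1
            if seen == pos:
                return col
    raise IndexError(pos)

def distinct_match_columns(alignment, matches):
    """Aggregate matches, given as (row index, position in the ungapped
    sequence), into the sorted distinct columns where they start."""
    return sorted({pos_to_column(alignment[r], p) for r, p in matches})
```

For instance, in the alignment ["AC-GT", "A-CGT"], matches of "GT" starting at ungapped position 2 of both sequences collapse to the single alignment column 3.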