8 research outputs found

    Range Quantile Queries: Another Virtue of Wavelet Trees

    We show how to use a balanced wavelet tree as a data structure that stores a list of numbers and supports efficient range quantile queries. A range quantile query takes a rank and the endpoints of a sublist and returns the number with that rank in that sublist. For example, if the rank is half the sublist's length, then the query returns the sublist's median. We also show how these queries can be used to support space-efficient coloured range reporting and document listing. (Comment: added a note about the generalization to any constant number of dimensions.)
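
    To make the query concrete, the following Python sketch shows a balanced wavelet tree answering range quantile queries. It is a pointer-based toy with plain prefix-count lists standing in for the rank-supported bitvectors of the actual structure; all names are illustrative.

        # Toy balanced wavelet tree; quantile(l, r, k) returns the k-th
        # smallest value (1-based) in seq[l:r].
        class WaveletTree:
            def __init__(self, seq, lo=None, hi=None):
                if lo is None:
                    lo, hi = min(seq), max(seq)
                self.lo, self.hi = lo, hi          # value range of this node
                if lo == hi or not seq:
                    return
                mid = (lo + hi) // 2
                # prefix[i] = how many of the first i elements go left.
                self.prefix = [0]
                for x in seq:
                    self.prefix.append(self.prefix[-1] + (x <= mid))
                self.left = WaveletTree([x for x in seq if x <= mid], lo, mid)
                self.right = WaveletTree([x for x in seq if x > mid], mid + 1, hi)

            def quantile(self, l, r, k):
                if self.lo == self.hi:
                    return self.lo
                left_cnt = self.prefix[r] - self.prefix[l]
                if k <= left_cnt:                  # answer lies in the left child
                    return self.left.quantile(self.prefix[l], self.prefix[r], k)
                return self.right.quantile(l - self.prefix[l],
                                           r - self.prefix[r], k - left_cnt)

        wt = WaveletTree([3, 1, 4, 1, 5, 9, 2, 6])
        print(wt.quantile(2, 7, 3))   # 3rd smallest of [4, 1, 5, 9, 2] -> 4

    A query visits one node per level of the balanced tree, i.e. O(log sigma) nodes for an alphabet of size sigma, which is where the efficiency of the wavelet-tree approach comes from.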

    Compressed String Dictionary Search with Edit Distance One

    In this paper we present different solutions for the problem of indexing a dictionary of strings in compressed space. Given a pattern P, the index has to report all the strings in the dictionary having edit distance at most one with P. Our first solution is able to solve queries in (almost optimal) O(|P| + occ) time, where occ is the number of strings in the dictionary having edit distance at most one with P. The space complexity of this solution is bounded in terms of the k-th order entropy of the indexed dictionary. A second solution further improves this space complexity at the cost of increasing the query time. Finally, we propose randomized solutions (Monte Carlo and Las Vegas) which achieve simultaneously the time complexity of the first solution and the space complexity of the second one.
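
    The compressed indexes themselves are involved, but the query semantics are easy to pin down. The Python sketch below is the naive linear scan such an index improves upon, not any of the paper's solutions; all names are illustrative.

        def within_one_edit(a, b):
            # True iff the edit distance between a and b is at most 1
            # (one substitution, insertion or deletion).
            if abs(len(a) - len(b)) > 1:
                return False
            if len(a) > len(b):
                a, b = b, a                # ensure len(a) <= len(b)
            i = j = 0
            edited = False
            while i < len(a) and j < len(b):
                if a[i] == b[j]:
                    i += 1; j += 1
                    continue
                if edited:
                    return False           # would need a second edit
                edited = True
                if len(a) == len(b):
                    i += 1; j += 1         # substitution
                else:
                    j += 1                 # skip one char of b: insertion
            return True                    # a trailing char of b is the one insertion

        def search(dictionary, p):
            # Report all dictionary strings at edit distance <= 1 from p.
            return [s for s in dictionary if within_one_edit(p, s)]

        print(search(["cat", "cart", "dog", "at"], "cat"))  # ['cat', 'cart', 'at']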

    Pseudo-random access compressed archive for security log data

    We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties incurred by this storage system, including decreased compression ratio as well as increased compression and decompression times.
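
    A minimal Python sketch of the general idea follows: entries go into independently compressed blocks, and the (block, slot) pair returned at append time is the entry identifier an external index can point to. This illustrates the storage principle only, not the thesis's actual format; the block size and the zlib codec are assumptions.

        import zlib

        class CompressedLogStore:
            def __init__(self, block_size=64 * 1024):
                self.block_size = block_size
                self.blocks = []        # finished, zlib-compressed blocks
                self.open_block = []    # entries awaiting compression
                self.open_bytes = 0

            def append(self, entry):
                # Entries are assumed to be single lines of text.
                entry_id = (len(self.blocks), len(self.open_block))
                self.open_block.append(entry)
                self.open_bytes += len(entry) + 1
                if self.open_bytes >= self.block_size:
                    self.flush()
                return entry_id

            def flush(self):
                if self.open_block:
                    raw = "\n".join(self.open_block).encode("utf-8")
                    self.blocks.append(zlib.compress(raw))
                    self.open_block, self.open_bytes = [], 0

            def get(self, entry_id):
                # Pseudo-random access: decompress one block, not the archive.
                block, slot = entry_id
                if block == len(self.blocks):   # still in the open block
                    return self.open_block[slot]
                raw = zlib.decompress(self.blocks[block]).decode("utf-8")
                return raw.split("\n")[slot]

        store = CompressedLogStore()
        eid = store.append("sshd: failed login for root from 10.0.0.5")
        store.flush()
        print(store.get(eid))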

    Efficient construction of fundamental index structures for massive texts (Perushakurakenteiden tehokas muodostus suurille tekstimassoille)

    This thesis studies efficient algorithms for constructing the most fundamental data structures used as building blocks in (compressed) full-text indexes. Full-text indexes are data structures that allow efficiently searching for occurrences of a query string in a (much larger) text. We are mostly interested in large-scale indexing, that is, dealing with input instances that cannot be processed entirely in internal memory, so that a much slower external memory needs to be used. Specifically, we focus on three data structures: the suffix array, the LCP array and the Lempel-Ziv (LZ77) parsing. These are routinely found as components or used as auxiliary data structures in the construction of many modern full-text indexes.

    The suffix array is a list of all suffixes of a text in lexicographical order. Despite its simplicity, the suffix array is a powerful tool used extensively not only in indexing but also in data compression, string combinatorics and computational biology. The first contribution of this thesis is an improved algorithm for external memory suffix array construction, based on constructing suffix arrays for blocks of text and merging them into the full suffix array.

    In many applications, the suffix array needs to be augmented with information about the longest common prefix of each two lexicographically adjacent suffixes. The array containing this information is called the longest-common-prefix (LCP) array. The second contribution of this thesis is the first algorithm for computing the LCP array in external memory that is not an extension of a suffix-sorting algorithm.

    When the input text is highly repetitive, general-purpose text indexes are usually outperformed (particularly in space usage) by specialized indexes. One of the most popular families of such indexes is based on the Lempel-Ziv (LZ77) parsing. The LZ77 parsing is an encoding of the text that replaces long repeating substrings with references to other occurrences. In addition to indexing, LZ77 is a heavily used tool in data compression. The third contribution of this thesis is a series of new algorithms to compute the LZ77 parsing, both in RAM and in external memory.

    The algorithms introduced in this thesis significantly improve upon the prior art. For example: (i) our new approach for constructing the LCP array in external memory is faster than the previously best algorithm by a factor of 2-4 and simultaneously reduces the disk space usage by a factor of four; (ii) a parallel version of our improved suffix array construction algorithm is able to handle inputs much larger than considered in the literature so far. In our experiments, computing the suffix array of a 1 TiB file with the new algorithm took a little over a week and required only 7.2 TiB of disk space (including input and output), whereas on the same machine the previously best algorithm would require 3.5 times as much disk space and take about four times longer.
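
    For reference, all three structures have short quadratic-time definitions. The Python below is a toy that pins the definitions down and is in no way representative of the scalable algorithms the thesis develops.

        def suffix_array(t):
            # Starting positions of all suffixes, in lexicographical order.
            return sorted(range(len(t)), key=lambda i: t[i:])

        def lcp_array(t, sa):
            # lcp[i] = longest common prefix length of the suffixes at
            # sa[i-1] and sa[i]; lcp[0] = 0 by convention.
            def lcp(i, j):
                k = 0
                while i + k < len(t) and j + k < len(t) and t[i + k] == t[j + k]:
                    k += 1
                return k
            return [0] + [lcp(sa[i - 1], sa[i]) for i in range(1, len(sa))]

        def lz77(t):
            # Greedy parse: each phrase is the longest prefix of the
            # remaining text that occurs earlier, stored as (source, length);
            # fresh characters are stored as literals.
            phrases, i = [], 0
            while i < len(t):
                best_len, best_src = 0, 0
                for j in range(i):
                    k = 0
                    while i + k < len(t) and t[j + k] == t[i + k]:
                        k += 1
                    if k > best_len:
                        best_len, best_src = k, j
                if best_len == 0:
                    phrases.append(t[i])
                    i += 1
                else:
                    phrases.append((best_src, best_len))
                    i += best_len
            return phrases

        t = "banana"
        sa = suffix_array(t)          # [5, 3, 1, 0, 4, 2]
        print(lcp_array(t, sa))       # [0, 1, 3, 0, 0, 2]
        print(lz77(t))                # ['b', 'a', 'n', (1, 3)]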

    Space efficient algorithms for string processing

    The suffix array (SA), an array containing the suffixes of a string sorted into lexicographical order, was introduced in the late eighties as a space efficient alternative to the suffix tree. It has since emerged as a useful data structure in string processing problems such as pattern matching, pattern discovery, and data compression. The SA is often coupled with the longest-common-prefix (LCP) array, which contains the lengths of the longest common prefixes between consecutive suffixes in the SA. When enhanced with the LCP array, the SA can provide efficient solutions to the above applications, including a problem called pattern mining. To date, all the mining algorithms lie at either extreme of the efficiency spectrum: they are either fast and use enormous amounts of space, or they are compact and orders of magnitude slower. We present a mining algorithm that achieves the best of both extremes, with runtime comparable to the fastest published algorithms while using less space than the most space efficient.

    In all these applications, the construction of the SA, also known as suffix sorting, is one of the main computational bottlenecks. Most papers describing the SA assume it fits in RAM, which limits their applications. The fastest algorithms in this large-memory suffix sorting category use powerful pointer copying heuristics to expedite suffix sorting. Several space efficient algorithms have emerged in the last five years, where the trend is to use as little RAM as possible: they trade away runtime, use slow compressed data structures, use external memory (disk), or combine these techniques. In this thesis, we focus on improving the runtime of a space efficient algorithm due to Kärkkäinen by adapting the heuristics from large-memory suffix sorting to a semi-external setting. Pointer copying has been heavily used to speed up the construction of the SA, but not the LCP array; we also discuss our attempts at combining the pointer copying heuristics with an efficient LCP construction algorithm due to Kärkkäinen, Manzini and Puglisi.

    The Burrows-Wheeler transform (BWT) was discovered independently of the SA, but the two data structures are now known to be deeply linked. The BWT is central to practical compression tools such as szip and bzip2. Many papers have been published on constructing the BWT, either in RAM or in external memory, but few on inverting the BWT to obtain the original string, and none in external memory. For larger datasets, the existing traditional approaches cannot be used to invert the BWT; in such cases, we have to use disk. We close the gap between theory and practice by examining the problem of inverting the BWT efficiently on disk. We provide a practical implementation of the only theoretical proposal for the problem, by Ferragina, Gagie and Manzini, and we also provide new, faster solutions based on simple scanning and compression techniques.
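
    As a reference point for what is being inverted, here is a quadratic-time, in-RAM Python toy of the BWT and its inversion via the LF-mapping; the thesis is concerned precisely with the disk-resident setting where such direct approaches stop working.

        def bwt(t, sentinel="$"):
            # Last column of the sorted rotations of t + sentinel.
            t += sentinel
            rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
            return "".join(r[-1] for r in rotations)

        def inverse_bwt(last):
            # Stable sort of row indices by last-column character: because
            # ties keep their relative order, order[r] is the row of the
            # rotation starting one text position later (inverse LF-mapping).
            n = len(last)
            order = sorted(range(n), key=lambda i: last[i])
            first = sorted(last)            # first column of the matrix
            out, r = [], 0                  # row 0 begins with the sentinel
            for _ in range(n):
                out.append(first[r])
                r = order[r]
            return "".join(out[1:])         # drop the leading sentinel

        L = bwt("mississippi")
        print(L)                 # ipssm$pissii
        print(inverse_bwt(L))    # mississippi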

    Experiments on Compressed Full-Text Indexing

    No full text