
    New Algorithms and Lower Bounds for Sequential-Access Data Compression

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
    Comment: draft of PhD thesis
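    The adaptive-coding protocol described here can be illustrated with a minimal Python sketch (this is not the thesis's constant-worst-case-time algorithm, just the protocol it works within): the encoder emits a self-delimiting codeword for each character, here the Elias gamma code of the character's rank in a move-to-front list, and only then updates its model, so the decoder can mirror every update.

        # Minimal sketch of adaptive prefix coding (illustration only, not
        # the thesis's algorithm): each character's self-delimiting codeword
        # is output before the next character is read, and encoder and
        # decoder update the shared model in lockstep.

        def elias_gamma(n):
            """Self-delimiting code for n >= 1: (len-1) zeros, then n in binary."""
            b = bin(n)[2:]
            return "0" * (len(b) - 1) + b

        def encode(text, alphabet):
            mtf = list(alphabet)           # shared initial model
            out = []
            for ch in text:
                r = mtf.index(ch)          # current rank of the character
                out.append(elias_gamma(r + 1))
                mtf.insert(0, mtf.pop(r))  # adapt: move the character to front
            return "".join(out)

        def decode(bits, alphabet):
            mtf, out, i = list(alphabet), [], 0
            while i < len(bits):
                z = 0
                while bits[i] == "0":      # count leading zeros of the gamma code
                    z += 1
                    i += 1
                r = int(bits[i:i + z + 1], 2) - 1
                i += z + 1
                out.append(mtf[r])
                mtf.insert(0, mtf.pop(r))  # mirror the encoder's update
            return "".join(out)

        text = "abracadabra"
        assert decode(encode(text, "abcdr"), "abcdr") == text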

    Postings List Compression with Run-length and Zombit Encodings

    The inverted index is a core index structure underlying many systems, such as search engines and databases. It stores a mapping from terms, numbers, etc. to lists of locations (in a document, a set of documents, a database table, and so on) and allows efficient full-text searches on the indexed data. A list of locations in the inverted index is usually called a postings list. In real-life applications, inverted indices can grow huge, so an efficient representation is needed that still supports efficient queries. This thesis explores ways to represent postings lists efficiently while supporting efficient nextGEQ queries on the set; efficient nextGEQ queries are needed to implement inverted-index retrieval. First we convert the postings lists into one bitvector, which concatenates each postings list's characteristic bitvector. Representing the integer sets efficiently then reduces to representing this bitvector efficiently, and the bitvector is expected to have long runs of 0s and 1s. Run-length encoding of bitvectors has recently led to promising results, so in this thesis we experiment with two encoding methods (Top-k Hybrid coder, RLZ) that encode postings lists via run-length encodings of the bitvector. We also investigate another new bitvector compression method (Zombit-vector), which encodes bitvectors by finding redundancies in runs of 0s and 1s. We compare all encodings to the current state-of-the-art Partitioned Elias-Fano (PEF) coding. All of the encodings compressed more efficiently than PEF. Zombit-vector's nextGEQ queries were slightly more efficient than PEF's, which makes it attractive for bitvectors with long runs of 0s and 1s. More work is needed on the Top-k Hybrid coder and RLZ before their nextGEQ performance can be compared to Zombit-vector and PEF.
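    A minimal Python sketch of the two notions the abstract relies on: a postings list's characteristic bitvector, and the nextGEQ query. The linear scan below stands in for what a compressed representation must answer without decompressing everything.

        # Illustrative sketch (not one of the thesis's encoders): a postings
        # list over documents 0..u-1 becomes its characteristic bitvector,
        # and nextGEQ(x) returns the smallest posting >= x.

        def characteristic_bitvector(postings, universe):
            bv = [0] * universe
            for doc_id in postings:
                bv[doc_id] = 1
            return bv

        def next_geq(bv, x):
            """Smallest position >= x holding a 1 bit, or None. A compressed
            representation must answer this without a full scan."""
            for i in range(x, len(bv)):
                if bv[i]:
                    return i
            return None

        bv = characteristic_bitvector([2, 3, 4, 5, 30, 31], universe=32)
        assert next_geq(bv, 4) == 4
        assert next_geq(bv, 6) == 30   # the long 0-run is skipped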

    Parameterized Strings: Algorithms and Data Structures

    A parameterized string (p-string) T = T[1] T[2]...T[n] is a string of length n composed of symbols from a constant alphabet Sigma and a parameter alphabet Pi. Given a pair of p-strings S and T, the parameterized pattern matching (p-match) problem is to verify whether the individual constant symbols match and whether there exists a bijection between the parameter symbols of S and T. If the two conditions are met, S is said to be a p-match of T. A significant breakthrough in the p-match area is the prev encoding, which is proven to identify a p-match between S and T if and only if prev(S) == prev(T). In order to utilize suffix data structures for p-matching, we must account for the dynamic nature of the parameterized suffixes (p-suffixes) of T, namely prev(T[i...n]) for all i, 1 ≤ i ≤ n.
    In this work, we propose transformative approaches to the direct parameterized suffix sorting (p-suffix sorting) problem by generating and sorting lexicographically numeric fingerprints and arithmetic codes that correspond to individual p-suffixes. Our algorithm to p-suffix sort via fingerprints is the first theoretical linear time algorithm for p-suffix sorting for non-binary parameter alphabets, under the assumption that each code is represented by a practical integer. We eliminate the key problems of fingerprints by introducing an algorithm that exploits the ordering of arithmetic codes to sort p-suffixes in linear time on average.
    The longest previous factor (LPF) problem is defined for traditional strings exclusively from the constant alphabet Sigma. We generalize the LPF problem to the parameterized longest previous factor (pLPF) problem defined for p-strings. Subsequently, we present a linear time solution to construct the pLPF array. Given our pLPF algorithm, we show how to construct the pLCP (parameterized longest common prefix) array in linear time. Our algorithm is further exploited to construct the standard LPF and LCP arrays, all in linear time.
    We then study the structural string (s-string), a variant of the p-string that extends the p-string alphabets to include complementary parameters that correspond to one another. The s-string problem involves the new encoding schemes sencode and compl in order to identify a structural match (s-match). Current s-match solutions use a structural suffix tree (s-suffix tree) to study structural matches in RNA sequences. We introduce the suffix array, LCP, and LPF data structures for the s-string encoding schemes. Using our new data structures, we identify the first suffix array solution to the s-match problem. Our algorithms and data structures are shown to apply not only to s-strings but also to p-strings and traditional strings.
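    A minimal Python sketch of the prev encoding, assuming its standard definition: constant symbols map to themselves, and each parameter symbol maps to the distance back to its previous occurrence, or 0 at its first occurrence.

        # Sketch of the prev encoding: two p-strings p-match exactly when
        # their prev encodings are equal, because distances to previous
        # occurrences are invariant under any bijective renaming of
        # parameter symbols.

        def prev_encode(s, constants):
            last = {}
            out = []
            for i, ch in enumerate(s):
                if ch in constants:
                    out.append(ch)            # constants are kept as-is
                else:
                    out.append(i - last[ch] if ch in last else 0)
                    last[ch] = i              # remember this occurrence
            return out

        # "abab" and "baba" over parameters {a, b} p-match via a <-> b:
        # both encode to [0, 0, 2, 2].
        assert prev_encode("abab", set()) == prev_encode("baba", set()) == [0, 0, 2, 2]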

    Yang-Baxter Equations, Computational Methods and Applications

    Computational methods are an important tool for solving the Yang-Baxter equations (in small dimensions), for classifying (unifying) structures, and for solving related problems. This paper is an account of some of the latest developments on the Yang-Baxter equation, its set-theoretical version, and its applications. We construct new set-theoretical solutions for the Yang-Baxter equation. Unification theories and other results are proposed or proved.
    Comment: 12 pages
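    For context, the usual textbook statements of the equation and of its set-theoretical version (standard forms, not notation taken from this paper) are:

        % Quantum Yang-Baxter equation for R : V \otimes V \to V \otimes V,
        % where R_{ij} acts as R on factors i and j of V^{\otimes 3}:
        R_{12}\, R_{13}\, R_{23} \;=\; R_{23}\, R_{13}\, R_{12}.
        % Set-theoretical version: a map S : X \times X \to X \times X on a
        % set X satisfying the braid-type relation on X \times X \times X:
        (S \times \mathrm{id})(\mathrm{id} \times S)(S \times \mathrm{id})
          \;=\; (\mathrm{id} \times S)(S \times \mathrm{id})(\mathrm{id} \times S).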

    Burrows–Wheeler compression: Principles and reflections

    After a general description of the Burrows–Wheeler transform and a brief survey of recent work on processing its output, the paper examines the coding of the zero-runs from the MTF recoding stage, an aspect with little prior treatment. It is concluded that the original scheme proposed by Wheeler is extremely efficient and unlikely to be much improved. The paper then proposes some new interpretations and uses of the Burrows–Wheeler transform, with new insights and approaches to lossless compression, perhaps including techniques from error correction.
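    Wheeler's zero-run scheme, as popularized by bzip2-style coders, can be sketched as follows (a reconstruction of the commonly described method, not code from this paper): a run of n zeros from the MTF stage is written in bijective base 2 over two extra symbols, RUNA and RUNB, so a run of length n costs only O(log n) symbols.

        # Sketch of Wheeler-style zero-run coding: run length n >= 1 is
        # written in bijective base 2, least-significant digit first, with
        # digit value 1 -> RUNA and digit value 2 -> RUNB.

        def encode_zero_run(n):
            assert n >= 1
            out = []
            while n > 0:
                if n % 2 == 1:
                    out.append("RUNA")   # digit 1
                    n = (n - 1) // 2
                else:
                    out.append("RUNB")   # digit 2
                    n = (n - 2) // 2
            return out

        def decode_zero_run(symbols):
            n, place = 0, 1
            for s in symbols:
                n += place * (1 if s == "RUNA" else 2)
                place *= 2
            return n

        # Every run length round-trips: 1 -> RUNA, 2 -> RUNB, 3 -> RUNA RUNA, ...
        for n in range(1, 200):
            assert decode_zero_run(encode_zero_run(n)) == n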

    Compressing dictionaries of strings

    The aim of this work is to develop a data structure capable of storing a set of strings in compressed form while providing the facility to access any string in the set and to search the set by prefix. The notion of a string will be formally defined in this work, but it is enough to think of a string as a stream of characters or a variable-length datum. We will prove that the data structure devised in our work is able to search prefixes of the stored strings very efficiently, hence giving a performant solution to a heavily studied problem. In the discussion of our data structure, particular emphasis is given to both space and time efficiency, and a tradeoff between the two is constantly sought. To understand how important string-based data structures are, think about modern search engines and social networks: they must continuously store and process immense streams of data, which are mainly strings, and the output of such processing must be available within a few milliseconds so as not to try the patience of the user. Space efficiency is one of the main concerns in this kind of problem. In order to satisfy real-time latency bounds, the largest possible amount of data must be stored in the highest levels of the memory hierarchy. Moreover, data compression saves money because it reduces the amount of physical memory needed to store the data, which is particularly important since storage is a main source of expenditure in modern systems.
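    A minimal Python sketch of the access and search-by-prefix interface over an uncompressed sorted set (the structure developed in this work supports the same operations in compressed space; the helper names below are ours):

        # Sketch of the dictionary interface: access(i) returns the i-th
        # string in sorted order, and prefix_range(p) returns every stored
        # string starting with p, found by binary search in O(log n)
        # string comparisons.
        import bisect

        strings = sorted(["car", "carat", "card", "cat", "dog"])

        def access(i):
            return strings[i]

        def prefix_range(p):
            """All stored strings starting with non-empty prefix p."""
            lo = bisect.bisect_left(strings, p)
            # smallest string lexicographically greater than every p-prefixed one
            hi = bisect.bisect_left(strings, p[:-1] + chr(ord(p[-1]) + 1))
            return strings[lo:hi]

        assert access(3) == "cat"
        assert prefix_range("car") == ["car", "carat", "card"]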

    LeCo: Lightweight Compression via Learning Serial Correlations

    Lightweight data compression is a key technique that allows column stores to exhibit superior performance for analytical queries. Despite comprehensive study of dictionary-based encodings that approach Shannon's entropy, few prior works have systematically exploited the serial correlation in a column for compression. In this paper, we propose LeCo (i.e., Learned Compression), a framework that uses machine learning to remove the serial redundancy in a value sequence automatically, achieving an outstanding compression ratio and decompression performance simultaneously. LeCo presents a general approach to this end, making existing (ad hoc) algorithms such as Frame-of-Reference (FOR), Delta Encoding, and Run-Length Encoding (RLE) special cases under our framework. Our microbenchmark with three synthetic and six real-world data sets shows that a prototype of LeCo achieves a Pareto improvement on both compression ratio and random access speed over the existing solutions. When integrating LeCo into widely used applications, we observe up to a 3.9x speedup in filter-scanning a Parquet file and a 16% increase in RocksDB's throughput.
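    The core idea can be sketched in a few lines of Python (an illustration of learned lightweight compression in general, not LeCo's actual code): fit a model to the value sequence and store the model plus small residuals. A zero-slope model degenerates to Frame-of-Reference, and random access needs only one model evaluation plus one residual lookup.

        # Sketch of learned serial-correlation compression: least-squares
        # line fit over positions, then per-position residuals, which for
        # nearly linear data need far fewer bits than the raw values.

        def fit_line(values):
            n = len(values)
            mx, my = (n - 1) / 2, sum(values) / n
            cov = sum((i - mx) * (v - my) for i, v in enumerate(values))
            var = sum((i - mx) ** 2 for i in range(n))
            slope = cov / var if var else 0.0
            return slope, my - slope * mx

        def compress(values):
            slope, intercept = fit_line(values)
            residuals = [v - round(slope * i + intercept)
                         for i, v in enumerate(values)]
            return slope, intercept, residuals   # residuals: few bits each

        def get(model, i):
            """Random access: evaluate the model, add the residual."""
            slope, intercept, residuals = model
            return round(slope * i + intercept) + residuals[i]

        values = [100, 103, 107, 109, 113, 116]  # nearly linear sequence
        model = compress(values)
        assert all(get(model, i) == v for i, v in enumerate(values))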

    Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    In this work, we study two approaches to the problem of predicting RNA-Protein Interaction (RPI). In the first approach, we use a feature-based technique, combining features extracted from both sequences and secondary structures. The feature-based approach improved prediction accuracy because it incorporated much more of the available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for the prediction of RPI, using both sequence information (protein and RNA sequences) and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

    Empirical analysis of BWT-based lossless image compression

    The Burrows-Wheeler Transformation (BWT) is a text transformation algorithm originally designed to improve the coherence in text data. This coherence can be exploited by compression algorithms such as run-length encoding or arithmetic coding. However, there is still debate about its performance on images. Motivated by a theoretical analysis of the performance of BWT and MTF, we perform a detailed empirical study of the role of MTF in compressing images with the BWT. This research studies the compression performance of BWT on digital images using different predictors and context partitions. The major interest of the research is in finding efficient ways to make BWT suitable for lossless image compression.
    This research studied three approaches to improving the compression of image data by BWT. First, the idea of preprocessing the image data before sending it to the BWT compression scheme is studied using different mapping and prediction schemes. Second, different variations of MTF were investigated to see which works best for image compression with BWT. Third, context partitioning of the BWT output before it is forwarded to the next stage in the compression scheme was examined.
    For lossless image compression, this thesis proposes removing the MTF stage from the BWT compression pipeline and using a context partitioning method instead. The compression performance is further improved by applying the MED predictor to the image data, along with an 8-bit mapping of the prediction residuals, before the data is processed by BWT.
    This thesis proposes two schemes for BWT-based image coding, namely BLIC and BLICx, the latter being based on the context-ordering property of the BWT. Our methods outperformed text compression algorithms such as PPM, GZIP, direct BWT, and WinZip in compressing images. Final results showed that our methods performed better than state-of-the-art lossless image compression algorithms, such as JPEG-LS, JPEG2000, CALIC, EDP and PPAM, on natural images.
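    The MED predictor mentioned above is the median edge detector used by JPEG-LS: each pixel is predicted from its left (a), upper (b), and upper-left (c) neighbours. A short Python sketch follows; the zero-padded border handling is our simplification.

        # Sketch of the MED (median edge detector) predictor from JPEG-LS,
        # applied here to produce the prediction residuals that precede the
        # BWT stage in the scheme described above.

        def med_predict(a, b, c):
            if c >= max(a, b):
                return min(a, b)     # edge above or to the left
            if c <= min(a, b):
                return max(a, b)
            return a + b - c         # smooth region: planar prediction

        def residuals(img):
            """Row-by-row prediction residuals; out-of-image neighbours are 0."""
            h, w = len(img), len(img[0])
            get = lambda r, col: img[r][col] if 0 <= r < h and 0 <= col < w else 0
            return [[img[r][col] - med_predict(get(r, col - 1),
                                               get(r - 1, col),
                                               get(r - 1, col - 1))
                     for col in range(w)] for r in range(h)]

        res = residuals([[10, 10, 12], [10, 11, 12], [11, 11, 13]])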

    Implementation of an Efficient Universal Coding Algorithm Using Context Trees

    In this work we propose a new variant of the Context algorithm [28] using semi-predictive coding, and we develop an efficient implementation of it. To handle the problems caused by the large number of parameters typical of text files and executables, and to improve the compression achieved by the algorithm, several techniques were adapted, mainly from PPM-type algorithms [8]. The pruning procedure and the probability estimator were adapted to take these changes into account. In addition, a modified version of the wotd algorithm [11] was used to prune the tree during its construction, thereby reducing the memory requirements during the coding phase of the algorithm. Finally, an experimental comparison was carried out between the algorithm developed here and some of the best implementations of other well-known coding algorithms.
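    As a hedged illustration of the kind of per-node probability estimator used in context-tree coders (the thesis adapts its own estimator; the Krichevsky-Trofimov rule below is the textbook choice for binary alphabets):

        # Sketch of the Krichevsky-Trofimov estimator: having seen a zeros
        # and b ones in a context, predict the next bit is 1 with
        # probability (b + 1/2) / (a + b + 1).

        def kt_probability_of_one(a, b):
            return (b + 0.5) / (a + b + 1.0)

        def kt_sequence_probability(bits):
            """Sequential probability of a bit string under one context node."""
            p, a, b = 1.0, 0, 0
            for bit in bits:
                p1 = kt_probability_of_one(a, b)
                p *= p1 if bit else (1.0 - p1)
                if bit:
                    b += 1
                else:
                    a += 1
            return p

        # Four zeros in a row: 1/2 * 3/4 * 5/6 * 7/8.
        assert abs(kt_sequence_probability([0, 0, 0, 0]) -
                   (1/2) * (3/4) * (5/6) * (7/8)) < 1e-12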