82,549 research outputs found

    Human-centered compression for efficient text input

    Traditional methods for efficient text entry are based on prediction. Prediction requires a constant context shift between entering text and selecting or verifying the predictions. Previous research has shown that the advantages offered by prediction are usually eliminated by the cognitive load associated with such context switching. We present a novel approach that relies on compression. Users are required to compress text using a very simple abbreviation technique that yields an average keystroke reduction of 26.4%. Input text is automatically decoded using weighted finite-state transducers, incorporating both word-based and letter-based n-gram language models. Decoding yields a residual error rate of 3.3%. User experiments show that this approach yields improved text input speeds.
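
    As a rough illustration of how such an abbreviation scheme trades keystrokes for decoding effort, the sketch below measures keystroke reduction under a hypothetical vowel-dropping rule (an assumption; the paper's actual abbreviation technique and its WFST-based decoder are not reproduced here).

```python
# Minimal sketch of an abbreviation-style "human compression" step and the
# keystroke saving it buys. The vowel-dropping rule is an assumption made for
# illustration; it is not the paper's abbreviation technique, and the weighted
# finite-state transducer decoder is omitted.

VOWELS = set("aeiouAEIOU")

def abbreviate(word: str) -> str:
    """Keep the first letter and drop the remaining vowels (hypothetical rule)."""
    if len(word) <= 2:
        return word
    return word[0] + "".join(c for c in word[1:] if c not in VOWELS)

def keystroke_reduction(text: str) -> float:
    """Fraction of keystrokes saved by abbreviating every word."""
    words = text.split()
    original = sum(len(w) for w in words)
    abbreviated = sum(len(abbreviate(w)) for w in words)
    return 1.0 - abbreviated / original

sample = "traditional methods for efficient text entry are based on prediction"
print(f"keystroke reduction: {keystroke_reduction(sample):.1%}")
```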

    LLMZip: Lossless Text Compression using Large Language Models

    We provide new estimates of an asymptotic upper bound on the entropy of English using the large language model LLaMA-7B as a predictor for the next token given a window of past tokens. This estimate is significantly smaller than currently available estimates in \cite{cover1978convergent}, \cite{lutati2023focus}. A natural byproduct is an algorithm for lossless compression of English text which combines the prediction from the large language model with a lossless compression scheme. Preliminary results from limited experiments suggest that our scheme outperforms state-of-the-art text compression schemes such as BSC, ZPAQ, and paq8h. Comment: 7 pages, 4 figures, 4 tables, preprint; added results on using LLMs with arithmetic coding.
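
    The entropy estimate at the heart of such a scheme is the average number of bits per character the model assigns to the observed text, i.e. the mean of -log2 p(next token | context). A minimal sketch of that calculation, using GPT-2 via the Hugging Face transformers API as a stand-in for LLaMA-7B (an assumption) and omitting the arithmetic-coding back end:

```python
# Estimate bits-per-character of English text under an autoregressive LM.
# Each next token would ideally be coded with -log2 p(token | context); the
# average over the text upper-bounds the entropy under the model. GPT-2 is a
# stand-in predictor; the arithmetic coder that would emit an actual bitstream
# is not shown.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for LLaMA-7B
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "Compression and prediction are two sides of the same coin."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits            # (1, seq_len, vocab)

# Log-probability (nats) assigned to each actual next token, converted to bits.
logp = torch.log_softmax(logits[:, :-1, :], dim=-1)
targets = ids[:, 1:]
token_bits = -logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1) / math.log(2)

bits_per_char = token_bits.sum().item() / len(text)
print(f"estimated upper bound: {bits_per_char:.2f} bits/character")
```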

    Empirical analysis of BWT-based lossless image compression

    The Burrows-Wheeler Transformation (BWT) is a text transformation algorithm originally designed to improve the coherence in text data. This coherence can be exploited by compression algorithms such as run-length encoding or arithmetic coding. However, there is still a debate on its performance on images. Motivated by a theoretical analysis of the performance of BWT and MTF, we perform a detailed empirical study on the role of MTF in compressing images with the BWT. This research studies the compression performance of BWT on digital images using different predictors and context partitions. The major interest of the research is in finding efficient ways to make BWT suitable for lossless image compression.

    This research studied three different approaches to improving the compression of image data with BWT. First, the idea of preprocessing the image data before sending it to the BWT compression scheme is studied using different mapping and prediction schemes. Second, different variations of MTF were investigated to see which one works best for image compression with BWT. Third, we study context partitioning of the BWT output before it is forwarded to the next stage in the compression scheme.

    For lossless image compression, this thesis proposes removing the MTF stage from the BWT compression pipeline and using a context partitioning method. The compression performance is further improved by applying the MED predictor to the image data along with an 8-bit mapping of the prediction residuals before they are processed by BWT.

    This thesis proposes two schemes for BWT-based image coding, namely BLIC and BLICx, the latter being based on the context-ordering property of the BWT. Our methods outperformed other text compression algorithms such as PPM, GZIP, direct BWT, and WinZip in compressing images. Final results showed that our methods performed better than state-of-the-art lossless image compression algorithms, such as JPEG-LS, JPEG2000, CALIC, EDP and PPAM, on natural images.
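
    For concreteness, the MED predictor referred to above is the median edge detector known from LOCO-I/JPEG-LS, and the 8-bit mapping can be read as wrapping the prediction residuals back into byte range. The sketch below shows only that preprocessing stage (the border handling and the exact residual mapping are assumptions; the BWT, context partitioning, and entropy coding stages are not shown).

```python
# MED (median edge detector) prediction plus an 8-bit residual mapping, as a
# preprocessing step before a BWT-based coder. Border pixels fall back to 0
# neighbours, and residuals are wrapped modulo 256; both are assumptions, as
# the thesis does not spell out these details here.
import numpy as np

def med_residuals(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    x = img.astype(np.int32)
    pred = np.zeros_like(x)
    for r in range(h):
        for col in range(w):
            a = x[r, col - 1] if col > 0 else 0                 # left
            b = x[r - 1, col] if r > 0 else 0                   # above
            c = x[r - 1, col - 1] if r > 0 and col > 0 else 0   # above-left
            if c >= max(a, b):
                p = min(a, b)
            elif c <= min(a, b):
                p = max(a, b)
            else:
                p = a + b - c
            pred[r, col] = p
    residual = x - pred
    # 8-bit mapping: wrap residuals into 0..255 so the BWT stage sees byte data.
    return (residual & 0xFF).astype(np.uint8)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(med_residuals(img))
```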

    On Prediction Using Variable Order Markov Models

    This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real-life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a "decomposed" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems.
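
    To make the evaluation metric concrete, the sketch below implements a toy variable-order Markov predictor with back-off and add-one smoothing (an illustrative stand-in, not the CTW, PPM, or PST algorithms compared in the paper) and reports the average log-loss in bits per symbol.

```python
# Toy variable-order Markov predictor: counts symbols following contexts of
# length 0..max_order, backs off to the longest seen context, and smooths with
# add-one. The average log-loss (bits per symbol) is the comparison metric
# used in the paper; the predictor itself is only a sketch.
import math
from collections import defaultdict

class VOMM:
    def __init__(self, max_order: int = 3):
        self.max_order = max_order
        self.alphabet = None
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq: str) -> None:
        self.alphabet = sorted(set(seq))
        for i, sym in enumerate(seq):
            for k in range(self.max_order + 1):
                if i - k >= 0:
                    self.counts[seq[i - k:i]][sym] += 1

    def prob(self, context: str, sym: str) -> float:
        # Back off to the longest context that has been observed.
        for k in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            if ctx in self.counts:
                c = self.counts[ctx]
                return (c[sym] + 1) / (sum(c.values()) + len(self.alphabet))
        return 1 / len(self.alphabet)

    def avg_log_loss(self, seq: str) -> float:
        losses = [-math.log2(self.prob(seq[:i], s)) for i, s in enumerate(seq)]
        return sum(losses) / len(seq)

m = VOMM(max_order=3)
m.train("abracadabra" * 20)
print(f"average log-loss: {m.avg_log_loss('abracadabra'):.3f} bits/symbol")
```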

    From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    In recent studies [1][13][12], Recurrent Neural Networks were used for generative processes, and their surprising performance can be explained by their ability to make good predictions. In addition, data compression is also based on prediction. The problem therefore comes down to whether a data compressor could perform as well as recurrent neural networks in natural language processing tasks. If this is possible, then the question becomes whether a compression algorithm is even more intelligent than a neural network in specific tasks related to human language. In our journey we discovered what we think is the fundamental difference between a data compression algorithm and a recurrent neural network.

    Caveats for information bottleneck in deterministic scenarios

    Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y. To do so, IB identifies an intermediate "bottleneck" variable T that has low mutual information I(X;T) and high mutual information I(Y;T). The "IB curve" characterizes the set of bottleneck variables that achieve maximal I(Y;T) for a given I(X;T), and is typically explored by maximizing the "IB Lagrangian", I(Y;T) − βI(X;T). In some cases, Y is a deterministic function of X, including many classification problems in supervised learning where the output class Y is a deterministic function of the input X. We demonstrate three caveats when using IB in any situation where Y is a deterministic function of X: (1) the IB curve cannot be recovered by maximizing the IB Lagrangian for different values of β; (2) there are "uninteresting" trivial solutions at all points of the IB curve; and (3) for multi-layer classifiers that achieve low prediction error, different layers cannot exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. We also show that when Y is a small perturbation away from being a deterministic function of X, these three caveats arise in an approximate way. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We demonstrate the three caveats on the MNIST dataset.
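
    To make the quantities concrete, the sketch below evaluates I(X;T), I(Y;T), and the IB Lagrangian I(Y;T) − βI(X;T) on a toy discrete example constructed here for illustration (not taken from the paper), in which Y is a deterministic function of X.

```python
# Compute the mutual information terms entering the IB Lagrangian for a small
# discrete example where Y = f(X) is deterministic and T is a stochastic
# encoding of X. The example is constructed for illustration only.
import numpy as np

def mutual_information(p_joint: np.ndarray) -> float:
    """I(A;B) in bits for a joint distribution given as a 2-D array."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_a @ p_b)[mask])))

# Deterministic setting: 4 values of X mapped onto 2 classes by f.
p_x = np.full(4, 0.25)
f = np.array([0, 0, 1, 1])

# A stochastic encoder p(t | x): a noisy 2-state bottleneck variable T.
p_t_given_x = np.array([[0.9, 0.1],
                        [0.9, 0.1],
                        [0.1, 0.9],
                        [0.1, 0.9]])

p_xt = p_x[:, None] * p_t_given_x          # joint p(x, t)
p_yt = np.zeros((2, 2))                    # joint p(y, t), with y = f(x)
for x in range(4):
    p_yt[f[x]] += p_xt[x]

beta = 0.5
I_xt, I_yt = mutual_information(p_xt), mutual_information(p_yt)
print(f"I(X;T)={I_xt:.3f}  I(Y;T)={I_yt:.3f}  Lagrangian={I_yt - beta * I_xt:.3f}")
```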