
    On Probability Estimation by Exponential Smoothing

    Probability estimation is essential for every statistical data compression algorithm. In practice probability estimation should be adaptive, i.e., recent observations should receive a higher weight than older observations. We present a probability estimation method based on exponential smoothing that satisfies this requirement and runs in constant time per letter. Our main contribution is a theoretical analysis for the case of a binary alphabet and various smoothing rate sequences: we show that the redundancy w.r.t. a piecewise stationary model with $s$ segments is $O(s\sqrt{n})$ for any bit sequence of length $n$, an improvement over the redundancy $O(s\sqrt{n \log n})$ of previous approaches with similar time complexity.
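    The abstract does not spell out the estimator, so the following is only a minimal Python sketch of probability estimation by exponential smoothing for a binary alphabet. It assumes a fixed smoothing rate `alpha`, whereas the paper analyses various smoothing rate sequences; the initial estimate and the clamping constant are likewise illustrative.

```python
class ExponentialSmoothingEstimator:
    """Sketch: adaptive estimate of P(next bit = 1) via exponential smoothing.

    Assumes a constant smoothing rate; the paper studies several rate
    sequences. Each update runs in constant time per letter.
    """

    def __init__(self, alpha=0.02, p_init=0.5):
        self.alpha = alpha   # smoothing rate (assumed constant here)
        self.p = p_init      # current estimate of P(next bit = 1)

    def predict(self):
        # Clamp away from 0 and 1 so code lengths stay finite.
        eps = 1e-6
        return min(max(self.p, eps), 1.0 - eps)

    def update(self, bit):
        # Recent observations receive exponentially higher weight than old ones.
        self.p = (1.0 - self.alpha) * self.p + self.alpha * bit
```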

    Mixing Strategies in Data Compression

    We propose geometric weighting as a novel method to combine multiple models in data compression. Our results reveal the rationale behind PAQ-weighting and generalize it to a non-binary alphabet. Based on a similar technique we present a new, generic linear mixture technique. All novel mixture techniques rely on given weight vectors. We consider the problem of finding optimal weights and show that the weight optimization leads to a strictly convex (and thus, good-natured) optimization problem. Finally, an experimental evaluation compares the two presented mixture techniques for a binary alphabet. The results indicate that geometric weighting is superior to linear weighting. Comment: Data Compression Conference (DCC) 201
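    For illustration only, here is a Python sketch of the two combination schemes for a binary alphabet, assuming each model reports its probability of the next bit being 1 and a fixed weight vector is given. The geometric mixture is written in the logit-domain form commonly associated with PAQ-style weighting; the paper's exact normalization may differ.

```python
import math

def linear_mix(probs, weights):
    """Linear mixture: weighted arithmetic mean of model probabilities.
    Assumes non-negative weights; renormalized here for illustration."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, probs)) / total

def geometric_mix(probs, weights):
    """Geometric mixture (binary alphabet): weighted geometric mean of the
    two letter probabilities, renormalized. Equivalent to summing the
    models' logits, which matches the PAQ-style formulation."""
    logit = sum(w * math.log(p / (1.0 - p)) for w, p in zip(weights, probs))
    return 1.0 / (1.0 + math.exp(-logit))
```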

    Linear and Geometric Mixtures - Analysis

    Linear and geometric mixtures are two methods to combine arbitrary models in data compression. Geometric mixtures generalize the empirically well-performing PAQ7 mixture. Both mixture schemes rely on weight vectors, which heavily determine their performance. Typically weight vectors are identified via Online Gradient Descent. In this work we show that one can obtain strong code length bounds for such a weight estimation scheme. These bounds hold for arbitrary input sequences. For this purpose we introduce the class of nice mixtures and analyze how Online Gradient Descent with a fixed step size combined with a nice mixture performs. These results translate to linear and geometric mixtures, which are nice, as we show. The results hold for PAQ7 mixtures as well, thus we provide the first theoretical analysis of PAQ7. Comment: Data Compression Conference (DCC) 201
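    The weight estimation scheme analysed here is Online Gradient Descent with a fixed step size on the code length (log loss). The Python sketch below shows one such update for a geometric (logit-domain) mixture on a binary alphabet; the step size and the absence of any projection step are assumptions for illustration, not the paper's exact setup.

```python
import math

def ogd_update(weights, probs, bit, eta=0.05):
    """One Online Gradient Descent step with fixed step size `eta`,
    minimizing the code length of a geometric (logit-domain) mixture
    on a binary alphabet. Assumes each p in `probs` lies strictly in (0, 1)."""
    logits = [math.log(p / (1.0 - p)) for p in probs]
    p_mix = 1.0 / (1.0 + math.exp(-sum(w * s for w, s in zip(weights, logits))))
    # Gradient of -log(probability assigned to the observed bit) w.r.t. each weight
    # works out to (p_mix - bit) * logit_i for this mixture.
    return [w - eta * (p_mix - bit) * s for w, s in zip(weights, logits)]
```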

    On Probability Estimation via Relative Frequencies and Discount

    Probability estimation is an elementary building block of every statistical data compression algorithm. In practice probability estimation is often based on relative letter frequencies, which get scaled down when their sum grows too large. Such algorithms are attractive in terms of memory requirements, running time and practical performance. However, there is still a lack of theoretical understanding. In this work we formulate a typical probability estimation algorithm based on relative frequencies and frequency discount, Algorithm RFD. Our main contribution is its theoretical analysis. We show that the code length it requires above an arbitrary piecewise stationary model with bounded and unbounded letter probabilities is small. This theoretically confirms the recency effect of periodic frequency discount, which has often been observed empirically.
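    The precise definition of Algorithm RFD is given in the paper; the Python sketch below only illustrates the generic idea of relative-frequency estimation with periodic discount. The threshold, discount factor and count initialization are hypothetical values chosen for the example.

```python
class DiscountedFrequencyEstimator:
    """Sketch: probability estimation via relative letter frequencies that
    are scaled down ("discounted") whenever their sum grows too large.
    Parameters below are illustrative, not those of Algorithm RFD."""

    def __init__(self, alphabet_size, threshold=256.0, discount=0.5):
        self.counts = [1.0] * alphabet_size   # start at 1 to avoid zero probabilities
        self.threshold = threshold
        self.discount = discount

    def predict(self, letter):
        return self.counts[letter] / sum(self.counts)

    def update(self, letter):
        self.counts[letter] += 1.0
        if sum(self.counts) > self.threshold:
            # Periodic discount: older observations lose weight relative to recent ones.
            self.counts = [c * self.discount for c in self.counts]
```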

    On Statistical Data Compression

    The ongoing evolution of hardware leads to a steady increase in the amount of data that is processed, transmitted and stored. Data compression is an essential tool to keep the amount of data manageable. In terms of empirical performance, statistical data compression algorithms rank among the best. A statistical data compressor processes an input text letter by letter and compresses in two stages: modeling and coding. During modeling a model estimates a probability distribution on the next letter based on the past input. During coding an encoder translates this distribution and the next letter into a codeword. Decoding reverts this process. The model is exchangeable and its choice determines a statistical data compression algorithm. All major models use a mixer to combine multiple simple probability estimators, so-called elementary models. In statistical data compression there is a gap between theory and practice. On the one hand, theoreticians put emphasis on models that allow for a mathematical analysis but neglect running time, space considerations and empirical improvements; on the other hand, practitioners focus on the very reverse. The family of PAQ statistical compressors demonstrated the superiority of the practitioner's approach in terms of empirical compression. With this thesis we attempt to bridge the aforementioned gap between theory and practice, with special focus on PAQ. To achieve this we apply the theoretician's tools to the practitioner's approaches: we provide a code length analysis for several practical modeling and mixing techniques. The analysis covers modeling by relative frequencies with frequency discount and modeling by exponential smoothing of probabilities. For mixing we consider linear and geometrically weighted averaging of probabilities with Online Gradient Descent for weight estimation. Our results show that the models and mixers we consider perform nearly as well as idealized competitors. Experiments support our analysis. Moreover, our results add a theoretical basis to modeling and mixing from PAQ and generalize methods from PAQ. Ultimately, we propose and analyze Context Tree Mixing (CTM), a generalization of Context Tree Weighting (CTW). We couple CTM with modeling and mixing techniques from PAQ and obtain a theoretically sound compression algorithm that improves over CTW, as shown in experiments.
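    The two-stage, letter-by-letter process described above can be summarized in a short Python sketch. The `model` and `encoder` interfaces here are hypothetical stand-ins (the thesis leaves the encoder exchangeable and uses CTM/PAQ-style models); the sketch only shows how modeling and coding interleave.

```python
def compress(text, model, encoder):
    """Sketch of the two-stage statistical compression loop.
    `model` supplies a distribution over the next letter and is updated
    afterwards; `encoder` is a hypothetical arithmetic-coder interface
    with encode(letter, distribution), not a concrete library."""
    for letter in text:
        dist = model.predict()        # modeling: distribution on the next letter
        encoder.encode(letter, dist)  # coding: distribution + letter -> codeword bits
        model.update(letter)          # the decoder performs the same update, staying in sync
    return encoder.finish()
```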
    • 
