Better branch prediction through prophet/critic hybrids
The prophet/critic hybrid conditional branch predictor has two component predictors. The prophet uses a branch's history to predict its direction. We call this prediction and the ones for branches following it the branch future. The critic uses the branch's history and future to critique the prophet's prediction. The hybrid combines the prophet's prediction with the critique, either agree or disagree, forming the branch's overall prediction. Results show these hybrids can reduce mispredictions by 39 percent and improve processor performance by 7.8 percent.
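A minimal sketch of the agree/disagree combination the abstract describes, assuming simple two-bit-counter tables for both components; the indexing scheme, table sizes, and the way the branch future is packed into an integer are illustrative assumptions, not the paper's design:

```python
# Toy prophet/critic hybrid. The prophet predicts from branch history;
# the critic sees history plus the prophet's predictions for the next
# few branches (the "branch future") and learns to agree or disagree.
class TwoBitTable:
    def __init__(self, bits):
        self.mask = (1 << bits) - 1
        self.ctr = [2] * (1 << bits)                  # init weakly taken

    def predict(self, idx):
        return self.ctr[idx & self.mask] >= 2

    def update(self, idx, outcome):
        i = idx & self.mask
        self.ctr[i] = min(3, self.ctr[i] + 1) if outcome else max(0, self.ctr[i] - 1)

class ProphetCritic:
    def __init__(self, hist_bits=12, future_bits=4):
        self.future_bits = future_bits
        self.prophet = TwoBitTable(hist_bits)
        self.critic = TwoBitTable(hist_bits + future_bits)

    def predict(self, history, future):
        # `future` packs the prophet's lookahead predictions into an int.
        p = self.prophet.predict(history)
        agree = self.critic.predict((history << self.future_bits) | future)
        return p if agree else (not p)                # disagree flips the prophet

    def update(self, history, future, taken):
        p = self.prophet.predict(history)
        self.prophet.update(history, taken)
        # Train the critic to agree exactly when the prophet was correct.
        self.critic.update((history << self.future_bits) | future, p == taken)
```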
The predictability of data values
Dynamic Data Dependence Tracking and Its Application to Branch Prediction
To continue to improve processor performance, microarchitects seek to increase the effective instruction-level parallelism (ILP) that can be exploited in applications. A fundamental limit to improving ILP is data dependences among instructions. If data dependence information is available at run time, it can be used in many ways to improve ILP; prior published examples include decoupled branch execution architectures and critical instruction detection. In this paper, we describe an efficient hardware mechanism to dynamically track the data dependence chains of the instructions in the pipeline. This information is available on a cycle-by-cycle basis to the microengine for optimizing its performance. We then use this design in a new value-based branch prediction scheme using Available Register Value Information (ARVI). Through the use of data dependence information, the ARVI branch predictor achieves better prediction accuracy than a comparably sized hybrid branch predictor. With ARVI used as the second-level branch predictor, the improved prediction accuracy results in a 12.6% average performance improvement across the SPEC95 integer benchmark suite.
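A small Python model of the tracking idea the abstract outlines, assuming one dependence bit vector per instruction-window slot; the window size, register count, and the omission of retirement and mis-speculation handling are simplifying assumptions:

```python
# Toy model of dynamic data dependence tracking: each in-flight
# instruction carries a bit vector marking the older in-flight
# instructions it transitively depends on; a register map records the
# latest in-flight producer of each register.
WINDOW = 32                                    # instruction window slots

class DependenceTracker:
    def __init__(self, num_regs=32):
        self.producer = [None] * num_regs      # reg -> slot of latest producer
        self.dep = [0] * WINDOW                # slot -> dependence bit vector
        self.next_slot = 0

    def issue(self, dst_reg, src_regs):
        """Allocate a slot and compute the instruction's dependence chain."""
        slot, self.next_slot = self.next_slot, (self.next_slot + 1) % WINDOW
        vec = 0
        for r in src_regs:
            p = self.producer[r]
            if p is not None:
                vec |= (1 << p) | self.dep[p]  # the producer plus its own chain
        self.dep[slot] = vec
        if dst_reg is not None:
            self.producer[dst_reg] = slot
        return slot, vec                       # available cycle by cycle

# A branch's vector identifies the instructions (and hence registers) its
# outcome depends on: the information an ARVI-style predictor could hash
# together with the corresponding register values.
t = DependenceTracker()
t.issue(dst_reg=1, src_regs=[2, 3])              # r1 = f(r2, r3)
t.issue(dst_reg=4, src_regs=[1])                 # r4 = g(r1)
slot, vec = t.issue(dst_reg=None, src_regs=[4])  # branch on r4: vec marks both
```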
On Statistical Data Compression
The ongoing evolution of hardware leads to a steady increase in the amount
of data that is processed, transmitted and stored. Data compression is an essential tool to keep the amount of data manageable. In terms of empirical performance, statistical data compression algorithms rank among the best. A statistical data compressor processes an input text letter by letter and compresses in two stages: modeling and coding. During modeling, a model estimates a probability distribution for the next letter based on the past input. During coding, an encoder translates this distribution and the next letter into a codeword. Decoding reverses this process. The model is exchangeable, and its choice determines the statistical data compression algorithm. All major models use a mixer to combine multiple simple probability estimators, so-called elementary models.
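As a concrete illustration of the two stages, here is a minimal sketch in which an adaptive frequency model plays the modeling role and the coding stage is abstracted by the ideal code length -log2 p(x); a real compressor would instead feed each distribution to an arithmetic coder. The alphabet and Laplace-style initialization are illustrative assumptions:

```python
import math
from collections import Counter

def ideal_code_length(text, alphabet):
    """Process `text` letter by letter with a simple adaptive frequency
    model and return the total ideal code length in bits, i.e. the sum
    of -log2 p(letter) that an arithmetic coder would nearly achieve."""
    counts = Counter({a: 1 for a in alphabet})     # Laplace smoothing
    total_bits = 0.0
    for letter in text:
        p = counts[letter] / sum(counts.values())  # modeling: estimate p
        total_bits += -math.log2(p)                # coding: ideal code length
        counts[letter] += 1                        # model update
    return total_bits

print(ideal_code_length("abracadabra", "abcdr"))   # about 27 bits for 11 letters
```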
In statistical data compression there is a gap between theory and practice. On the one hand, theoreticians put emphasis on models that allow for a mathematical analysis, but neglect running time, space consumption, and empirical improvements. On the other hand, practitioners focus on the very reverse. The family of PAQ statistical compressors demonstrated the superiority of the practitioner's approach in terms of empirical compression. With this thesis we attempt to bridge the aforementioned gap between theory and practice, with special focus on PAQ. To achieve this, we apply the theoretician's tools to the practitioner's approaches: we provide a code length analysis for several practical modeling and mixing techniques. The analysis covers modeling by relative frequencies with frequency discount and modeling by exponential smoothing of probabilities. For mixing, we consider linear and geometrically weighted averaging of probabilities, with Online Gradient Descent for weight estimation. Our results show that the models and mixers we consider perform nearly as well as idealized competitors. Experiments support our analysis. Moreover, our results add a theoretical basis to modeling and mixing from PAQ and generalize methods from PAQ.
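A sketch of two of the analyzed building blocks, an elementary model based on exponential smoothing of probabilities and a linear mixer whose weights are adapted by Online Gradient Descent on the code length loss; the learning rates and the clip-and-renormalize step standing in for a proper projection are simplifying assumptions:

```python
class SmoothedEstimator:
    """Elementary model for a binary alphabet: exponential smoothing,
    p <- (1 - a) * p + a * bit, with smoothing rate a in (0, 1)."""
    def __init__(self, a=0.02):
        self.a, self.p = a, 0.5

    def predict(self):
        return self.p                          # probability of a 1-bit

    def update(self, bit):
        self.p = (1 - self.a) * self.p + self.a * bit

class LinearMixerOGD:
    """Mixer: linearly weighted averaging of model probabilities, with
    weights adapted by Online Gradient Descent on the code length loss
    -ln p(bit)."""
    def __init__(self, n, lr=0.05):
        self.w = [1.0 / n] * n
        self.lr = lr

    def mix(self, ps):
        return sum(w * p for w, p in zip(self.w, ps))

    def update(self, ps, bit):
        p_obs = self.mix(ps) if bit else 1.0 - self.mix(ps)
        for i, pi in enumerate(ps):
            # d/dw_i of -ln p(bit): favors models that rated the bit highly.
            grad = -(pi if bit else 1.0 - pi) / max(p_obs, 1e-6)
            self.w[i] -= self.lr * grad
        s = sum(max(w, 0.0) for w in self.w) or 1.0
        self.w = [max(w, 0.0) / s for w in self.w]

models = [SmoothedEstimator(0.02), SmoothedEstimator(0.2)]
mixer = LinearMixerOGD(n=2)
for bit in [1, 1, 0, 1] * 100:                 # a stationary toy source
    ps = [m.predict() for m in models]
    p = mixer.mix(ps)                          # mixed probability of a 1-bit
    mixer.update(ps, bit)
    for m in models:
        m.update(bit)
```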
Ultimately, we propose and analyze Context Tree Mixing (CTM), a generalization of Context Tree Weighting (CTW). We couple CTM with modeling and mixing techniques from PAQ and obtain a theoretically sound compression algorithm that improves over CTW, as shown in experiments.
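For reference, the node recursion that CTM generalizes is CTW's fixed half-and-half mixture; a one-line sketch of that standard recursion (per the abstract, CTM opens this fixed averaging up to the adaptive mixing techniques above):

```python
# Standard CTW recursion at a context-tree node: mix the node's own
# estimate (e.g. a KT estimator) with the product of its children's
# weighted probabilities, using fixed weights of 1/2 each.
def ctw_node_prob(p_local, p_children_product):
    return 0.5 * p_local + 0.5 * p_children_product
```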
Analysis of Branch Prediction via Data Compression
Branch prediction is an important mechanism in modern microprocessor design. The focus of research in this area has been on designing new branch prediction schemes. In contrast, very few studies address the theoretical basis behind these prediction schemes. Knowing this theoretical basis helps us to evaluate how good a prediction scheme is and how much we can expect to improve its accuracy.
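The abstract gives no construction, but the title's premise can be made concrete: a branch's outcome stream can be fed to a finite-context (Markov) model of the kind used in compressors such as PPM, and predicting the most probable next symbol under that model yields a branch predictor whose achievable accuracy is tied to the stream's compressibility. A minimal order-k sketch, not necessarily the paper's construction; the order and counting scheme are illustrative assumptions:

```python
from collections import defaultdict

class MarkovBranchPredictor:
    """Order-k finite-context model over a branch outcome stream, as
    used in context-based compressors: predict the outcome seen most
    often after the current k-bit history."""
    def __init__(self, k=4):
        self.k = k
        self.history = (0,) * k
        self.counts = defaultdict(lambda: [0, 0])  # context -> [#not-taken, #taken]

    def predict(self):
        not_taken, taken = self.counts[self.history]
        return taken >= not_taken                  # True = predict taken

    def update(self, taken):
        self.counts[self.history][int(taken)] += 1
        self.history = self.history[1:] + (int(taken),)

# A loop branch taken three times, then not taken, repeating:
pred, correct, n = MarkovBranchPredictor(k=4), 0, 0
for outcome in [True, True, True, False] * 100:
    correct += pred.predict() == outcome
    pred.update(outcome)
    n += 1
print(f"accuracy: {correct / n:.2f}")              # near 1.0 once contexts are learned
```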