
    Compcrypt–lightweight ANS-based compression and encryption

    Compression is widely used in Internet communication to save communication time and bandwidth. The asymmetric numeral system (ANS), recently invented by Jarek Duda, offers improved efficiency and close-to-optimal compression, and has been deployed by major IT companies such as Facebook, Google and Apple. Compression by itself does not provide any security (such as confidentiality or authentication of the transmitted data). An obvious solution is to encrypt the compressed bitstream, but this requires two algorithms: one for compression and another for encryption. In this work, we investigate natural properties of ANS that allow authenticated encryption to be incorporated using as little cryptography as possible. We target communication with low security requirements, such as transmission of data from IoT devices/sensors. In particular, we propose three solutions for joint compression and encryption (compcrypt). All of them use a pseudorandom bit generator (PRBG) based on lightweight stream ciphers. The first solution applies state jumps controlled by the PRBG. The second employs two ANS algorithms, with compression switching between the two; the switch is controlled by a PRBG bit. The third compcrypt modifies the ANS encoding function depending on PRBG bits. The security and efficiency of the proposed compcrypt algorithms are evaluated.
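    To make the second variant concrete, the toy sketch below pairs an unrenormalized range-ANS (rANS) coder with a per-symbol keystream bit that selects one of two symbol layouts. Everything here is illustrative rather than the paper's construction: a SHA-256 counter stream stands in for the lightweight stream cipher the paper assumes, the alphabet and frequencies are arbitrary, and renormalization, tabled-ANS details and the authentication tag are omitted.

```python
# Toy sketch: rANS encoding where a keystream bit picks one of two symbol layouts.
import hashlib

SYMBOLS = "abcd"
FREQ = {"a": 4, "b": 3, "c": 2, "d": 1}   # arbitrary toy frequencies
M = sum(FREQ.values())                     # total frequency = 10

def build_table(order):
    """Cumulative-frequency table for a given symbol ordering."""
    cum, acc = {}, 0
    for s in order:
        cum[s] = acc
        acc += FREQ[s]
    return cum

TABLES = [build_table("abcd"), build_table("dcba")]  # two layouts to switch between

def keystream_bits(key: bytes, n: int):
    """Stand-in PRBG: one bit per position from SHA-256 in counter mode."""
    bits, ctr = [], 0
    while len(bits) < n:
        block = hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in block for i in range(8))
        ctr += 1
    return bits[:n]

def encode(msg, key):
    bits = keystream_bits(key, len(msg))
    x = 1
    for i in reversed(range(len(msg))):        # rANS encodes the last symbol first
        s, cum = msg[i], TABLES[bits[i]]
        x = (x // FREQ[s]) * M + cum[s] + x % FREQ[s]
    return x

def decode(x, n, key):
    bits = keystream_bits(key, n)
    out = []
    for i in range(n):
        cum = TABLES[bits[i]]
        slot = x % M
        s = next(t for t in SYMBOLS if cum[t] <= slot < cum[t] + FREQ[t])
        x = FREQ[s] * (x // M) + slot - cum[s]
        out.append(s)
    return "".join(out)

msg = "abacabadcb"
assert decode(encode(msg, b"secret key"), len(msg), b"secret key") == msg
```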

    GST: GPU-decodable supercompressed textures

    Modern GPUs supporting compressed textures allow interactive application developers to save scarce GPU resources such as VRAM and bandwidth. Compressed textures use fixed compression ratios, and their lossy representations are of significantly poorer quality than traditional image compression formats such as JPEG. We present a new method in the class of supercompressed textures that provides an additional layer of compression on top of already compressed textures. Our texture representation is designed for endpoint compressed formats such as DXT and PVRTC and for decoding on commodity GPUs. We apply our algorithm to commonly used formats by separating their representation into two parts that are processed independently and then entropy encoded. Our method preserves the CPU-GPU bandwidth during the decoding phase and exploits the parallelism of GPUs to provide up to 3X faster decode compared to prior texture supercompression algorithms. Along with the gains in decoding speed, our method maintains both the compressed size and quality of current state-of-the-art texture representations.
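    The core split-then-entropy-code idea can be sketched for DXT1, where each 8-byte block holds two RGB565 endpoints followed by sixteen 2-bit palette indices; grouping the endpoint bytes and the index bytes into separate planes puts similar statistics together before entropy coding. In this sketch zlib stands in for the entropy coder, and the actual GST representation and its GPU decoder are considerably more involved.

```python
# Minimal sketch: split DXT1 blocks into endpoint and index planes, then
# entropy-code each plane separately (zlib as a stand-in entropy coder).
import zlib

def split_planes(dxt1_blocks: bytes):
    """Separate endpoint bytes from index bytes across all 8-byte blocks."""
    assert len(dxt1_blocks) % 8 == 0
    endpoints, indices = bytearray(), bytearray()
    for off in range(0, len(dxt1_blocks), 8):
        endpoints += dxt1_blocks[off:off + 4]     # color0, color1 (RGB565 each)
        indices += dxt1_blocks[off + 4:off + 8]   # sixteen 2-bit indices
    return bytes(endpoints), bytes(indices)

def supercompress(dxt1_blocks: bytes):
    endpoints, indices = split_planes(dxt1_blocks)
    return zlib.compress(endpoints), zlib.compress(indices)

def decompress(comp_endpoints: bytes, comp_indices: bytes) -> bytes:
    """Re-interleave the planes back into GPU-ready DXT1 blocks."""
    endpoints = zlib.decompress(comp_endpoints)
    indices = zlib.decompress(comp_indices)
    out = bytearray()
    for i in range(0, len(endpoints), 4):
        out += endpoints[i:i + 4] + indices[i:i + 4]
    return bytes(out)
```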

    Improved Encoding for Compressed Textures

    For the past few decades, graphics hardware has supported mapping a two-dimensional image, or texture, onto a three-dimensional surface to add detail during rendering. The complexity of modern applications using interactive graphics hardware has created an explosion in the amount of data needed to represent these images. To alleviate the amount of memory required to store and transmit textures, graphics hardware manufacturers have introduced hardware decompression units into the texturing pipeline. Textures may now be stored compressed in memory and decoded at run time when the pixel data is accessed. To encode images for use with these hardware features, many compression algorithms are run offline as a preprocessing step, often the most time-consuming step in the asset preparation pipeline. This research presents several techniques to quickly serve compressed texture data. With the goal of interactive compression rates while maintaining compression quality, three algorithms are presented in the class of endpoint compression formats. The first uses intensity dilation to estimate compression parameters for low-frequency signal-modulated compressed textures and offers up to a 3X improvement in compression speed. The second, FasTC, shows that by estimating the final compression parameters, partition-based formats can choose an approximate partitioning and offer orders-of-magnitude faster encoding speed. The third, SegTC, shows additional improvement over selecting a partitioning by using a global segmentation to find the boundaries between image features. This segmentation offers an additional 2X improvement over FasTC while maintaining similar compressed quality. Also presented is a case study in using texture compression to benefit two-dimensional concave path rendering. Compressing the pixel-coverage textures used for compositing yields both an increase in rendering speed and a decrease in storage overhead. Additionally, an algorithm is presented that uses a single layer of indirection to adaptively select the compressed block size for each texture, giving a 2X increase in compression ratio for textures of mixed detail. Finally, a texture storage representation that is decoded at run time on the GPU is presented. The decoded texture is still compressed for graphics hardware but uses 2X fewer bytes for storage and network bandwidth.
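    For readers unfamiliar with the endpoint compression formats discussed throughout, the sketch below encodes a single 4x4 block in a BC1/DXT1-style layout: two RGB565 endpoints plus sixteen 2-bit indices into the palette they span. The min/max-luminance endpoint choice is a deliberately naive placeholder; the techniques above (intensity dilation, FasTC, SegTC) are precisely about finding good endpoints and partitions quickly.

```python
# Bare-bones BC1/DXT1-style block encoder: 16 RGB pixels -> 8 bytes.
import struct

def to_rgb565(r, g, b):
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def from_rgb565(c):
    return ((c >> 11) << 3, ((c >> 5) & 0x3F) << 2, (c & 0x1F) << 3)

def encode_bc1_block(pixels):
    """pixels: list of 16 (r, g, b) tuples in row-major order -> 8 packed bytes."""
    lum = lambda p: 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]
    lo, hi = min(pixels, key=lum), max(pixels, key=lum)
    c0, c1 = to_rgb565(*hi), to_rgb565(*lo)
    if c0 < c1:                       # c0 > c1 selects the 4-colour mode
        c0, c1 = c1, c0               # (a solid block degenerates to c0 == c1)
    e0, e1 = from_rgb565(c0), from_rgb565(c1)
    palette = [e0, e1,
               tuple((2 * a + b) // 3 for a, b in zip(e0, e1)),
               tuple((a + 2 * b) // 3 for a, b in zip(e0, e1))]
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    bits = 0
    for i, p in enumerate(pixels):    # 2-bit index per pixel, pixel 0 in the low bits
        best = min(range(4), key=lambda k: dist(p, palette[k]))
        bits |= best << (2 * i)
    return struct.pack("<HHI", c0, c1, bits)
```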

    On Statistical Data Compression

    The ongoing evolution of hardware leads to a steady increase in the amount of data that is processed, transmitted and stored. Data compression is an essential tool to keep the amount of data manageable. In terms of empirical performance, statistical data compression algorithms rank among the best. A statistical data compressor processes an input text letter by letter and compresses in two stages: modeling and coding. During modeling, a model estimates a probability distribution on the next letter based on the past input. During coding, an encoder translates this distribution and the next letter into a codeword. Decoding reverts this process. The model is exchangeable and its choice determines a statistical data compression algorithm. All major models use a mixer to combine multiple simple probability estimators, so-called elementary models. In statistical data compression there is a gap between theory and practice. On the one hand, theoreticians put emphasis on models that allow for a mathematical analysis, but neglect running time, space considerations and empirical improvements; on the other hand, practitioners focus on the very reverse. The family of PAQ statistical compressors demonstrated the superiority of the practitioner's approach in terms of empirical compression. With this thesis we attempt to bridge the aforementioned gap between theory and practice, with special focus on PAQ. To achieve this we apply the theoretician's tools to the practitioner's approaches: we provide a code length analysis for several practical modeling and mixing techniques. The analysis covers modeling by relative frequencies with frequency discount and modeling by exponential smoothing of probabilities. For mixing we consider linearly and geometrically weighted averaging of probabilities with Online Gradient Descent for weight estimation. Our results show that the models and mixers we consider perform nearly as well as idealized competitors. Experiments support our analysis. Moreover, our results add a theoretical basis to modeling and mixing from PAQ and generalize methods from PAQ. Ultimately, we propose and analyze Context Tree Mixing (CTM), a generalization of Context Tree Weighting (CTW). We couple CTM with modeling and mixing techniques from PAQ and obtain a theoretically sound compression algorithm that improves over CTW, as shown in experiments.
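    A compact illustration of this modeling/mixing pipeline for a binary alphabet is sketched below: two elementary models (relative frequencies with a count discount, and exponential smoothing of probabilities) are combined by geometric (logistic) mixing, with the mixture weights adapted by Online Gradient Descent on the code length. The constants (discount limit, smoothing rate, learning rate) are arbitrary illustrative choices rather than the thesis's parameters, and arithmetic coding itself is replaced by simply accumulating the ideal code length.

```python
# Sketch: two elementary models, geometric (logistic) mixing, OGD weight updates.
import math

def stretch(p):                       # logit
    return math.log(p / (1.0 - p))

def squash(x):                        # logistic sigmoid
    return 1.0 / (1.0 + math.exp(-x))

class FrequencyModel:                 # relative frequencies with discount
    def __init__(self, limit=64):
        self.n = [1.0, 1.0]           # add-one style initial counts
        self.limit = limit
    def predict(self):                # P(next bit = 1)
        return self.n[1] / (self.n[0] + self.n[1])
    def update(self, bit):
        self.n[bit] += 1.0
        if self.n[0] + self.n[1] > self.limit:   # age old statistics
            self.n = [c / 2.0 for c in self.n]

class SmoothingModel:                 # exponential smoothing of probabilities
    def __init__(self, rate=0.05):
        self.p, self.rate = 0.5, rate
    def predict(self):
        return self.p
    def update(self, bit):
        self.p += self.rate * (bit - self.p)
        self.p = min(max(self.p, 1e-4), 1 - 1e-4)

def mixed_code_length(bits, lr=0.02):
    """Simulate compression: return the total ideal code length in bits."""
    models = [FrequencyModel(), SmoothingModel()]
    w = [0.0, 0.0]
    total = 0.0
    for bit in bits:
        x = [stretch(m.predict()) for m in models]
        p1 = squash(sum(wi * xi for wi, xi in zip(w, x)))
        p1 = min(max(p1, 1e-6), 1 - 1e-6)
        total += -math.log2(p1 if bit else 1.0 - p1)
        # gradient step on the code length of the observed bit
        w = [wi + lr * (bit - p1) * xi for wi, xi in zip(w, x)]
        for m in models:
            m.update(bit)
    return total

data = [1, 1, 0, 1, 1, 1, 0, 1] * 50
print(round(mixed_code_length(data), 1), "bits for", len(data), "input bits")
```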