
    Source and channel coding using Fountain codes

    The invention of Fountain codes is a major advance in the field of error-correcting codes. The goal of this work is to study and develop algorithms for source and channel coding using a family of Fountain codes known as Raptor codes. From an asymptotic point of view, the best currently known sum-product decoding algorithm for non-binary alphabets has a complexity too high for practical use. For binary channels, sum-product decoding algorithms have been extensively studied and are known to perform well. In the first part of this work, we develop a decoding algorithm for binary codes on non-binary channels based on a combination of sum-product and maximum-likelihood decoding. We apply this algorithm to Raptor codes on both symmetric and non-symmetric channels. For symmetric channels and finite-length blocks, our algorithm gives the best performance in terms of complexity and symbol error rate. In the second part, we examine the performance of Raptor codes under sum-product decoding when transmission takes place over piecewise stationary memoryless channels and over noisy channels with memory. We develop joint estimation and decoding algorithms that use expectation-maximization to estimate the noise and the sum-product algorithm to correct errors. We also develop a hard-decision algorithm for Raptor codes on piecewise stationary memoryless channels, and we generalize our joint LT estimation-decoding algorithms to Markov-modulated channels. In the third part of this work, we develop compression algorithms using Raptor codes. More specifically, we introduce a lossless text compression algorithm that obtains competitive results compared with existing classical approaches, and we propose distributed source coding algorithms based on the paradigm of Slepian and Wolf.
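    As a point of reference for the fountain-coding framework described above, here is a minimal Python sketch of LT-style encoding with a peeling (erasure) decoder. It is an illustration only: the toy degree distribution, block sizes and function names are assumptions for the example, not the Raptor construction, the non-binary sum-product decoder or the joint EM estimation-decoding developed in the thesis.

        import random

        def lt_encode(source_blocks, num_output, seed=0):
            """Produce LT-style output blocks: each output is the XOR of a random
            subset of source blocks, with the subset size drawn from a toy degree
            distribution (a stand-in for the robust soliton distribution)."""
            rng = random.Random(seed)
            k = len(source_blocks)
            encoded = []
            for _ in range(num_output):
                degree = rng.choice([1, 2, 2, 3, 4])
                neighbours = set(rng.sample(range(k), degree))
                value = 0
                for i in neighbours:
                    value ^= source_blocks[i]
                encoded.append((neighbours, value))
            return encoded

        def peeling_decode(encoded, k):
            """Erasure-style peeling decoder: repeatedly resolve outputs that depend
            on a single unknown source block, then substitute each recovered value
            into every remaining output equation."""
            decoded = [None] * k
            blocks = [(set(n), v) for n, v in encoded]
            progress = True
            while progress:
                progress = False
                for n, v in blocks:
                    if len(n) == 1:
                        i = next(iter(n))
                        if decoded[i] is None:
                            decoded[i] = v
                            progress = True
                # remove every recovered block from the remaining equations
                new_blocks = []
                for n, v in blocks:
                    known = {i for i in n if decoded[i] is not None}
                    for i in known:
                        v ^= decoded[i]
                    new_blocks.append((n - known, v))
                blocks = new_blocks
            return decoded

        # usage: encode 10 random byte-sized symbols with some reception overhead
        data = [random.randrange(256) for _ in range(10)]
        received = lt_encode(data, 25)
        recovered = peeling_decode(received, len(data))
        print(sum(x is not None for x in recovered), "of", len(data), "symbols recovered")

    With enough received symbols relative to the source size, the peeling process typically resolves all source symbols; Raptor codes add a precode so that a small constant overhead suffices.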

    Universal homophonic coding

    Redundancy in plaintext is a fertile source of attack in any encryption system. Compression before encryption reduces the redundancy of the plaintext, but this does not make a cipher more secure: the ciphertext is still susceptible to known-plaintext and chosen-plaintext attacks. The aim of homophonic coding is to convert a plaintext source into a random sequence by randomly mapping each source symbol into one of a set of homophones. Each homophone is then encoded by a source coder, after which it can be encrypted with a cryptographic system. Homophonic coding falls into the class of unconditionally secure ciphers. Its main advantage over pure source coding is that it provides security against both known-plaintext and chosen-plaintext attacks, whereas source coding merely protects against a ciphertext-only attack. The aim of this dissertation is to investigate the implementation of an adaptive homophonic coder based on an arithmetic coder. This type of homophonic coding is termed universal, as it does not depend on the source statistics. Computer Science. M.Sc. (Computer Science)
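    To illustrate the homophonic substitution step described above, the following Python sketch splits each symbol probability into dyadic parts and maps every occurrence of a symbol to one of its homophones at random; the homophone stream is what would then be fed to the source (arithmetic) coder and the cipher. The fixed example probabilities and the function names are assumptions for the sketch; the dissertation's universal coder is adaptive and does not rely on known source statistics.

        import random
        from fractions import Fraction

        def dyadic_split(p, max_bits=16):
            """Greedy binary expansion of a symbol probability into parts 2**-k.
            Each part becomes one homophone; an ideal coder then spends exactly
            k bits on a homophone of weight 2**-k, so the coded stream looks uniform."""
            parts, remainder, k = [], Fraction(p), 1
            while remainder > 0 and k <= max_bits:
                piece = Fraction(1, 2 ** k)
                if piece <= remainder:
                    parts.append(piece)
                    remainder -= piece
                k += 1
            return parts

        def build_homophone_table(source_probs):
            """Map each source symbol to a list of (homophone_id, weight) pairs."""
            table, next_id = {}, 0
            for symbol, p in source_probs.items():
                weights = dyadic_split(p)
                table[symbol] = [(next_id + i, w) for i, w in enumerate(weights)]
                next_id += len(weights)
            return table

        def homophonic_encode(text, table, rng=random):
            """Replace every plaintext symbol by one of its homophones, chosen at
            random in proportion to the homophone weights. The resulting sequence
            would then be compressed by the source coder and encrypted."""
            out = []
            for symbol in text:
                ids = [h for h, _ in table[symbol]]
                weights = [float(w) for _, w in table[symbol]]
                out.append(rng.choices(ids, weights=weights, k=1)[0])
            return out

        # usage with an assumed two-symbol source: 'a' splits into homophones of
        # weight 1/2 and 1/4, while 'b' keeps a single homophone of weight 1/4
        table = build_homophone_table({'a': Fraction(3, 4), 'b': Fraction(1, 4)})
        print(table)
        print(homophonic_encode("aabab", table))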

    Context-based compression algorithms for text and image data.

    Wong Ling. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 80-85).
    Contents:
    1. Introduction: Motivation; Original Contributions; Thesis Structure
    2. Background: Information Theory; Early Compression (Huffman Code, Tunstall Code, Arithmetic Code); Modern Techniques for Compression (Statistical Modeling: Context Modeling, State-Based Modeling; Dictionary-Based Compression: LZ-compression; Other Compression Techniques: Block Sorting, Context Tree Weighting)
    3. Symbol Remapping: Review of Block Sorting (Forward Transformation, Inverse Transformation); Ordering Method; Discussions
    4. Content Prediction: Prediction and Ranking Schemes (Content Predictor, Ranking Technique); Review of Context Sorting (Context Sorting Basis); General Framework of Content Prediction (A Baseline Version, Context Length Merge); Discussions
    5. Bounded-Length Block Sorting: Block Sorting with Bounded Context Length (Forward Transformation, Reverse Transformation); Locally Adaptive Entropy Coding; Discussion
    6. Context Coding for Image Data: Digital Images (Redundancy); Model of a Compression System (Representation, Quantization, Lossless Coding); The Embedded Zerotree Wavelet Coding (Simple Zerotree-like Implementation; Analysis of Zerotree Coding: Linkage between Coefficients, Design of Uniform Threshold Quantizer with Dead Zone); Extensions on Wavelet Coding (Coefficients Scanning); Discussions
    7. Conclusions: Future Research
    Appendices: A. Lossless Compression Results; B. Image Compression Standards; C. Human Visual System Characteristics; D. Lossy Compression Results
    Compression Gallery: Context-based Wavelet Coding; RD-OPT-based JPEG Compression; SPIHT Wavelet Compression
    References
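    Since block sorting (the Burrows-Wheeler transform) recurs throughout the thesis outline above, a minimal Python sketch of the forward and inverse transforms may help as a reference point. This is the plain, naive whole-block transform; it is not the bounded-context variant of Chapter 5, and the function names are illustrative.

        def bwt_forward(block, terminator="\0"):
            """Naive block sorting (Burrows-Wheeler) forward transform: append a
            unique sentinel, sort all rotations of the block and emit the last column."""
            block = block + terminator
            rotations = sorted(block[i:] + block[:i] for i in range(len(block)))
            return "".join(rotation[-1] for rotation in rotations)

        def bwt_inverse(last_column, terminator="\0"):
            """Inverse transform: repeatedly prepend the last column to the table of
            partial rotations and re-sort, rebuilding the sorted rotation table."""
            table = [""] * len(last_column)
            for _ in range(len(last_column)):
                table = sorted(last_column[i] + table[i] for i in range(len(last_column)))
            original = next(row for row in table if row.endswith(terminator))
            return original[:-1]              # drop the sentinel

        # usage
        transformed = bwt_forward("banana")
        print(repr(transformed))
        print(bwt_inverse(transformed) == "banana")

    After the forward transform, symbols that occurred in similar contexts are grouped together, which is what makes a locally adaptive entropy coder effective on the transformed block.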

    Design of large polyphase filters in the Quadratic Residue Number System


    Temperature aware power optimization for multicore floating-point units


    MIMO transmission for 4G wireless communications

    Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    The determination of non-stationary random vibration response characteristics by numerical simulation techniques

    The technique of sample averaging is considered for application to the non-stationary vibration problem associated with road vehicle ride. Time-history realisations of the vehicle response are achieved by a discretised lumped-parameter model idealisation simulated on a digital computer. Sets of realisation histories are collated to obtain the overall statistical response characteristics. The road vehicle ride problem is the result of random road roughness exciting the vehicle as it traverses the surface. This dynamic excitation may be considered a stationary function of time, provided the vehicle traverse velocity does not vary; under variable-velocity conditions the excitation is a non-stationary function of time. It is the solution of this non-stationary accelerating-vehicle problem which is the subject of this study. An alternative method of solution for the non-stationary vehicle problem has already been achieved. This alternative, like sample averaging, places heavy emphasis on the use of numerical methods on a digital computer for the evaluation of results. Unlike sample averaging, it is not normally applicable to road vehicles which possess significant non-linear dynamic characteristics in their suspension configuration. Ultimately, the objective of this thesis is to make a comparative appraisal of the viability of sample averaging as a general means of determining the non-stationary response characteristics of road vehicles. To permit full justification of the technique and thereby ensure flexibility of application, it is imperative that all methods of digital simulation are scrutinised prior to implementation. In essence the simulator consists of two distinct numerical modules: one is concerned with the generation of a large sample of statistically independent road surface profile realisations, while the other analyses the vehicle response. The additional problems encountered when interfacing the two modules are also fully investigated. Upon implementation, the simulator proves itself a flexible and viable tool for the solution of the non-stationary problem while providing some surprising new observations.
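    The simulator structure described above (a road-profile generator feeding a lumped-parameter vehicle model, with sample averaging over many realisations) can be sketched as follows in Python. All parameter values, the first-order road-roughness filter and the quarter-car idealisation are assumptions for the illustration, not the thesis's model; the point is only the ensemble-averaging structure for an accelerating traverse.

        import numpy as np

        # illustrative parameters (assumed for the sketch, not taken from the thesis)
        M_SPRUNG, M_UNSPRUNG = 300.0, 40.0            # body and wheel masses, kg
        K_SUSP, C_SUSP, K_TYRE = 2.0e4, 1.5e3, 1.8e5  # N/m, N s/m, N/m
        DT, DURATION, N_RUNS = 1.0e-3, 5.0, 100       # time step, record length, realisations

        def road_profile(times, velocity, rng):
            """One road-roughness realisation: white noise shaped by a first-order
            filter whose bandwidth depends on the (varying) traverse velocity, so an
            accelerating run produces a non-stationary excitation."""
            z = np.zeros_like(times)
            for i in range(1, len(times)):
                cutoff = 0.1 * velocity(times[i])     # toy velocity-dependent cutoff
                z[i] = z[i - 1] + DT * (-cutoff * z[i - 1] + rng.standard_normal())
            return z

        def quarter_car_response(road, times):
            """Explicit-Euler integration of a two-degree-of-freedom lumped-parameter
            (quarter-car) model; returns the sprung-mass acceleration history."""
            xs = vs = xu = vu = 0.0                   # sprung/unsprung displacement and velocity
            accel = np.zeros_like(times)
            for i, zr in enumerate(road):
                f_susp = K_SUSP * (xu - xs) + C_SUSP * (vu - vs)
                f_tyre = K_TYRE * (zr - xu)
                a_s = f_susp / M_SPRUNG
                a_u = (f_tyre - f_susp) / M_UNSPRUNG
                xs, vs = xs + DT * vs, vs + DT * a_s
                xu, vu = xu + DT * vu, vu + DT * a_u
                accel[i] = a_s
            return accel

        times = np.arange(0.0, DURATION, DT)
        velocity = lambda t: 10.0 + 2.0 * t           # accelerating traverse, m/s
        rng = np.random.default_rng(1)

        # module 1 (profile generation) feeds module 2 (response analysis); the set of
        # realisation histories is then sample-averaged to give time-varying statistics
        runs = [quarter_car_response(road_profile(times, velocity, rng), times)
                for _ in range(N_RUNS)]
        ensemble_mean_square = np.mean(np.square(runs), axis=0)
        print(ensemble_mean_square[::1000])           # time-varying ensemble statistics show the non-stationarity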