5 research outputs found
A Parallel Two-Pass MDL Context Tree Algorithm for Universal Source Coding
We present a novel lossless universal source coding algorithm that uses
parallel computational units to increase the throughput. The length-N input
sequence is partitioned into B blocks. Processing each block independently of
the other blocks can accelerate the computation by a factor of B, but
degrades the compression quality. Instead, our approach is to first estimate
the minimum description length (MDL) source underlying the entire input, and
then encode each of the B blocks in parallel based on the MDL source. With
this two-pass approach, the compression loss incurred by using more parallel
units is insignificant. Our algorithm is work-efficient, i.e., its
computational complexity is O(N). Its redundancy is approximately B log(N/B)
bits above Rissanen's lower bound on universal coding performance,
with respect to any tree source whose maximal depth is at most log(N/4B).
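The two-pass structure described above can be sketched in a few lines. The sketch below is our own toy illustration, not the papers' implementation: a fixed-depth context model with Laplace smoothing stands in for the MDL context tree, and all function names are ours.

```python
import math
from collections import Counter

def fit_context_model(x, depth):
    """Pass 1: estimate one shared fixed-depth context model over the WHOLE
    input (a toy stand-in for the MDL context tree estimation)."""
    counts, totals = Counter(), Counter()   # (context, symbol) and context tallies
    for i in range(depth, len(x)):
        ctx = tuple(x[i - depth:i])
        counts[(ctx, x[i])] += 1
        totals[ctx] += 1
    return counts, totals

def block_code_length(block, history, model, depth, alphabet=2):
    """Pass 2: ideal code length (-log2 probability) of one block under the
    shared model. Each call depends only on the shared model, so the blocks
    can be encoded on independent parallel units."""
    counts, totals = model
    x = history + block   # `history`: a few symbols of context at the block boundary
    bits = 0.0
    for i in range(len(history), len(x)):
        ctx = tuple(x[i - depth:i])
        p = (counts[(ctx, x[i])] + 1) / (totals[ctx] + alphabet)  # Laplace smoothing
        bits += -math.log2(p)
    return bits

# Toy usage: a strongly structured binary input split into 4 blocks.
x = [0, 0, 1] * 400
depth, B = 2, 4
model = fit_context_model(x, depth)
n = len(x) // B
blocks = [x[i * n:(i + 1) * n] for i in range(B)]
total_bits = sum(
    block_code_length(blocks[b], x[max(0, b * n - depth):b * n], model, depth)
    for b in range(B)     # independent per-block work: parallelizable
)
```

Because the shared model is estimated once over the whole input, the per-block code lengths add up to far less than the raw input length, while the per-block work remains embarrassingly parallel.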
Discrete-Time Chaotic-Map Truly Random Number Generators: Design, Implementation, and Variability Analysis of the Zigzag Map
In this paper, we introduce a novel discrete chaotic map, named the zigzag
map, that exhibits excellent chaotic behavior and can be utilized in Truly
Random Number Generators (TRNGs). We comprehensively investigate the map and
explore its critical chaotic characteristics and parameters. We further present
two circuit implementations for the zigzag map based on the switched current
technique as well as the current-mode affine interpolation of the breakpoints.
In practice, implementation variations can deteriorate the quality of the
output sequence as a result of variation of the chaotic map parameters. In
order to quantify the impact of variations on the map performance, we model the
variations using a combination of theoretical analysis and Monte-Carlo
simulations on the circuits. We demonstrate that even in the presence of the
map variations, a TRNG based on the zigzag map passes all of the NIST 800-22
statistical randomness tests using simple post-processing of the output data.
Comment: To appear in Analog Integrated Circuits and Signal Processing (ALOG)
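The TRNG pipeline sketched in this abstract (iterate a chaotic map, threshold the state, apply simple post-processing) can be illustrated as follows. Since the zigzag map's exact definition appears only in the paper, the classic tent map stands in for it here, and a von Neumann extractor stands in for the unspecified post-processing; the whole sketch is deterministic and illustrative only, as a real TRNG draws its entropy from circuit noise and implementation variations.

```python
def tent_map(x, mu=1.9999):
    """Piecewise-linear chaotic map on [0, 1] (a stand-in for the zigzag map)."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def raw_bits(seed, n, skip=8):
    """Generate n raw bits by iterating the map and thresholding the state;
    `skip` extra iterations per bit help decorrelate successive outputs."""
    x, bits = seed, []
    for _ in range(n):
        for _ in range(skip):
            x = tent_map(x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def von_neumann(bits):
    """Simple debiasing post-processing: emit the first bit of each unequal
    pair, discard equal pairs."""
    return [b0 for b0, b1 in zip(bits[0::2], bits[1::2]) if b0 != b1]

raw = raw_bits(seed=0.1234567, n=4000)
clean = von_neumann(raw)
```

The extractor trades throughput (at most half the raw rate) for reduced bias, which is the usual role of the light post-processing step mentioned in the abstract.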
A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm
Computing problems that handle large amounts of data necessitate the use of
lossless data compression for efficient storage and transmission. We present a
novel lossless universal data compression algorithm that uses parallel
computational units to increase the throughput. The length-N input sequence
is partitioned into B blocks. Processing each block independently of the
other blocks can accelerate the computation by a factor of B, but degrades
the compression quality. Instead, our approach is to first estimate the minimum
description length (MDL) context tree source underlying the entire input, and
then encode each of the B blocks in parallel based on the MDL source. With
this two-pass approach, the compression loss incurred by using more parallel
units is insignificant. Our algorithm is work-efficient, i.e., its
computational complexity is O(N). Its redundancy is approximately B log(N/B)
bits above Rissanen's lower bound on universal compression
performance, with respect to any context tree source whose maximal depth is at
most log(N/4B). We improve the compression by using different quantizers for
states of the context tree based on the number of symbols corresponding to
those states. Numerical results from a prototype implementation suggest that
our algorithm offers a better trade-off between compression and throughput than
competing universal data compression algorithms.
Comment: Accepted to Journal of Selected Topics in Signal Processing special
issue on Signal Processing for Big Data (expected publication date June
2015). 10 pages double column, 6 figures, and 2 tables. arXiv admin note:
substantial text overlap with arXiv:1405.6322. Version: Mar 2015: Corrected a
typo.
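To see why the compression loss from parallelism is described as insignificant, the redundancy scaling can be tabulated directly. The sketch below assumes the redundancy grows as approximately B * log2(N/B) bits, our reading of the papers' stated bound; N = 2^20 is our illustrative input length.

```python
import math

def parallel_redundancy_bits(N, B):
    """Approximate extra bits over Rissanen's bound when a length-N input is
    split into B blocks coded against one shared MDL model (assumed scaling)."""
    return B * math.log2(N / B)

N = 2 ** 20                      # a 1 Mbit input, our illustrative choice
for B in (1, 4, 16, 64):
    extra = parallel_redundancy_bits(N, B)
    # the overhead stays a tiny fraction of N even as parallelism grows
    print(B, round(extra, 1), extra / N)
```

Even at B = 64 parallel units the assumed overhead is under a thousand bits on a million-bit input, which matches the abstracts' claim that the two-pass loss is insignificant.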
Results on the optimal memory-assisted universal compression performance for mixture sources
Abstract: In this paper, we consider the compression of a sequence from a mixture of K parametric sources. Each parametric source is represented by a d-dimensional parameter vector that is drawn from Jeffreys' prior. The output of the mixture source is a sequence of length n whose parameter is chosen from one of the K source parameter vectors uniformly at random. We are interested in the scenario in which the encoder and the decoder share common side information of T sequences generated independently by the mixture source (which we refer to as the memory-assisted universal compression problem). We derive the minimum average redundancy of the memory-assisted universal compression of a new random sequence from the mixture source and prove that when for some ε > 0, the side information provided by the previous sequences results in a significant improvement over universal compression without side information, as a function of n, T, and d. On the other hand, as K grows, the impact of the side information becomes negligible. Specifically, when for some ε > 0, optimal memory-assisted universal compression almost surely offers negligible improvement over universal compression without side information.
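The redundancy quantities in this abstract build on classical sequential probability assignment. As a minimal self-contained illustration (a single Bernoulli source, i.e. d = 1 and K = 1, which is our simplified setup rather than the paper's mixture setting), the Krichevsky-Trofimov estimator exhibits the familiar (d/2) log2 n redundancy over the maximum-likelihood code length:

```python
import math, random

def kt_code_length(bits):
    """Ideal code length (bits) of the Krichevsky-Trofimov sequential
    estimator, a standard universal code for a Bernoulli (d = 1) source."""
    n0 = n1 = 0
    total = 0.0
    for b in bits:
        # add-1/2 rule: P(next = 1) = (n1 + 1/2) / (n0 + n1 + 1)
        p = (n1 + 0.5) / (n0 + n1 + 1.0) if b else (n0 + 0.5) / (n0 + n1 + 1.0)
        total += -math.log2(p)
        if b:
            n1 += 1
        else:
            n0 += 1
    return total

random.seed(0)
n, theta = 4096, 0.3
x = [1 if random.random() < theta else 0 for _ in range(n)]

# code length under the maximum-likelihood parameter (empirical frequency)
k = sum(x)
ml_bits = -(k * math.log2(k / n) + (n - k) * math.log2((n - k) / n))
redundancy = kt_code_length(x) - ml_bits   # roughly (1/2) log2 n, about 6 bits
```

The paper's question is how much of this per-sequence redundancy the T side-information sequences can remove when the source is one of K unknown parameter vectors; this sketch only shows the baseline without side information.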