Arithmetic coding revisited
Over the last decade, arithmetic coding has emerged as an important compression tool. It is now the method of choice for adaptive coding on multisymbol alphabets because of its speed,
low storage requirements, and effectiveness of compression. This article describes a new implementation of arithmetic coding that incorporates several improvements over a widely used earlier version by Witten, Neal, and Cleary, which has become a de facto standard. These improvements include fewer multiplicative operations, a greatly extended range of alphabet sizes and symbol probabilities, and the use of low-precision arithmetic, permitting implementation by fast shift/add operations. We also describe a modular structure that separates the coding, modeling, and probability estimation components of a compression system. To motivate the improved coder, we consider the needs of a word-based text compression program. We report a range of experimental results using this and other models. Complete source code is available.
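The interval-narrowing idea behind arithmetic coding can be sketched as follows. This is a didactic version using unbounded-precision rationals, not the fixed-precision shift/add implementation the article describes; the frequency table and message are illustrative:

```python
from fractions import Fraction

def encode(symbols, freqs):
    """Narrow [0, 1) to the subinterval that identifies `symbols`."""
    total = sum(freqs.values())
    cum, acc = {}, 0
    for s in sorted(freqs):          # cumulative frequency table
        cum[s] = acc
        acc += freqs[s]
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        low += width * Fraction(cum[s], total)
        width *= Fraction(freqs[s], total)
    return low, low + width          # any x in [low, low + width) decodes back

def decode(x, n, freqs):
    """Recover n symbols from a point x inside the encoded interval."""
    total = sum(freqs.values())
    out = []
    for _ in range(n):
        acc = 0
        for s in sorted(freqs):
            lo, hi = Fraction(acc, total), Fraction(acc + freqs[s], total)
            if lo <= x < hi:
                out.append(s)
                x = (x - lo) / (hi - lo)   # rescale and continue
                break
            acc += freqs[s]
    return out

freqs = {"a": 3, "b": 1}                 # static model: P(a)=3/4, P(b)=1/4
low, high = encode("abaa", freqs)
decoded = decode((low + high) / 2, 4, freqs)
```

A practical coder replaces the exact rationals with fixed-width integer intervals plus renormalization, which is where the paper's low-precision shift/add arithmetic comes in.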
Variants of SGD for Lipschitz Continuous Loss Functions in Low-Precision Environments
Motivated by neural network training in low-bit floating and fixed-point
environments, this work studies the convergence of variants of SGD with
computational error. For a general stochastic Lipschitz continuous loss
function, a novel convergence result to a Clarke stationary point is presented
under the assumptions that only an approximation of the stochastic gradient can
be computed and that the SGD step itself is computed with error. Different
variants of SGD are then tested empirically in a variety of low-precision
arithmetic environments, achieving improved test-set accuracy over standard
SGD on two image recognition tasks.
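The two error sources the abstract names (an approximate stochastic gradient, and error in computing the step itself) can be modeled by quantizing both quantities to a fixed-point grid. A minimal sketch on a toy one-dimensional stochastic objective; the function names, bit width, and problem are illustrative, not the paper's setup:

```python
import random

FRAC_BITS = 8  # fixed-point fractional bits (an illustrative choice)

def quantize(x, frac_bits=FRAC_BITS):
    # Round to the nearest representable fixed-point value, modeling the
    # computational error introduced by a low-precision environment.
    scale = 1 << frac_bits
    return round(x * scale) / scale

def sgd_low_precision(grad_fn, w0, lr=0.01, steps=1000, seed=0):
    rng = random.Random(seed)
    w = quantize(w0)
    for _ in range(steps):
        g = quantize(grad_fn(w, rng))   # only an approximate stochastic gradient
        w = quantize(w - lr * g)        # error in computing the SGD step itself
    return w

# Toy stochastic objective: minimize E[(w - t)^2] with noisy targets t = 3 + u.
def grad(w, rng):
    t = 3.0 + rng.uniform(-0.5, 0.5)
    return 2.0 * (w - t)

w = sgd_low_precision(grad, w0=0.0)     # ends near the minimizer w* = 3
```

Despite both quantization steps, the iterate settles in a neighborhood of the minimizer whose radius is governed by the learning rate, the gradient noise, and the grid resolution, which is the qualitative behavior the convergence analysis formalizes.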