Constructions of Rank Modulation Codes
Rank modulation is a way of encoding information to correct errors in flash
memory devices as well as impulse noise in transmission lines. Modeling rank
modulation involves construction of packings of the space of permutations
equipped with the Kendall tau distance.
We present several general constructions of codes in permutations that cover
a broad range of code parameters. In particular, we show a number of ways in
which conventional error-correcting codes can be modified to correct errors in
the Kendall space. Codes that we construct afford simple encoding and decoding
algorithms of essentially the same complexity as required to correct errors in
the Hamming metric. For instance, from binary BCH codes we obtain codes
correcting $t$ Kendall errors in $n$ memory cells that support the order of
$n!/(\log_2 n!)^t$ messages, for any constant $t=1,2,\dots$ We also construct
families of codes that correct a number of errors that grows with $n$ at
varying rates, from $\Theta(n)$ to $\Theta(n^2)$. One of our constructions
gives rise to a family of rank modulation codes for which the trade-off between
the number of messages and the number of correctable Kendall errors approaches
the optimal scaling rate. Finally, we list a number of possibilities for
constructing codes of finite length, and give examples of rank modulation codes
with specific parameters.
Comment: Submitted to IEEE Transactions on Information Theory
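The Kendall tau distance underlying this abstract has a simple operational meaning: it is the number of pairwise order disagreements between two permutations, which equals the minimum number of adjacent transpositions needed to turn one into the other. A minimal sketch (the function name is illustrative, not from the paper):

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Number of pairs ordered differently by permutations p and q.

    This equals the minimum number of adjacent transpositions that
    transform p into q -- the error model for rank modulation codes.
    """
    # Record where each element sits in q.
    pos = {v: i for i, v in enumerate(q)}
    # Express p in q's coordinate system; the distance is then the
    # number of inversions in the resulting sequence.
    r = [pos[v] for v in p]
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

# [2, 1, 0, 3] differs from the identity by three adjacent swaps.
print(kendall_tau_distance([2, 1, 0, 3], [0, 1, 2, 3]))  # -> 3
```

In the flash-memory setting, small drifts in cell charge perturb the relative ranking of neighboring cells, which is why errors are modeled as adjacent transpositions in this metric.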
Study and simulation of low rate video coding schemes
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
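Differential pulse code modulation (DPCM), one of the techniques listed above, codes each sample as its difference from a prediction; the sketch below uses the simplest predictor (the previous sample) and omits the recursive-indexing quantizer the report elaborates on, so it is a plain lossless illustration rather than the report's scheme:

```python
def dpcm_encode(samples):
    """Replace each sample by its difference from the previous one.

    Correlated signals (e.g. video scan lines) yield small residuals,
    which a subsequent entropy coder can store in fewer bits.
    """
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    """Invert dpcm_encode by accumulating the residuals."""
    prev = 0
    out = []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

print(dpcm_encode([100, 102, 101, 101]))  # -> [100, 2, -1, 0]
```

The decoder mirrors the encoder's predictor, so the round trip is exact; lossy variants quantize the residuals before transmission.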
Practical Full Resolution Learned Lossless Image Compression
We propose the first practical learned lossless image compression system,
L3C, and show that it outperforms the popular engineered codecs, PNG, WebP and
JPEG 2000. At the core of our method is a fully parallelizable hierarchical
probabilistic model for adaptive entropy coding which is optimized end-to-end
for the compression task. In contrast to recent autoregressive discrete
probabilistic models such as PixelCNN, our method i) models the image
distribution jointly with learned auxiliary representations instead of
exclusively modeling the image distribution in RGB space, and ii) only requires
three forward-passes to predict all pixel probabilities instead of one for each
pixel. As a result, L3C obtains over two orders of magnitude speedups when
sampling compared to the fastest PixelCNN variant (Multiscale-PixelCNN).
Furthermore, we find that learning the auxiliary representation is crucial and
outperforms predefined auxiliary representations such as an RGB pyramid
significantly.
Comment: Updated preprocessing and Table 1, see A.1 in supplementary. Code and
models: https://github.com/fab-jul/L3C-PyTorch
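The reason a probabilistic model can be "optimized end-to-end for the compression task" is that an ideal entropy coder spends about $-\log_2 p(x)$ bits on a symbol the model assigns probability $p(x)$, so minimizing the model's negative log-likelihood directly minimizes code length. A minimal illustration of this bookkeeping (not L3C's actual coder):

```python
import math

def code_length_bits(probs):
    """Ideal entropy-coder cost of a sequence of symbols.

    probs holds the model's probability for each symbol actually
    observed; the total cost is the sum of -log2 p, i.e. the model's
    cross-entropy on the data.
    """
    return sum(-math.log2(p) for p in probs)

# A model that assigns probability 0.5 to each of 8 observed pixel
# values costs 1 bit per pixel -> 8 bits total.
print(code_length_bits([0.5] * 8))  # -> 8.0
```

Sharper (higher-probability) predictions shrink this sum, which is why better density models translate directly into smaller lossless files.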