Orthogonal Codes for Robust Low-Cost Communication
Orthogonal coding schemes, known to asymptotically achieve the capacity per
unit cost (CPUC) for single-user ergodic memoryless channels with a zero-cost
input symbol, are investigated for single-user compound memoryless channels,
which exhibit uncertainties in their input-output statistical relationships. A
minimax formulation is adopted to attain robustness. First, a class of
achievable rates per unit cost (ARPUC) is derived, and its utility is
demonstrated through several representative case studies. Second, when the
uncertainty set of channel transition statistics satisfies a convexity
property, optimization is performed over the class of ARPUC through utilizing
results of minimax robustness. The resulting CPUC lower bound indicates the
ultimate performance of the orthogonal coding scheme, and coincides with the
CPUC under certain restrictive conditions. Finally, still under the convexity
property, it is shown that the CPUC can generally be achieved, through
utilizing a so-called mixed strategy in which an orthogonal code contains an
appropriate composition of different nonzero-cost input symbols.
Comment: 2nd revision, accepted for publication
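For orientation, when a zero-cost input symbol exists, the capacity per unit cost of a single-user memoryless channel admits a well-known single-letter characterization (due to Verdú), which orthogonal codes achieve asymptotically; the notation here is ours:

```latex
\mathrm{CPUC} \;=\; \sup_{x:\, b(x) > 0} \frac{D\!\left(P_{Y|X=x} \,\big\|\, P_{Y|X=0}\right)}{b(x)},
```

where $b(x)$ is the cost of input symbol $x$, symbol $0$ denotes the zero-cost input, and $D(\cdot\|\cdot)$ is the relative entropy. The minimax formulation above replaces the single channel law with a worst case over the uncertainty set.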
Arithmetic coding revisited
Over the last decade, arithmetic coding has emerged as an important compression tool. It is now the method of choice for adaptive coding on multisymbol alphabets because of its speed,
low storage requirements, and effectiveness of compression. This article describes a new implementation of arithmetic coding that incorporates several improvements over a widely used earlier version by Witten, Neal, and Cleary, which has become a de facto standard. These improvements include fewer multiplicative operations, a greatly extended range of alphabet sizes and symbol probabilities, and the use of low-precision arithmetic, permitting implementation by fast shift/add operations. We also describe a modular structure that separates the coding, modeling, and probability estimation components of a compression system. To motivate the improved coder, we consider the needs of a word-based text compression program. We report a range of experimental results using this and other models. Complete source code is available.
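To illustrate the interval-narrowing idea underlying arithmetic coding, here is a toy sketch using exact rational arithmetic. This is our own illustration, not the shift/add low-precision implementation the article describes, and all names are ours:

```python
from fractions import Fraction

def cdf_intervals(probs):
    """Map each symbol to its cumulative-probability interval [lo, hi)."""
    intervals, c = {}, Fraction(0)
    for sym, p in probs.items():
        intervals[sym] = (c, c + p)
        c += p
    assert c == 1, "probabilities must sum to 1"
    return intervals

def encode(symbols, probs):
    """Narrow [0, 1) once per symbol; return a point inside the final interval."""
    intervals = cdf_intervals(probs)
    low, high = Fraction(0), Fraction(1)
    for sym in symbols:
        lo, hi = intervals[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2

def decode(code, n, probs):
    """Invert the narrowing: locate the sub-interval containing `code`, n times."""
    intervals = cdf_intervals(probs)
    out, low, high = [], Fraction(0), Fraction(1)
    for _ in range(n):
        span = high - low
        x = (code - low) / span  # rescale the code point back to [0, 1)
        for sym, (lo, hi) in intervals.items():
            if lo <= x < hi:
                out.append(sym)
                low, high = low + span * lo, low + span * hi
                break
    return out
```

A round trip such as `decode(encode(msg, probs), len(msg), probs)` recovers `msg` exactly; a practical coder replaces the unbounded rationals with fixed-precision integer intervals and incremental output, which is precisely where the article's shift/add improvements apply.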
Distributed Binary Detection with Lossy Data Compression
Consider the problem where a statistician in a two-node system receives
rate-limited information from a transmitter about marginal observations of a
memoryless process generated from two possible distributions. Using its own
observations, the receiver must first verify the legitimacy of its
sender by declaring the joint distribution of the process, and then,
depending on the outcome of this authentication, produce an adequate
reconstruction of the observations satisfying an average per-letter
distortion. The performance of this setup is investigated through the
corresponding rate-error-distortion region, which describes the trade-off
between the communication rate, the error exponent of the detection, and
the distortion incurred by the source
reconstruction. In the special case of testing against independence, where the
alternative hypothesis implies that the sources are independent, the optimal
rate-error-distortion region is characterized. An application example to binary
symmetric sources is given subsequently and the explicit expression for the
rate-error-distortion region is provided as well. The case of "general
hypotheses" is also investigated. A new achievable rate-error-distortion region
is derived based on the use of non-asymptotic binning, improving the quality of
communicated descriptions. Further improvement of performance in the general
case is shown to be possible when the requirement of source reconstruction is
relaxed, in contrast to the case of testing against independence.
Comment: to appear in IEEE Trans. Information Theory
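For reference, in the testing-against-independence case the optimal error exponent under a communication rate constraint $R$ has the classical single-letter form of Ahlswede and Csiszár; the notation here is ours:

```latex
\theta(R) \;=\; \max_{P_{U|X}\,:\; I(U;X) \,\le\, R} I(U;Y),
```

where $U$ is an auxiliary random variable forming the Markov chain $U - X - Y$. The rate-error-distortion region above couples an exponent of this type with a reconstruction distortion constraint.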
Information-Theoretic Foundations of Mismatched Decoding
Shannon's channel coding theorem characterizes the maximal rate of
information that can be reliably transmitted over a communication channel when
optimal encoding and decoding strategies are used. In many scenarios, however,
practical considerations such as channel uncertainty and implementation
constraints rule out the use of an optimal decoder. The mismatched decoding
problem addresses such scenarios by considering the case that the decoder
cannot be optimized, but is instead fixed as part of the problem statement.
This problem is not only of direct interest in its own right, but also has
close connections with other long-standing theoretical problems in information
theory. In this monograph, we survey both classical literature and recent
developments on the mismatched decoding problem, with an emphasis on achievable
random-coding rates for memoryless channels. We present two widely-considered
achievable rates known as the generalized mutual information (GMI) and the LM
rate, and overview their derivations and properties. In addition, we survey
several improved rates via multi-user coding techniques, as well as recent
developments and challenges in establishing upper bounds on the mismatch
capacity, and an analogous mismatched encoding problem in rate-distortion
theory. Throughout the monograph, we highlight a variety of applications and
connections with other prominent information theory problems.
Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3)
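For reference, with decoding metric $q(x,y)$, input distribution $Q$, and channel $W$, the generalized mutual information mentioned above is commonly written as follows (notation ours):

```latex
I_{\mathrm{GMI}}(Q) \;=\; \sup_{s \ge 0}\; \mathbb{E}\!\left[\log
  \frac{q(X,Y)^{s}}{\sum_{\bar{x}} Q(\bar{x})\, q(\bar{x},Y)^{s}}\right],
```

where $(X,Y) \sim Q \times W$ and $\bar{x}$ ranges over the input alphabet. The GMI is achievable with i.i.d. random coding under the fixed metric $q$, and reduces to the mutual information when $q$ matches the true channel.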
Convolutional Radio Modulation Recognition Networks
We study the adaptation of convolutional neural networks to the complex
temporal radio signal domain. We compare the efficacy of radio modulation
classification using naively learned features against using expert features
which are widely used in the field today and we show significant performance
improvements. We show that blind temporal learning on large and densely encoded
time series using deep convolutional neural networks is viable and a strong
candidate approach for this task, especially at low signal-to-noise ratios.
An Assistive Multimedia Courseware for Dyslexics
One of the most promising areas of education is the development of computer-based
teaching materials, especially interactive multimedia programs. Interactive multimedia allows
independent and interactive learning, and yet presents the learning information to the learners in
newly engaging and meaningful ways. This paper presents the theoretical concepts and design of a
multimedia courseware called "MyLexic". "MyLexic" is the first learning tool designed to nurture
interest in basic Malay-language reading among preschool dyslexic children in Malaysia. The theoretical
framework proposed in the study is based on research in dyslexia theory together with Dual Coding Theory,
Structured Multi-sensory Phonic Teaching, and the Scaffolding instructional technique. Detailed
explanations of its learning content are also discussed. It is hoped that the courseware will contribute
a significant idea to the development of technology in Malay-language education for dyslexics in
Malaysia.
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
channel (AWGN), the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS) and shell mapping (SM), are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh
have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize
energy efficiency, is shown to have the minimum rate loss among all. We
provide simulation results of the end-to-end decoding performance showing that
up to 1 dB improvement in power efficiency over uniform signaling can be
obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a
discussion on the complexity of these algorithms from the perspective of
latency, storage and computations.
Comment: 18 pages, 10 figures
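To make the finite-blocklength rate loss of constant-composition matching concrete, here is a small sketch (our own illustration, not the paper's code) that computes the rate loss of an idealized constant-composition matcher directly from the multinomial count of admissible sequences:

```python
from math import comb, log2

def ccdm_rate_loss(counts):
    """Rate loss (bits/symbol) of an idealized constant-composition matcher.

    counts[i] is the number of occurrences of symbol i in every output
    sequence of blocklength n = sum(counts).
    """
    n = sum(counts)
    # Number of sequences with exactly this composition: multinomial coefficient.
    num_seqs, remaining = 1, n
    for c in counts:
        num_seqs *= comb(remaining, c)
        remaining -= c
    rate = log2(num_seqs) / n  # information bits carried per channel symbol
    # Entropy of the empirical distribution, i.e. the asymptotic rate.
    entropy = -sum((c / n) * log2(c / n) for c in counts if c)
    return entropy - rate
```

For example, at blocklength 4 with composition (3, 1) the loss is about 0.31 bits/symbol, while at blocklength 400 with the same distribution it shrinks to roughly 0.01, illustrating why CCDM is efficient asymptotically but penalized at the short blocklengths studied above.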
Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities
This monograph presents a unified treatment of single- and multi-user
problems in Shannon's information theory where we depart from the requirement
that the error probability decays asymptotically in the blocklength. Instead,
the error probabilities for various problems are bounded above by a
non-vanishing constant and the spotlight is shone on achievable coding rates as
functions of the growing blocklengths. This represents the study of asymptotic
estimates with non-vanishing error probabilities.
In Part I, after reviewing the fundamentals of information theory, we discuss
Strassen's seminal result for binary hypothesis testing where the type-I error
probability is non-vanishing and the rate of decay of the type-II error
probability with growing number of independent observations is characterized.
In Part II, we use this basic hypothesis testing result to develop second- and
sometimes, even third-order asymptotic expansions for point-to-point
communication. Finally in Part III, we consider network information theory
problems for which the second-order asymptotics are known. These problems
include some classes of channels with random state, the multiple-encoder
distributed lossless source coding (Slepian-Wolf) problem and special cases of
the Gaussian interference and multiple-access channels. Finally, we discuss
avenues for further research.
Comment: Further comments welcome
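The flavor of these second-order expansions is captured by the normal approximation for point-to-point channel coding (stated informally; notation ours): the maximum number of messages $M^*(n,\varepsilon)$ decodable at blocklength $n$ with error probability at most $\varepsilon$ satisfies

```latex
\log M^*(n,\varepsilon) \;=\; nC \;-\; \sqrt{nV}\, Q^{-1}(\varepsilon) \;+\; O(\log n),
```

where $C$ is the channel capacity, $V$ is the channel dispersion, and $Q^{-1}$ is the inverse of the Gaussian complementary CDF. The second term quantifies the backoff from capacity incurred by tolerating a fixed, non-vanishing error probability.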