Random Access Channel Coding in the Finite Blocklength Regime
Consider a random access communication scenario over a channel whose
operation is defined for any number of possible transmitters. Inspired by the
model recently introduced by Polyanskiy for the Multiple Access Channel (MAC)
with a fixed, known number of transmitters, we assume that the channel is
invariant to permutations on its inputs, and that all active transmitters
employ identical encoders. Unlike Polyanskiy, we consider a scenario where
neither the transmitters nor the receiver know which transmitters are active.
We refer to this agnostic communication setup as the Random Access Channel, or
RAC. Scheduled feedback of a finite number of bits is used to synchronize the
transmitters. The decoder is tasked with determining from the channel output
the number of active transmitters and their messages, but not which
transmitter sent which message. The decoding procedure occurs at a time that
depends on the decoder's estimate of the number of active transmitters,
thereby achieving a rate that varies with that number. Single-bit feedback at
each candidate decoding time enables all
transmitters to determine the end of one coding epoch and the start of the
next. The central result of this work demonstrates the achievability on a RAC
of performance that is first-order optimal for the MAC in operation during each
coding epoch. While prior multiple access schemes for a fixed number of
transmitters require simultaneous threshold rules, the proposed
scheme uses a single threshold rule and achieves the same dispersion.
Comment: Presented at ISIT'18; submitted to the IEEE Transactions on Information
Theory.
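The epoch structure described in the abstract can be sketched as a toy simulation. The decode times, the success-probability parameter, and the per-time decoding test below are illustrative assumptions, not the paper's actual scheme:

```python
import random

def run_epoch(decode_times, true_k, p_success=0.95, rng=random):
    """Toy walk through candidate decode times n_1 <= n_2 <= ... within one
    RAC coding epoch. At time n_k the receiver tests the hypothesis that k
    transmitters are active; a single feedback bit (1 = epoch over) is
    broadcast at each candidate time, keeping all transmitters synchronized."""
    for k, n_k in enumerate(decode_times, start=1):
        # Hypothetical test: decoding succeeds (with high probability) only
        # once the blocklength matches or exceeds what the true number of
        # active transmitters requires.
        success = (k >= true_k) and (rng.random() < p_success)
        feedback_bit = 1 if success else 0
        if feedback_bit:
            return k, n_k  # estimated number of active users, epoch length
    return None, decode_times[-1]  # epoch exhausted without a decision

# With a noiseless test, the epoch ends exactly at the decode time matching
# the true number of active transmitters, so the rate adapts to that number.
print(run_epoch([100, 250, 450], true_k=2, p_success=1.0))  # -> (2, 250)
```

This captures the key property that the blocklength, and hence the rate, is chosen online by the receiver rather than fixed in advance.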
Deterministic Rateless Codes for BSC
A rateless code encodes a finite length information word into an infinitely
long codeword such that longer prefixes of the codeword can tolerate a larger
fraction of errors. A rateless code achieves capacity for a family of channels
if, for every channel in the family, reliable communication is obtained by a
prefix of the code whose rate is arbitrarily close to the channel's capacity.
As a result, a universal encoder can communicate over all channels in the
family while simultaneously achieving optimal communication overhead. In this
paper, we construct the first \emph{deterministic} rateless code for the binary
symmetric channel. Our code can be encoded and decoded efficiently per bit
and in almost logarithmic parallel time, with overhead governed by an
(arbitrarily slow) super-constant function. Furthermore, the error
probability of our code is almost exponentially small.
Previous rateless codes are probabilistic (i.e., based on code ensembles),
require polynomial time per bit for decoding, and have inferior asymptotic
error probabilities. Our main technical contribution is a constructive proof
of the existence of an infinite generating matrix, each of whose prefixes
induces a weight distribution that approximates the expected weight
distribution of a random linear code.
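The prefix-decoding property of a rateless linear code can be illustrated with a toy example. A random generating matrix stands in here (the paper's contribution is a deterministic matrix with good prefixes), and the brute-force nearest-codeword decoder is for illustration only:

```python
import itertools
import random

random.seed(0)
k, n_max = 4, 64
# Columns of a (conceptually infinite) generating matrix over GF(2). A random
# matrix is used as a stand-in; the paper constructs a deterministic one
# whose every prefix has a weight distribution close to a random code's.
G = [[random.randint(0, 1) for _ in range(k)] for _ in range(n_max)]

def encode(u, n):
    """First n bits of the (arbitrarily long) codeword for info word u."""
    return [sum(g * b for g, b in zip(col, u)) % 2 for col in G[:n]]

def ml_decode(y):
    """Brute-force nearest-codeword decoding of a length-len(y) prefix."""
    return min(itertools.product([0, 1], repeat=k),
               key=lambda u: sum(a != b for a, b in zip(encode(u, len(y)), y)))

u = (1, 0, 1, 1)
y = encode(u, 32)   # take a length-32 prefix of the infinite codeword
y[3] ^= 1
y[20] ^= 1          # two bit flips, as on a binary symmetric channel
assert ml_decode(y) == u
```

A longer prefix of the same codeword tolerates a larger fraction of flips, which is exactly the rateless property.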
Rateless Coding for Gaussian Channels
A rateless code (i.e., a rate-compatible family of codes) has the property
that codewords of the higher-rate codes are prefixes of those of the
lower-rate ones. A perfect family of such codes is one in which each of the codes in the
family is capacity-achieving. We show by construction that perfect rateless
codes with low-complexity decoding algorithms exist for additive white Gaussian
noise channels. Our construction involves the use of layered encoding and
successive decoding, together with repetition using time-varying layer weights.
As an illustration of our framework, we design a practical three-rate code
family. We further construct rich sets of near-perfect rateless codes within
our architecture that require either significantly fewer layers or lower
complexity than their perfect counterparts. Variations of the basic
construction are also developed, including one for time-varying channels in
which there is no a priori stochastic model.
Comment: 18 pages.
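The layered-encoding idea can be sketched as follows; the two-layer weight schedule is an assumption chosen for illustration, not the paper's optimized design:

```python
import math

def layered_codeword(layer_symbols, weight_schedule):
    """Superpose L layer codewords with time-varying per-layer amplitudes.
    layer_symbols[l][t]: unit-power symbol of layer l at channel use t.
    weight_schedule[t][l]: amplitude of layer l at channel use t."""
    L, T = len(layer_symbols), len(weight_schedule)
    return [sum(weight_schedule[t][l] * layer_symbols[l][t] for l in range(L))
            for t in range(T)]

# Two layers over two channel uses; the weights swap over time (a toy
# "time-varying layer weights" schedule) while every channel use meets the
# same total power 0.8 + 0.2 = 1.
w = [[math.sqrt(0.8), math.sqrt(0.2)],
     [math.sqrt(0.2), math.sqrt(0.8)]]
x = layered_codeword([[1, -1], [-1, 1]], w)
```

A successive decoder would decode layer 1 while treating layer 2 as noise, subtract its contribution from the received signal, and then decode layer 2; longer prefixes give each layer more observations, which is what makes the family rateless.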
Precoded Integer-Forcing Universally Achieves the MIMO Capacity to Within a Constant Gap
An open-loop single-user multiple-input multiple-output communication scheme
is considered where a transmitter, equipped with multiple antennas, encodes the
data into independent streams all taken from the same linear code. The coded
streams are then linearly precoded using the encoding matrix of a perfect
linear dispersion space-time code. At the receiver side, integer-forcing
equalization is applied, followed by standard single-stream decoding. It is
shown that this communication architecture achieves the capacity of any
Gaussian multiple-input multiple-output channel up to a gap that depends only
on the number of transmit antennas.
Comment: to appear in the IEEE Transactions on Information Theory.
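A minimal sketch of integer-forcing equalization for a real 2x2 channel, assuming the standard effective-noise expression from the integer-forcing literature; the exhaustive search and the coefficient range are illustration choices, not the scheme's actual receiver:

```python
import itertools
import math

def if_rate_2x2(H, snr, coeff_range=2):
    """Toy integer-forcing rate for a real 2x2 MIMO channel, assuming the
    per-stream rate R_m = 1/2 log2(1 / a_m^T G a_m) with
    G = (I + snr * H^T H)^{-1} and total rate 2 * min_m R_m over a
    full-rank integer coefficient matrix A = [a_1; a_2]."""
    # Gram matrix B = I + snr * H^T H, inverted by the 2x2 cofactor formula.
    B = [[1 + snr * (H[0][0] ** 2 + H[1][0] ** 2),
          snr * (H[0][0] * H[0][1] + H[1][0] * H[1][1])],
         [snr * (H[0][0] * H[0][1] + H[1][0] * H[1][1]),
          1 + snr * (H[0][1] ** 2 + H[1][1] ** 2)]]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    G = [[B[1][1] / det, -B[0][1] / det], [-B[1][0] / det, B[0][0] / det]]
    noise = lambda a: (a[0] * (G[0][0] * a[0] + G[0][1] * a[1])
                       + a[1] * (G[1][0] * a[0] + G[1][1] * a[1]))
    # Exhaustively score every nonzero small-integer vector; smaller
    # effective noise is better.
    vecs = sorted((v for v in itertools.product(range(-coeff_range, coeff_range + 1),
                                                repeat=2) if v != (0, 0)),
                  key=noise)
    # Take the best vector, then the next one that keeps A full rank.
    a1 = vecs[0]
    a2 = next(v for v in vecs[1:] if a1[0] * v[1] - a1[1] * v[0] != 0)
    worst = max(noise(a1), noise(a2))
    return 2 * max(0.0, 0.5 * math.log2(1.0 / worst))

# Sanity check: for H = I the search recovers (a signed permutation of) the
# identity, so the rate equals 2 * (1/2) * log2(1 + snr).
print(if_rate_2x2([[1, 0], [0, 1]], snr=15.0))  # -> 4.0
```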
Expanding window fountain codes for unequal error protection
A novel approach to providing unequal error protection (UEP) using rateless codes over erasure channels, named Expanding Window Fountain (EWF) codes, is developed and discussed. EWF codes use a windowing technique rather than a weighted (non-uniform) selection of input symbols to achieve the UEP property. The windowing approach introduces additional parameters into the UEP rateless code design, making it more general and flexible than the weighted approach. Furthermore, the windowing approach provides better performance of the UEP scheme, which is confirmed both theoretically and experimentally.
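The windowing idea can be sketched as a toy fountain encoder. The nested window sizes, the selection probabilities, and the uniform degree choice below are illustrative assumptions, not the paper's optimized degree distributions:

```python
import random

def ewf_symbol(data, window_sizes, window_probs, rng):
    """Generate one Expanding Window Fountain output symbol (toy sketch).
    Windows are nested prefixes W_1 subset W_2 subset ... of the input
    symbols, so the most important symbols (those in W_1) are eligible
    whichever window is chosen, giving them stronger protection."""
    w = rng.choices(window_sizes, weights=window_probs)[0]  # pick a window
    degree = rng.randint(1, w)            # toy (non-optimized) degree choice
    idx = rng.sample(range(w), degree)    # neighbors drawn from that window only
    val = 0
    for i in idx:
        val ^= data[i]                    # output symbol = XOR of neighbors
    return sorted(idx), val

# Usage: window W_1 = first 4 symbols (important class), W_2 = all 12.
rng = random.Random(7)
data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
idx, val = ewf_symbol(data, [4, 12], [0.6, 0.4], rng)
```

Because every window choice covers the first window, symbols there are hit more often on average, which is the source of the UEP property.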
Zero-rate feedback can achieve the empirical capacity
The utility of limited feedback for coding over an individual sequence of
DMCs is investigated. This study complements recent results showing how limited
or noisy feedback can boost the reliability of communication. A strategy with
a fixed input distribution is given that asymptotically achieves rates
arbitrarily close to the mutual information induced by that distribution on
the state-averaged channel. When the capacity-achieving input distribution is the
same over all channel states, this achieves rates at least as large as the
capacity of the state averaged channel, sometimes called the empirical
capacity.
Comment: Revised version of the paper originally submitted to the IEEE
Transactions on Information Theory, Nov. 2007. This version contains further
revisions and clarifications.
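The notion of empirical capacity can be made concrete for an individual sequence of binary symmetric channel states (a toy illustration: the uniform input is capacity-achieving for every BSC, so the abstract's condition of a common capacity-achieving input distribution holds):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def empirical_capacity_bsc(crossovers):
    """Capacity of the state-averaged channel for an individual sequence of
    BSC crossover states: averaging BSCs gives a BSC with the mean crossover
    probability, so the 'empirical capacity' is 1 - h2(mean crossover)."""
    p_avg = sum(crossovers) / len(crossovers)
    return 1.0 - h2(p_avg)

# A sequence of per-use crossover states with empirical mean 0.1.
print(round(empirical_capacity_bsc([0.1, 0.2, 0.0, 0.1]), 3))  # -> 0.531
```

The abstract's claim is that zero-rate feedback suffices to approach this quantity without knowing the state sequence in advance.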