Limit theorems for the sample entropy of hidden Markov chains
The Shannon-McMillan-Breiman theorem asserts that the sample entropy of a stationary and ergodic stochastic process converges almost surely to the entropy rate of the process as the sample size tends to infinity. In this paper, we restrict our attention to the convergence behavior of the sample entropy of hidden Markov chains. Under certain positivity assumptions, we prove a central limit theorem (CLT) with a Berry-Esseen bound for the sample entropy of a hidden Markov chain, and we use this CLT to establish a law of the iterated logarithm (LIL) for the sample entropy. © 2011 IEEE. In Proceedings of the 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July-5 August 2011, p. 3009-301
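The quantity studied above is the per-symbol sample entropy -(1/n) log P(Y_1, ..., Y_n) of the observed chain, which can be evaluated with the forward algorithm. The sketch below is a minimal illustration using hypothetical transition and emission matrices (A, B, pi are assumptions of this sketch, not taken from the paper); it is not the authors' construction.

```python
import numpy as np

def sample_entropy(obs, pi, A, B):
    """Per-symbol sample entropy -(1/n) * log P(y_1..y_n) of a hidden Markov
    chain, computed with the scaled forward algorithm.
    pi: initial state distribution, A: state transition matrix,
    B: emission matrix (states x output symbols)."""
    alpha = pi * B[:, obs[0]]              # unnormalized forward vector
    log_prob = np.log(alpha.sum())
    alpha /= alpha.sum()                   # rescale to avoid underflow
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]
        log_prob += np.log(alpha.sum())
        alpha /= alpha.sum()
    return -log_prob / len(obs)

# Hypothetical two-state example: by the Shannon-McMillan-Breiman theorem the
# value printed below converges almost surely to the entropy rate of the chain.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.2, 0.8]])     # assumed transition matrix
B = np.array([[0.95, 0.05], [0.1, 0.9]])   # assumed emission matrix
pi = np.array([2/3, 1/3])                  # stationary distribution of A
states = [rng.choice(2, p=pi)]
for _ in range(9999):
    states.append(rng.choice(2, p=A[states[-1]]))
obs = [rng.choice(2, p=B[s]) for s in states]
print(sample_entropy(obs, pi, A, B))
```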
Achieving the Capacity of any DMC using only Polar Codes
We construct a channel coding scheme to achieve the capacity of any discrete
memoryless channel based solely on the techniques of polar coding. In
particular, we show how source polarization and randomness extraction via
polarization can be employed to "shape" uniformly-distributed i.i.d. random
variables into approximate i.i.d. random variables distributed according to
the capacity-achieving distribution. We then combine this shaper with a variant
of polar channel coding, constructed by the duality with source coding, to
achieve the channel capacity. Our scheme inherits the low complexity encoder
and decoder of polar coding. It differs conceptually from Gallager's method for
achieving capacity, and we discuss the advantages and disadvantages of the two
schemes. An application to the AWGN channel is discussed.Comment: 9 pages, 7 figure
Decoding Cyclic Codes up to a New Bound on the Minimum Distance
A new lower bound on the minimum distance of q-ary cyclic codes is proposed.
This bound improves upon the Bose-Chaudhuri-Hocquenghem (BCH) bound and, for
some codes, upon the Hartmann-Tzeng (HT) bound. Several Boston bounds are
special cases of our bound. For some classes of codes the bound on the minimum
distance is refined. Furthermore, a quadratic-time decoding algorithm up to
this new bound is developed. The determination of the error locations is based
on the Euclidean Algorithm and a modified Chien search. The error evaluation is
done by solving a generalization of Forney's formula.
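As background for the bound being improved, the classical BCH bound states that a length-n cyclic code whose defining set (the exponents of the roots of the generator polynomial) contains delta - 1 consecutive elements modulo n has minimum distance at least delta. The sketch below only evaluates this classical bound for an assumed defining set (the function name and example are illustrative); it does not reproduce the new bound or the decoder of the paper.

```python
def bch_bound(defining_set, n):
    """Classical BCH bound: if the defining set of a length-n cyclic code
    contains delta - 1 consecutive exponents (mod n), then d_min >= delta.
    Returns the largest such delta."""
    s = set(x % n for x in defining_set)
    best = 0
    for start in range(n):
        run = 0
        while (start + run) % n in s and run < n:
            run += 1
        best = max(best, run)
    return best + 1

# Illustrative example: the binary cyclic code of length 15 whose defining set
# is the union of the cyclotomic cosets of 1 and 3; the run 1, 2, 3, 4 gives
# the BCH bound d_min >= 5.
print(bch_bound({1, 2, 3, 4, 6, 8, 9, 12}, 15))
```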
Upper Bounds on the Capacity of Binary Channels with Causal Adversaries
In this work we consider the communication of information in the presence of
a causal adversarial jammer. In the setting under study, a sender wishes to
communicate a message to a receiver by transmitting a codeword
x = (x_1, ..., x_n) bit-by-bit over a communication channel. The sender and the
receiver do not share common randomness. The adversarial jammer can view the
transmitted bits x_i one at a time, and can change up to a p-fraction of them.
However, the decisions of the jammer must be made in a causal manner. Namely,
for each bit x_i the jammer's decision on whether to corrupt it or not must
depend only on x_j for j <= i. This is in contrast to the "classical"
adversarial jamming situations in which the jammer has no knowledge of x, or
knows x completely. In this work, we present upper bounds (that
hold under both the average and maximal probability of error criteria) on the
capacity which hold for both deterministic and stochastic encoding schemes.
Comment: To appear in the IEEE Transactions on Information Theory; shortened
version appeared at ISIT 201
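The adversarial model described above can be made concrete with a small simulation: the jammer observes the transmitted bits causally and decides, based only on the bits seen so far, whether to flip the current bit, subject to a budget of p*n flips. The sketch below only illustrates this channel model with an arbitrary, assumed jamming rule (majority_rule is purely illustrative); it says nothing about the capacity bounds derived in the paper.

```python
import random

def causal_jammer_channel(x, p, jam_rule):
    """Transmit the codeword bits x one at a time through a causal adversary.
    The jammer sees x[0..i] (its causal view) and may flip x[i], subject to a
    total budget of floor(p * n) flips. jam_rule(prefix, budget) returns True
    to flip the current bit and may depend only on the causal prefix."""
    budget = int(p * len(x))
    y = []
    for i in range(len(x)):
        prefix = x[:i + 1]                  # bits observed so far
        if budget > 0 and jam_rule(prefix, budget):
            y.append(1 - x[i])              # corrupt the current bit
            budget -= 1
        else:
            y.append(x[i])
    return y

# Assumed toy jamming rule (purely illustrative, not from the paper):
# flip the current bit whenever it agrees with the majority of the prefix.
def majority_rule(prefix, budget):
    return (prefix[-1] == 1) == (2 * sum(prefix) >= len(prefix))

x = [random.randint(0, 1) for _ in range(20)]
print(x)
print(causal_jammer_channel(x, p=0.2, jam_rule=majority_rule))
```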
S-AMP: Approximate Message Passing for General Matrix Ensembles
In this work we propose a novel iterative estimation algorithm for linear
observation systems, called S-AMP, whose fixed points are the stationary points
of the exact Gibbs free energy under a set of (first- and second-) moment
consistency constraints in the large system limit. S-AMP extends the
approximate message-passing (AMP) algorithm to general matrix ensembles. The
generalization is based on the S-transform (in free probability) of the
spectrum of the measurement matrix. Furthermore, we show that the optimality of
S-AMP follows directly from its design rather than from solving a separate
optimization problem, as is done for AMP. Comment: 5 pages, 1 figure
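For reference, the standard AMP iteration that S-AMP generalizes alternates a residual update with an Onsager correction and a componentwise denoising step. The sketch below states this baseline for an i.i.d. Gaussian measurement matrix with a soft-thresholding denoiser and a fixed threshold; these are assumptions of the sketch, not of the paper, and the S-transform machinery of S-AMP is not implemented here.

```python
import numpy as np

def amp(y, A, lam=0.1, iters=30):
    """Standard AMP for y = A x + noise with a soft-threshold denoiser.
    This is the i.i.d.-matrix baseline that S-AMP generalizes via the
    S-transform of the measurement-matrix spectrum (not implemented here)."""
    m, n = A.shape
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                              # pseudo-data
        x_new = soft(r, lam)                         # componentwise denoising
        onsager = (z / m) * np.count_nonzero(x_new)  # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Hypothetical sparse-recovery example with an i.i.d. Gaussian matrix
rng = np.random.default_rng(1)
n, m, k = 200, 100, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)
print(np.linalg.norm(amp(y, A) - x0))
```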
Cognitive Wyner Networks with Clustered Decoding
We study an interference network where equally-numbered transmitters and
receivers lie on two parallel lines, each transmitter opposite its intended
receiver. We consider two short-range interference models: the "asymmetric
network," where the signal sent by each transmitter is interfered only by the
signal sent by its left neighbor (if present), and a "symmetric network," where
it is interfered by both its left and its right neighbors. Each transmitter is
cognizant of its own message, the messages of the t_l transmitters to its
left, and the messages of the t_r transmitters to its right. Each receiver
decodes its message based on the signals received at its own antenna, at the
r_l receive antennas to its left, and at the r_r receive antennas to its
right. For such networks we provide upper and lower bounds on the multiplexing
gain, i.e., on the high-SNR asymptotic logarithmic growth of the sum-rate
capacity. In some cases our bounds meet, e.g., for the asymmetric network. Our
results exhibit an equivalence between the transmitter side-information
parameters (t_l, t_r) and the receiver side-information parameters (r_l, r_r)
in the sense that increasing/decreasing t_l or t_r by a positive integer δ has
the same effect on the multiplexing gain as increasing/decreasing r_l or r_r
by δ. Moreover, even in asymmetric networks, there is an equivalence between
the left side-information parameters (t_l, r_l) and the right side-information
parameters (t_r, r_r). Comment: Second revision submitted to IEEE Transactions
on Information Theory
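The multiplexing gain referred to above is the standard high-SNR pre-log of the sum-rate capacity. A minimal statement of this common definition is sketched below (with the usual caveat that for real-valued channels the denominator is often taken as (1/2) log SNR); the notation C_Sigma is introduced here for illustration only.

```latex
% Multiplexing gain (pre-log): high-SNR logarithmic growth of the
% sum-rate capacity C_\Sigma(\mathrm{SNR}).
\[
  \mathrm{MG} \;=\; \lim_{\mathrm{SNR}\to\infty}
    \frac{C_{\Sigma}(\mathrm{SNR})}{\log \mathrm{SNR}}
\]
```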