Extrinsic Jensen-Shannon Divergence: Applications to Variable-Length Coding
This paper considers the problem of variable-length coding over a discrete
memoryless channel (DMC) with noiseless feedback. The paper provides a
stochastic control view of the problem whose solution is analyzed via a newly
proposed symmetrized divergence, termed extrinsic Jensen-Shannon (EJS)
divergence. It is shown that strictly positive lower bounds on EJS divergence
provide non-asymptotic upper bounds on the expected code length. The paper
presents strictly positive lower bounds on EJS divergence, and hence
non-asymptotic upper bounds on the expected code length, for the following two
coding schemes: variable-length posterior matching, and the MaxEJS coding
scheme, which is based on greedy maximization of the EJS divergence.
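For concreteness, the EJS divergence of a belief vector over hypotheses against the corresponding channel output distributions can be sketched as follows. This is a minimal NumPy illustration assuming the usual definition EJS(rho; P) = sum_i rho_i * D(P_i || sum_{j != i} rho_j P_j / (1 - rho_i)); the function name and toy inputs are illustrative, not from the paper:

```python
import numpy as np

def ejs_divergence(rho, P):
    """Extrinsic Jensen-Shannon divergence EJS(rho; P) (assumed definition).

    rho : belief vector over M hypotheses (non-negative, sums to 1)
    P   : M x K array; row i is the channel output distribution
          under hypothesis i.
    """
    M = len(rho)
    mix = rho @ P  # overall output mixture sum_j rho_j * P_j
    ejs = 0.0
    for i in range(M):
        if rho[i] == 0.0 or rho[i] == 1.0:
            continue  # zero weight, or degenerate extrinsic mixture
        # "extrinsic" mixture: outputs averaged over all OTHER hypotheses
        q = (mix - rho[i] * P[i]) / (1.0 - rho[i])
        # KL divergence D(P_i || q), restricted to the support of P_i
        mask = P[i] > 0
        ejs += rho[i] * np.sum(P[i][mask] * np.log(P[i][mask] / q[mask]))
    return ejs
```

With identical rows of P the divergence is zero, and it grows as the output distributions become more distinguishable, which is what makes lower bounds on it useful for bounding expected code length.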
As an asymptotic corollary of the main results, this paper also provides a
rate-reliability test. Variable-length coding schemes that satisfy the
condition(s) of the test for parameters R and E are guaranteed to achieve
rate R and error exponent E. The results are specialized for posterior
matching and MaxEJS to obtain deterministic one-phase coding schemes achieving
capacity and optimal error exponent. For the special case of symmetric
binary-input channels, simpler deterministic schemes of optimal performance are
proposed and analyzed.
Comment: 17 pages (two-column), 4 figures, to appear in IEEE Transactions on Information Theory
Belief Refinement Approaches to Communication and Inference Problems
This dissertation considers a problem where a single agent or a group of agents aim to estimate/learn unknown (possibly time-varying) parameters of interest despite making noisy observations. The agents take a Bayesian-like approach by maintaining a posterior probability distribution, or “belief”, over a parameter space conditioned on past observations. The agents aim to iteratively refine their belief over the parameter space as new information is acquired from their private observations or through collaboration with other agents. In particular, the agents aim to ensure that sufficient belief is assigned to neighborhoods centered around the true parameter with high probability, or “reliability”.

In the context of the communication problems considered in this dissertation, the agents may be active, i.e., they may additionally take actions that provide new observations. Furthermore, agents may employ an adaptive strategy: using their past actions and the resulting observations, they can adaptively choose actions to control the concentration of the belief. When the agents are active, we propose and analyze adaptive belief refinement approaches to obtain belief concentration on the unknown parameter with high reliability.

In a different context, namely that of decentralized inference, we consider passive agents. Here, agents face an additional challenge due to the statistical insufficiency of their private observations to learn the unknown parameter. While individual agents’ observations are not informative enough, we assume that the agents’ observations are collectively informative. Here, we propose and analyze decentralized belief refinement strategies to collaboratively obtain belief concentration on the unknown parameter.
In the first part of this dissertation, we consider active strategies that are extensions of the posterior matching strategy (PM) introduced by Horstein, which is a generalization of the well-known binary search algorithm. We propose and analyze PM-based strategies in the context of modern communication systems, namely the problem of establishing initial access in mm-Wave communication and spectrum sensing for Cognitive Radio. We also propose and analyze channel coding strategies for real-time streaming and control applications. The second part of the dissertation investigates belief refinement approaches for decentralized learning. In particular, it focuses on developing and analyzing a decentralized learning rule for statistical hypothesis testing and its application to decentralized machine learning.
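The binary-search flavor of posterior matching mentioned above can be illustrated with a toy, discretized sketch over a binary symmetric channel: the encoder bisects the message set at the decoder's posterior median, and the decoder performs a Bayesian belief update. The function name, discretization, and fixed step count are illustrative assumptions, not the dissertation's actual schemes:

```python
import numpy as np

def posterior_matching_bsc(true_msg, M, p, n_steps, rng):
    """Toy discretized Horstein-style posterior matching over a BSC(p).

    true_msg : index in {0, ..., M-1} known to the encoder
    p        : crossover probability of the binary symmetric channel
    Returns the decoder's MAP estimate after n_steps channel uses.
    """
    belief = np.full(M, 1.0 / M)  # decoder's posterior over messages
    for _ in range(n_steps):
        cdf = np.cumsum(belief)
        median = np.searchsorted(cdf, 0.5)   # posterior median index
        x = 1 if true_msg > median else 0    # encoder: bisect at the median
        y = x ^ int(rng.random() < p)        # BSC flips the bit w.p. p
        # decoder's Bayesian update: likelihood of y under each message
        lik = np.where(np.arange(M) > median,
                       (1.0 - p) if y == 1 else p,
                       p if y == 1 else (1.0 - p))
        belief = belief * lik
        belief /= belief.sum()
    return int(np.argmax(belief))
```

With p = 0 the scheme reduces to exact binary search, halving the belief's support every step; with p > 0 the belief concentrates more slowly, which is the regime the adaptive refinement analysis addresses.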