Source Coding with Fixed Lag Side Information
We consider source coding with fixed lag side information at the decoder. We
focus on the special case of perfect side information with unit lag
corresponding to source coding with feedforward (the dual of channel coding
with feedback) introduced by Pradhan. We use this duality to develop a linear
complexity algorithm which achieves the rate-distortion bound for any
memoryless finite alphabet source and distortion measure. Comment: 10 pages, 3 figures.
On the Universality of the Logistic Loss Function
A loss function measures the discrepancy between the true values
(observations) and their estimated fits, for a given instance of data. A loss
function is said to be proper (unbiased, Fisher consistent) if the fits are
defined over a unit simplex, and the minimizer of the expected loss is the true
underlying probability of the data. Typical examples are the zero-one loss, the
quadratic loss and the Bernoulli log-likelihood loss (log-loss). In this work
we show that for binary classification problems, the divergence associated with
smooth, proper and convex loss functions is bounded from above by the
Kullback-Leibler (KL) divergence, up to a multiplicative normalization
constant. This implies that by minimizing the log-loss (which is associated with the KL
divergence), we simultaneously minimize an upper bound on every loss function in
this set. This property justifies the broad use of the log-loss in regression,
decision trees, deep neural networks and many other applications. In addition,
we show that the KL divergence bounds from above any separable Bregman
divergence that is convex in its second argument (up to a multiplicative
normalization constant). This result introduces a new set of divergence
inequalities, similar to the well-known Pinsker inequality.
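The binary special case of Pinsker's inequality mentioned here, KL(p||q) >= 2(p - q)^2 (in nats), already exhibits the claimed structure: the KL divergence upper-bounds the squared-loss (Bregman) divergence up to a constant. A minimal numerical check (the function names are illustrative, not from the paper):

```python
import numpy as np

def binary_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in nats."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def squared_divergence(p, q):
    """Bregman divergence generated by the squared loss: (p - q)^2."""
    return (p - q) ** 2

# Verify KL(p||q) >= 2 * (p - q)^2 on a grid of interior points
# (endpoints excluded to avoid log(0)).
grid = np.linspace(0.01, 0.99, 99)
ok = all(binary_kl(p, q) + 1e-12 >= 2 * squared_divergence(p, q)
         for p in grid for q in grid)
print(ok)
```

The small tolerance `1e-12` guards against floating-point round-off near p = q, where both sides vanish.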
A refined analysis of the Poisson channel in the high-photon-efficiency regime
We study the discrete-time Poisson channel under the constraint that its
average input power (in photons per channel use) must not exceed some constant
E. We consider the wideband, high-photon-efficiency extreme where E approaches
zero, and where the channel's "dark current" approaches zero proportionally
with E. Improving over a previously obtained first-order capacity
approximation, we derive a refined approximation, which includes the exact
characterization of the second-order term, as well as an asymptotic
characterization of the third-order term with respect to the dark current. We
also show that pulse-position modulation is nearly optimal in this regime. Comment: Revised version to appear in IEEE Transactions on Information Theory.
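As background for the "first-order capacity approximation" being refined here: to my understanding (stated here as general context, not taken from this paper), the capacity of the discrete-time Poisson channel in this low-power regime scales to first order as

```latex
C(\mathcal{E}) = \mathcal{E} \log \frac{1}{\mathcal{E}} \, \bigl(1 + o(1)\bigr),
\qquad \mathcal{E} \to 0,
```

so the refinement in the abstract concerns the terms hidden in the $o(1)$ correction, including their dependence on the dark current.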
Authentication with Distortion Criteria
In a variety of applications, there is a need to authenticate content that
has experienced legitimate editing in addition to potential tampering attacks.
We develop one formulation of this problem based on a strict notion of
security, and characterize and interpret the associated information-theoretic
performance limits. The results can be viewed as a natural generalization of
classical approaches to traditional authentication. Additional insights into
the structure of such systems and their behavior are obtained by further
specializing the results to Bernoulli and Gaussian cases. The associated
systems are shown to be substantially better in terms of performance and/or
security than commonly advocated approaches based on data hiding and digital
watermarking. Finally, the formulation is extended to obtain efficient layered
authentication system constructions. Comment: 22 pages, 10 figures.
Toward Photon-Efficient Key Distribution over Optical Channels
This work considers the distribution of a secret key over an optical
(bosonic) channel in the regime of high photon efficiency, i.e., when the
number of secret key bits generated per detected photon is high. While in
principle the photon efficiency is unbounded, there is an inherent tradeoff
between this efficiency and the key generation rate (with respect to the
channel bandwidth). We derive asymptotic expressions for the optimal generation
rates in the photon-efficient limit, and propose schemes that approach these
limits up to certain approximations. The schemes are practical, in the sense
that they use coherent or temporally-entangled optical states and direct
photodetection, all of which are reasonably easy to realize in practice, in
conjunction with off-the-shelf classical codes. Comment: In IEEE Transactions on
Information Theory; same version except that labels are corrected for Schemes
S-1, S-2, and S-3, which appear as S-3, S-4, and S-5 in the Transactions.
A Simple Message-Passing Algorithm for Compressed Sensing
We consider the recovery of a nonnegative vector x from measurements y = Ax,
where A is an m-by-n matrix whose entries are in {0, 1}. We establish that when
A corresponds to the adjacency matrix of a bipartite graph with sufficient
expansion, a simple message-passing algorithm produces an estimate \hat{x} of x
satisfying ||x-\hat{x}||_1 \leq O(n/k) ||x-x(k)||_1, where x(k) is the best
k-sparse approximation of x. The algorithm performs O(n (log(n/k))^2 log(k))
computation in total, and the number of measurements required is m = O(k
log(n/k)). In the special case when x is k-sparse, the algorithm recovers x
exactly in time O(n log(n/k) log(k)). Ultimately, this work is a further step
in the direction of more formally developing the broader role of
message-passing algorithms in solving compressed sensing problems.
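To give a flavor of message passing on a sparse 0/1 measurement matrix, here is a small peeling-style decoder in the spirit of expander-graph recovery. It is a hypothetical sketch, not the paper's algorithm: a zero measurement forces all of its unresolved neighbors to zero (valid because x is nonnegative), and a measurement with a single unresolved neighbor determines that entry directly.

```python
import numpy as np

def peeling_decode(A, y, max_iter=50):
    """Recover a nonnegative sparse x from y = A x, with A in {0,1}^{m x n}.

    Illustrative peeling rules (assumes exact, noiseless measurements):
      * r[i] == 0  -> every unresolved neighbor of check i is zero;
      * exactly one unresolved neighbor -> its value is the residual r[i].
    """
    m, n = A.shape
    x = np.zeros(n)
    unresolved = np.ones(n, dtype=bool)
    r = y.astype(float).copy()          # residual measurements
    for _ in range(max_iter):
        progress = False
        for i in range(m):
            nbrs = np.where((A[i] == 1) & unresolved)[0]
            if nbrs.size == 0:
                continue
            if r[i] == 0:               # all unresolved neighbors must be 0
                unresolved[nbrs] = False
                progress = True
            elif nbrs.size == 1:        # singleton check: read off the value
                j = nbrs[0]
                x[j] = r[i]
                unresolved[j] = False
                r = r - A[:, j] * x[j]  # subtract j's contribution everywhere
                progress = True
        if not progress:
            break
    return x

# Hand-crafted example where peeling succeeds:
A = np.array([[0, 1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [1, 1, 0, 0, 1, 0],
              [0, 0, 1, 1, 0, 1]])
x_true = np.array([3.0, 0, 0, 5.0, 0, 0])
x_hat = peeling_decode(A, A @ x_true)
print(x_hat)
```

Success of such peeling hinges on the expansion property of the underlying bipartite graph, which is exactly the condition the abstract places on A.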
Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication
We consider asynchronous point-to-point communication. Building on a recently
developed model, we show that training based schemes, i.e., communication
strategies that separate synchronization from information transmission, perform
suboptimally at high rate. Comment: To appear in the proceedings of the 2009
IEEE Information Theory Workshop (Taormina).