Secure Multiterminal Source Coding with Side Information at the Eavesdropper
The problem of secure multiterminal source coding with side information at
the eavesdropper is investigated. This scenario consists of a main encoder
(referred to as Alice) that wishes to compress a single source while
simultaneously satisfying the desired requirements on the distortion level at a
legitimate receiver (referred to as Bob) and on the equivocation rate (average
uncertainty) at an eavesdropper (referred to as Eve). The presence of a
(public) rate-limited link between Alice and Bob is further assumed. In this
setting, Eve perfectly observes the information bits sent by Alice to Bob and
also has access to a correlated source that can be used as side information. A
second encoder (referred to as Charlie) helps Bob in estimating Alice's source
by sending a compressed version of its own correlated observation via a
(private) rate-limited link, which is only observed by Bob. The problem at
hand can thus be seen as a unification of the Berger-Tung and secure source
coding setups. Inner and outer bounds on the so-called
rates-distortion-equivocation region are derived. The inner region turns out to be
tight for two cases: (i) uncoded side information at Bob and (ii) lossless
reconstruction of both sources at Bob (secure distributed lossless
compression). Application examples to secure lossy source coding of Gaussian
and binary sources in the presence of Gaussian and binary/ternary
(respectively) side information are also considered. Optimal coding schemes
are characterized for
some cases of interest where the statistical differences between the side
information at the decoders and the presence of a non-zero distortion at Bob
can be fully exploited to guarantee secrecy.
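For concreteness, a rates-distortion-equivocation region of this kind is typically defined through operational constraints of the following form. This is a standard-formalization sketch in our own illustrative notation (A^n for Alice's source, J_A and J_C for the public and private messages, E^n for Eve's side information), not the paper's:

% Sketch of the operational constraints behind a tuple (R_A, R_C, D, \Delta).
% All symbols here are illustrative, not taken from the paper.
\begin{align*}
  \tfrac{1}{n}\log\|J_A\| &\le R_A + \epsilon
      && \text{(rate of Alice's public message)}\\
  \tfrac{1}{n}\log\|J_C\| &\le R_C + \epsilon
      && \text{(rate of Charlie's private message)}\\
  \mathbb{E}\,d\bigl(A^n,\hat{A}^n(J_A,J_C)\bigr) &\le D + \epsilon
      && \text{(distortion of Bob's estimate)}\\
  \tfrac{1}{n}H\bigl(A^n \mid J_A, E^n\bigr) &\ge \Delta - \epsilon
      && \text{(equivocation at Eve, who sees only the public link)}
\end{align*}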
Network vector quantization
We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, here we present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
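The design procedure is in the spirit of Lloyd-type iteration. As a point of reference only, here is a minimal sketch of classical single-terminal VQ design under that assumption; the network variant in the paper optimizes encoders and decoders against network-wide distortion, which this sketch omits:

import numpy as np

def design_vq(training, num_codewords, iters=50, seed=0):
    """Classical generalized-Lloyd VQ design (single terminal).

    Alternates the two optimality conditions:
      1. nearest-neighbor encoding of each training vector,
      2. centroid update of each codeword.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = training[rng.choice(len(training), num_codewords, replace=False)]
    for _ in range(iters):
        # Encoder step: assign each vector to its nearest codeword.
        dists = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Decoder step: move each codeword to the centroid of its cell.
        for k in range(num_codewords):
            cell = training[assign == k]
            if len(cell) > 0:
                codebook[k] = cell.mean(axis=0)
    return codebook

# Toy usage: 2-D Gaussian source, 8-codeword quantizer (3 bits/vector).
data = np.random.default_rng(1).normal(size=(5000, 2))
cb = design_vq(data, num_codewords=8)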
Caveats for information bottleneck in deterministic scenarios
Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y. To do so, IB identifies an intermediate "bottleneck" variable T that has low mutual information I(X;T) and high mutual information I(Y;T). The "IB curve" characterizes the set of bottleneck variables that achieve maximal I(Y;T) for a given I(X;T), and is typically explored by maximizing the "IB Lagrangian", I(Y;T) - βI(X;T). In some cases, Y is a deterministic function of X, including many classification problems in supervised learning where the output class Y is a deterministic function of the input X. We demonstrate three caveats when using IB in any situation where Y is a deterministic function of X: (1) the IB curve cannot be recovered by maximizing the IB Lagrangian for different values of β; (2) there are "uninteresting" trivial solutions at all points of the IB curve; and (3) for multi-layer classifiers that achieve low prediction error, different layers cannot exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. We also show that when Y is a small perturbation away from being a deterministic function of X, these three caveats arise in an approximate way. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We demonstrate the three caveats on the MNIST dataset.
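As a concrete illustration of the quantities involved, the following sketch evaluates I(X;T), I(Y;T), and the IB Lagrangian for a discrete toy problem in which Y is a deterministic function of X. The toy distribution and all names are ours, not the paper's:

import numpy as np

def mutual_information(p_joint):
    """I(A;B) in bits from a joint distribution table p(a, b)."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (pa @ pb)[mask])).sum())

# Toy setup: X uniform over 4 symbols, Y = f(X) deterministic (2 classes),
# and a stochastic bottleneck T given by an encoder q(t|x).
p_x = np.full(4, 0.25)
f = np.array([0, 0, 1, 1])                      # Y = f(X)
q_t_given_x = np.array([[0.9, 0.1],             # rows: x, cols: t
                        [0.8, 0.2],
                        [0.2, 0.8],
                        [0.1, 0.9]])

p_xt = p_x[:, None] * q_t_given_x               # joint p(x, t)
p_yt = np.zeros((2, 2))                         # joint p(y, t)
for x in range(4):
    p_yt[f[x]] += p_xt[x]

beta = 0.5
I_XT, I_YT = mutual_information(p_xt), mutual_information(p_yt)
print(f"I(X;T)={I_XT:.3f}  I(Y;T)={I_YT:.3f}  "
      f"IB Lagrangian={I_YT - beta * I_XT:.3f}")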
Comparison of Channels: Criteria for Domination by a Symmetric Channel
This paper studies the basic question of whether a given channel V can be dominated (in the precise sense of being more noisy) by a q-ary symmetric channel. The concept of a "less noisy" relation between channels originated in network information theory (broadcast channels) and is defined in terms of mutual information or Kullback-Leibler divergence. We provide an equivalent characterization in terms of χ²-divergence. Furthermore, we develop a simple criterion for domination by a q-ary symmetric channel in terms of the minimum entry of the stochastic matrix defining the channel V. The criterion is strengthened for the special case of additive noise channels over finite Abelian groups. Finally, it is shown that domination by a symmetric channel implies (via comparison of Dirichlet forms) a logarithmic Sobolev inequality
for the original channel.
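The sketch below numerically probes the divergence-based characterization on random input-distribution pairs. The channel matrices, the ordering convention (the dominating symmetric channel being the more noisy one, per the abstract's wording), and the use of χ²-divergence in place of KL are our reading; random sampling can only provide evidence against domination, never certify it:

import numpy as np

def qsc(q, delta):
    """q-ary symmetric channel: keep input w.p. 1-delta, else uniform error."""
    return (1 - delta) * np.eye(q) + (delta / (q - 1)) * (np.ones((q, q)) - np.eye(q))

def chi2_div(p, r):
    """chi^2 divergence chi^2(p || r), assuming r > 0 everywhere."""
    return float(((p - r) ** 2 / r).sum())

def sample_simplex(rng, q):
    x = rng.exponential(size=q)
    return x / x.sum()

# An arbitrary channel V (rows = inputs) and a candidate symmetric channel.
V = np.array([[0.70, 0.20, 0.10],
              [0.10, 0.80, 0.10],
              [0.15, 0.15, 0.70]])
W = qsc(3, 0.4)

# If W is more noisy than V, divergences between output distributions
# through W should never exceed those through V (our reading of the order).
rng = np.random.default_rng(0)
ok = all(
    chi2_div(p @ W, r @ W) <= chi2_div(p @ V, r @ V) + 1e-12
    for p, r in ((sample_simplex(rng, 3), sample_simplex(rng, 3))
                 for _ in range(10000))
)
print("consistent with domination on 10k random input pairs:", ok)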
Generalizing Capacity: New Definitions and Capacity Theorems for Composite Channels
We consider three capacity definitions for composite
channels with channel side information at the receiver. A composite
channel consists of a collection of different channels with a
distribution characterizing the probability that each channel is in
operation. The Shannon capacity of a channel is the highest rate
asymptotically achievable with arbitrarily small error probability.
Under this definition, the transmission strategy used to achieve
the capacity must achieve arbitrarily small error probability for
all channels in the collection comprising the composite channel.
The resulting capacity is dominated by the worst channel in its
collection, no matter how unlikely that channel is. We, therefore,
broaden the definition of capacity to allow for some outage.
The capacity versus outage is the highest rate asymptotically
achievable with a given probability of decoder-recognized outage.
The expected capacity is the highest average rate asymptotically
achievable with a single encoder and multiple decoders, where
channel side information determines the channel in use. The
expected capacity is a generalization of capacity versus outage
since codes designed for capacity versus outage decode at one of
two rates (rate zero when the channel is in outage and the target
rate otherwise) while codes designed for expected capacity can
decode at many rates. Expected capacity equals Shannon capacity
for channels governed by a stationary ergodic random process
but is typically greater for general channels. The capacity versus
outage and expected capacity definitions relax the constraint that
all transmitted information must be decoded at the receiver. We
derive channel coding theorems for these capacity definitions
through information density and provide numerical examples to
highlight their connections and differences. We also discuss the
implications of these alternative capacity definitions for end-to-end
distortion, source-channel coding, and separation.
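To make the differences between the three definitions concrete, here is a toy numeric sketch for a two-state composite binary symmetric channel. The specific numbers are ours, and the expected-capacity line is only the state-averaged upper bound (achieving it generally requires multi-rate, broadcast-style codes); the paper's actual coding theorems are stated via information density:

import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

# Composite channel: a BSC whose crossover is 0.01 w.p. 0.9 ("good" state)
# and 0.3 w.p. 0.1 ("bad" state); the receiver knows the state.
states = [(0.9, 0.01), (0.1, 0.3)]
caps = [(prob, bsc_capacity(p)) for prob, p in states]

# Shannon capacity: the code must work in every state, so the worst
# channel in the collection dominates, however unlikely it is.
shannon = min(c for _, c in caps)

# Capacity versus outage at outage probability 0.1: declare the bad
# state an outage and signal at the good state's rate.
outage_rate = caps[0][1]

# Expected capacity is upper-bounded by the state-averaged capacity.
expected_ub = sum(prob * c for prob, c in caps)

print(f"Shannon capacity      : {shannon:.3f} bits/use")
print(f"Rate at 10% outage    : {outage_rate:.3f} bits/use")
print(f"Expected capacity <=  : {expected_ub:.3f} bits/use")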