The operational meaning of min- and max-entropy
We show that the conditional min-entropy Hmin(A|B) of a bipartite state
rho_AB is directly related to the maximum achievable overlap with a maximally
entangled state if only local actions on the B-part of rho_AB are allowed. In
the special case where A is classical, this overlap corresponds to the
probability of guessing A given B. In a similar vein, we connect the
conditional max-entropy Hmax(A|B) to the maximum fidelity of rho_AB with a
product state that is completely mixed on A. In the case where A is classical,
this corresponds to the security of A when used as a secret key in the presence
of an adversary holding B. Because min- and max-entropies are known to
characterize information-processing tasks such as randomness extraction and
state merging, our results establish a direct connection between these tasks
and basic operational problems. For example, they imply that the (logarithm of
the) probability of guessing A given B is a lower bound on the number of
uniform secret bits that can be extracted from A relative to an adversary
holding B.

Comment: 12 pages, v2: no change in content, some typos corrected (including
the definition of fidelity in footnote 8), now closer to the published
version
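For classical A, the operational statement above has a concrete arithmetic form: the optimal strategy guesses, for each observed b, the value a maximizing the joint probability P(a, b), so p_guess = Σ_b max_a P(a, b) and Hmin(A|B) = -log2 p_guess. A minimal numerical sketch for this classical-classical special case, with a made-up joint distribution:

```python
import numpy as np

# Hypothetical joint distribution P(a, b) of a classical value A and
# side information B (rows index a, columns index b); entries sum to 1.
P = np.array([
    [0.30, 0.10],
    [0.05, 0.25],
    [0.15, 0.15],
])

# Optimal guessing strategy: for each observed b, guess the a maximizing P(a, b).
p_guess = P.max(axis=0).sum()   # sum over b of (max over a of P(a, b))

# Conditional min-entropy of the corresponding classical-classical state.
h_min = -np.log2(p_guess)

print(f"p_guess   = {p_guess:.3f}")    # 0.550
print(f"Hmin(A|B) = {h_min:.3f} bits")
```

Here B is informative: without it, the best guess of A succeeds with probability max_a Σ_b P(a, b) = 0.40 < 0.55.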
A precise bare simulation approach to the minimization of some distances. Foundations
In information theory -- as well as in the adjacent fields of statistics,
machine learning, artificial intelligence, signal processing and pattern
recognition -- many flexibilizations of the omnipresent Kullback-Leibler
information distance (relative entropy) and of the closely related Shannon
entropy have become frequently used tools. The main goal of this paper is to
tackle the corresponding constrained minimization (respectively, maximization)
problems with a newly developed, dimension-free bare (pure) simulation method.
Almost no assumptions (like convexity) on the set of constraints are needed,
within our discrete setup of arbitrary dimension, and our method is precise
(i.e., converges in the limit). As a side effect, we also derive an innovative
way of constructing new useful distances/divergences. To illustrate the core of
our approach, we present numerous examples. The potential for widespread
applicability is indicated, too; in particular, we deliver many recent
references for uses of the involved distances/divergences and entropies in
various different research fields (which may also serve as an interdisciplinary
interface).
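As a toy illustration of the kind of problem addressed (deliberately not the paper's bare-simulation estimator), one can naively approximate a constrained minimizer of the Kullback-Leibler distance by drawing candidate distributions uniformly from the probability simplex and keeping the best feasible one; the three-point alphabet and moment constraint below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference distribution p on a 3-point alphabet.
p = np.array([0.5, 0.3, 0.2])

def kl(q, p):
    """Kullback-Leibler divergence D(q || p) in nats, with 0 log 0 := 0."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Invented constraint set Omega: distributions q whose mean under the
# alphabet values (0, 1, 2) is at least 1.5.
values = np.array([0.0, 1.0, 2.0])

best_q, best_val = None, np.inf
for _ in range(20000):
    q = rng.dirichlet(np.ones(3))   # uniform sample from the simplex
    if values @ q >= 1.5:           # keep only feasible candidates
        d = kl(q, p)
        if d < best_val:
            best_q, best_val = q, d

print("approx minimizer:", np.round(best_q, 3))
print("approx min divergence (nats):", round(best_val, 4))
```

Note the sketch needs no convexity of the constraint set, which mirrors the paper's setting, but plain rejection sampling like this does not scale with dimension; the dimension-free behavior is precisely what the bare-simulation method itself contributes.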
Incremental Refinements and Multiple Descriptions with Feedback
It is well known that independent (separate) encoding of K correlated sources
may incur some rate loss compared to joint encoding, even if the decoding is
done jointly. This loss is particularly evident in the multiple descriptions
problem, where the sources are repetitions of the same source, but each
description must be individually good. We observe that under mild conditions
about the source and distortion measure, the rate ratio Rindependent(K)/Rjoint
goes to one in the limit of small rate/high distortion. Moreover, we consider
the excess rate with respect to the rate-distortion function, Rindependent(K,
M) - R(D), in M rounds of K independent encodings with a final distortion level
D. We provide two examples - a Gaussian source with mean-squared error and an
exponential source with one-sided error - for which the excess rate vanishes in
the limit as the number of rounds M goes to infinity, for any fixed D and K.
This result has an interesting interpretation for a multi-round variant of the
multiple descriptions problem, where after each round the encoder gets a
(block) feedback regarding which of the descriptions arrived: In the limit as
the number of rounds M goes to infinity (i.e., many incremental rounds), the
total rate of received descriptions approaches the rate-distortion function. We
provide theoretical and experimental evidence showing that this phenomenon is
in fact more general than in the two examples above.

Comment: 62 pages. Accepted in the IEEE Transactions on Information Theory
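For context on the first example, the benchmark is the Gaussian/MSE rate-distortion function R(D) = (1/2) log2(sigma^2 / D), and the Gaussian source is successively refinable: incremental rates over any chain of intermediate distortion levels telescope to R(D) with no total rate loss. This classical fact (not the paper's multi-round feedback scheme) can be checked numerically; the distortion schedule below is arbitrary:

```python
import math

def gaussian_rd(sigma2, D):
    """Rate-distortion function of a Gaussian source under MSE, in bits."""
    return 0.5 * math.log2(sigma2 / D) if D < sigma2 else 0.0

sigma2 = 1.0
D_final = 0.01

# Refine through M geometrically spaced intermediate distortion levels.
M = 5
levels = [sigma2 * (D_final / sigma2) ** ((m + 1) / M) for m in range(M)]

increments, prev = [], sigma2
for D in levels:
    increments.append(gaussian_rd(prev, D))  # incremental rate of this round
    prev = D

total = sum(increments)
print(f"R(D) directly      : {gaussian_rd(sigma2, D_final):.4f} bits")
print(f"sum of M increments: {total:.4f} bits")
```

The two printed rates coincide because 0.5 log2(D_{m-1}/D_m) telescopes; the paper's contribution concerns the harder setting where the M encodings are independent descriptions rather than nested refinements.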
From Quantum Source Compression to Quantum Thermodynamics
This thesis addresses problems in the field of quantum information theory.
The first part of the thesis is opened with concrete definitions of general
quantum source models and their compression, and each subsequent chapter
addresses the compression of a specific source model as a special case of the
initially defined general models. First, we find the optimal compression rate
of a general mixed state source which includes as special cases all the
previously studied models such as Schumacher's pure and ensemble sources and
other mixed-state ensemble models. For an interpolation between the visible and
blind versions of Schumacher's ensemble model, we find the optimal compression
rate region for the entanglement and quantum rates. Later, we study the
classical-quantum variation of the celebrated Slepian-Wolf problem and the
ensemble model of quantum state redistribution, for which we find the optimal
compression rate under a per-copy fidelity criterion, with single-letter
achievable and converse bounds that match up to the continuity of the functions
appearing in those bounds.
The second part of the thesis revolves around an information-theoretic
perspective on quantum thermodynamics. We start with a resource-theory point of
view of a quantum system with multiple non-commuting charges. Subsequently, we
apply this resource theory framework to study a traditional thermodynamics
setup with multiple non-commuting conserved quantities consisting of a main
system, a thermal bath and batteries to store various conserved quantities of
the system. We state the laws of thermodynamics for this system and show
that a purely quantum effect arises in some transformations of the system,
that is, some transformations are feasible only if there are quantum
correlations between the final state of the system and the thermal bath.

Comment: PhD thesis, 176 pages
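For the Schumacher-type sources mentioned above, the optimal blind compression rate of a pure-state ensemble is the von Neumann entropy S(rho) of the average source state, S(rho) = -Tr(rho log2 rho). A small numerical sketch with a hypothetical qubit ensemble:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho); the qubit rate of Schumacher compression."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]   # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

# Hypothetical ensemble: |0> with probability 1/2, |+> with probability 1/2.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

print(f"S(rho) = {von_neumann_entropy(rho):.4f} qubits per source symbol")
```

Because the two states are non-orthogonal, S(rho) is strictly below the one bit that a classical description of the ensemble label would need, which is the hallmark of quantum source compression.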