A Uniform Min-Max Theorem with Applications in Cryptography
We present a new, more constructive proof of von Neumann's Min-Max Theorem for two-player zero-sum games: specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior '99), with the advantage that it runs in poly(n) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over {0,1}^n. This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform-security versions of results that were previously proved only for nonuniform security (due to use of the non-constructive Min-Max Theorem).
We describe several applications, including a more modular and improved uniform version of Impagliazzo's Hardcore Theorem (FOCS '95), showing impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC '11) for the nonuniform setting), and efficiently simulating high entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC '09)).
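The constructive flavor of such proofs can be illustrated with the classic multiplicative-weights dynamic of Freund and Schapire, which the algorithm above extends. This is a minimal sketch for a tiny explicit game, not the paper's construction: the row player (minimizer) updates weights against repeated best responses of the column player, and the averaged row strategy approaches a minimax-optimal mixed strategy.

```python
import math

# Multiplicative-weights sketch for a two-player zero-sum game.
# M[i][j] is the loss of the row player when playing row i against column j.
M = [[0.0, 1.0],
     [1.0, 0.0]]  # matching pennies; minimax value = 0.5

n_rows, n_cols = len(M), len(M[0])
eta = 0.1                  # learning rate
w = [1.0] * n_rows         # row player's weights over pure strategies
avg = [0.0] * n_rows       # running average of the row player's mixed strategies
T = 2000

for _ in range(T):
    total = sum(w)
    p = [wi / total for wi in w]  # current mixed strategy
    # Column player's best response: maximize the row player's expected loss.
    j = max(range(n_cols),
            key=lambda c: sum(p[i] * M[i][c] for i in range(n_rows)))
    # Multiplicative update: downweight rows that suffered high loss.
    w = [wi * math.exp(-eta * M[i][j]) for i, wi in enumerate(w)]
    avg = [a + pi / T for a, pi in zip(avg, p)]

# Worst-case loss guaranteed by the averaged strategy; approaches the game value.
value = max(sum(avg[i] * M[i][j] for i in range(n_rows)) for j in range(n_cols))
```

By the standard regret bound, `value` exceeds the minimax value 0.5 only by roughly ln(n)/(eta*T) + eta/8 after T rounds.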
Quantum to Classical Randomness Extractors
The goal of randomness extraction is to distill (almost) perfect randomness
from a weak source of randomness. When the source yields a classical string X,
many extractor constructions are known. Yet, when considering a physical
randomness source, X is itself ultimately the result of a measurement on an
underlying quantum system. When characterizing the power of a source to supply
randomness, it is hence natural to ask how much classical randomness
can be extracted from a quantum system. To tackle this question we take up
the study of quantum-to-classical randomness extractors (QC-extractors). We
provide constructions of QC-extractors based on measurements in a full set of
mutually unbiased bases (MUBs), and certain single qubit measurements. As the
first application, we show that any QC-extractor gives rise to entropic
uncertainty relations with respect to quantum side information. Such relations
were previously only known for two measurements. As the second application, we
resolve the central open question in the noisy-storage model [Wehner et al.,
PRL 100, 220502 (2008)] by linking security to the quantum capacity of the
adversary's storage device.
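The defining property behind the MUB-based constructions above is that bases B1 and B2 of a d-dimensional space are mutually unbiased iff |&lt;a|b&gt;|^2 = 1/d for every a in B1, b in B2. A minimal illustrative check (not the paper's extractor) for a single qubit, where the eigenbases of the Pauli Z, X, Y operators form a full set of three MUBs:

```python
# Verify mutual unbiasedness of the three Pauli eigenbases for a qubit (d = 2).
s = 1 / 2 ** 0.5
Z = [[1, 0], [0, 1]]             # computational (Pauli-Z) basis
X = [[s, s], [s, -s]]            # Hadamard (Pauli-X) basis
Y = [[s, 1j * s], [s, -1j * s]]  # circular (Pauli-Y) basis

def inner(a, b):
    """Hermitian inner product <a|b>."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def unbiased(B1, B2, d=2):
    """True iff |<a|b>|^2 = 1/d for all a in B1, b in B2."""
    return all(abs(abs(inner(a, b)) ** 2 - 1 / d) < 1e-12
               for a in B1 for b in B2)

pairs_ok = all(unbiased(B1, B2) for B1, B2 in [(Z, X), (Z, Y), (X, Y)])
```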
Entanglement sampling and applications
A natural measure for the amount of quantum information that a physical
system E holds about another system A = A_1,...,A_n is given by the min-entropy
Hmin(A|E). Specifically, the min-entropy measures the amount of entanglement
between E and A, and is the relevant measure when analyzing a wide variety of
problems ranging from randomness extraction in quantum cryptography, decoupling
used in channel coding, to physical processes such as thermalization or the
thermodynamic work cost (or gain) of erasing a quantum system. As such, it is a
central question to determine the behaviour of the min-entropy after some
process M is applied to the system A. Here we introduce a new generic tool
relating the resulting min-entropy to the original one, and apply it to several
settings of interest, including sampling of subsystems and measuring in a
randomly chosen basis. The sampling results lead to new upper bounds on quantum
random access codes, and imply the existence of "local decouplers". The results
on random measurements yield new high-order entropic uncertainty relations with
which we prove the optimality of cryptographic schemes in the bounded quantum
storage model.
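The classical special case of the min-entropy above is easy to compute and conveys the operational meaning: with trivial side information E, Hmin(X) = -log2(max_x p(x)) bounds the number of nearly uniform bits extractable from X. A minimal sketch (illustrative values, not from the paper):

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def min_entropy(p):
    """Classical min-entropy Hmin(X) = -log2(max_x p(x))."""
    return -math.log2(max(p))

# A biased 4-outcome source: min-entropy never exceeds Shannon entropy,
# and it is the quantity that governs randomness extraction.
p = [0.7, 0.1, 0.1, 0.1]
h_min = min_entropy(p)       # -log2(0.7), about 0.515 bits
h_sh = shannon_entropy(p)    # about 1.357 bits
```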
Simulating Auxiliary Inputs, Revisited
For any pair $(X, Z)$ of correlated random variables we can think of $Z$ as a
randomized function of $X$. Provided that $Z$ is short, one can make this
function computationally efficient by allowing it to be only approximately
correct. In folklore this problem is known as \emph{simulating auxiliary
inputs}. This idea of simulating auxiliary information turns out to be a
powerful tool in computer science, finding applications in complexity theory,
cryptography, pseudorandomness and zero-knowledge. In this paper we revisit
this problem, achieving the following results:
\begin{enumerate}[(a)]
\item We discuss and compare the efficiency of known results, finding the flaw
in the best known bound claimed in the TCC'14 paper ``How to Fake Auxiliary
Inputs''.
\item We present a novel boosting algorithm for constructing the simulator. Our
technique essentially fixes the flaw. This boosting proof is of independent
interest, as it shows how to handle ``negative mass'' issues when constructing
probability measures in descent algorithms.
\item Our bounds are much better than those known so far. To make the simulator
$(s,\epsilon)$-indistinguishable we need complexity
$O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which is
better by a factor of $\epsilon^{-2}$ compared to previous bounds. In
particular, with our technique we (finally) get meaningful provable security
for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a
standard 256-bit block cipher, like AES256.
\end{enumerate}
Finite-Block-Length Analysis in Classical and Quantum Information Theory
Coding technology is used in several information processing tasks. In
particular, when noise during transmission disturbs communications, coding
technology is employed to protect the information. However, there are two types
of coding technology: coding in classical information theory and coding in
quantum information theory. Although the physical media used to transmit
information ultimately obey quantum mechanics, we need to choose the type of
coding depending on the kind of information device, classical or quantum, that
is being used. In both branches of information theory, there are many elegant
theoretical results under the ideal assumption that an infinitely large system
is available. In a realistic situation, we need to account for finite size
effects. The present paper reviews finite size effects in classical and quantum
information theory with respect to various topics, including applied aspects.
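The finite-size effects the review is concerned with can be seen already in the simplest code. This is a toy illustration (not taken from the survey): an n-fold repetition code over a binary symmetric channel with flip probability f, where the exact finite-block-length error probability is a binomial tail that only vanishes as n grows.

```python
import math

def repetition_error(n, f):
    """P(majority decoding fails) for odd blocklength n over a BSC(f):
    the decoder errs when more than half the transmitted bits flip."""
    return sum(math.comb(n, k) * f**k * (1 - f)**(n - k)
               for k in range(n // 2 + 1, n + 1))

f = 0.1
errs = {n: repetition_error(n, f) for n in (1, 3, 11, 101)}
# The error probability decays with blocklength: 0.1 at n=1, 0.028 at n=3, ...
```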
Leftover Hashing Against Quantum Side Information
The Leftover Hash Lemma states that the output of a two-universal hash
function applied to an input with sufficiently high entropy is almost uniformly
random. In its standard formulation, the lemma refers to a notion of randomness
that is (usually implicitly) defined with respect to classical side
information. Here, we prove a (strictly) more general version of the Leftover
Hash Lemma that is valid even if side information is represented by the state
of a quantum system. Furthermore, our result applies to arbitrary delta-almost
two-universal families of hash functions. The generalized Leftover Hash Lemma
has applications in cryptography, e.g., for key agreement in the presence of an
adversary who is not restricted to classical information processing.
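A classical instance of the hash families the lemma speaks about is the textbook Carter-Wegman family h_{a,b}(x) = ((a*x + b) mod p) mod m, which is universal: for any fixed x != y, the collision probability over a random (a, b) is at most 1/m. A small exhaustive check (illustrative; the abstract's delta-almost two-universal families are more general):

```python
import itertools

p, m = 101, 8   # prime modulus, output range size

def h(a, b, x):
    """Carter-Wegman hash: ((a*x + b) mod p) mod m."""
    return ((a * x + b) % p) % m

def collision_prob(x, y):
    """Collision probability over all keys a in 1..p-1, b in 0..p-1."""
    hits = sum(1 for a in range(1, p) for b in range(p)
               if h(a, b, x) == h(a, b, y))
    return hits / ((p - 1) * p)

# Exhaustively check a few distinct inputs: the worst-case collision
# probability stays at or below 1/m = 0.125, as universality requires.
worst = max(collision_prob(x, y)
            for x, y in itertools.combinations(range(10), 2))
```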
Entropy accumulation
We ask the question whether entropy accumulates, in the sense that the
operationally relevant total uncertainty about an $n$-partite system
$A = (A_1, \ldots, A_n)$ corresponds to the sum of the entropies of its parts
$A_i$. The Asymptotic Equipartition Property implies that this is indeed the
case to first order in $n$, under the assumption that the parts $A_i$ are
identical and independent of each other. Here we show that entropy accumulation
occurs more generally, i.e., without an independence assumption, provided one
quantifies the uncertainty about the individual systems $A_i$ by the von
Neumann entropy
of suitably chosen conditional states. The analysis of a large system can hence
be reduced to the study of its parts. This is relevant for applications. In
device-independent cryptography, for instance, the approach yields essentially
optimal security bounds valid for general attacks, as shown by Arnon-Friedman
et al.
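The independent-and-identical baseline that the abstract starts from can be checked classically in a few lines: for i.i.d. parts, Shannon entropy accumulates exactly, H(A_1...A_n) = n*H(A_1). The paper's contribution is that a similar accumulation holds far beyond independence; this sketch only illustrates the baseline.

```python
import itertools
import math

def H(dist):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

part = {'0': 0.9, '1': 0.1}   # one biased bit, H(part) ~ 0.469 bits
n = 4

# Joint distribution of n independent copies of the part.
joint = {''.join(xs): math.prod(part[x] for x in xs)
         for xs in itertools.product(part, repeat=n)}

# Entropy is exactly additive for independent parts.
additive = abs(H(joint) - n * H(part)) < 1e-9
```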
A New Approximate Min-Max Theorem with Applications in Cryptography
We propose a novel proof technique that can be applied to attack a broad
class of problems in computational complexity, when switching the order of
universal and existential quantifiers is helpful. Our approach combines the
standard min-max theorem and convex approximation techniques, offering
quantitative improvements over the standard way of using min-max theorems as
well as more concise and elegant proofs.