Trevisan's extractor in the presence of quantum side information
Randomness extraction involves the processing of purely classical information
and is therefore usually studied in the framework of classical probability
theory. However, such a classical treatment is generally too restrictive for
applications, where side information about the values taken by classical random
variables may be represented by the state of a quantum system. This is
particularly relevant in the context of cryptography, where an adversary may
make use of quantum devices. Here, we show that the well-known construction
paradigm for extractors proposed by Trevisan is sound in the presence of
quantum side information.
We exploit the modularity of this paradigm to give several concrete extractor
constructions, which, e.g., extract all the conditional (smooth) min-entropy of
the source using a seed of length poly-logarithmic in the input, or only
require the seed to be weakly random.

Comment: 20+10 pages; v2: extract more min-entropy, use weakly random seed;
v3: extended introduction, matches published version with sections somewhat
reordered.
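As an illustration of the modular paradigm (a toy Python sketch with made-up parameters, not the paper's construction): each output bit is obtained by restricting the seed to one set of a combinatorial design and using the restricted seed as an index into an encoding of the source.

import itertools
import random

def toy_design(m, t, d):
    # Toy stand-in for a weak design: m subsets of [d], each of size t.
    # A real construction needs small pairwise intersections; a sliding
    # window only shows where the design plugs in.
    assert d >= t
    return [[(i + j) % d for j in range(t)] for i in range(m)]

def hadamard_encode(x_bits):
    # Hadamard code: one bit <x, r> mod 2 for every r in {0,1}^n.
    # Exponential length, so only usable on tiny toy inputs.
    return [sum(xi & ri for xi, ri in zip(x_bits, r)) % 2
            for r in itertools.product([0, 1], repeat=len(x_bits))]

def toy_trevisan_ext(x_bits, seed_bits, m):
    # For each output bit: restrict the seed to a design set S_i and use that
    # restriction as an index into the codeword (the one-bit extractor step).
    t = len(x_bits)                       # Hadamard codeword has length 2^t
    code = hadamard_encode(x_bits)
    out = []
    for S in toy_design(m, t, len(seed_bits)):
        idx = int("".join(str(seed_bits[j]) for j in S), 2)
        out.append(code[idx])
    return out

# Example: extract 3 bits from a 4-bit toy source with a 7-bit seed.
x = [random.randint(0, 1) for _ in range(4)]
seed = [random.randint(0, 1) for _ in range(7)]
print(toy_trevisan_ext(x, seed, m=3))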
Samplers and Extractors for Unbounded Functions
Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant.
Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers.
We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
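The extractor-to-sampler direction of Zuckerman's equivalence can be sketched as follows (a minimal Python illustration; placeholder_ext is a hypothetical stand-in, not one of the constructions in the paper): draw a single random input x and average f over the points Ext(x, y) obtained from all seeds y.

import itertools
import random

def placeholder_ext(x_bits, y_bits):
    # Hypothetical extractor stand-in: return a block of x selected by the
    # seed. Only meant to show the interface Ext: {0,1}^n x {0,1}^d -> {0,1}^m.
    m = 4
    start = (int("".join(map(str, y_bits)), 2) * m) % (len(x_bits) - m + 1)
    return x_bits[start:start + m]

def sampler_from_ext(ext, n, d, f):
    # Averaging sampler induced by an extractor: pick one n-bit x, enumerate
    # all 2^d seeds, and average f over the resulting m-bit sample points.
    # For a (k, eps)-extractor and a [0,1]-bounded f, the sample mean is within
    # roughly eps of E[f(U_m)] except with probability about 2^(k - n).
    x = [random.randint(0, 1) for _ in range(n)]
    points = [ext(x, list(y)) for y in itertools.product([0, 1], repeat=d)]
    return sum(f(z) for z in points) / len(points)

# Example with a [0,1]-bounded test function (fraction of ones in the block).
print(sampler_from_ext(placeholder_ext, n=32, d=6, f=lambda z: sum(z) / len(z)))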
Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions
How to generate provably true randomness with minimal assumptions? This
question is important not only for the efficiency and the security of
information processing, but also for understanding how extremely unpredictable
events are possible in Nature. All current solutions require special structures
in the initial source of randomness, or a certain independence relation among
two or more sources. Both types of assumptions are impossible to test and
difficult to guarantee in practice. Here we show how this fundamental limit can
be circumvented by extractors that base security on the validity of physical
laws and extract randomness from untrusted quantum devices. In conjunction with
the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness
extractor uses just a single and general weak source, produces an arbitrarily
long and near-uniform output, with a close-to-optimal error, secure against
all-powerful quantum adversaries, and tolerating a constant level of
implementation imprecision. The source necessarily needs to be unpredictable to
the devices, but otherwise can even be known to the adversary.
Our central technical contribution, the Equivalence Lemma, provides a general
principle for proving composition security of untrusted-device protocols. It
implies that unbounded randomness expansion can be achieved simply by
cross-feeding any two expansion protocols. In particular, such an unbounded
expansion can be made robust, which was not known before. Another significant
implication is that it enables secure randomness generation and key
distribution using public randomness, such as that broadcast by NIST's
Randomness Beacon. Our protocol also provides a method for refuting local
hidden variable theories under a weak assumption on the available randomness
for choosing the measurement settings.

Comment: A substantial re-writing of V2, especially on model definitions. An
abstract model of robustness is added, and the robustness claim in V2 is made
rigorous. Focuses on quantum security. A future update is planned to address
non-signaling security.
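The cross-feeding idea itself can be written down schematically (a hypothetical skeleton in Python; expand_a and expand_b stand for two untrusted-device expansion protocols run on disjoint device sets, and the toy lambdas below only model the length growth, not the quantum protocols or their analysis):

def cross_feed(expand_a, expand_b, seed, rounds):
    # Alternate the two expansion protocols, feeding each one's output back in
    # as the other's seed; the Equivalence Lemma is what lets security of the
    # composition be argued round by round.
    out = seed
    for i in range(rounds):
        expand = expand_a if i % 2 == 0 else expand_b
        out = expand(out)          # output is longer, and again near-uniform
    return out

double_a = lambda s: s + s[::-1]
double_b = lambda s: s + s
print(len(cross_feed(double_a, double_b, seed=[0, 1, 1, 0], rounds=5)))  # 128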
Non-Malleable Extractors and Codes, with their Many Tampered Extensions
Randomness extractors and error correcting codes are fundamental objects in
computer science. Recently, there have been several natural generalizations of
these objects, in the context and study of tamper resilient cryptography. These
are seeded non-malleable extractors, introduced in [DW09]; seedless
non-malleable extractors, introduced in [CG14b]; and non-malleable codes,
introduced in [DPW10].
However, explicit constructions of non-malleable extractors appear to be
hard, and the known constructions are far behind their non-tampered
counterparts.
In this paper we make progress towards solving the above problems. Our
contributions are as follows.
(1) We construct an explicit seeded non-malleable extractor for min-entropy
. This dramatically improves all previous results and gives a
simpler 2-round privacy amplification protocol with optimal entropy loss,
matching the best known result in [Li15b].
(2) We construct the first explicit non-malleable two-source extractor for
min-entropy , with output size and
error .
(3) We initiate the study of two natural generalizations of seedless
non-malleable extractors and non-malleable codes, where the sources or the
codeword may be tampered many times. We construct the first explicit
non-malleable two-source extractor with tampering degree up to
, which works for min-entropy , with
output size and error . We show that we can
efficiently sample uniformly from any pre-image. By the connection in [CG14b],
we also obtain the first explicit non-malleable codes with tampering degree
up to , relative rate , and error .

Comment: 50 pages; see the paper for the full abstract.
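For reference, the seeded non-malleable extractors of [DW09] mentioned above are usually defined along the following lines (a standard paraphrase in LaTeX notation; the concrete parameters achieved here are stated in the full abstract): a function \mathrm{nmExt}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m is a (k,\varepsilon) non-malleable extractor if for every source X with H_\infty(X) \ge k and every tampering function \mathcal{A}\colon \{0,1\}^d \to \{0,1\}^d with no fixed points,

  (\mathrm{nmExt}(X,Y),\ \mathrm{nmExt}(X,\mathcal{A}(Y)),\ Y) \approx_{\varepsilon} (U_m,\ \mathrm{nmExt}(X,\mathcal{A}(Y)),\ Y),

where Y is uniform over \{0,1\}^d and independent of X.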
Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions
In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)).
The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, this is a function that receives input bits from r players, with the property that the function's output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r/r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon.
A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high a probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
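The one-round game described above can be made concrete with a Monte Carlo toy (majority as the resilient function, chosen only for illustration, not the function used in the paper): q corrupted players see the honest bits and can force majority's output precisely when the honest vote margin is at most about q.

import random

def coalition_fix_probability(r, q, trials=100000):
    # r players, q of them corrupted; with majority as the resilient function,
    # the coalition can force either output value roughly when the honest
    # votes are within q of a tie (ignoring tie-breaking details).
    honest = r - q
    fixed = 0
    for _ in range(trials):
        ones = sum(random.randint(0, 1) for _ in range(honest))
        if abs(2 * ones - honest) <= q:
            fixed += 1
    return fixed / trials

# For majority this forcing probability scales like q/sqrt(r); influence lower
# bounds show that against even one corrupted player no one-round function can
# have bias much below ln(r)/r, the error barrier referred to above.
print(coalition_fix_probability(r=1001, q=5))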