Extracting Randomness from Samplable Distributions
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications.
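As a hedged illustration of the enumeration idea (a generic technique, not this paper's construction; the function names and parameters below are hypothetical), the seed of a seeded extractor can be removed when the application only needs a majority vote over all seeds, e.g. when derandomizing a two-sided-error decision procedure, but not when a single uniform string is required:

```python
def decide_by_seed_enumeration(weak_sample, extractor, randomized_test, seed_bits):
    """Run the randomized test on the extractor's output for every possible
    seed and take a majority vote.  This eliminates the need for a truly
    random seed, but only in applications where a majority over seeds is
    meaningful (e.g. two-sided-error decision procedures)."""
    accept = 0
    for seed in range(2 ** seed_bits):
        r = extractor(weak_sample, seed)   # near-uniform bits for most seeds
        if randomized_test(r):
            accept += 1
    return 2 * accept > 2 ** seed_bits
```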
Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e., can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist.
We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the nonuniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting.
One of our uniform constructions is as follows: assuming that there are problems in E = DTIME(2^{O(n)}) that are not solvable by subexponential-size circuits with Sigma_6 gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1-gamma)n into an output distribution of length (1-O(gamma))n, where gamma is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection between deterministic extraction from samplable distributions and hardness against nondeterministic circuits, and on the use of nondeterminism to substantially speed up "list decoding" algorithms for error-correcting codes such as multivariate polynomial codes and Hadamard-like codes.
Efficiently Extracting Randomness from Imperfect Stochastic Processes
We study the problem of extracting a prescribed number of random bits by
reading the smallest possible number of symbols from non-ideal stochastic
processes. The related interval algorithm proposed by Han and Hoshi has
asymptotically optimal performance; however, it assumes that the distribution
of the input stochastic process is known. The motivation for our work is the
fact that, in practice, sources of randomness have inherent correlations and
are affected by measurement noise, so it is hard to obtain an accurate
estimate of the distribution. This challenge was addressed by the concepts of
seeded and seedless extractors that can handle general random sources with
unknown distributions. However, known seeded and seedless extractors provide
extraction efficiencies that are substantially smaller than Shannon's entropy
limit. Our main contribution is the design of extractors that have a variable
input-length and a fixed output length, are efficient in the consumption of
symbols from the source, are capable of generating random bits from general
stochastic processes and approach the information theoretic upper bound on
efficiency.
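As a point of reference (not the construction in this paper), the classic von Neumann extractor is perhaps the simplest variable input-length, fixed output-length procedure: it reads a biased i.i.d. bit stream in pairs and keeps only the unequal pairs. A minimal sketch:

```python
import random

def von_neumann_extract(bit_source, m):
    """Read bits from bit_source in pairs; output the first bit of each
    unequal pair ('01' -> 0, '10' -> 1) and discard equal pairs.  Consumes a
    variable number of source symbols to produce m unbiased output bits,
    assuming the source bits are i.i.d. (possibly biased)."""
    out = []
    while len(out) < m:
        a, b = next(bit_source), next(bit_source)
        if a != b:
            out.append(a)
    return out

# toy usage: a coin with bias 0.7 as the imperfect source
biased = iter(lambda: 1 if random.random() < 0.7 else 0, 2)
print(von_neumann_extract(biased, 16))
```

Its output rate is at most p(1-p) bits per consumed bit, well below the source's Shannon entropy H(p); this is exactly the kind of efficiency gap the extractors in this paper aim to close.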
Almost-Uniform Sampling of Points on High-Dimensional Algebraic Varieties
We consider the problem of uniform sampling of points on an algebraic
variety. Specifically, we develop a randomized algorithm that, given a small
set of multivariate polynomials over a sufficiently large finite field,
produces a common zero of the polynomials almost uniformly at random. The
statistical distance between the output distribution of the algorithm and the
uniform distribution on the set of common zeros is polynomially small in the
field size, and the running time of the algorithm is polynomial in the
description of the polynomials and their degrees, provided that the number of
polynomials is constant.
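For intuition only, a naive rejection sampler (not the paper's algorithm, which avoids the exponential cost noted in the comments) makes the sampling task concrete; the polynomial and parameters below are a hypothetical toy:

```python
import random

def rejection_sample_zero(polys, q, n_vars, max_tries=10**6):
    """Draw uniform points in F_q^n_vars and accept the first common zero.
    The accepted point is exactly uniform on the zero set, but the expected
    number of tries is q**n_vars divided by the number of zeros, so this is
    only viable when the zero set is dense."""
    for _ in range(max_tries):
        point = tuple(random.randrange(q) for _ in range(n_vars))
        if all(p(point) % q == 0 for p in polys):
            return point
    return None

# toy usage over F_101: points on the 'circle' x^2 + y^2 - 1 = 0
def circle(v):
    return v[0] ** 2 + v[1] ** 2 - 1

print(rejection_sample_zero([circle], q=101, n_vars=2))
```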
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different "degrees" of average-case
complexity. We discuss some of these "hardness amplification" results.
Linear Transformations for Randomness Extraction
Information-efficient approaches for extracting randomness from imperfect
sources have been extensively studied, but simpler and faster ones are required
in high-speed random number generation applications. In this paper, we
focus on linear constructions, namely, applying linear transformation for
randomness extraction. We show that linear transformations based on sparse
random matrices are asymptotically optimal to extract randomness from
independent sources and bit-fixing sources, and they are efficient (though
possibly not optimal) for extracting randomness from hidden Markov sources. Further study
demonstrates the flexibility of such constructions on source models as well as
their excellent information-preserving capabilities. Since linear
transformations based on sparse random matrices are computationally fast and
can be easily implemented using hardware like FPGAs, they are very attractive
for high-speed applications. In addition, we explore explicit constructions of
transformation matrices. We show that the generator matrices of primitive BCH
codes are good choices, but linear transformations based on such matrices
require more computational time due to their high densities.
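A minimal sketch of the linear approach (illustrative parameters only; the row weight, sizes, and RNG are assumptions of ours, not the paper's choices): extraction is just a sparse matrix-vector product over GF(2), so each output bit is the XOR of a few input bits.

```python
import numpy as np

def sparse_binary_matrix(m, n, ones_per_row=8, seed=0):
    """Random sparse m x n matrix over GF(2) with a fixed number of 1s per row."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.uint8)
    for i in range(m):
        A[i, rng.choice(n, size=ones_per_row, replace=False)] = 1
    return A

def linear_extract(A, x):
    """Output bits are A @ x over GF(2): each one XORs a few raw bits, which
    is fast in software and maps naturally onto FPGA logic."""
    return (A @ x) % 2

# toy usage: condense 256 raw bits from an imperfect source into 64 bits
raw_bits = np.random.randint(0, 2, size=256, dtype=np.uint8)
extracted = linear_extract(sparse_binary_matrix(64, 256), raw_bits)
```

Keeping each row sparse is the design point: every output bit touches only a few input positions, which is what makes the transformation fast and hardware-friendly.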
Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources
We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions Cond : {0,1}^n × {0,1}^d → {0,1}^m such that if we choose a random seed S ∈ {0,1}^d, and a source X = A(S) is generated by a randomized circuit A of size t such that X has min-entropy at least k given S, then Cond(X; S) should have min-entropy at least some k' given S. The distinction from the standard notion of randomness condensers is that the source X may be correlated with the seed S (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where k' = m) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS '00). We show that:
– Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size t of the adversarial sampling algorithm A. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy k' = m − O(log t), i.e. logarithmic entropy deficiency.
– Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible "squared advantage" [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.
– Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.
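The first bullet suggests a natural instantiation. As a hedged sketch (HMAC-SHA256 stands in for a sufficiently strong collision-resistant hash, and the sampler below is a toy placeholder for A), a seed-dependent condenser can simply hash the sample under the public seed:

```python
import hashlib, hmac, os

def condense(x: bytes, seed: bytes) -> bytes:
    """Cond(X; S): a keyed hash of the sample under the public seed.  Per the
    abstract, a sufficiently strong collision-resistant hash yields output
    min-entropy m - O(log t) even when the sampler that produced X saw S."""
    return hmac.new(seed, x, hashlib.sha256).digest()

seed = os.urandom(32)                               # public seed S
x = hashlib.sha256(seed + os.urandom(8)).digest()   # toy stand-in for X = A(S)
key = condense(x, seed)                             # key with small entropy deficiency
```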
On Pseudorandom Encodings
We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, "honey encryption" and steganography.
The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography.
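To make the compression remark concrete (a toy example under assumptions of ours, not a construction from the paper): for the distribution "fixed 16-byte header followed by 16 uniform bytes", stripping the header is perfect compression, and the compressed value is itself uniform, so it already serves as a pseudorandom encoding.

```python
import os

HEADER = b"fixed-header-16B"  # hypothetical redundancy shared by every message

def encode(msg: bytes) -> bytes:
    """Encode by perfect compression: strip the known header.  The output is
    16 uniform bytes, hence indistinguishable from random."""
    assert msg.startswith(HEADER)
    return msg[len(HEADER):]

def decode(code: bytes) -> bytes:
    """Reattach the header to recover the original message."""
    return HEADER + code

msg = HEADER + os.urandom(16)       # sample from the toy distribution
assert decode(encode(msg)) == msg   # correct and efficiently decodable
```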
On Foundations of Protecting Computations
Information technology systems have become indispensable to uphold our
way of living, our economy and our safety. Failure of these systems can have
devastating effects. Consequently, securing these systems against malicious
intentions deserves our utmost attention.
Cryptography provides the necessary foundations for that purpose. In
particular, it provides a set of building blocks that allow one to secure larger
information systems. Furthermore, cryptography develops concepts and
techniques towards realizing these building blocks. The protection of computations
is one invaluable concept for cryptography which paves the way towards
realizing a multitude of cryptographic tools. In this thesis, we contribute to
this concept of protecting computations in several ways.
Protecting computations of probabilistic programs. An indistinguishability
obfuscator (IO) compiles (deterministic) code such that it
becomes provably unintelligible. This can be viewed as the ultimate way
to protect (deterministic) computations. Due to very recent research, such
obfuscators enjoy plausible candidate constructions.
In certain settings, however, it is necessary to protect probabilistic
computations. The only known construction of an obfuscator for probabilistic
programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan, TCC, 2015 and
requires an indistinguishability obfuscator which satisfies extreme security
guarantees. We improve this construction and thereby reduce the
requirements on the security of the underlying indistinguishability obfuscator.
(Agrikola, Couteau, and Hofheinz, PKC, 2020)
Protecting computations in cryptographic groups. To facilitate
the analysis of building blocks which are based on cryptographic groups,
these groups are often overidealized such that computations in the group
are protected from the outside. Using such overidealizations makes it possible
to prove secure building blocks that are sometimes beyond the reach of
standard-model techniques. However, these overidealizations are subject to certain
impossibility results. Recently, Fuchsbauer, Kiltz, and Loss, CRYPTO, 2018
introduced the algebraic group model (AGM) as a relaxation which is closer
to the standard model but in several aspects preserves the power of said
overidealizations. However, their model still suffers from implausibilities.
We develop a framework that allows one to transport several security proofs
from the AGM into the standard model, thereby evading the above
implausibility results, and instantiate this framework using an indistinguishability
obfuscator.
(Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)
Protecting computations using compression. Perfect compression
algorithms have the property that the compressed distribution is truly
random, leaving no room for any further compression. This property is
invaluable for several cryptographic applications such as "honey encryption"
or password-authenticated key exchange. However, perfect compression
algorithms only exist for a very small number of distributions. We relax the
notion of compression and rigorously study the resulting notion which we
call "pseudorandom encodings". As a result, we identify various surprising
connections between seemingly unrelated areas of cryptography. Particularly,
we derive novel results for adaptively secure multi-party computation which
allows for protecting computations in distributed settings. Furthermore, we
instantiate the weakest version of pseudorandom encodings which suffices
for adaptively secure multi-party computation using an indistinguishability
obfuscator.
(Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)