Some comments on C. S. Wallace's random number generators
We outline some of Chris Wallace's contributions to pseudo-random number
generation. In particular, we consider his idea for generating normally
distributed variates without relying on a source of uniform random numbers, and
compare it with more conventional methods for generating normal random numbers.
Implementations of Wallace's idea can be very fast (approximately as fast as
good uniform generators). We discuss the statistical quality of the output, and
mention how certain pitfalls can be avoided.
Comment: 13 pages. For further information, see
http://wwwmaths.anu.edu.au/~brent/pub/pub213.htm
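The core of Wallace's idea above can be sketched as follows. This is a rough illustration only, not Wallace's actual generator: real implementations use cheap fixed orthogonal mixes rather than the per-pair uniform rotation angles used here for clarity. The point is that an orthogonal transformation maps an i.i.d. N(0,1) pool to another i.i.d. N(0,1) pool, so fresh normals are produced without a per-variate uniform-to-normal conversion:

```python
import numpy as np

def wallace_step(pool, rng):
    """One mixing pass in the spirit of Wallace's generator: pair up
    pool entries at random and apply an orthogonal 2x2 rotation to
    each pair. Orthogonal maps preserve the i.i.d. N(0,1) law of the
    pool, so no Box-Muller / inversion step is needed per variate."""
    n = len(pool)
    perm = rng.permutation(n)                  # random pairing of entries
    out = np.empty_like(pool)
    for i in range(0, n, 2):
        a, b = pool[perm[i]], pool[perm[i + 1]]
        theta = rng.uniform(0.0, 2.0 * np.pi)  # illustration only; fast
        c, s = np.cos(theta), np.sin(theta)    # implementations hard-wire
        out[i], out[i + 1] = c * a + s * b, -s * a + c * b
    return out

rng = np.random.default_rng(0)
pool = rng.standard_normal(1024)   # seed pool of true normals
energy = float(pool @ pool)        # sum of squares of the pool
for _ in range(8):
    pool = wallace_step(pool, rng)
# One of the pitfalls the paper alludes to is visible here: the pool's
# sum of squares is exactly invariant under orthogonal transforms, a
# statistical defect that practical variants correct by rescaling.
```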
Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich's PRG
In this work, we establish lower-bounds against memory bounded algorithms for
distinguishing between natural pairs of related distributions from samples that
arrive in a streaming setting.
In our first result, we show that any algorithm that distinguishes between
the uniform distribution on $\{0,1\}^n$ and the uniform distribution on an
$n/2$-dimensional linear subspace of $\{0,1\}^n$ with non-negligible advantage
needs $2^{\Omega(n)}$ samples or $\Omega(n^2)$ memory.
Our second result applies to distinguishing outputs of Goldreich's local
pseudorandom generator from the uniform distribution on the output domain.
Specifically, Goldreich's pseudorandom generator $G$ fixes a predicate
$P:\{0,1\}^k \to \{0,1\}$ and a collection of subsets $S_1, S_2, \ldots, S_m
\subseteq [n]$ of size $k$. For any seed $x \in \{0,1\}^n$, it
outputs $P(x_{S_1}), P(x_{S_2}), \ldots, P(x_{S_m})$ where $x_{S_i}$ is the
projection of $x$ to the coordinates in $S_i$. We prove that whenever $P$ is
$t$-resilient (all non-zero Fourier coefficients of $(-1)^P$ are of degree $t$
or higher), then no algorithm with $n^{\epsilon}$ memory can distinguish the
output of $G$ from the uniform distribution on $\{0,1\}^m$ with a large inverse
polynomial advantage, for stretch $m \le (n/t)^{(1-\epsilon)t/36}$ (barring some
restrictions on $k$). The lower bound holds in the streaming model where at
each time step $i$, $S_i$ is a randomly chosen (ordered) subset of $[n]$ of
size $k$ and the distinguisher sees either $P(x_{S_i})$ or a uniformly random
bit along with $S_i$.
Our proof builds on the recently developed machinery for proving time-space
trade-offs (Raz 2016 and follow-ups) for search/learning problems.
Comment: 35 pages
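The construction described in the abstract is simple to state in code. Below is a minimal sketch of Goldreich's local PRG: fix a $k$-ary predicate and $m$ ordered size-$k$ subsets of the seed coordinates, and output the predicate applied to each projection. The example predicate (XOR of three bits plus an AND of two) is purely illustrative and is not claimed to meet the paper's resilience requirements:

```python
import random

def goldreich_prg(x, subsets, P):
    """Goldreich's local PRG: for seed x in {0,1}^n and fixed ordered
    size-k subsets S_1..S_m of [n], output the bits P(x_{S_i})."""
    return [P(tuple(x[j] for j in S)) for S in subsets]

def example_predicate(bits):
    # Illustrative 5-ary predicate: x1 ^ x2 ^ x3 ^ (x4 & x5).
    # The paper's results concern t-resilient predicates in general.
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

random.seed(0)
n, k, m = 16, 5, 32
x = [random.randint(0, 1) for _ in range(n)]              # the seed
subsets = [random.sample(range(n), k) for _ in range(m)]  # ordered S_i
out = goldreich_prg(x, subsets, example_predicate)        # m output bits
```

In the streaming model of the lower bound, the distinguisher would see the pairs $(S_i, b_i)$ one at a time, where $b_i$ is either the PRG output bit computed as above or a uniformly random bit.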
Information theoretically secure communication in the limited storage space model
We provide a simple secret-key two-party secure communication scheme, which is provably information-theoretically secure in the limited-storage-space model. The limited-storage-space model postulates an eavesdropper who can execute arbitrarily complex computations, and is limited only in the total amount of storage space (not computation space) available to him. The bound on the storage space can be arbitrarily large (e.g. terabytes), as long as it is fixed. Given this bound, the protocol guarantees that the probability of the eavesdropper gaining any information about the message is exponentially small. The proof of our main results utilizes a novel combination of linear algebra and Kolmogorov complexity considerations.
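The abstract does not spell out the protocol itself; as context, the classic encryption idea in the bounded-storage model (going back to Maurer) can be sketched as follows, under assumed names of my own choosing (`derive_pad`, etc.). A long public random string is broadcast; a short shared secret key selects positions in it, and those bits serve as a one-time pad. An eavesdropper whose storage is much smaller than the broadcast string cannot retain enough of it to recover the pad:

```python
import random

def derive_pad(R, key_positions):
    """Read the one-time pad out of the long public random string R at
    the secret positions shared by sender and receiver."""
    return [R[i] for i in key_positions]

def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

random.seed(1)
N = 1_000_000                                   # length of broadcast string
R = [random.randint(0, 1) for _ in range(N)]    # public randomness

msg = [1, 0, 1, 1, 0, 0, 1, 0]
key = random.sample(range(N), len(msg))         # short shared secret key
cipher = xor_bits(msg, derive_pad(R, key))      # sender encrypts
plain = xor_bits(cipher, derive_pad(R, key))    # receiver decrypts
```

The security argument (which the paper develops via linear algebra and Kolmogorov complexity) rests on the adversary being unable to store more than a fixed fraction of `R`, however large that fixed bound is.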
The paradigm of partial erasures
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 137-145).
This thesis is a study of erasures in cryptographic protocols. Erasing old data and keys is an important capability of honest parties in cryptographic protocols. It is useful in many settings, including proactive security in the presence of a mobile adversary, adaptive security in the presence of an adaptive adversary, forward security, and intrusion resilience. Some of these settings, such as achieving proactive security, are provably impossible without some form of erasures. Other settings, such as designing protocols that are secure against adaptive adversaries, are much simpler to achieve when erasures are allowed. Protocols for all these contexts typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice. We propose a model of imperfect or partial erasures where erasure instructions are only partially effective and leave almost all the data intact, thus giving the honest parties only a limited capability to dispose of old data. Nonetheless, we show how to design protocols for all of the above settings (including proactive security, adaptive security, forward security, and intrusion resilience) for which this weak form of erasures suffices. We do not have to invent entirely new protocols, but rather show how to automatically modify protocols relying on perfect erasures into ones for which partial erasures suffice. Stated most generally, we provide a compiler that transforms any protocol relying on perfect erasures for security into one with the same functionality that remains secure even if the erasures are only partial. The key idea is a new redundant representation of secret data which can still be computed on, and yet is rendered useless when partially erased.
We prove that any such compiler must incur a cost in additional storage, and that our compiler is near optimal in terms of its storage overhead. We also give computationally more efficient compilers for a number of special cases: (1) when all the computations on secrets can be done in constant parallel time (NC⁰); (2) for a class of proactive secret sharing protocols where we leave the protocol intact except for changing the representation of the shares of the secret and the instructions that modify the shares (to correspondingly modify the new representation instead).
by Dah-Yoh Lim. Ph.D.
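As a toy illustration of the "rendered useless when partially erased" idea (this is not the thesis's actual compiler or representation, just a minimal sketch of the flavor), a secret bit can be expanded into many XOR-shares: recovering the bit requires every share, so an imperfect erase instruction that destroys each stored location only with some probability still destroys at least one share with high probability, and with it all information about the bit:

```python
import random

def encode(secret_bit, m, rng):
    """Redundant representation (illustrative only): split one bit into
    m XOR-shares. All m shares are needed to reconstruct the bit."""
    shares = [rng.randint(0, 1) for _ in range(m - 1)]
    last = secret_bit
    for s in shares:
        last ^= s
    return shares + [last]

def decode(shares):
    b = 0
    for s in shares:
        b ^= s
    return b

def partial_erase(shares, phi, rng):
    """Model an imperfect erase: each location is actually destroyed
    (replaced by None) only with probability phi."""
    return [None if rng.random() < phi else s for s in shares]

rng = random.Random(42)
enc = encode(1, m=64, rng=rng)
assert decode(enc) == 1
erased = partial_erase(enc, phi=0.2, rng=rng)
# With m = 64 and phi = 0.2, at least one share is destroyed except
# with probability 0.8**64, so the survivors carry no information
# about the original bit.
```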