Extracting All the Randomness and Reducing the Error in Trevisan's Extractors
We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log^2 n) additional random bits, and can extract all the min-entropy using O(log^3 n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/epsilon) + O(1) bits, while still using O(log^3 n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) - (# output bits)], and epsilon is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our extractors are obtained by observing that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan. We give near-optimal constructions of such "weak designs", which achieve much better parameters than is possible with the notion of designs used by Nisan-Wigderson and Trevisan. We also show how to improve our constructions (and Trevisan's construction) when the required statistical difference epsilon from the uniform distribution is relatively small. This improvement is obtained by using multilinear error-correcting codes over finite fields, rather than the arbitrary error-correcting codes used by Trevisan.
Engineering and Applied Science
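The entropy-loss bookkeeping defined in this abstract can be made concrete with a small sketch. The parameter values below are purely hypothetical, chosen only to illustrate the arithmetic; the abstract states that the achieved loss matches the 2 log(1/epsilon) lower bound up to a constant additive term.

```python
import math

def entropy_loss(source_min_entropy: int, seed_bits: int, output_bits: int) -> int:
    # Entropy loss = (source min-entropy) + (# truly random seed bits)
    #              - (# output bits), exactly as defined in the abstract.
    return source_min_entropy + seed_bits - output_bits

def loss_lower_bound(epsilon: float) -> float:
    # Any extractor with statistical error epsilon must lose roughly
    # 2*log2(1/epsilon) bits; the construction above matches this up to
    # a constant additive term.
    return 2 * math.log2(1 / epsilon)

# Hypothetical parameters: k = 1000 bits of min-entropy, d = 60 seed bits,
# m = 1040 output bits, target error epsilon = 2^-10.
print(entropy_loss(1000, 60, 1040))   # 20
print(loss_lower_bound(2 ** -10))     # 20.0
```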
Distributed computing and cryptography with general weak random sources
The use of randomness in computer science is ubiquitous. Randomized protocols have turned out to be much more efficient than their deterministic counterparts. In addition, many problems in distributed computing and cryptography are impossible to solve without randomness. However, these applications typically require uniform random bits, while in practice almost all natural random phenomena are biased. Moreover, even originally uniform random bits can be damaged if an adversary learns some partial information about these bits. In this thesis, we study how to run randomized protocols in distributed computing and cryptography with imperfect randomness. We use the most general model for imperfect randomness where the weak random source is only required to have a certain amount of min-entropy. One important tool here is the randomness extractor. A randomness extractor is a function that takes as input one or more weak random sources, and outputs a distribution that is close to uniform in statistical distance. Randomness extractors are interesting in their own right and are closely related to many other problems in computer science. Giving efficient constructions of randomness extractors with optimal parameters is one of the major open problems in the area of pseudorandomness. We construct network extractor protocols that extract private random bits for parties in a communication network, assuming that they each start with an independent weak random source, and some parties are corrupted by an adversary who sees all communications in the network. These protocols imply fault-tolerant distributed computing protocols and secure multi-party computation protocols where only imperfect randomness is available. The probabilistic method shows that there exists an extractor for two independent sources with logarithmic min-entropy, while known constructions are far from achieving these parameters. 
In this thesis we construct extractors for two independent sources with any linear min-entropy, based on a computational assumption. We also construct the best known extractors for three independent sources and for affine sources. Finally, we study the problem of privacy amplification. In this model, two parties share a private weak random source and wish to agree on a private uniform random string through communication over a channel controlled by an adversary, who has unlimited computational power and can change the messages in arbitrary ways. All previous results assume that the two parties have local uniform random bits. We show that this problem can be solved even if the two parties have only local weak random sources. We also improve previous results in various respects by constructing the first explicit non-malleable extractor and giving protocols based on this extractor.
Computer Science
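The two definitions this abstract relies on, min-entropy and closeness to uniform in statistical distance, can be sketched concretely. The distributions below are toy examples, not from the thesis.

```python
import math

def min_entropy(dist):
    # H_inf(X) = -log2(max_x Pr[X = x]); a distribution on n-bit strings is
    # an (n, k)-source when every string has probability at most 2^-k,
    # i.e. when H_inf(X) >= k.
    return -math.log2(max(dist.values()))

def statistical_distance(p, q):
    # Statistical (total variation) distance: half the L1 distance between
    # two distributions over the same support.
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support) / 2

# A toy biased source on 2-bit strings: '00' is over-represented.
biased = {'00': 0.5, '01': 0.25, '10': 0.125, '11': 0.125}
uniform = {s: 0.25 for s in biased}
print(min_entropy(biased))                    # 1.0, so this is a (2, 1)-source
print(statistical_distance(biased, uniform))  # 0.25
```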
Distributed computing with imperfect randomness
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 41-43). Randomness is a critical resource in many computational scenarios, enabling solutions where deterministic ones are elusive or even provably impossible. However, the randomized solutions to these tasks assume access to a pure source of unbiased, independent coins. Physical sources of randomness, on the other hand, are rarely unbiased and independent, although they do seem to exhibit somewhat imperfect randomness. This gap in modeling calls into question the relevance of current randomized solutions to computational tasks. Indeed, there has been substantial investigation of this issue in complexity theory, in the context of applications to efficient algorithms and cryptography. This work seeks to determine whether imperfect randomness, modeled appropriately, is "good enough" for distributed algorithms. Namely, can we do with imperfect randomness all that we can do with perfect randomness, and with comparable efficiency? We answer this question in the affirmative for the problem of Byzantine agreement. We construct protocols for Byzantine agreement in a variety of scenarios (synchronous or asynchronous networks, with or without private channels) in which the players have imperfect randomness. Our solutions are essentially as efficient as the best known randomized Byzantine agreement protocols, which traditionally assume that all players have access to perfect randomness.
by Vinod Vaikuntanathan. S.M.
Game-Theoretic Fairness Meets Multi-Party Protocols: The Case of Leader Election
Suppose that players want to elect a random leader and they communicate by posting messages to a common broadcast channel. This problem is called leader election, and it is fundamental to the distributed-systems and cryptography literature. Recently, it has attracted renewed interest due to its promised applications in decentralized environments. In a game-theoretically fair leader election protocol, roughly speaking, we want that even majority coalitions can neither increase their own chance of getting elected nor hurt the chance of any honest individual. The folklore tournament-tree protocol, which completes in logarithmically many rounds, can easily be shown to satisfy game-theoretic security. To the best of our knowledge, no sub-logarithmic-round protocol was known in the setting we consider. We show that by adopting an appropriate notion of approximate game-theoretic fairness, and under standard cryptographic assumptions, we can achieve -fairness in rounds for , where denotes the number of players. In particular, this means that we can approximately match the fairness of the tournament-tree protocol using as few as rounds. We also prove a lower bound showing that logarithmically many rounds are necessary if we restrict ourselves to ``perfect'' game-theoretic fairness and to protocols that are ``very similar in structure'' to the tournament-tree protocol. Although leader election is a well-studied problem in other contexts in distributed computing, our work is the first exploration of the round complexity of {\it game-theoretically fair} leader election in the presence of a possibly majority coalition. As a by-product of our exploration, we suggest a new approximate game-theoretic fairness notion, called ``approximate sequential fairness'', which provides a more desirable solution concept than some previously studied approximate fairness notions.
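The logarithmic round complexity of the folklore tournament-tree protocol mentioned above can be seen in a minimal simulation. This is only a sketch: the real protocol implements each pairwise coin toss with a commit-then-reveal flip on the broadcast channel, which this toy version replaces with a trusted random bit.

```python
import random

def tournament_tree_election(players, rng=random.Random(0)):
    # Each round halves the field: every adjacent pair is decided by a fair
    # coin toss (commit-and-reveal in the real protocol; a plain random bit
    # here). Fixed seed only for a reproducible demo.
    rounds = 0
    while len(players) > 1:
        winners = []
        for i in range(0, len(players) - 1, 2):
            winners.append(players[i + rng.randint(0, 1)])
        if len(players) % 2 == 1:          # odd player out gets a bye
            winners.append(players[-1])
        players = winners
        rounds += 1
    return players[0], rounds

leader, rounds = tournament_tree_election(list(range(16)))
print(rounds)   # 4 == log2(16): logarithmically many rounds, as stated above
```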
Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular to certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and we construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
Comment: EPFL PhD Thesis
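To make the group testing problem above concrete, here is the classical baseline decoder for the noiseless, non-adaptive setting, often called COMP: an item that appears in any negative pool cannot be defective. The thesis targets the much harder unreliable and threshold settings; this sketch and its data are only illustrative.

```python
def comp_decode(tests, outcomes):
    # COMP decoder: eliminate every item that occurs in at least one
    # negative pool; declare everything that survives defective.
    n = max(max(pool) for pool in tests) + 1
    candidates = set(range(n))
    for pool, positive in zip(tests, outcomes):
        if not positive:
            candidates -= set(pool)
    return sorted(candidates)

# Toy instance: 6 items, true defectives {1, 4}; each query's outcome is
# the OR ("is any defective present?") over its pool.
defectives = {1, 4}
tests = [[0, 1, 2], [2, 3], [3, 4, 5], [0, 5], [1, 4]]
outcomes = [bool(set(pool) & defectives) for pool in tests]
print(comp_decode(tests, outcomes))   # [1, 4]
```

With enough well-chosen pools (e.g. from a disjunct matrix), the surviving candidate set is exactly the defective set, as it is here.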
Super-Linear Time-Memory Trade-Offs for Symmetric Encryption
We build symmetric encryption schemes from a pseudorandom function/permutation with domain size which have very high security -- in terms of the amount of messages they can securely encrypt -- assuming the adversary has bits of memory. We aim to minimize the number of calls we make to the underlying primitive to achieve a certain , or equivalently, to maximize the achievable for a given . We target in particular , in contrast to recent works (Jaeger and Tessaro, EUROCRYPT '19; Dinur, EUROCRYPT '20), which aim to beat the birthday barrier with one call when . Our first result gives new and explicit bounds for the Sample-then-Extract paradigm of Tessaro and Thiruvengadam (TCC '18). We show instantiations for which . If , Thiruvengadam and Tessaro's weaker bounds only guarantee when . In contrast, here, we show this is true already for . We also consider a scheme by Bellare, Goldreich and Krawczyk (CRYPTO '99) which evaluates the primitive on independent random strings and masks the message with the XOR of the outputs. Here, we show , using new combinatorial bounds on the list-decodability of XOR codes which are of independent interest. We also study best-possible attacks against this construction.
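The masking step of the Bellare-Goldreich-Krawczyk construction described above can be sketched as follows: evaluate the primitive on several independent random strings and XOR the outputs into a one-time pad for the message. SHA-256 stands in for the pseudorandom function here, and all parameter choices are illustrative, not the paper's.

```python
import hashlib
import os

def prf(key: bytes, x: bytes, out_len: int) -> bytes:
    # Placeholder PRF: the analyzed scheme assumes an ideal PRF/PRP;
    # keyed SHA-256 is used here purely for the demo.
    return hashlib.sha256(key + x).digest()[:out_len]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, msg: bytes, t: int = 3):
    # Evaluate the primitive on t independent random strings and mask the
    # message with the XOR of the t outputs.
    rs = [os.urandom(16) for _ in range(t)]
    mask = bytes(len(msg))
    for r in rs:
        mask = xor_bytes(mask, prf(key, r, len(msg)))
    return rs, xor_bytes(msg, mask)

def decrypt(key: bytes, ciphertext):
    rs, masked = ciphertext
    mask = bytes(len(masked))
    for r in rs:
        mask = xor_bytes(mask, prf(key, r, len(masked)))
    return xor_bytes(masked, mask)

key = os.urandom(16)
ct = encrypt(key, b"attack at dawn")
print(decrypt(key, ct))   # b'attack at dawn'
```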
Explicit two-source extractors and more
In this thesis we study the problem of extracting almost truly random bits from imperfect sources of randomness. This is motivated by the wide use of randomness in computer science, and by the fact that most accessible sources of randomness generate correlated bits and at best contain some amount of entropy. Following Chor and Goldreich [CG88] and Zuckerman [Z90], we model weak sources using min-entropy, where an (n,k)-source X is a distribution on n bits that takes any string x with probability at most 2^-k. It is known to be impossible to extract random bits from a single (n,k)-source, and Chor and Goldreich [CG88] raised the question of extracting randomness from two such independent (n,k)-sources. Existentially, such 2-source randomness extractors exist for min-entropy k >= log n + O(1), but the best known construction prior to the work in this thesis required min-entropy k >= 0.499n [B2]. One of the main contributions of this thesis is an explicit 2-source extractor for min-entropy log^C n, for some constant C. Other results in this thesis include improved ways of extracting random bits from various other sources of randomness, as well as stronger notions of randomness extraction. Our results have applications in privacy amplification [BBR88,Mau92,BBCM95], a classical problem in information-theoretic cryptography, and give protocols that achieve almost optimal parameters. Other applications include explicit constructions of non-malleable codes, which are a relaxation of error-detecting codes and have applications in tamper-resilient cryptography [DPW10].
Computer Science
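The simplest explicit two-source extractor is the inner-product construction of Chor and Goldreich [CG88], sketched below. It requires both sources to have min-entropy rate above 1/2, which is far weaker than the polylogarithmic min-entropy achieved in this thesis, but it illustrates the object being constructed.

```python
def inner_product_extractor(x: str, y: str) -> int:
    # Chor-Goldreich two-source extractor: the inner product over GF(2) of
    # the two n-bit source samples. When X and Y are independent and each
    # has min-entropy rate above 1/2, the output bit is close to uniform.
    assert len(x) == len(y)
    return sum(int(a) & int(b) for a, b in zip(x, y)) % 2

# <1011, 1101> = 1*1 + 0*1 + 1*0 + 1*1 = 2, which is 0 mod 2.
print(inner_product_extractor("1011", "1101"))   # 0
```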
Fair Leader Election for Rational Agents in Asynchronous Rings and Networks
We study a game-theoretic model where a coalition of processors might collude to bias the outcome of the protocol, where we assume that the processors always prefer any legitimate outcome over a non-legitimate one. We show that the problems of Fair Leader Election and Fair Coin Toss are equivalent, and focus on Fair Leader Election. Our main focus is a directed asynchronous ring of processors, where we investigate the protocol proposed by Abraham et al. \cite{abraham2013distributed} and studied in Afek et al. \cite{afek2014distributed}. We show that in general the protocol is resilient only to sub-linear-size coalitions. Specifically, we show that randomly located processors or adversarially located processors can force any outcome. We complement this by showing that the protocol is resilient to any adversarial coalition of size . We propose a modification to the protocol and show that it is resilient to every coalition of size , by exhibiting both an attack and a resilience result. For every , we define a family of graphs that can be simulated by trees where each node in the tree simulates at most processors. We show that for every graph in , there is no fair leader election protocol that is resilient to coalitions of size . Our result generalizes a previous result of Abraham et al. \cite{abraham2013distributed} that states that for every graph, there is no fair leader election protocol which is resilient to coalitions of size .
Comment: 48 pages, PODC 201
Foundations of decentralised privacy
Distributed ledgers, and specifically blockchains, have been an immensely popular investment in the past few years. The heart of their popularity lies in their novel approach toward financial assets: they replace the need for central, trusted institutions such as banks with cryptography, ensuring that no one entity has authority over the system. In light of record distrust in many established institutions, this is attractive both as a method to combat institutional control and as a demonstration of transparency. What better way to manage distrust than to embrace it? While distributed ledgers have achieved great things in removing the need to trust institutions, most notably the creation of fully decentralised assets, their practice falls short of the idealistic goals often seen in the field. One of their greatest shortcomings lies in a fundamental conflict with privacy. Distributed ledgers and surrounding technologies rely heavily on the transparent replication of data, a practice which makes keeping anything hidden very difficult. This thesis makes use of the powerful cryptography of succinct non-interactive zero-knowledge proofs to provide a foundation for re-establishing privacy in the decentralised setting. It discusses the security assumptions and requirements of succinct zero-knowledge proofs at length, establishing a new framework for handling security proofs about them, and reducing the required setup to that already present in commonly used distributed ledgers. It further demonstrates the possibility of privacy-preserving proof-of-stake, removing the need for costly proofs-of-work for a privacy-focused distributed ledger. Finally, it lays out a solid foundation for a smart contract system supporting privacy, putting into the hands of contract authors the tools necessary to innovate and introduce new privacy features.