Using Games to Understand and Create Randomness
The massive growth of data and communication encryption has created a growing need for non-predictable, random data, required for the creation of encryption keys. The need for randomness grows (nearly) linearly with the growth of encryption, but randomness is also a very important ingredient in, e.g., the quickly growing game-programming industry. Computers are deterministic devices and cannot create random results; computer procedures can generate only pseudo-random (random-looking) data. True randomness requires some outside information: the timing and placement of a user's keystrokes, fluctuations of current, interrupt requests in the computer's processor, etc. But even these sources often cannot satisfy the demands of our increasingly randomness-hungry environment of encrypted communications and data. The growing need for randomness has created a market of randomness sources, and new sources are proposed constantly. These sources differ in their properties (ease of access, size of required software, etc.) and in how easily their quality can be estimated. However, there is an easily available, good source for comparing the quality of randomness and also for creating new randomness: computer games. Users' growing fondness for playing digital games makes this activity very attractive both for comparing the quality of randomness sources and for use as a source of new randomness. In the following, we analyze possibilities for investigating and extracting randomness from digital gameplay, and we demonstrate some experiments with simple stateless games that allow us to compare existing sources of (pseudo-)randomness and to generate new randomness, which can be used, e.g., to create ciphering keys in mobile and Internet of Things devices.
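The gameplay-timing idea described above can be sketched in a few lines: collect high-resolution timestamps of user events and condense their unpredictable low-order jitter into key material with a hash. This is a minimal illustration, not the authors' implementation; the function names and the use of SHA-256 as a conditioner are my own assumptions.

```python
import hashlib
import time

def collect_event_timings(n_events):
    """Collect high-resolution timestamps of gameplay events.

    In a real game these would be the arrival times of clicks or
    keystrokes; here we just read the clock repeatedly so the
    example is self-contained (illustrative only).
    """
    return [time.perf_counter_ns() for _ in range(n_events)]

def extract_bits(timings):
    """Condense timing jitter into 256 bits by hashing the
    concatenated timestamps with SHA-256."""
    data = b"".join(t.to_bytes(8, "little") for t in timings)
    return hashlib.sha256(data).digest()

key = extract_bits(collect_event_timings(64))
print(len(key))  # 32 bytes = 256 bits of key material
```

In practice the usable entropy per event must be estimated conservatively; hashing cannot create entropy that the timing source does not contain.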
Random bits, true and unbiased, from atmospheric turbulence
Random numbers are a fundamental ingredient for numerical simulation,
games, information science, and secure communication. Algorithmic,
deterministic generators suffer from insufficient information entropy. On
the other hand, suitable physical processes manifest intrinsic unpredictability
that may be exploited for generating genuine random numbers with an entropy
reaching the ideal limit. In this work, we present a method to extract genuine
random bits using atmospheric turbulence: by sending a laser beam along
a 143 km free-space link, we took advantage of the chaotic behavior of the air's
refractive index during optical propagation. Random numbers are then obtained
by converting into digital units the aberrations and distortions of the received
laser wave-front. The generated numbers, obtained without any post-processing,
pass the most selective randomness tests. The core of our extraction algorithm
can easily be generalized to other physical processes.
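The paper's bits need no post-processing, but the generic pipeline for digitizing a noisy physical signal is worth seeing for comparison: threshold each analog sample into a raw bit, then remove bias with the classic von Neumann corrector. This is an illustrative sketch, not the authors' procedure; the Gaussian samples stand in for real wave-front measurements.

```python
import random

def digitize(samples, threshold):
    """Threshold each analog sample to one raw (possibly biased) bit."""
    return [1 if s > threshold else 0 for s in samples]

def von_neumann(bits):
    """Von Neumann debiasing over independent raw bits:
    pair up bits, map 01 -> 0 and 10 -> 1, discard 00 and 11."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated biased physical measurements (stand-in for wave-front samples).
rng = random.Random(42)
samples = [rng.gauss(0.3, 1.0) for _ in range(10000)]
raw = digitize(samples, threshold=0.0)   # biased toward 1
unbiased = von_neumann(raw)              # unbiased, at the cost of length
```

The corrector yields exactly unbiased bits when raw bits are independent, but it discards a large fraction of them, which is why extractor-free sources like the one above are attractive.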
When Can Limited Randomness Be Used in Repeated Games?
The central result of classical game theory states that every finite normal-form
game has a Nash equilibrium, provided that players are allowed to use
randomized (mixed) strategies. However, in practice, humans are known to be bad
at generating random-like sequences, and true random bits may be unavailable.
Even if the players have access to enough random bits for a single instance of
the game, their randomness might be insufficient if the game is played many
times.
In this work, we ask whether randomness is necessary for equilibria to exist
in finitely repeated games. We show that for a large class of games containing
arbitrary two-player zero-sum games, approximate Nash equilibria of the
-stage repeated version of the game exist if and only if both players have
random bits. In contrast, we show that there exists a class of
games for which no equilibrium exists in pure strategies, yet the -stage
repeated version of the game has an exact Nash equilibrium in which each player
uses only a constant number of random bits.
When the players are assumed to be computationally bounded, if cryptographic
pseudorandom generators (or, equivalently, one-way functions) exist, then the
players can base their strategies on "random-like" sequences derived from only
a small number of truly random bits. We show that, in contrast, in repeated
two-player zero-sum games, if pseudorandom generators \emph{do not} exist, then
random bits remain necessary for equilibria to exist.
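The seed-stretching idea in the last paragraph can be sketched concretely: a short truly random seed is expanded into a long "random-like" stream with a PRG, and each round's mixed-strategy action is sampled from that stream. SHA-256 in counter mode is used here only as a stand-in PRG; the game (matching pennies, whose equilibrium mixes uniformly) is my own illustrative choice.

```python
import hashlib
import secrets

def prg_stream(seed, n_bytes):
    """Expand a short seed into n_bytes of pseudorandom output using
    SHA-256 in counter mode (an illustrative PRG construction)."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

def play_matching_pennies(stream, rounds):
    """Sample one uniform action per round from the stream, approximating
    the uniform mixed equilibrium of repeated matching pennies."""
    return ["H" if b & 1 else "T" for b in stream[:rounds]]

seed = secrets.token_bytes(16)                       # few truly random bits
actions = play_matching_pennies(prg_stream(seed, 1000), 1000)
```

Against a computationally bounded opponent, such a stream is indistinguishable from true coin flips, which is exactly why PRG existence changes how much true randomness equilibria require.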
Resettable Zero Knowledge in the Bare Public-Key Model under Standard Assumption
In this paper we resolve an open problem regarding resettable zero knowledge
in the bare public-key (BPK for short) model: does there exist a constant-round
resettable zero-knowledge argument with concurrent soundness for
in the BPK model without assuming \emph{sub-exponential hardness}? We give a
positive answer to this question by presenting such a protocol for any language
in in the bare public-key model assuming only
collision-resistant hash functions against \emph{polynomial-time} adversaries.
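The only assumption in the result above is a collision-resistant hash function. One standard way such a primitive enters argument systems is through commitments; the following is a minimal hash-based commitment sketch of my own (illustrative only: binding rests on collision resistance, while hiding here is heuristic and would need care in a real construction).

```python
import hashlib
import secrets

def commit(message: bytes):
    """Commit to a message as SHA-256(opening || message).

    Collision resistance makes the commitment binding: finding a second
    (opening, message) pair with the same digest is a collision.
    """
    opening = secrets.token_bytes(32)
    return hashlib.sha256(opening + message).digest(), opening

def verify(commitment: bytes, message: bytes, opening: bytes) -> bool:
    """Check that (message, opening) matches the commitment."""
    return hashlib.sha256(opening + message).digest() == commitment

c, r = commit(b"prover witness")
print(verify(c, b"prover witness", r))   # True
print(verify(c, b"other message", r))    # False
```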
Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions
How can one generate provably true randomness with minimal assumptions? This
question is important not only for the efficiency and the security of
information processing, but also for understanding how extremely unpredictable
events are possible in Nature. All current solutions require special structures
in the initial source of randomness, or a certain independence relation among
two or more sources. Both types of assumptions are impossible to test and
difficult to guarantee in practice. Here we show how this fundamental limit can
be circumvented by extractors that base security on the validity of physical
laws and extract randomness from untrusted quantum devices. In conjunction with
the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness
extractor uses just a single and general weak source, produces an arbitrarily
long and near-uniform output, with a close-to-optimal error, secure against
all-powerful quantum adversaries, and tolerating a constant level of
implementation imprecision. The source necessarily needs to be unpredictable to
the devices, but otherwise can even be known to the adversary.
Our central technical contribution, the Equivalence Lemma, provides a general
principle for proving composition security of untrusted-device protocols. It
implies that unbounded randomness expansion can be achieved simply by
cross-feeding any two expansion protocols. In particular, such an unbounded
expansion can be made robust, which is shown here for the first time. Another
significant implication is that it enables secure randomness generation and key
distribution using public randomness, such as that broadcast by NIST's
Randomness Beacon. Our protocol also provides a method for refuting local
hidden variable theories under a weak assumption on the available randomness
for choosing the measurement settings.
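The extractors above are information-theoretic objects. As a concrete classical example of the basic mechanism, far simpler than the device-independent protocol in the paper, a weak source can be hashed with a random Toeplitz matrix, a standard seeded strong extractor. The sketch below is illustrative; the stand-in "weak source" is simulated, and real use requires a min-entropy guarantee on the source.

```python
import secrets

def toeplitz_extract(weak_bits, seed_bits, m):
    """Seeded extraction via a Toeplitz-matrix hash over GF(2).

    n weak-source bits are multiplied by an m x n Toeplitz matrix whose
    diagonals are defined by n + m - 1 seed bits; entry T[i][j] is
    seed_bits[i - j + n - 1]. Output: m near-uniform bits, assuming the
    source has enough min-entropy.
    """
    n = len(weak_bits)
    assert len(seed_bits) == n + m - 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & weak_bits[j]
        out.append(acc)
    return out

n, m = 64, 16
weak = [secrets.randbelow(2) for _ in range(n)]        # stand-in weak source
seed = [secrets.randbelow(2) for _ in range(n + m - 1)]
extracted = toeplitz_extract(weak, seed, m)
```

The seed here must be uniform and independent of the source, which is precisely the kind of assumption the physical-extractor line of work aims to remove.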