When Can Limited Randomness Be Used in Repeated Games?
The central result of classical game theory states that every finite normal
form game has a Nash equilibrium, provided that players are allowed to use
randomized (mixed) strategies. However, in practice, humans are known to be bad
at generating random-like sequences, and true random bits may be unavailable.
Even if the players have access to enough random bits for a single instance of the game, their randomness might be insufficient if the game is played many times.
In this work, we ask whether randomness is necessary for equilibria to exist in finitely repeated games. We show that for a large class of games containing arbitrary two-player zero-sum games, approximate Nash equilibria of the $n$-stage repeated version of the game exist if and only if both players have $\Omega(n)$ random bits. In contrast, we show that there exists a class of games for which no equilibrium exists in pure strategies, yet the $n$-stage repeated version of the game has an exact Nash equilibrium in which each player uses only a constant number of random bits.
When the players are assumed to be computationally bounded, if cryptographic pseudorandom generators (or, equivalently, one-way functions) exist, then the players can base their strategies on "random-like" sequences derived from only a small number of truly random bits. We show that, in contrast, in repeated two-player zero-sum games, if pseudorandom generators \emph{do not} exist, then $\Omega(n)$ random bits remain necessary for equilibria to exist.
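To see why repetition consumes randomness in the zero-sum setting, consider matching pennies: a deterministic player is perfectly predictable, so the opponent can win every single stage, while one fresh random bit per stage secures the game value. A minimal simulation of this gap (our illustration, not the paper's construction; the strategies and stage count are made up):

```python
import random

N = 1000  # number of stages

def payoff(a, b):
    """Row player's matching-pennies payoff: +1 if the bits match, -1 otherwise."""
    return 1 if a == b else -1

# Deterministic strategy: any fixed function of the history, e.g. alternating.
deterministic_moves = [i % 2 for i in range(N)]
# A predicting opponent simply plays the mismatching bit at every stage.
exploited = sum(payoff(a, 1 - a) for a in deterministic_moves)

# Randomized strategy: one uniform bit per stage; no opponent can predict it.
randomized = sum(payoff(random.randint(0, 1), 1) for _ in range(N))

print(f"deterministic player's total payoff: {exploited}")  # always -N
print(f"randomized player's total payoff:   {randomized}")  # ~0 on average
```

The abstract's lower bound says this is inherent: securing $n$ stages of such a game requires fresh randomness at a constant rate per stage.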
On the Approximability of Digraph Ordering
Given an $n$-vertex digraph $D = (V, A)$, the Max-$k$-Ordering problem is to compute a labeling $\ell : V \rightarrow [k]$ maximizing the number of forward edges, i.e., edges $(u,v)$ such that $\ell(u) < \ell(v)$. For different values of $k$, this reduces to Maximum Acyclic Subgraph ($k=n$) and Max-DiCut ($k=2$). This work studies the approximability of Max-$k$-Ordering and its generalizations, motivated by their applications to job scheduling with soft precedence constraints. We give an LP rounding based 2-approximation algorithm for Max-$k$-Ordering for any $k \in \{2,\dots,n\}$, improving on the known $2k/(k-1)$-approximation obtained via random assignment. The tightness of this rounding is shown by proving that for any $k \in \{2,\dots,n\}$ and constant $\varepsilon > 0$, Max-$k$-Ordering has an LP integrality gap of $2-\varepsilon$ for $n^{\Omega(1/\log\log k)}$ rounds of the Sherali-Adams hierarchy.
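For intuition on the random-assignment baseline that the 2-approximation improves upon: assigning each vertex an independent uniform label in $\{1,\dots,k\}$ makes an edge $(u,v)$ forward with probability $(k-1)/(2k)$, and since the total edge count upper-bounds the optimum, the expected number of forward edges is within a factor $2k/(k-1)$ of optimal. A minimal sketch (our illustration; the example digraph is made up):

```python
import random

def random_k_ordering(vertices, edges, k):
    """Uniform random labeling; each edge goes forward w.p. (k-1)/(2k)."""
    labels = {v: random.randint(1, k) for v in vertices}
    forward = sum(1 for (u, v) in edges if labels[u] < labels[v])
    return labels, forward

vertices = range(4)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small digraph with a cycle
k = 2  # k = 2 specializes Max-k-Ordering to Max-DiCut
labels, forward = random_k_ordering(vertices, edges, k)
print(f"forward edges: {forward} of {len(edges)} "
      f"(expectation: {(k - 1) / (2 * k) * len(edges):.2f})")
```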
A further generalization of Max-$k$-Ordering is the restricted maximum acyclic subgraph problem or RMAS, where each vertex $v$ has a finite set of allowable labels $S_v \subseteq \mathbb{Z}^+$. We prove an LP rounding based $4\sqrt{2}/(\sqrt{2}+1) \approx 2.344$ approximation for it, improving on the $2\sqrt{2} \approx 2.828$ approximation recently given by Grandoni et al. (Information Processing Letters, Vol. 115(2), Pages 182-185, 2015). In fact, our approximation algorithm also works for a general version where the objective counts the edges which go forward by at least a positive offset specific to each edge.
The minimization formulation of digraph ordering is DAG edge deletion or DED($k$), which requires deleting the minimum number of edges from an $n$-vertex directed acyclic graph (DAG) to remove all paths of length $k$. We show that both the LP relaxation and a local ratio approach for DED($k$) yield a $k$-approximation for any $k \in [n]$.
On the Complexity of Computing Two Nonlinearity Measures
We study the computational complexity of two Boolean nonlinearity measures: the nonlinearity and the multiplicative complexity. We show that if one-way functions exist, no algorithm can compute the multiplicative complexity in time $2^{O(n)}$ given the truth table of length $2^n$; in fact, under the same assumption it is impossible to approximate the multiplicative complexity within a factor of $(2-\varepsilon)^{n/2}$. When given a circuit, the problem of determining the multiplicative complexity is in the second level of the polynomial hierarchy. For nonlinearity, we show that it is #P-hard to compute given a function represented by a circuit.
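As background for the first measure: given the full truth table, nonlinearity (the Hamming distance to the nearest affine function) is classically computable in time $O(n \cdot 2^n)$ with a fast Walsh-Hadamard transform; the hardness result above concerns the succinct circuit representation. A small sketch of the standard computation (our illustration, not code from the paper; the example function is arbitrary):

```python
def nonlinearity(truth_table):
    """Nonlinearity of an n-variable Boolean function given as a 0/1 list
    of length 2^n: equals 2^(n-1) - max |Walsh coefficient| / 2."""
    w = [1 - 2 * bit for bit in truth_table]      # (-1)^f(x)
    size = len(w)
    h = 1
    while h < size:                               # in-place fast Walsh-Hadamard
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return (size - max(abs(c) for c in w)) // 2

# Majority on 3 bits: truth table indexed by x = (x2 x1 x0).
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
print(nonlinearity(maj3))  # 2: majority is at distance 2 from every affine function
```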
Verifying proofs in constant depth
In this paper we initiate the study of proof systems where verification of proofs proceeds by NC$^0$ circuits. We investigate which languages admit proof systems in this very restricted model. Formulated alternatively, we ask which languages can be enumerated by NC$^0$ functions. Our results show that the answer to this problem is not determined by the complexity of the language. On the one hand, we construct NC$^0$ proof systems for a variety of languages ranging from regular to NP-complete. On the other hand, we show by combinatorial methods that even easy regular languages such as Exact-OR do not admit NC$^0$ proof systems. We also present a general construction of proof systems for regular languages with strongly connected NFAs.
Unifying computational entropies via Kullback-Leibler divergence
We introduce hardness in relative entropy, a new notion of hardness for
search problems which on the one hand is satisfied by all one-way functions and
on the other hand implies both next-block pseudoentropy and inaccessible
entropy, two forms of computational entropy used in recent constructions of
pseudorandom generators and statistically hiding commitment schemes,
respectively. Thus, hardness in relative entropy unifies the latter two notions
of computational entropy and sheds light on the apparent "duality" between
them. Additionally, it yields a more modular and illuminating proof that
one-way functions imply next-block inaccessible entropy, similar in structure
to the proof that one-way functions imply next-block pseudoentropy (Vadhan and
Zheng, STOC '12).
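For readers who want the underlying quantity concrete: the "relative entropy" of the title is the classical Kullback-Leibler divergence $D(P\|Q) = \sum_x P(x)\log_2(P(x)/Q(x))$. A minimal sketch of this standard definition (ours, not the paper's construction; the example distributions are arbitrary):

```python
import math

def kl_divergence(p, q):
    """KL divergence in bits between finite distributions given as dicts."""
    total = 0.0
    for x, px in p.items():
        if px == 0:
            continue                  # 0 * log(0/q) = 0 by convention
        if q.get(x, 0) == 0:
            return math.inf           # P puts mass where Q has none
        total += px * math.log2(px / q[x])
    return total

uniform = {0: 0.5, 1: 0.5}
biased = {0: 0.9, 1: 0.1}
print(f"D(biased || uniform) = {kl_divergence(biased, uniform):.4f} bits")
```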
Predictable arguments of knowledge
We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted, given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK).
Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can be assumed, without loss of generality, to have only one round (i.e., two messages) of communication.
We further explore PAoK satisfying additional properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
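As a toy illustration of the predictability structure (our own sketch, not a construction from the paper, and not meant to be secure): for statements of the form $y = g^w \bmod p$, a verifier holding a private coin $r$ can predict the honest prover's answer $y^r$ before the prover speaks, since $(g^r)^w = (g^w)^r$.

```python
import secrets

P = 2**127 - 1          # toy Mersenne-prime modulus; a real scheme needs a proper group
G = 3

def keygen():
    w = secrets.randbelow(P - 2) + 1   # witness: a discrete log
    y = pow(G, w, P)                   # public statement
    return w, y

def verifier_challenge(y):
    r = secrets.randbelow(P - 2) + 1   # verifier's private coins
    challenge = pow(G, r, P)
    prediction = pow(y, r, P)          # answer predicted before the prover speaks
    return challenge, prediction

def prover_respond(challenge, w):
    return pow(challenge, w, P)        # (g^r)^w, computed from the witness

w, y = keygen()
challenge, prediction = verifier_challenge(y)
answer = prover_respond(challenge, w)
print("verifier accepts:", answer == prediction)  # True for the honest prover
```

Completeness is immediate; intuitively, a cheating prover would have to compute $y^r$ from $(g, y, g^r)$, a Diffie-Hellman-type problem, though this toy makes no formal soundness claim.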
Spatially resolved spectroscopy of monolayer graphene on SiO2
We have carried out scanning tunneling spectroscopy measurements on exfoliated monolayer graphene on SiO$_2$ to probe the correlation between its electronic and structural properties. Maps of the local density of states are characterized by electron and hole puddles that arise due to long-range intravalley scattering from intrinsic ripples in graphene and random charged impurities. At low energy, we observe short-range intervalley scattering which we attribute to lattice defects. Our results demonstrate that the electronic properties of graphene are influenced by intrinsic ripples, defects, and the underlying SiO$_2$ substrate.
On the Maximum Crossing Number
Research about crossings is typically about minimization. In this paper, we
consider \emph{maximizing} the number of crossings over all possible ways to
draw a given graph in the plane. Alpert et al. [Electron. J. Combin., 2009]
conjectured that any graph has a \emph{convex} straight-line drawing, i.e., a
drawing with vertices in convex position, that maximizes the number of edge
crossings. We disprove this conjecture by constructing a planar graph on twelve
vertices that allows a non-convex drawing with more crossings than any convex
one. Bald et al. [Proc. COCOON, 2016] showed that it is NP-hard to compute the
maximum number of crossings of a geometric graph and that the weighted
geometric case is NP-hard to approximate. We strengthen these results by
showing hardness of approximation even for the unweighted geometric case and
prove that the unweighted topological case is NP-hard.
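For concreteness, the quantity being maximized in the geometric (straight-line) case can be evaluated directly: two edges cross exactly when their segments properly intersect and share no endpoint. A small counting sketch (our illustration; the example drawing is $K_4$ in convex position):

```python
from itertools import combinations

def orient(a, b, c):
    """Sign of the turn a->b->c: >0 left, <0 right, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crossings(points, edges):
    """Count crossing pairs of edges in a straight-line drawing."""
    count = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        if {u1, v1} & {u2, v2}:
            continue                      # shared endpoint, not a crossing
        p, q, r, s = points[u1], points[v1], points[u2], points[v2]
        if (orient(p, q, r) * orient(p, q, s) < 0 and
                orient(r, s, p) * orient(r, s, q) < 0):
            count += 1
    return count

# K4 drawn in convex position has exactly one crossing (the two diagonals).
points = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(crossings(points, edges))  # 1
```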
Approximation hardness of Travelling Salesman via weighted amplifiers
Expander graph constructions and their variants are the main tool used in gap-preserving reductions that prove approximation lower bounds for combinatorial optimisation problems. In this paper we introduce weighted amplifiers and weighted low-occurrence Constraint Satisfaction problems as intermediate steps in NP-hard gap reductions. Allowing weights in the intermediate problems is rather natural for edge-weighted problems such as Travelling Salesman or Steiner Tree. We demonstrate the technique for Travelling Salesman and use the parametrised weighted amplifiers in the gap reductions to allow more flexibility in fine-tuning their expanding parameters. The purpose of this paper is to point out the effectiveness of these ideas rather than to optimise the expander's parameters. Nevertheless, we show that already a slight improvement of known expander values modestly improves the current best approximation hardness value for TSP from 123/122 ([9]) to 117/116. This provides a new motivation for the study of expanding properties of random graphs in order to improve approximation lower bounds for TSP and other edge-weighted optimisation problems.
On Symmetric Encryption with Distinguishable Decryption Failures
We propose to relax the assumption that decryption failures are indistinguishable in security models for symmetric encryption. Our main purpose is to build models that better reflect the reality of cryptographic implementations, and to surface the security issues that arise from doing so. We systematically explore the consequences of this relaxation, with some surprising results for our understanding of this basic cryptographic primitive. Our results should be useful to practitioners who wish to build accurate models of their implementations and then analyse them. They should also be of value to more theoretical cryptographers proposing new encryption schemes, who, in an ideal world, would be compelled by this work to consider the possibility that their schemes might leak more than simple decryption failures.
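A toy example of the kind of leakage being modeled (our own sketch, not the paper's formalism): a decryption routine that reports which check failed exposes two distinguishable failure symbols, exactly the situation that padding-oracle attacks exploit and that single-error models assume away. The cipher and message format below are made up for illustration.

```python
import hmac, hashlib, secrets

KEY_MAC = secrets.token_bytes(32)
KEY_ENC = secrets.token_bytes(32)

def _keystream(nonce, length):
    """Hash-based toy stream cipher (illustrative only, not secure)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(KEY_ENC + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(message):
    padded = message + b"\x80"                    # toy format: terminator byte
    nonce = secrets.token_bytes(16)
    body = bytes(a ^ b for a, b in zip(padded, _keystream(nonce, len(padded))))
    tag = hmac.new(KEY_MAC, nonce + body, hashlib.sha256).digest()
    return nonce + body + tag

def decrypt(ciphertext):
    nonce, body, tag = ciphertext[:16], ciphertext[16:-32], ciphertext[-32:]
    expected = hmac.new(KEY_MAC, nonce + body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return "ERROR_MAC"                        # failure type 1
    padded = bytes(a ^ b for a, b in zip(body, _keystream(nonce, len(body))))
    if not padded.endswith(b"\x80"):
        return "ERROR_FORMAT"                     # failure type 2: distinguishable!
    return padded[:-1]

ct = encrypt(b"attack at dawn")
print(decrypt(ct))                                # b'attack at dawn'
print(decrypt(ct[:-1] + bytes([ct[-1] ^ 1])))     # ERROR_MAC
```

The two distinct error returns are what the relaxed models capture; a single opaque failure symbol would hide the difference from the adversary.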