
    When Can Limited Randomness Be Used in Repeated Games?

    The central result of classical game theory states that every finite normal form game has a Nash equilibrium, provided that players are allowed to use randomized (mixed) strategies. However, in practice, humans are known to be bad at generating random-like sequences, and true random bits may be unavailable. Even if the players have access to enough random bits for a single instance of the game, their randomness might be insufficient if the game is played many times. In this work, we ask whether randomness is necessary for equilibria to exist in finitely repeated games. We show that for a large class of games containing arbitrary two-player zero-sum games, approximate Nash equilibria of the $n$-stage repeated version of the game exist if and only if both players have $\Omega(n)$ random bits. In contrast, we show that there exists a class of games for which no equilibrium exists in pure strategies, yet the $n$-stage repeated version of the game has an exact Nash equilibrium in which each player uses only a constant number of random bits. When the players are assumed to be computationally bounded, if cryptographic pseudorandom generators (or, equivalently, one-way functions) exist, then the players can base their strategies on "random-like" sequences derived from only a small number of truly random bits. We show that, in contrast, in repeated two-player zero-sum games, if pseudorandom generators \emph{do not} exist, then $\Omega(n)$ random bits remain necessary for equilibria to exist.
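
    A minimal simulation sketch (illustrative only, not a construction from the paper; the game, seed size, and function names are hypothetical) of the intuition behind the $\Omega(n)$ bound: in repeated matching pennies, a player whose entire play is expanded from a short seed can be pinned down and then exploited by the opponent.

```python
# Toy illustration: in n-stage repeated matching pennies, a player whose whole
# strategy is derived from only k << n truly random bits can be predicted, and
# hence exploited, once its seed is pinned down.
import random

N_STAGES = 200
SEED_BITS = 4  # the "limited randomness" player has only 2**4 possible play sequences


def moves_from_seed(seed: int, n: int) -> list[int]:
    """Deterministically expand a small seed into an n-stage sequence of 0/1 moves.
    (Any fixed, publicly known expansion function works for the argument.)"""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]


def exploit() -> int:
    true_seed = random.randrange(2 ** SEED_BITS)
    row_moves = moves_from_seed(true_seed, N_STAGES)

    candidates = set(range(2 ** SEED_BITS))  # seeds still consistent with observed play
    col_wins = 0
    for t in range(N_STAGES):
        # The column player predicts by majority vote over still-consistent seeds.
        votes = [moves_from_seed(s, N_STAGES)[t] for s in candidates]
        prediction = 1 if 2 * sum(votes) >= len(votes) else 0
        # In this toy the column player wins a stage by matching the row player's move.
        if prediction == row_moves[t]:
            col_wins += 1
        # Discard seeds inconsistent with the move actually observed.
        candidates = {s for s in candidates if moves_from_seed(s, N_STAGES)[t] == row_moves[t]}
    return col_wins


if __name__ == "__main__":
    wins = exploit()
    # Each wrong prediction at least halves the consistent set, so with 4 seed
    # bits the column player loses at most ~4 of the 200 stages -- far better
    # than the 50% it could guarantee against a truly random opponent.
    print(f"column player won {wins}/{N_STAGES} stages")
```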

    On the Approximability of Digraph Ordering

    Given an n-vertex digraph D = (V, A), the Max-k-Ordering problem is to compute a labeling $\ell : V \to [k]$ maximizing the number of forward edges, i.e. edges (u,v) such that $\ell(u) < \ell(v)$. For different values of k, this reduces to Maximum Acyclic Subgraph (k=n) and Max-Dicut (k=2). This work studies the approximability of Max-k-Ordering and its generalizations, motivated by their applications to job scheduling with soft precedence constraints. We give an LP rounding based 2-approximation algorithm for Max-k-Ordering for any $k \in \{2, \ldots, n\}$, improving on the known 2k/(k-1)-approximation obtained via random assignment. The tightness of this rounding is shown by proving that for any $k \in \{2, \ldots, n\}$ and constant $\varepsilon > 0$, Max-k-Ordering has an LP integrality gap of $2 - \varepsilon$ for $n^{\Omega(1/\log\log k)}$ rounds of the Sherali-Adams hierarchy. A further generalization of Max-k-Ordering is the restricted maximum acyclic subgraph problem or RMAS, where each vertex v has a finite set of allowable labels $S_v \subseteq \mathbb{Z}^+$. We prove an LP rounding based $4\sqrt{2}/(\sqrt{2}+1) \approx 2.344$ approximation for it, improving on the $2\sqrt{2} \approx 2.828$ approximation recently given by Grandoni et al. (Information Processing Letters, Vol. 115(2), Pages 182-185, 2015). In fact, our approximation algorithm also works for a general version where the objective counts the edges which go forward by at least a positive offset specific to each edge. The minimization formulation of digraph ordering is DAG edge deletion or DED(k), which requires deleting the minimum number of edges from an n-vertex directed acyclic graph (DAG) to remove all paths of length k. We show that both the LP relaxation and a local ratio approach for DED(k) yield a k-approximation for any $k \in [n]$. Comment: 21 pages, Conference version to appear in ESA 201
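
    As a point of reference for the baseline the abstract mentions, here is a hedged sketch of the random-assignment $2k/(k-1)$-approximation (not the paper's LP-rounding algorithm): independent uniform labels make each edge forward with probability $(k-1)/(2k)$, so the expected number of forward edges is at least $(k-1)/(2k) \cdot \mathrm{OPT}$.

```python
# Random assignment baseline for Max-k-Ordering: each vertex gets an
# independent uniform label in {1,...,k}, so every edge (u, v) is forward with
# probability (k-1)/(2k), giving expected value >= (k-1)/(2k) * OPT.
import random


def random_k_ordering(vertices, edges, k, trials=100):
    """Return the best labeling found over a few independent random trials."""
    best_labels, best_forward = None, -1
    for _ in range(trials):
        labels = {v: random.randint(1, k) for v in vertices}
        forward = sum(1 for u, v in edges if labels[u] < labels[v])
        if forward > best_forward:
            best_labels, best_forward = labels, forward
    return best_labels, best_forward


if __name__ == "__main__":
    # Toy digraph: a directed 4-cycle plus a chord.
    V = [1, 2, 3, 4]
    A = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
    labels, forward = random_k_ordering(V, A, k=3)
    print(f"forward edges: {forward}/{len(A)}, labeling: {labels}")
    # Expected fraction of forward edges per trial is (k-1)/(2k) = 1/3 for k=3;
    # taking the best of several trials typically does much better on small graphs.
```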

    On the Complexity of Computing Two Nonlinearity Measures

    We study the computational complexity of two Boolean nonlinearity measures: the nonlinearity and the multiplicative complexity. We show that if one-way functions exist, no algorithm can compute the multiplicative complexity in time $2^{O(n)}$ given the truth table of length $2^n$; in fact, under the same assumption it is impossible to approximate the multiplicative complexity within a factor of $(2-\epsilon)^{n/2}$. When given a circuit, the problem of determining the multiplicative complexity is in the second level of the polynomial hierarchy. For nonlinearity, we show that it is #P-hard to compute when the function is given as a circuit.
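
    For contrast with the hardness results above, nonlinearity can be computed from a length-$2^n$ truth table in time $2^n \cdot \mathrm{poly}(n)$ by the standard fast Walsh-Hadamard transform; a minimal sketch of that textbook method (not specific to this paper) follows.

```python
# Compute the nonlinearity of a Boolean function from its length-2^n truth
# table via the fast Walsh-Hadamard transform:
#   NL(f) = 2^(n-1) - (1/2) * max_a |W_f(a)|,  W_f(a) = sum_x (-1)^(f(x) XOR <a,x>).
def nonlinearity(truth_table):
    size = len(truth_table)                     # must be 2^n
    n = size.bit_length() - 1
    assert size == 1 << n, "truth table length must be a power of two"
    # Signed version of f: f(x)=0 -> +1, f(x)=1 -> -1.
    w = [1 - 2 * bit for bit in truth_table]
    # In-place fast Walsh-Hadamard transform, O(n * 2^n).
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return (size // 2) - max(abs(c) for c in w) // 2


if __name__ == "__main__":
    # f(x1, x2, x3) = x1*x2 XOR x3, a quadratic function achieving the maximum
    # nonlinearity 2 for n = 3.
    tt = [(((x >> 2) & 1) & ((x >> 1) & 1)) ^ (x & 1) for x in range(8)]
    print(nonlinearity(tt))  # -> 2
```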

    Verifying proofs in constant depth

    In this paper we initiate the study of proof systems where verification of proofs proceeds by NC$^0$ circuits. We investigate which languages admit proof systems in this very restricted model. Formulated alternatively, we ask which languages can be enumerated by NC$^0$ functions. Our results show that the answer to this problem is not determined by the complexity of the language. On the one hand, we construct NC$^0$ proof systems for a variety of languages ranging from regular to NP-complete. On the other hand, we show by combinatorial methods that even easy regular languages such as Exact-OR do not admit NC$^0$ proof systems. We also present a general construction of proof systems for regular languages with strongly connected NFAs.
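
    Under the definition used in this line of work, a proof system for a language L is a surjective function, here computable in constant depth, whose range is exactly L. A toy sketch of that definition (not one of the paper's constructions): for the regular language of nonempty binary strings ending in 1, appending a 1 to an arbitrary proof string is such a system, since every output bit depends on at most one input bit.

```python
# Toy proof system in the "surjective function onto L" sense: for
#   L = { w in {0,1}^+ : w ends in 1 },
# the map  proof -> proof + "1"  has range exactly L, and every output bit is
# either a copy of a single input bit or the constant 1 (so constant depth).
def proof_system(proof: str) -> str:
    """Map an arbitrary binary 'proof' string to a word of L."""
    assert set(proof) <= {"0", "1"}
    return proof + "1"   # output bit i < len(proof) copies input bit i; last bit is the constant 1


def in_language(word: str) -> bool:
    return len(word) > 0 and word[-1] == "1"


if __name__ == "__main__":
    # Soundness: every produced word is in L.
    assert all(in_language(proof_system(format(p, "b") if p else "")) for p in range(16))
    # Completeness (surjectivity onto L): each word of L has a preimage,
    # namely the word with its last bit removed.
    for word in ["1", "01", "1101"]:
        assert proof_system(word[:-1]) == word and in_language(word)
    print("toy constant-depth proof system for 'strings ending in 1' checks out")
```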

    Unifying computational entropies via Kullback-Leibler divergence

    We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent "duality" between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
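
    For reference, the classical notion the title alludes to is the Kullback-Leibler divergence (relative entropy) $D(P\|Q) = \sum_x P(x)\log\frac{P(x)}{Q(x)}$; the sketch below computes only this standard information-theoretic quantity, not the paper's computational notion of hardness in relative entropy.

```python
# Classical Kullback-Leibler divergence (relative entropy) in bits:
#   D(P || Q) = sum_x P(x) * log2(P(x) / Q(x)).
from math import log2


def kl_divergence(p: dict, q: dict) -> float:
    """D(P || Q) in bits; requires supp(P) to be contained in supp(Q), else +infinity."""
    total = 0.0
    for x, px in p.items():
        if px == 0:
            continue                      # 0 * log(0/q) = 0 by convention
        qx = q.get(x, 0.0)
        if qx == 0:
            return float("inf")           # P puts mass where Q does not
        total += px * log2(px / qx)
    return total


if __name__ == "__main__":
    fair = {"H": 0.5, "T": 0.5}
    biased = {"H": 0.75, "T": 0.25}
    print(kl_divergence(biased, fair))    # ~0.1887 bits
    print(kl_divergence(fair, biased))    # ~0.2075 bits (note: KL is asymmetric)
```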

    Predictable arguments of knowledge

    We initiate a formal investigation of the power of predictability for argument of knowledge systems for NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted, given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK). Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and that without loss of generality they have only one round (i.e., two messages) of communication. We additionally explore PAoK satisfying additional properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.

    Spatially resolved spectroscopy of monolayer graphene on SiO2

    We have carried out scanning tunneling spectroscopy measurements on exfoliated monolayer graphene on SiO$_2$ to probe the correlation between its electronic and structural properties. Maps of the local density of states are characterized by electron and hole puddles that arise due to long range intravalley scattering from intrinsic ripples in graphene and random charged impurities. At low energy, we observe short range intervalley scattering which we attribute to lattice defects. Our results demonstrate that the electronic properties of graphene are influenced by intrinsic ripples, defects and the underlying SiO$_2$ substrate. Comment: 6 pages, 7 figures, extended version

    On the Maximum Crossing Number

    Research about crossings is typically about minimization. In this paper, we consider \emph{maximizing} the number of crossings over all possible ways to draw a given graph in the plane. Alpert et al. [Electron. J. Combin., 2009] conjectured that any graph has a \emph{convex} straight-line drawing, i.e., a drawing with vertices in convex position, that maximizes the number of edge crossings. We disprove this conjecture by constructing a planar graph on twelve vertices that allows a non-convex drawing with more crossings than any convex one. Bald et al. [Proc. COCOON, 2016] showed that it is NP-hard to compute the maximum number of crossings of a geometric graph and that the weighted geometric case is NP-hard to approximate. We strengthen these results by showing hardness of approximation even for the unweighted geometric case and prove that the unweighted topological case is NP-hard. Comment: 16 pages, 5 figures
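
    A small sketch of the standard fact underlying convex drawings (not the paper's construction): when the vertices are placed in convex position, two edges cross exactly when their endpoints interleave in the circular vertex order, so the crossing count of a convex drawing depends only on that order and can be maximized by brute force on small graphs.

```python
# Crossings in a convex straight-line drawing depend only on the circular
# vertex order: two edges cross iff their endpoints interleave around the circle.
from itertools import combinations, permutations


def convex_crossings(order, edges):
    """Number of crossings when vertices are placed on a circle in 'order'."""
    pos = {v: i for i, v in enumerate(order)}
    count = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) < 4:
            continue                      # edges sharing an endpoint never cross
        a, b = sorted((pos[a], pos[b]))
        c, d = sorted((pos[c], pos[d]))
        if (a < c < b < d) or (c < a < d < b):
            count += 1                    # endpoints interleave around the circle
    return count


def max_convex_crossings(vertices, edges):
    vertices = list(vertices)
    first = vertices[0]                   # fix one vertex to factor out rotations
    return max(convex_crossings([first, *rest], edges)
               for rest in permutations(vertices[1:]))


if __name__ == "__main__":
    # K4: every convex drawing of the complete graph on 4 vertices has exactly 1 crossing.
    V = [0, 1, 2, 3]
    E = list(combinations(V, 2))
    print(max_convex_crossings(V, E))     # -> 1
```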

    Approximation hardness of Travelling Salesman via weighted amplifiers

    Expander graph constructions and their variants are the main tool used in gap-preserving reductions to prove approximation lower bounds for combinatorial optimisation problems. In this paper we introduce weighted amplifiers and weighted low-occurrence Constraint Satisfaction problems as intermediate steps in NP-hard gap reductions. Allowing weights in the intermediate problems is rather natural for edge-weighted problems such as Travelling Salesman or Steiner Tree. We demonstrate the technique for Travelling Salesman and use the parametrised weighted amplifiers in the gap reductions to allow more flexibility in fine-tuning their expansion parameters. The purpose of this paper is to point out the effectiveness of these ideas, rather than to optimise the expander's parameters. Nevertheless, we show that even a slight improvement of known expander values modestly improves the current best approximation hardness value for TSP from 123/122 ([9]) to 117/116. This provides a new motivation for the study of expansion properties of random graphs in order to improve approximation lower bounds of TSP and other edge-weighted optimisation problems.

    On Symmetric Encryption with Distinguishable Decryption Failures

    We propose to relax the assumption that decryption failures are indistinguishable in security models for symmetric encryption. Our main purpose is to build models that better reflect the reality of cryptographic implementations, and to surface the security issues that arise from doing so. We systematically explore the consequences of this relaxation, with some surprising results for our understanding of this basic cryptographic primitive. Our results should be useful to practitioners who wish to build accurate models of their implementations and then analyse them. They should also be of value to more theoretical cryptographers proposing new encryption schemes, who, in an ideal world, would be compelled by this work to consider the possibility that their schemes might leak more than simple decryption failures.
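
    A toy sketch (standard-library Python only; the cipher, MAC, and names are hypothetical and not a scheme from the paper) of the kind of issue this relaxation surfaces: a MAC-then-encrypt decryptor that reports padding failures and MAC failures distinguishably acts as a padding oracle, letting an attacker recover plaintext bytes, an attack that the single-failure model cannot even express.

```python
# Toy MAC-then-encrypt scheme whose decryptor returns distinguishable failures
# (PAD_ERR before MAC_ERR); the distinguishable failure acts as a padding oracle.
import hashlib, os

BLOCK, HALF, ROUNDS = 16, 8, 4


def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def _round(key, i, half):                      # toy Feistel round function
    return hashlib.sha256(key + bytes([i]) + half).digest()[:HALF]


def _enc_block(key, block):
    L, R = block[:HALF], block[HALF:]
    for i in range(ROUNDS):
        L, R = R, _xor(L, _round(key, i, R))
    return L + R


def _dec_block(key, block):
    L, R = block[:HALF], block[HALF:]
    for i in reversed(range(ROUNDS)):
        L, R = _xor(R, _round(key, i, L)), L
    return L + R


def encrypt(enc_key, mac_key, msg):
    data = msg + hashlib.sha256(mac_key + msg).digest()[:8]   # MAC-then-encrypt
    pad = BLOCK - len(data) % BLOCK
    data += bytes([pad]) * pad                                # PKCS#7 padding
    iv = os.urandom(BLOCK)
    out, prev = [iv], iv
    for i in range(0, len(data), BLOCK):
        prev = _enc_block(enc_key, _xor(data[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)


def decrypt_oracle(enc_key, mac_key, ct):
    """Returns distinguishable failures: exactly the relaxation studied above."""
    blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
    data = b"".join(_xor(_dec_block(enc_key, c), p) for p, c in zip(blocks, blocks[1:]))
    pad = data[-1]
    if not 1 <= pad <= BLOCK or data[-pad:] != bytes([pad]) * pad:
        return "PAD_ERR"
    data = data[:-pad]
    msg, tag = data[:-8], data[-8:]
    if tag != hashlib.sha256(mac_key + msg).digest()[:8]:
        return "MAC_ERR"
    return "OK"


def recover_last_byte(oracle, prev_block, target_block):
    """Padding-oracle step: recover the last plaintext byte of target_block."""
    base = bytearray(os.urandom(BLOCK))
    for guess in range(256):
        base[-1] = guess
        if oracle(bytes(base) + target_block) != "PAD_ERR":
            # Padding looked valid; rule out the rare pad-length >= 2 case.
            confirm = bytearray(base)
            confirm[-2] ^= 0xFF
            if oracle(bytes(confirm) + target_block) != "PAD_ERR":
                return guess ^ 0x01 ^ prev_block[-1]
    raise RuntimeError("unreachable for this toy cipher")


if __name__ == "__main__":
    enc_key, mac_key = os.urandom(16), os.urandom(16)
    secret = b"transfer $100 to account #7"
    ct = encrypt(enc_key, mac_key, secret)
    oracle = lambda c: decrypt_oracle(enc_key, mac_key, c)
    # Recover the last byte of the first plaintext block (previous block = IV).
    byte = recover_last_byte(oracle, ct[:BLOCK], ct[BLOCK:2 * BLOCK])
    print(chr(byte), "== 16th plaintext byte:", secret[15:16])
```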
