Rational Proofs with Multiple Provers
Interactive proofs (IP) model a world where a verifier delegates computation
to an untrustworthy prover, verifying the prover's claims before accepting
them. IP protocols have applications in areas such as verifiable computation
outsourcing, computation delegation, and cloud computing. In these applications,
the verifier may pay the prover based on the quality of his work. Rational
interactive proofs (RIP), introduced by Azar and Micali (2012), are an
interactive-proof system with payments, in which the prover is rational rather
than untrustworthy: he may lie, but only to increase his payment. Rational
proofs leverage the prover's rationality to obtain simple and efficient
protocols. Azar and Micali show that RIP = IP (= PSPACE). They leave the question
of whether multiple provers are more powerful than a single prover for rational
and classical proofs as an open problem.
In this paper, we introduce multi-prover rational interactive proofs (MRIP).
Here, a verifier cross-checks the provers' answers with each other and pays
them according to the messages exchanged. The provers are cooperative and
maximize their total expected payment if and only if the verifier learns the
correct answer to the problem. We further refine the model of MRIP to
incorporate utility gap, which is the loss in payment suffered by provers who
mislead the verifier to the wrong answer.
We define the classes of MRIP protocols with constant, noticeable, and
negligible utility gaps. We give a tight characterization for all three MRIP
classes. We show that under standard complexity-theoretic assumptions, MRIP is
more powerful than both RIP and MIP; this is true even when the utility gap is
required to be constant. Furthermore, the full power of each MRIP class can be
achieved using only two provers and three rounds. (A preliminary version of
this paper appeared at ITCS 2016. This is the full version that contains new
results.)
Comment: Proceedings of the 2016 ACM Conference on Innovations in Theoretical
Computer Science. ACM, 201
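The payment mechanism underlying rational proofs can be illustrated with a proper scoring rule. The sketch below is a didactic toy in Python, not the MRIP protocol of the paper; the function names and the payoff normalization are our own. It shows the key property: a prover paid by a proper scoring rule (here, an affine-shifted Brier score) maximizes expected payment only by reporting its true belief.

```python
# Toy illustration of the scoring-rule idea behind rational proofs:
# the verifier pays according to a proper scoring rule, so a rational
# prover's expected payment is maximized by an honest report.
# Didactic sketch only -- not the protocol from the paper.

def brier_payment(report, outcome):
    """Payment for reporting distribution `report` when `outcome` occurs.
    report: dict mapping each possible answer to a probability.
    Affine-shifted Brier score (shift keeps payments nonnegative)."""
    return 2 * report.get(outcome, 0.0) - sum(p * p for p in report.values()) + 1

def expected_payment(report, belief):
    """Prover's expected payment when the true distribution is `belief`."""
    return sum(belief[o] * brier_payment(report, o) for o in belief)

belief = {"yes": 0.8, "no": 0.2}                      # prover's true belief
truthful = expected_payment(belief, belief)           # honest report
lie = expected_payment({"yes": 0.2, "no": 0.8}, belief)  # misreport
# truthful strictly exceeds lie: lying lowers expected payment.
```

Rational-proof constructions are typically built from proper scoring rules of this kind; the Brier score is one standard example.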
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
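For concreteness, one standard way to formalize ``easy-on-average'' is Levin's notion of average polynomial time; a common formulation (normalizations differ slightly across presentations) is:

```latex
% A runs in average polynomial time on the ensemble \{D_n\} if some
% \varepsilon-th moment of its running time t_A grows only linearly:
\exists\, \varepsilon > 0 : \quad
\mathop{\mathbb{E}}_{x \sim D_n}\!\left[\, t_A(x)^{\varepsilon} \,\right] = O(n).
```

Unlike the naive requirement that the expected running time be polynomial, this definition is robust under polynomial changes in the running time, e.g. under changes of machine model.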
Computational Complexity in Tile Self-Assembly
One of the most fundamental and well-studied problems in Tile Self-Assembly is the Unique Assembly Verification (UAV) problem. This algorithmic problem asks whether a given tile system uniquely assembles a specific assembly. The complexity of this problem in the 2-Handed Assembly Model (2HAM) at a constant temperature has been a long-standing open problem since the model was introduced. Previously, it was known only that the problem is in coNP, and that it is in P if the temperature is one (τ = 1). The problem is known to be hard for many generalizations of the model, such as allowing one step into the third dimension or allowing the temperature of the system to be a variable, but the most fundamental version has remained open.
In this thesis I cover verification problems in different models of self-assembly, leading to a proof that the UAV problem in the 2HAM is hard even at a small constant temperature (τ = 2), finally settling the complexity of this problem (open since 2013). Further, this result proves that UAV in the staged self-assembly model is coNP-complete with a single bin and stage (open since 2007), and that UAV in the q-tile model is also coNP-complete (open since 2004). We reduce from Monotone Planar 3-SAT with Neighboring Variable Pairs, a special case of 3-SAT recently proven to be NP-hard.
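To make the flavor of unique assembly verification concrete, here is a deliberately simplified one-dimensional analogue in Python. This is our own toy, far removed from the 2D two-handed model the thesis actually studies: tiles attach in a row by matching glue labels (temperature 1, unlimited copies of each tile), and we ask whether exactly one terminal (non-extendable) assembly is producible.

```python
# Toy 1D analogue of Unique Assembly Verification: does the tile set
# produce exactly one terminal row from the seed?  Didactic sketch
# only -- real UAV concerns 2D assemblies in the 2HAM.

def terminal_assemblies(tiles, seed, max_len=10):
    """Enumerate all terminal (non-extendable) rows reachable from
    `seed` by attaching tiles whose left glue matches the exposed
    right glue.  tiles: list of (name, left_glue, right_glue)."""
    results = set()
    stack = [(seed,)]
    while stack:
        row = stack.pop()
        right = row[-1][2]                       # exposed right glue
        ext = [t for t in tiles if right is not None and t[1] == right]
        if not ext or len(row) >= max_len:
            results.add(tuple(t[0] for t in row))  # terminal assembly
        else:
            for t in ext:
                stack.append(row + (t,))
    return results

tiles = [("A", None, "g1"), ("B", "g1", "g2"), ("C", "g2", None)]
seed = tiles[0]
terms = terminal_assemblies(tiles, seed)
# UAV-style question for the target row ("A", "B", "C"):
unique = terms == {("A", "B", "C")}
```

Adding a second tile that also binds glue `g1` creates a branch and destroys uniqueness, which is exactly the kind of ambiguity the verification problem must detect.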
Algorithmic Cheap Talk
The literature on strategic communication originated with the influential
cheap talk model, which precedes the Bayesian persuasion model by three
decades. This model describes an interaction between two agents: sender and
receiver. The sender knows some state of the world which the receiver does not
know, and tries to influence the receiver's action by communicating a cheap
talk message to the receiver.
This paper initiates the algorithmic study of cheap talk in a finite
environment (i.e., finitely many states and possible receiver actions).
We first prove that approximating the sender-optimal or the welfare-maximizing
cheap talk equilibrium up to a certain additive constant or multiplicative
factor is NP-hard. Fortunately, we identify three naturally-restricted cases
that admit efficient algorithms for finding a sender-optimal equilibrium. These
include a state-independent sender utility structure, a constant number of
states, or a receiver having only two actions.
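The equilibrium objects being computed can be made concrete with a small sketch. Below is an illustrative Python check (our own construction; the state names and payoffs are made up, and only pure, partitional messaging strategies are considered) of whether a candidate strategy is a cheap talk equilibrium: the receiver best-responds to each message, and no sender type strictly prefers to send a different message.

```python
# Sketch: verify a pure, partitional cheap-talk equilibrium in a
# finite sender-receiver game.  Illustrative toy, not the paper's
# algorithms.

def best_response(cell, prior, u_r, actions):
    """Receiver's optimal action after a message revealing that the
    state lies in `cell` (posterior proportional to the prior)."""
    return max(actions, key=lambda a: sum(prior[s] * u_r[s][a] for s in cell))

def is_equilibrium(partition, prior, u_s, u_r, actions):
    """Is the partition of states into messages an equilibrium?"""
    induced = {frozenset(c): best_response(c, prior, u_r, actions)
               for c in partition}
    for cell in partition:
        a_here = induced[frozenset(cell)]
        for s in cell:
            for other in partition:
                if u_s[s][induced[frozenset(other)]] > u_s[s][a_here]:
                    return False   # sender type s deviates profitably
    return True

prior = {"s1": 0.5, "s2": 0.5}
actions = ["a1", "a2"]
u_r = {"s1": {"a1": 1, "a2": 0}, "s2": {"a1": 0, "a2": 1}}
aligned_u_s = u_r                      # sender shares receiver's interests
full_revelation = [{"s1"}, {"s2"}]
babbling = [{"s1", "s2"}]              # one uninformative message
```

Two classical sanity checks fall out of this setup: full revelation is an equilibrium when interests are aligned, and the uninformative ("babbling") partition is always an equilibrium.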
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
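The Hadamard code is the standard first example of a locally decodable code, and its classic 2-query local decoder fits in a few lines. The Python sketch below is a textbook illustration (function names are ours): position a of the codeword holds the inner product of the message x with a over GF(2), and any single bit of that inner-product table can be recovered with two queries.

```python
# The 2-query local decoder for the Hadamard code.  Messages and
# positions are n-bit integers; codeword position a stores <x, a>,
# the GF(2) inner product of x and a.
import random

def hadamard_encode(x, n):
    """Codeword of the n-bit message x: one bit <x, a> per a."""
    return [bin(x & a).count("1") % 2 for a in range(2 ** n)]

def local_decode(word, a, n):
    """Recover <x, a> with two queries: pick a random r and return
    word[r] XOR word[r ^ a].  Since <x, r ^ a> = <x, r> XOR <x, a>,
    the answer is correct whenever both queried bits are intact."""
    r = random.randrange(2 ** n)
    return word[r] ^ word[r ^ a]

n, x = 3, 0b101
word = hadamard_encode(x, n)
# On an uncorrupted codeword the decoder always answers correctly.
```

Because r is uniform and r ^ a is a fixed XOR-shift of it, each of the two queries individually lands on a uniformly random position; so if a delta fraction of the word is corrupted, a single probe is correct with probability at least 1 - 2*delta, which a majority vote over independent probes can amplify.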