On the Usefulness of Predicates
Motivated by the pervasiveness of strong inapproximability results for
Max-CSPs, we introduce a relaxed notion of an approximate solution of a
Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to
replace the constraints of an instance by some other (possibly real-valued)
constraints, and then only needs to satisfy as many of the new constraints as
possible.
To be more precise, we introduce the following notion of a predicate P
being \emph{useful} for a (real-valued) objective Q: given an almost
satisfiable Max-P instance, there is an algorithm that beats a random
assignment on the corresponding Max-Q instance applied to the same sets of
literals. The standard notion of a nontrivial approximation algorithm for a
Max-CSP with predicate P is exactly the same as saying that P is useful for
itself.
We say that P is useless if it is not useful for any Q. This turns out to
be equivalent to the following pseudo-randomness property: given an almost
satisfiable instance of Max-P, it is hard to find an assignment such that the
induced distribution on k-bit strings (where k is the arity of P) defined by
the instance is not essentially uniform.
Under the Unique Games Conjecture, we give a complete and simple
characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is
useless if and only if there is a pairwise independent distribution supported
on the satisfying assignments of the predicate. It is natural to also consider
the case when no negations are allowed in the CSP instance, and we derive a
similar complete characterization (under the UGC) there as well.
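As a concrete illustration of the pairwise-independence criterion, here is a small sketch (the helper function and predicate choices are my own illustration; the check covers only the uniform distribution on the predicate's support, which happens to suffice for these two examples):

```python
from itertools import product

def is_pairwise_independent(support):
    """Check whether the uniform distribution on `support` (a list of
    k-bit tuples) has uniform pairwise marginals."""
    k = len(support[0])
    n = len(support)
    for i in range(k):
        for j in range(i + 1, k):
            for a, b in product([0, 1], repeat=2):
                count = sum(1 for x in support if x[i] == a and x[j] == b)
                if count * 4 != n:  # each pair must be uniform on {0,1}^2
                    return False
    return True

# 3-XOR: satisfying assignments of x1 + x2 + x3 = 0 (mod 2); the uniform
# distribution on them is pairwise independent, so this predicate meets
# the uselessness criterion
xor3 = [x for x in product([0, 1], repeat=3) if sum(x) % 2 == 0]
print(is_pairwise_independent(xor3))  # True

# 2-XOR (Max-Cut style): for 2-bit predicates the only pairwise
# independent distribution is uniform on all of {0,1}^2, so failing this
# check here means no such supported distribution exists at all
xor2 = [x for x in product([0, 1], repeat=2) if sum(x) % 2 == 1]
print(is_pairwise_independent(xor2))  # False
```

For predicates of arity greater than 2, a failed uniform check does not by itself rule out a non-uniform pairwise independent distribution on the support; a full check would search over all distributions, e.g. by linear programming.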
Finally, we also include some results and examples shedding additional light
on the approximability of certain Max-CSPs.
On the Power of Many One-Bit Provers
We study the class of languages, denoted by \MIP[k, 1-\epsilon, s], which
have k-prover games where each prover just sends a \emph{single} bit, with
completeness 1-\epsilon and soundness error s. For the case that k = 1
(i.e., for the case of interactive proofs), Goldreich, Vadhan and Wigderson
({\em Computational Complexity'02}) demonstrate that \SZK exactly
characterizes languages having 1-bit proof systems with "non-trivial" soundness
(i.e., 1/2 < s \leq 1 - 2\epsilon). We demonstrate that for the case that
k \geq 2, 1-bit k-prover games exhibit a significantly richer structure:
+ (Folklore) When s \leq 2^{-k} - \epsilon, \MIP[k, 1-\epsilon, s] = \BPP;
+ When 2^{-k} + \epsilon \leq s \leq 2 \cdot 2^{-k} - \epsilon, \MIP[k, 1-\epsilon, s] = \SZK;
+ When s \geq 2 \cdot 2^{-k} + \epsilon, \AM \subseteq \MIP[k, 1-\epsilon, s];
+ For s \leq 0.62 k \cdot 2^{-k} and sufficiently large k, \MIP[k, 1-\epsilon, s] \subseteq \EXP;
+ For s \geq 2k \cdot 2^{-k}, \MIP[k, 1, s] = \NEXP.
As such, 1-bit k-prover games yield a natural "quantitative" approach to
relating complexity classes such as \BPP, \SZK, \AM, \EXP, and \NEXP.
We leave open the question of whether a more fine-grained hierarchy (between
\AM and \NEXP) can be established for the case when
2 \cdot 2^{-k} + \epsilon \leq s < 2k \cdot 2^{-k}.
Intermediate problems in modular circuits satisfiability
In arXiv:1710.08163 a generalization of Boolean circuits to arbitrary finite
algebras had been introduced and applied to sketch the P versus NP-complete
borderline for circuit satisfiability over algebras from congruence modular
varieties. However, the problem remained open for algebras that are nilpotent
but not supernilpotent: the nilpotent case had not been shown to be NP-hard,
while the supernilpotent case had been shown to be solvable in polynomial
time.
In this paper we provide a broad class of examples, lying in this grey area,
and show that, under the Exponential Time Hypothesis and Strong Exponential
Size Hypothesis (saying that Boolean circuits need exponentially many modular
counting gates to produce Boolean conjunctions of any arity), satisfiability
over these algebras has intermediate complexity between 2^{\Omega(\log^{h-1} n)}
and 2^{O(\log^h n)}, where h measures how much a nilpotent algebra
fails to be supernilpotent. We also sketch how these examples could be used as
paradigms to fill the nilpotent versus supernilpotent gap in general.
Our examples are striking in view of the natural strong connections between
circuit satisfiability and the Constraint Satisfaction Problem, for which the
dichotomy had been shown by Bulatov and Zhuk.
Systems of Linear Equations over F_2 and Problems Parameterized Above Average
In the problem Max Lin, we are given a system S of m linear equations
in n variables over F_2 in which each equation is assigned a
positive weight, and we wish to find an assignment of values to the variables
that maximizes the excess, which is the total weight of satisfied equations
minus the total weight of falsified equations. Using an algebraic approach, we
obtain a lower bound for the maximum excess.
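To make the notion of excess concrete, here is a brute-force sketch on a toy system over F_2 (the system and its weights are invented for illustration):

```python
from itertools import product

def excess(system, assignment):
    """Total weight of satisfied equations minus total weight of
    falsified ones. `system` is a list of (variables, rhs, weight)
    triples over F_2; an equation is satisfied when the sum of the
    listed variables equals the right-hand side mod 2."""
    total = 0
    for variables, rhs, weight in system:
        if sum(assignment[v] for v in variables) % 2 == rhs:
            total += weight
        else:
            total -= weight
    return total

# toy system: x0+x1=1 (weight 3), x1+x2=0 (weight 2), x0+x2=1 (weight 1)
system = [({0, 1}, 1, 3), ({1, 2}, 0, 2), ({0, 2}, 1, 1)]
best = max(excess(system, x) for x in product([0, 1], repeat=3))
print(best)  # 6: the assignment (0, 1, 1) satisfies all three equations
```

A uniformly random assignment satisfies each equation with probability 1/2, so its expected excess is 0; parameterizing "above average" therefore amounts to asking for excess at least the parameter value.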
Max Lin Above Average (Max Lin AA) is a parameterized version of Max Lin
introduced by Mahajan et al. (Proc. IWPEC'06 and J. Comput. Syst. Sci. 75,
2009). In Max Lin AA all weights are integral and we are to decide whether the
maximum excess is at least k, where k is the parameter.
It is not hard to see that we may assume that no two equations in S have
the same left-hand side and that n \leq m. Using our maximum excess results,
we prove that, under these assumptions, Max Lin AA is fixed-parameter tractable
for a wide special case: m \leq 2^{p(n)} for an arbitrary fixed function
p(n) = o(n).
Max r-Lin AA is a special case of Max Lin AA, where each equation has at
most r variables. In Max Exact r-SAT AA we are given a multiset of m
clauses on n variables such that each clause has r variables, and asked
whether there is a truth assignment to the n variables that satisfies at
least (1 - 2^{-r})m + k 2^{-r} clauses. Using our maximum excess results, we
prove that for each fixed r, Max r-Lin AA and Max Exact r-SAT AA can
be solved in time 2^{O(k \log k)} + m^{O(1)}. This improves
2^{O(k^2)} + m^{O(1)}-time algorithms for the two problems obtained by Gutin et
al. (IWPEC 2009) and Alon et al. (SODA 2010), respectively.
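The baseline in the Max Exact r-SAT AA threshold comes from the fact that a uniformly random assignment satisfies each exact r-clause with probability 1 - 2^{-r}. A small sanity-check sketch (the instance is invented for illustration):

```python
from itertools import product
from fractions import Fraction

def satisfies(clause, assignment):
    """`clause` is a tuple of signed literals: +i means x_i, -i means
    NOT x_i, with variables indexed from 1."""
    return any(assignment[abs(lit) - 1] == (1 if lit > 0 else 0)
               for lit in clause)

# toy exact 3-SAT instance on 4 variables (each clause: 3 distinct vars)
clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, 3, -4)]
n, r, m = 4, 3, len(clauses)

# exact expected number of satisfied clauses under a uniform assignment
expected = Fraction(sum(sum(1 for c in clauses if satisfies(c, a))
                        for a in product([0, 1], repeat=n)), 2 ** n)
# matches the closed form (1 - 2^-r) * m, here 7/8 * 4 = 7/2
print(expected == Fraction(m) * (1 - Fraction(1, 2 ** r)))  # True
```

Deciding whether some assignment reaches at least (1 - 2^{-r})m + k 2^{-r} clauses is exactly asking for excess k over this average, which is where the maximum-excess machinery enters.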
A Hypergraph Dictatorship Test with Perfect Completeness
A hypergraph dictatorship test was first introduced by Samorodnitsky and
Trevisan and serves as a key component in their unique games based \PCP
construction. Such a test has oracle access to a collection of functions and
determines whether all the functions are the same dictatorship, or all their
low degree influences are o(1). Their test makes q \geq 3 queries and has
amortized query complexity 1 + O(\log q / q), but has an inherent loss of
perfect completeness. In this paper we give an adaptive hypergraph dictatorship
test that achieves both perfect completeness and amortized query complexity
1 + O(\log q / q).
Evolution of ultraviolet vision in the largest avian radiation - the passerines
Background: Interspecific variation in avian colour vision falls into two discrete classes: violet sensitive (VS) and ultraviolet sensitive (UVS). They are characterised by the spectral sensitivity of the most shortwave sensitive of the four single cones, the SWS1, which is seemingly under direct control of as little as one amino acid substitution in the cone opsin protein. Changes in spectral sensitivity of the SWS1 are ecologically important, as they affect the abilities of birds to accurately assess potential mates, find food and minimise visibility of social signals to predators. Still, available data have indicated that shifts between classes are rare, with only four to five independent acquisitions of UV sensitivity in avian evolution.

Results: We have classified a large sample of passeriform species as VS or UVS from genomic DNA and mapped the evolution of this character on a passerine phylogeny inferred from published molecular sequence data. Sequencing a small gene fragment has allowed us to trace the trait changing from one stable state to another through the radiation of the passeriform birds. Their ancestor is hypothesised to be UVS. In the subsequent radiation, colour vision changed between UVS and VS at least eight times.

Conclusions: The phylogenetic distribution of SWS1 cone opsin types in Passeriformes reveals a much higher degree of complexity in avian colour vision evolution than what was previously indicated from the limited data available. Clades with variation in the colour vision system are nested among clades with a seemingly stable VS or UVS state, providing a rare opportunity to understand how an ecologically important trait under simple genetic control may co-evolve with, and be stabilised by, associated traits in a character complex.
When Can Limited Randomness Be Used in Repeated Games?
The central result of classical game theory states that every finite normal
form game has a Nash equilibrium, provided that players are allowed to use
randomized (mixed) strategies. However, in practice, humans are known to be bad
at generating random-like sequences, and true random bits may be unavailable.
Even if the players have access to enough random bits for a single instance of
the game, their randomness might be insufficient if the game is played many
times.
In this work, we ask whether randomness is necessary for equilibria to exist
in finitely repeated games. We show that for a large class of games containing
arbitrary two-player zero-sum games, approximate Nash equilibria of the
n-stage repeated version of the game exist if and only if both players have
\Omega(n) random bits. In contrast, we show that there exists a class of
games for which no equilibrium exists in pure strategies, yet the n-stage
repeated version of the game has an exact Nash equilibrium in which each player
uses only a constant number of random bits.
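A minimal sketch of why deterministic play fails in repeated zero-sum games (matching pennies; the opponent model below is my own illustration, not the paper's construction): any player whose n-stage strategy uses no randomness is predictable, so the other player can win every round, far below the value 0 of the mixed one-shot equilibrium.

```python
def exploit(deterministic_strategy, n):
    """Play n rounds of matching pennies against a deterministic
    opponent. The opponent's move may depend on the full history, but
    being deterministic it is predictable; the exploiter (the matcher)
    wins a round by copying it. Returns the exploiter's total payoff."""
    history = []
    payoff = 0
    for _ in range(n):
        opponent_move = deterministic_strategy(history)  # fully predictable
        my_move = opponent_move  # matcher wins by matching
        payoff += 1 if my_move == opponent_move else -1
        history.append((my_move, opponent_move))
    return payoff

# an arbitrary deterministic opponent: alternate 0, 1, 0, 1, ...
alternating = lambda history: len(history) % 2
print(exploit(alternating, 10))  # 10: the deterministic player loses every round
```

The same exploitation applies to any deterministic strategy, however history-dependent, which is the intuition behind needing fresh random bits in every stage.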
When the players are assumed to be computationally bounded, if cryptographic
pseudorandom generators (or, equivalently, one-way functions) exist, then the
players can base their strategies on "random-like" sequences derived from only
a small number of truly random bits. We show that, in contrast, in repeated
two-player zero-sum games, if pseudorandom generators \emph{do not} exist, then
\Omega(n) random bits remain necessary for equilibria to exist.