On the complexity of probabilistic trials for hidden satisfiability problems
What is the minimum amount of information and time needed to solve 2SAT? When
the instance is known, it can be solved in polynomial time, but is this also
possible without knowing the instance? Bei, Chen and Zhang (STOC '13)
considered a model where the input is accessed by proposing possible
assignments to a special oracle. This oracle, on encountering some constraint
unsatisfied by the proposal, returns only the constraint index. It turns out
that, in this model, even 1SAT cannot be solved in polynomial time unless P=NP.
Hence, we consider a model in which the input is accessed by proposing
probability distributions over assignments to the variables. The oracle then
returns the index of the constraint that is most likely to be violated by this
distribution. We show that the information obtained this way is sufficient to
solve 1SAT in polynomial time, even when the clauses can be repeated. For 2SAT,
as long as there are no repeated clauses, in polynomial time we can even learn
an equivalent formula for the hidden instance and hence also solve it.
Furthermore, we extend these results to the quantum regime. We show that in
this setting 1QSAT can be solved in polynomial time up to constant precision,
and 2QSAT can be learnt in polynomial time up to inverse polynomial precision.
Comment: 24 pages, 2 figures. To appear in the 41st International Symposium on Mathematical Foundations of Computer Science.
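To make the query model concrete, here is a minimal sketch (our illustration, not the paper's construction) of the oracle for a hidden 1SAT instance: a query is a product distribution, given as p[i] = Pr[x_i = True], and the oracle answers with the index of the unit clause most likely to be violated under that distribution.

    # Sketch of the probabilistic-trial oracle for a hidden 1SAT instance
    # (all names and the tie-breaking rule are our assumptions).
    from typing import List, Optional, Tuple

    Literal = Tuple[int, bool]   # (variable index, sign); sign True means the clause is "x_i"

    def oracle_1sat(clauses: List[Literal], p: List[float]) -> Optional[int]:
        """Return the index of the clause most likely to be violated under p,
        or None if every clause is violated with probability 0."""
        best, best_prob = None, 0.0
        for j, (i, sign) in enumerate(clauses):
            prob_violated = (1.0 - p[i]) if sign else p[i]   # unit clause unsatisfied
            if prob_violated > best_prob:
                best, best_prob = j, prob_violated
        return best

    # Example query: a hidden instance over 3 variables, probed with the uniform distribution.
    hidden = [(0, True), (2, False)]                  # clauses: x_0, NOT x_2
    print(oracle_1sat(hidden, [0.5, 0.5, 0.5]))       # prints 0 here (ties broken by first index)

The paper's results say that answers of this form suffice to decide 1SAT in polynomial time even with repeated clauses and, for repetition-free 2SAT, to learn an equivalent formula.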
The Quantum PCP Conjecture
The classical PCP theorem is arguably the most important achievement of
classical complexity theory in the past quarter century. In recent years,
researchers in quantum computational complexity have tried to identify
approaches and develop tools that address the question: does a quantum version
of the PCP theorem hold? The story of this study starts with classical
complexity and takes unexpected turns, providing fascinating vistas on the
foundations of quantum mechanics, the global nature of entanglement and its
topological properties, quantum error correction, information theory, and much
more; it raises questions that touch upon some of the most fundamental issues
at the heart of our understanding of quantum mechanics. At this point, the jury
is still out as to whether or not such a theorem holds. This survey aims to
provide a snapshot of the status in this ongoing story, tailored to a general
theory-of-CS audience.
Comment: 45 pages, 4 figures, an enhanced version of the SIGACT guest column from Volume 44 Issue 2, June 2013.
From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism
The Kochen-Specker theorem demonstrates that it is not possible to reproduce
the predictions of quantum theory in terms of a hidden variable model where the
hidden variables assign a value to every projector deterministically and
noncontextually. A noncontextual value-assignment to a projector is one that
does not depend on which other projectors - the context - are measured together
with it. Using a generalization of the notion of noncontextuality that applies
to both measurements and preparations, we propose a scheme for deriving
inequalities that test whether a given set of experimental statistics is
consistent with a noncontextual model. Unlike previous inequalities inspired by
the Kochen-Specker theorem, we do not assume that the value-assignments are
deterministic; therefore, in the face of a violation of our inequality, salvaging
noncontextuality by abandoning determinism is not an option. Our approach is
operational in the sense that it does not
presume quantum theory: a violation of our inequality implies the impossibility
of a noncontextual model for any operational theory that can account for the
experimental observations, including any successor to quantum theory.
Comment: 5+8 pages, 4+3 figures. Comments are welcome.
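For orientation, the generalized (operational) notion of noncontextuality being tested can be written as follows, in notation of our choosing rather than the paper's. An ontological model reproduces the operational statistics via

\[ p(k \mid M, P) = \int d\lambda \, \mu_P(\lambda) \, \xi(k \mid M, \lambda), \]

and it is noncontextual if operationally equivalent procedures receive identical representations:

\[ [k \mid M] \simeq [k' \mid M'] \;\Rightarrow\; \xi(k \mid M, \lambda) = \xi(k' \mid M', \lambda), \qquad P \simeq P' \;\Rightarrow\; \mu_P(\lambda) = \mu_{P'}(\lambda) \quad \forall \lambda. \]

Kochen-Specker-style arguments additionally impose outcome determinism, \( \xi(k \mid M, \lambda) \in \{0,1\} \); the inequalities described above dispense with that extra assumption.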
On the role of synaptic stochasticity in training low-precision neural networks
Stochasticity and limited precision of synaptic weights in neural network
models are key aspects of both biological and hardware modeling of learning
processes. Here we show that a neural network model with stochastic binary
weights naturally gives prominence to exponentially rare dense regions of
solutions with a number of desirable properties such as robustness and good
generalization performance, while typical solutions are isolated and hard to
find. Binary solutions of the standard perceptron problem are obtained from a
simple gradient descent procedure on a set of real values parametrizing a
probability distribution over the binary synapses. Both analytical and
numerical results are presented. An algorithmic extension aimed at training
discrete deep neural networks is also investigated.
Comment: 7 pages + 14 pages of supplementary material.
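As a rough illustration of the kind of procedure described above (a sketch under our own assumptions about the loss and parametrization, not the authors' code), one can run gradient descent on real parameters h that set the mean m = tanh(h) of each binary synapse, and binarize only at the end.

    # Mean-field sketch: train real parameters h of a distribution over binary
    # synapses w in {-1, +1}, then clip to a hard binary solution.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 101, 40                                   # synapses, patterns
    X = rng.choice([-1.0, 1.0], size=(P, N))         # random +/-1 patterns
    y = rng.choice([-1.0, 1.0], size=P)              # random +/-1 labels

    h = np.zeros(N)                                  # real-valued parameters
    lr = 0.05
    for epoch in range(500):
        m = np.tanh(h)                               # mean of each binary synapse
        margins = y * (X @ m) / np.sqrt(N)
        viol = margins < 0.1                         # patterns below a small stability margin
        if not viol.any():
            break
        grad_m = -(y[viol, None] * X[viol]).sum(axis=0) / np.sqrt(N)
        h -= lr * grad_m * (1.0 - m**2)              # chain rule through tanh

    w = np.where(h >= 0, 1.0, -1.0)                  # hard binary weights
    print("binary-weight training accuracy:", np.mean(np.sign(X @ w) == y))

The paper's point is not the recipe itself but the structure of the solutions that such distribution-level training tends to reach: rare but dense, robust regions rather than the isolated typical solutions.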
On the computational complexity of detecting possibilistic locality
The proofs of quantum nonlocality due to Greenberger, Horne and Zeilinger and due to Hardy are qualitatively different from that of Bell insofar as they rely only on a consideration of whether events are possible or impossible, rather than on specific experimental probabilities. We consider the scenario of a bipartite nonlocality experiment, in which two separated experimenters each have access to some measurements they can perform on a system. In a physical theory, some outcomes of this experiment will be labelled possible, others impossible, and an assignment of the values 0 (impossible) and 1 (possible) to these different outcomes forms a table of possibilities. Here, we consider the computational task of determining whether or not a given table of possibilities constitutes a departure from possibilistic local realism. By considering the case in which one party has access to measurements with two outcomes and the other three, it is possible to see at exactly which point this task becomes computationally difficult.
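As a concrete rendering of the decision task (our brute-force sketch, not the paper's algorithm): a bipartite table of possibilities T[x][y][a][b] in {0, 1} is possibilistically local exactly when every possible event is produced by some pair of local deterministic strategies that never produce an impossible event. For small numbers of settings this can be checked by enumeration.

    # Brute-force test of possibilistic locality (exponential in the number of settings).
    from itertools import product

    def possibilistically_local(T, nA_settings, nB_settings, nA_out, nB_out):
        # Deterministic strategies: f_A maps each of Alice's settings to an outcome, f_B likewise.
        strategies_A = list(product(range(nA_out), repeat=nA_settings))
        strategies_B = list(product(range(nB_out), repeat=nB_settings))
        # Keep only strategy pairs that never hit an impossible (T = 0) entry.
        admissible = [(fA, fB)
                      for fA in strategies_A for fB in strategies_B
                      if all(T[x][y][fA[x]][fB[y]]
                             for x in range(nA_settings) for y in range(nB_settings))]
        # Every possible event must be realised by at least one admissible pair.
        for x in range(nA_settings):
            for y in range(nB_settings):
                for a in range(nA_out):
                    for b in range(nB_out):
                        if T[x][y][a][b] and not any(fA[x] == a and fB[y] == b
                                                     for fA, fB in admissible):
                            return False
        return True

    # Trivially local example: every outcome pair is possible for every setting pair.
    T_all = [[[[1, 1], [1, 1]] for _ in range(2)] for _ in range(2)]
    print(possibilistically_local(T_all, 2, 2, 2, 2))   # True

Enumeration of this kind is feasible only for very small scenarios; the analysis above pinpoints where, between such small cases, the task becomes computationally hard.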