Hardness of approximation for quantum problems
The polynomial hierarchy plays a central role in classical complexity theory.
Here, we define a quantum generalization of the polynomial hierarchy, and
initiate its study. We show that not only are there natural complete problems
for the second level of this quantum hierarchy, but that these problems are in
fact hard to approximate. Using these techniques, we also obtain hardness of
approximation for the class QCMA. Our approach is based on the use of
dispersers, and is inspired by the classical results of Umans regarding
hardness of approximation for the second level of the classical polynomial
hierarchy [Umans, FOCS 1999]. The problems for which we prove hardness of
approximation include, among others, a quantum version of the Succinct Set
Cover problem and a variant of the local Hamiltonian problem with hybrid
classical-quantum ground states.
Comment: 21 pages, 1 figure; extended abstract appeared in Proceedings of the 39th International Colloquium on Automata, Languages and Programming (ICALP), pages 387-398, Springer, 201
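As a point of reference for the problems named above, the classical (non-succinct) Set Cover problem can be solved by brute force in a few lines. The function and instance below are invented for illustration; the paper's Succinct Set Cover variant instead represents the set family implicitly by a circuit, which is what makes even approximation hard:

```python
from itertools import combinations

def min_set_cover(universe, sets):
    """Brute-force minimum set cover: the smallest subfamily of `sets`
    whose union contains `universe`. Exponential time, illustration only."""
    for k in range(1, len(sets) + 1):
        for family in combinations(range(len(sets)), k):
            if set().union(*(sets[i] for i in family)) >= universe:
                return list(family)
    return None

universe = {1, 2, 3, 4, 5}
sets = [{1, 2}, {3, 4}, {4, 5}, {1, 3, 5}]
cover = min_set_cover(universe, sets)   # indices of a minimum-size cover
```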
Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats
This paper is concerned with the approximation of tensors using tree-based
tensor formats, which are tensor networks whose graphs are dimension partition
trees. We consider Hilbert tensor spaces of multivariate functions defined on a
product set equipped with a probability measure. This includes the case of
multidimensional arrays corresponding to finite product sets. We propose and
analyse an algorithm for the construction of an approximation using only point
evaluations of a multivariate function, or evaluations of some entries of a
multidimensional array. The algorithm is a variant of higher-order singular
value decomposition which constructs a hierarchy of subspaces associated with
the different nodes of the tree and a corresponding hierarchy of interpolation
operators. Optimal subspaces are estimated using empirical principal component
analysis of interpolations of partial random evaluations of the function. The
algorithm is able to provide an approximation in any tree-based format with
either a prescribed rank or a prescribed relative error, with a number of
evaluations of the order of the storage complexity of the approximation format.
Under some assumptions on the estimation of principal components, we prove that
the algorithm provides either a quasi-optimal approximation with a given rank,
or an approximation satisfying the prescribed relative error, up to constants
depending on the tree and the properties of interpolation operators. The
analysis takes into account the discretization errors for the approximation of
infinite-dimensional tensors. Several numerical examples illustrate the main
results and the behavior of the algorithm for the approximation of
high-dimensional functions using hierarchical Tucker or tensor train tensor
formats, and the approximation of univariate functions using tensorization.
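For orientation, the classical truncated higher-order SVD that the sampled variant builds on can be sketched for a dense array. This is a minimal numpy sketch of the standard HOSVD with all names illustrative, not the paper's sample-based algorithm, which estimates the subspaces from interpolations of partial random evaluations:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD (Tucker format): one principal subspace per
    mode, then multilinear projection of T onto their product."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                      # leading mode-k directions
    core = T
    for mode, U in enumerate(factors):                # project mode by mode
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
# A tensor with exact multilinear rank (2, 2, 2), so truncation there is lossless.
A = np.einsum('ia,jb,kc,abc->ijk',
              rng.normal(size=(6, 2)), rng.normal(size=(7, 2)),
              rng.normal(size=(8, 2)), rng.normal(size=(2, 2, 2)))
core, factors = hosvd(A, (2, 2, 2))
B = np.einsum('abc,ia,jb,kc->ijk', core, *factors)   # reconstruction
```

On a tensor of exact multilinear rank (2, 2, 2), truncation at those ranks reconstructs the array exactly; with a prescribed rank or error tolerance, the same projection structure yields the quasi-optimality the abstract describes.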
Quantum Commuting Circuits and Complexity of Ising Partition Functions
Instantaneous quantum polynomial-time (IQP) computation is a class of quantum
computation consisting only of commuting two-qubit gates and is not universal
in the sense of standard quantum computation. Nevertheless, it has been shown
that if there is a classical algorithm that can simulate IQP efficiently, the
polynomial hierarchy (PH) collapses at the third level, which is highly
implausible. However, the origin of this classical intractability remains poorly
understood. Here we establish a relationship between IQP and the computational
complexity of the partition functions of Ising models. We apply the established
relationship in two opposite directions. One direction is to find subclasses of
IQP that are classically efficiently simulatable in the strong sense, by using
exact solvability of certain types of Ising models. The other direction is to
apply the quantum computational complexity of IQP to investigate the
(im)possibility of efficient classical approximation of Ising models with
imaginary coupling constants. Specifically, we show that there is no fully
polynomial randomized
approximation scheme (FPRAS) for Ising models with almost all imaginary
coupling constants even on a planar graph of a bounded degree, unless the PH
collapses at the third level. Furthermore, we also show that a multiplicative
approximation of such a class of Ising partition functions is at least as hard
as a multiplicative approximation for the output distribution of an arbitrary
quantum circuit.
Comment: 36 pages, 5 figures
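A brute-force evaluation of an Ising partition function with a complex coupling makes the object under discussion concrete. This is a generic textbook computation over all spin configurations, not a construction from the paper:

```python
import cmath
from itertools import product

def ising_partition(n, edges, J):
    """Brute-force Z = sum over spins s in {+1,-1}^n of exp(J * sum_(i,j) s_i*s_j).
    J may be complex; imaginary couplings are the hard regime discussed above."""
    Z = 0j
    for s in product((1, -1), repeat=n):
        Z += cmath.exp(J * sum(s[i] * s[j] for i, j in edges))
    return Z

# Single edge: Z = 2 e^J + 2 e^{-J} = 4 cosh(J), here with purely imaginary J.
Z = ising_partition(2, [(0, 1)], 1j * cmath.pi / 4)
```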
Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability
Previously referred to as 'miraculous' in the scientific literature because
of its powerful properties and its wide application as an optimal solution to
the problem of induction/inference, (approximations to) Algorithmic Probability
(AP) and the associated Universal Distribution are (or should be) of the
greatest importance in science. Here we investigate the emergence, the rates of
emergence and convergence, and the Coding-theorem like behaviour of AP in
Turing-subuniversal models of computation. We investigate empirical
distributions of computing models in the Chomsky hierarchy. We introduce
measures of algorithmic probability and algorithmic complexity based upon
resource-bounded computation, in contrast to previously thoroughly investigated
distributions produced from the output distribution of Turing machines. This
approach allows for numerical approximations to algorithmic
(Kolmogorov-Chaitin) complexity-based estimations at each of the levels of a
computational hierarchy. We demonstrate that all these estimations are
correlated in rank and that they converge both in rank and values as a function
of computational power, despite fundamental differences between computational
models. In the context of natural processes that operate below the Turing
universal level because of finite resources and physical degradation, the
investigation of natural biases stemming from algorithmic rules may shed light
on the distribution of outcomes. We show that up to 60% of the
simplicity/complexity bias in distributions produced even by the weakest of the
computational models can be accounted for by Algorithmic Probability in its
approximation to the Universal Distribution.
Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
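The counting construction behind such empirical distributions can be sketched with a deliberately toy sub-universal machine (the machine below is invented for illustration and is not one of the paper's models): enumerate every program up to a length bound, record output frequencies, and read off a coding-theorem-style complexity estimate as the negative log of the frequency:

```python
import math
from collections import Counter
from itertools import product

def run(program):
    """Toy sub-universal machine (invented): instruction 0 appends '0' to the
    output; instruction 1 appends the complement of the last output bit
    ('1' on empty output). Halts after |program| steps."""
    out = ""
    for b in program:
        if b == 0:
            out += "0"
        else:
            out += "1" if (not out or out[-1] == "0") else "0"
    return out

# Empirical analogue of the Universal Distribution: run every program of
# length 1..8 and record how often each output string is produced.
counts = Counter(run(p) for n in range(1, 9) for p in product((0, 1), repeat=n))
total = sum(counts.values())
m = {s: c / total for s, c in counts.items()}       # algorithmic-probability estimate
K = {s: -math.log2(p) for s, p in m.items()}        # coding-theorem-style complexity
most_probable = max(m, key=m.get)                   # simplest string for this machine
```

Because many programs collapse onto few outputs, the frequent (simple) strings receive the lowest complexity estimates, which is the simplicity bias the abstract quantifies.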
Faster SDP hierarchy solvers for local rounding algorithms
Convex relaxations based on different hierarchies of linear/semi-definite
programs have been used recently to devise approximation algorithms for various
optimization problems. The approximation guarantee of these algorithms improves
with the number of {\em rounds} $r$ in the hierarchy, though the complexity of
solving (or even writing down the solution for) the $r$-th level program grows
as $n^{\Omega(r)}$, where $n$ is the input size.
In this work, we observe that many of these algorithms are based on {\em
local} rounding procedures that only use a small part of the SDP solution (of
size $2^{O(r)} n^{O(1)}$ instead of $n^{O(r)}$). We give an algorithm to
find the requisite portion in time polynomial in its size. The challenge in
achieving this is that the required portion of the solution is not fixed a
priori but depends on other parts of the solution, sometimes in a complicated
iterative manner.
Our solver leads to $2^{O(r)} n^{O(1)}$-time algorithms to obtain the same
guarantees in many cases as the earlier $n^{O(r)}$-time algorithms based on $r$
rounds of the Lasserre hierarchy. In particular, guarantees based on $O(\log n)$
rounds can be realized in polynomial time.
We develop and describe our algorithm in a fairly general abstract framework.
The main technical tool in our work, which might be of independent interest in
convex optimization, is an efficient ellipsoid-algorithm-based separation
oracle for convex programs that can output a {\em certificate of infeasibility
with restricted support}. This is used in a recursive manner to find a sequence
of consistent points in nested convex bodies that "fools" local rounding
algorithms.
Comment: 30 pages, 8 figures
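A toy separation-oracle-driven ellipsoid method conveys the flavor of the oracle-based solver described above. This is a generic textbook sketch checking feasibility of a small explicit polytope; it omits the paper's key feature, certificates of infeasibility with restricted support:

```python
import numpy as np

def separation_oracle(x, A, b):
    """Return a violated row of Ax <= b (a separating hyperplane), or None."""
    for a_i, b_i in zip(A, b):
        if a_i @ x > b_i + 1e-12:
            return a_i
    return None

def ellipsoid_feasible(A, b, radius=10.0, iters=200):
    """Central-cut ellipsoid method using only the separation oracle."""
    n = A.shape[1]
    x = np.zeros(n)
    P = radius ** 2 * np.eye(n)          # ellipsoid {y : (y-x)^T P^{-1} (y-x) <= 1}
    for _ in range(iters):
        a = separation_oracle(x, A, b)
        if a is None:
            return x                     # current center is feasible
        g = a / np.sqrt(a @ P @ a)       # normalize the cut direction
        Pg = P @ g
        x = x - Pg / (n + 1)             # standard central-cut center update
        P = n * n / (n * n - 1.0) * (P - 2.0 / (n + 1) * np.outer(Pg, Pg))
    return None

A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([3.0, -2.0, 1.0, 0.0])      # the box 2 <= x1 <= 3, 0 <= x2 <= 1
x = ellipsoid_feasible(A, b)
```

Each oracle call shrinks the ellipsoid's volume by a constant factor, so a feasible point of the box is found after a few dozen cuts; the abstract's contribution is making such oracle calls touch only a restricted portion of a huge hierarchy solution.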
Descriptive complexity of #P functions: A new perspective
We introduce a new framework for a descriptive complexity approach to arithmetic computations. We define a hierarchy of classes based on the idea of counting assignments to free function variables in first-order formulae. We completely determine the inclusion structure and show that #P and #AC0 appear as classes of this hierarchy. In this way, we unconditionally place #AC0 properly in a strict hierarchy of arithmetic classes within #P. Furthermore, we show that some of our classes admit efficient approximation in the sense of FPRAS. We compare our classes with a hierarchy within #P defined in a model-theoretic way by Saluja et al. and argue that our approach is better suited to study arithmetic circuit classes such as #AC0 which can be descriptively characterized as a class in our framework.
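The counting mechanism described above, counting assignments to a free function variable that satisfy a first-order condition, can be illustrated in miniature. The example formula and all names below are invented, not taken from the paper:

```python
from itertools import product

def count_function_assignments(n, phi):
    """Count assignments to a free unary function variable
    f : {0,...,n-1} -> {0,1} satisfying "for all x, phi(x, f)".
    Brute force over all 2^n function tables."""
    count = 0
    for values in product((0, 1), repeat=n):
        f = dict(enumerate(values))
        if all(phi(x, f) for x in range(n)):
            count += 1
    return count

n = 5

def closed_under_successor(x, f):
    # "f(x) = 1 implies f(x+1 mod n) = 1": the support of f is closed under
    # the successor map on the n-cycle, so only f == 0 and f == 1 qualify.
    return f[x] == 0 or f[(x + 1) % n] == 1

num = count_function_assignments(n, closed_under_successor)
```

Varying the logical resources allowed in the condition phi is what carves out the different levels of the hierarchy the abstract describes.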