
    The Power of Quantum Fourier Sampling

    A line of work initiated by Terhal and DiVincenzo and by Bremner, Jozsa, and Shepherd shows that quantum computers can efficiently sample from probability distributions that cannot be exactly sampled efficiently on a classical computer unless the polynomial hierarchy (PH) collapses. Aaronson and Arkhipov take this further by considering a distribution that can be sampled efficiently by linear optical quantum computation and that, under two plausible conjectures, cannot even be approximately sampled classically to within bounded total variation distance unless the PH collapses. In this work we use Quantum Fourier Sampling to construct a class of distributions that can be sampled efficiently by a quantum computer. We then argue that, under variants of the Aaronson and Arkhipov conjectures, these distributions cannot be approximately sampled classically unless the PH collapses. In particular, we exhibit a general class of quantumly samplable distributions, each based on an "Efficiently Specifiable" polynomial, for which a classical approximate sampler implies an average-case approximation of the polynomial. This class of polynomials contains the Permanent but also includes, for example, the Hamiltonian Cycle polynomial and many other familiar #P-hard polynomials. Although our construction, unlike the one proposed by Aaronson and Arkhipov, likely requires a universal quantum computer, we are able to use this additional power to weaken the conjectures needed to prove approximate sampling hardness results.
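
    The Permanent named above is the prototypical "Efficiently Specifiable" #P-hard polynomial: easy to write down, believed hard to evaluate. As a point of reference (an illustration, not code from the paper), here is a minimal Python sketch that evaluates the permanent exactly with Ryser's formula, whose running time reflects the exponential cost of exact evaluation:

```python
# Illustration only, not code from the paper: Ryser's formula for the
# matrix permanent, the prototypical #P-hard "Efficiently Specifiable"
# polynomial named in the abstract. The O(2^n * n^2) running time
# reflects the exponential cost of exact evaluation.
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix A (list of lists) via Ryser's formula."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

# The permanent of the all-ones 3 x 3 matrix is 3! = 6.
assert permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) == 6
```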

    Pseudorandomness and the Minimum Circuit Size Problem


    Complexity classification of two-qubit commuting Hamiltonians

    We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable, or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.
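
    The dichotomy above turns on whether the Hamiltonian generates entanglement from a computational basis state. A minimal numerical sketch of that criterion (an illustration, not the paper's construction): evolve |00⟩ under exp(-iHt) and inspect the Schmidt rank of the result.

```python
# Hedged numerical sketch (an illustration, not the paper's construction):
# test whether a two-qubit Hamiltonian generates entanglement from the
# |00> basis state -- the criterion the dichotomy theorem turns on.
import numpy as np
from scipy.linalg import expm

def generates_entanglement(H, t=1.0, tol=1e-9):
    """True if exp(-iHt)|00> has Schmidt rank > 1, i.e. is entangled."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                            # |00> in the computational basis
    psi = expm(-1j * t * H) @ psi0           # evolve under H for time t
    schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(schmidt > tol)) > 1

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# X(x)X takes |00> to cos(t)|00> - i sin(t)|11>: entangling.
print(generates_entanglement(np.kron(X, X), t=np.pi / 4))   # True
# Z(x)Z is diagonal and only imprints a phase on |00>: not entangling.
print(generates_entanglement(np.kron(Z, Z), t=np.pi / 4))   # False
```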

    Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

    This paper explores a surprising equivalence between two seemingly distinct convex optimization methods. We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function. This connection has several benefits. First, we are able to improve the state-of-the-art time complexity for convex optimization under the membership oracle model. We improve the analysis of the randomized algorithm of Kalai and Vempala by utilizing tools developed by Nesterov and Nemirovskii that underlie the central path following interior point algorithm. We are able to tighten the temperature schedule for simulated annealing, which gives an improved running time that is smaller by a factor of the square root of the dimension in certain instances. Second, we get an efficient randomized interior point method with an efficiently computable universal barrier for any convex set described by a membership oracle. Previously, efficiently computable barriers were known only for particular convex sets.
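
    To make the simulated-annealing side of the equivalence concrete, here is a toy sketch (the Metropolis ball walk and geometric cooling schedule below are assumptions; the paper's analysis relies on hit-and-run sampling with a tighter temperature schedule): minimize a linear objective over a convex body given only through a membership oracle by sampling from Boltzmann distributions at decreasing temperatures.

```python
# Toy sketch of the simulated-annealing side of the equivalence (the
# Metropolis ball walk and geometric cooling schedule are assumptions;
# the paper's analysis uses hit-and-run sampling and a tighter
# temperature schedule). Minimizes <c, x> over a convex body given only
# through a membership oracle.
import numpy as np

def anneal_minimize(c, membership, x0, n_temps=30, steps=200,
                    step_size=0.05, t0=1.0, cooling=0.8, seed=0):
    rng = np.random.default_rng(seed)
    x, T = np.asarray(x0, dtype=float), t0
    for _ in range(n_temps):
        for _ in range(steps):
            y = x + step_size * rng.standard_normal(x.shape)
            # Metropolis step for the density exp(-<c, x>/T), restricted
            # to the body by the membership oracle.
            if membership(y) and rng.random() < np.exp(min(0.0, -(c @ y - c @ x) / T)):
                x = y
        T *= cooling                         # lower the temperature
    return x

# Minimize x1 + x2 over the unit ball; the optimum is (-1/sqrt(2), -1/sqrt(2)).
print(anneal_minimize(np.array([1.0, 1.0]),
                      lambda x: np.linalg.norm(x) <= 1.0,
                      x0=np.zeros(2)))
```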

    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the Coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows for numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each level of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution. (An online complexity calculator is available at http://complexitycalculator.com.)
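
    As a toy version of the approach (a made-up miniature, not one of the paper's Chomsky-hierarchy models), one can enumerate every short program of a resource-bounded machine, tally the empirical output frequencies m(s), and convert them to complexity estimates via the Coding-theorem relation K(s) ≈ -log2 m(s):

```python
# Toy sketch (a made-up miniature, not one of the paper's
# Chomsky-hierarchy models): estimate algorithmic probability
# empirically by running every short program of a tiny resource-bounded
# machine, tallying output frequencies m(s), and converting them to
# complexity estimates via the Coding-theorem relation K(s) ~ -log2 m(s).
import math
from collections import Counter
from itertools import product

def run(program, max_out=8):
    """'0'/'1' append a bit; 'D' doubles the output built so far.
    The cap on output length is the resource bound."""
    out = ""
    for op in program:
        out = out + op if op in "01" else out + out
        if len(out) > max_out:
            return None                      # program exceeds the bound
    return out

counts = Counter()
for length in range(1, 7):                   # all programs up to length 6
    for program in product("01D", repeat=length):
        s = run(program)
        if s:
            counts[s] += 1

total = sum(counts.values())
for s, c in counts.most_common(5):           # most algorithmically probable
    print(f"m({s!r}) = {c / total:.4f}   K_est = {-math.log2(c / total):.2f} bits")
```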

    Impossibility of independence amplification in Kolmogorov complexity theory

    The paper studies randomness extraction from sources with bounded independence and the issue of independence amplification of sources, using the framework of Kolmogorov complexity. The dependency of strings $x$ and $y$ is ${\rm dep}(x,y) = \max\{C(x) - C(x \mid y),\; C(y) - C(y \mid x)\}$, where $C(\cdot)$ denotes the Kolmogorov complexity. It is shown that there exists a computable Kolmogorov extractor $f$ such that, for any two $n$-bit strings with complexity $s(n)$ and dependency $\alpha(n)$, it outputs a string of length $s(n)$ with complexity $s(n) - \alpha(n)$ conditioned on either one of the input strings. It is proven that these are the optimal parameters a Kolmogorov extractor can achieve. It is shown that independence amplification cannot be effectively realized. Specifically, if (after excluding a trivial case) there exist computable functions $f_1$ and $f_2$ such that ${\rm dep}(f_1(x,y), f_2(x,y)) \leq \beta(n)$ for all $n$-bit strings $x$ and $y$ with ${\rm dep}(x,y) \leq \alpha(n)$, then $\beta(n) \geq \alpha(n) - O(\log n)$.
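
    Since $C(\cdot)$ is uncomputable, the ${\rm dep}(x,y)$ quantity can only be illustrated approximately. The sketch below (an illustration in the spirit of compression-based approximations such as the normalized compression distance, not the paper's framework) substitutes zlib compressed length for $C$ and estimates $C(x \mid y)$ as $C(yx) - C(y)$:

```python
# Illustration of the dep(x, y) definition with zlib compressed length
# standing in for the uncomputable C(.), in the spirit of the normalized
# compression distance; an approximation, not the paper's framework.
# Conditional complexity is estimated as C(x|y) ~ C(yx) - C(y).
import os
import zlib

def C(x: bytes) -> int:
    return len(zlib.compress(x, 9))

def dep(x: bytes, y: bytes) -> int:
    """Approximates dep(x,y) = max{C(x) - C(x|y), C(y) - C(y|x)}."""
    c_x_given_y = C(y + x) - C(y)
    c_y_given_x = C(x + y) - C(x)
    return max(C(x) - c_x_given_y, C(y) - c_y_given_x)

a, b = os.urandom(1000), os.urandom(1000)
print(dep(a, b))                     # small: independent random strings
print(dep(a, a[:500] + b[:500]))     # large: the strings share 500 bytes
```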