
    Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy

    We consider quantum computations comprising only commuting gates, known as IQP computations, and provide compelling evidence that the task of sampling their output probability distributions is unlikely to be achievable by any efficient classical means. More specifically, we introduce the class post-IQP of languages decided with bounded error by uniform families of IQP circuits with post-selection, and prove first that post-IQP equals the classical class PP. Using this result we show that if the output distributions of uniform IQP circuit families could be classically efficiently sampled, even up to 41% multiplicative error in the probabilities, then the infinite tower of classical complexity classes known as the polynomial hierarchy would collapse to its third level. We mention some further results on the classical simulation properties of IQP circuit families, in particular showing that if the output distribution results from measurements on only O(log n) lines then it may in fact be classically efficiently sampled. (Comment: 13 pages)
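For concreteness, here is a minimal brute-force sketch (not taken from the paper) of the distribution an efficient classical sampler would have to approximate: an IQP circuit in the standard H^{⊗n} D H^{⊗n} form with Z-type phase gates, enumerated exactly over all 2^n outcomes. The function name and the bitmask encoding of phase terms are illustrative assumptions; exact enumeration is exponential in n, consistent with the abstract's point that only O(log n)-line marginals are known to be efficiently samplable.

```python
import numpy as np

def iqp_output_distribution(n, phase_terms):
    """Exact output distribution of the IQP circuit H^{(x)n} D H^{(x)n} |0^n>.

    phase_terms: list of (qubit_subset_bitmask, angle). D applies the phase
    exp(i * theta_S * prod_{q in S} Z_q) for each term, i.e. it is diagonal
    in the computational basis.
    """
    dim = 2 ** n
    # Build the diagonal of D: eigenvalue on |x> uses the parity of x on each subset.
    diag = np.zeros(dim, dtype=complex)
    for x in range(dim):
        phase = 0.0
        for subset, theta in phase_terms:
            parity = bin(x & subset).count("1") % 2
            phase += theta * (1 if parity == 0 else -1)
        diag[x] = np.exp(1j * phase)

    # H^{(x)n}|0^n> is the uniform superposition; then apply D.
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex) * diag

    # Apply the final H^{(x)n} one qubit at a time.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for q in range(n):
        state = state.reshape([2] * n)
        state = np.tensordot(H, state, axes=([1], [q]))
        state = np.moveaxis(state, 0, q).reshape(dim)
    return np.abs(state) ** 2

if __name__ == "__main__":
    # 3 qubits, Z-type phases on the qubit subsets {0,1} and {0,2}.
    probs = iqp_output_distribution(3, [(0b011, np.pi / 4), (0b101, np.pi / 8)])
    print(probs, probs.sum())  # probabilities over the 8 outcomes, summing to 1
```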

    Structure-Aware Sampling: Flexible and Accurate Summarization

    In processing large quantities of data, a fundamental problem is to obtain a summary which supports approximate query answering. Random sampling yields flexible summaries which naturally support subset-sum queries with unbiased estimators and well-understood confidence bounds. Classic sample-based summaries, however, are designed for arbitrary subset queries and are oblivious to the structure in the set of keys. The particular structure, such as hierarchy, order, or product space (multi-dimensional), makes range queries much more relevant for most analysis of the data. Dedicated summarization algorithms for range-sum queries have also been extensively studied. They can outperform existing sampling schemes in terms of accuracy on range queries per summary size. Their accuracy, however, rapidly degrades when, as is often the case, the query spans multiple ranges. They are also less flexible - being targeted for range-sum queries alone - and are often quite costly to build and use. In this paper we propose and evaluate variance-optimal sampling schemes that are structure-aware. These summaries improve on the accuracy of existing structure-oblivious sampling schemes on range queries while retaining the benefits of sample-based summaries: flexible summaries, with high accuracy on both range queries and arbitrary subset queries.
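As a point of reference (not the paper's structure-aware scheme), the sketch below shows the kind of structure-oblivious sample-based summary being improved upon: independent (Poisson) sampling with an inverse-inclusion-probability (Horvitz-Thompson) estimator for subset-sum queries. The function names and the fixed sampling probability p are illustrative assumptions.

```python
import random

def build_sample(data, p):
    """data: dict key -> weight. Keep each key independently with probability p."""
    return {k: w for k, w in data.items() if random.random() < p}

def estimate_subset_sum(sample, p, predicate):
    """Unbiased estimate of the total weight of keys satisfying predicate:
    each sampled weight is scaled up by 1/p (Horvitz-Thompson)."""
    return sum(w / p for k, w in sample.items() if predicate(k))

if __name__ == "__main__":
    data = {k: 1.0 + (k % 7) for k in range(10_000)}   # keys 0..9999 with weights
    p = 0.05
    sample = build_sample(data, p)
    # A range query is just a subset query whose predicate is an interval test.
    est = estimate_subset_sum(sample, p, lambda k: 2_000 <= k < 4_000)
    true = sum(w for k, w in data.items() if 2_000 <= k < 4_000)
    print(est, true)
```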

    Sherali-Adams gaps, flow-cover inequalities and generalized configurations for capacity-constrained Facility Location

    Metric facility location is a well-studied problem for which linear programming methods have been used with great success in deriving approximation algorithms. The capacity-constrained generalizations, such as capacitated facility location (CFL) and lower-bounded facility location (LBFL), have proved notorious as far as LP-based approximation is concerned: while there are local-search-based constant-factor approximations, there is no known linear relaxation with constant integrality gap. According to Williamson and Shmoys, devising a relaxation-based approximation for CFL is among the top 10 open problems in approximation algorithms. This paper significantly advances the state of the art on the effectiveness of linear programming for capacity-constrained facility location through a host of impossibility results for both CFL and LBFL. We show that the relaxations obtained from the natural LP at Ω(n) levels of the Sherali-Adams hierarchy have an unbounded gap, partially answering an open question of [LiS13, AnBS13]. Here, n denotes the number of facilities in the instance. Building on the ideas for this result, we prove that the standard CFL relaxation enriched with the generalized flow-cover valid inequalities [AardalPW95] also has an unbounded gap. This disproves a long-standing conjecture of [LeviSS12]. We finally introduce the family of proper relaxations, which generalizes to its logical extreme the classic star relaxation and captures general configuration-style LPs. We characterize the behavior of proper relaxations for CFL and LBFL through a sharp threshold phenomenon. (Comment: arXiv admin note: substantial text overlap with arXiv:1305.599)
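For orientation, the "natural LP" referred to above is usually taken to be the standard relaxation of CFL below; the notation (f_i facility opening costs, c_{ij} connection costs, d_j client demands, u_i capacities) follows common usage rather than the paper itself. The Sherali-Adams levels and the flow-cover inequalities discussed in the abstract are strengthenings of this base LP.

```latex
\begin{align*}
\min\ & \sum_{i} f_i\, y_i \;+\; \sum_{i,j} d_j\, c_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{i} x_{ij} = 1 && \text{for every client } j, \\
& x_{ij} \le y_i && \text{for all } i, j, \\
& \sum_{j} d_j\, x_{ij} \le u_i\, y_i && \text{for every facility } i, \\
& 0 \le x_{ij},\, y_i \le 1.
\end{align*}
```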

    Probabilistic Computability and Choice

    We study the computational power of randomized computations on infinite objects, such as real numbers. In particular, we introduce the concept of a Las Vegas computable multi-valued function, which is a function that can be computed on a probabilistic Turing machine that receives a random binary sequence as auxiliary input. The machine can take advantage of this random sequence, but it always has to produce a correct result or to stop the computation after finite time if the random advice is not successful. With positive probability the random advice has to be successful. We characterize the class of Las Vegas computable functions in the Weihrauch lattice with the help of probabilistic choice principles and Weak Weak Kőnig's Lemma. Among other things we prove an Independent Choice Theorem that implies that Las Vegas computable functions are closed under composition. In a case study we show that Nash equilibria are Las Vegas computable, while zeros of continuous functions with sign changes cannot be computed on Las Vegas machines. However, we show that the latter problem admits randomized algorithms with weaker failure recognition mechanisms. The last-mentioned results can be interpreted as saying that the Intermediate Value Theorem is reducible to the jump of Weak Weak Kőnig's Lemma, but not to Weak Weak Kőnig's Lemma itself. These examples also demonstrate that Las Vegas computable functions form a proper superclass of the class of computable functions and a proper subclass of the class of non-deterministically computable functions. We also study the impact of specific lower bounds on the success probabilities, which leads to a strict hierarchy of classes. In particular, the classical technique of probability amplification fails for computations on infinite objects. We also investigate the dependency on the underlying probability space. (Comment: Information and Computation, accepted for publication)
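The Las Vegas regime described above can be illustrated, in the finite setting only, by the following sketch of a procedure that uses random advice and must either return a correct answer or explicitly recognize failure. Names and the toy problem are illustrative assumptions; the paper's point is precisely that on infinite objects this kind of retry-based amplification breaks down.

```python
import random

def las_vegas(solve_with_advice, draw_advice, max_tries=100):
    """Either returns a correct answer or reports failure explicitly."""
    for _ in range(max_tries):
        advice = draw_advice()                  # random binary/auxiliary advice
        ok, result = solve_with_advice(advice)  # must recognize its own failure
        if ok:
            return result                       # guaranteed correct when ok is True
    raise RuntimeError("random advice never succeeded")

if __name__ == "__main__":
    # Toy problem: find any index of a 1 in a bit vector, a 1/4 fraction of which are 1.
    bits = [1 if i % 4 == 0 else 0 for i in range(1000)]
    attempt = lambda i: (bits[i] == 1, i)   # succeed only if the guessed index holds a 1
    print(las_vegas(attempt, lambda: random.randrange(len(bits))))
```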

    A Casual Tour Around a Circuit Complexity Bound

    I will discuss the recent proof that the complexity class NEXP (nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial size. The proof will be described from the perspective of someone trying to discover it. (Comment: 21 pages, 2 figures. An earlier version appeared in SIGACT News, September 2011)

    Black Box White Arrow

    The present paper proposes a new and systematic approach to the so-called black box group methods in computational group theory. Instead of a single black box, we consider categories of black boxes and their morphisms. This makes new classes of black box problems accessible. For example, we can enrich black box groups by actions of outer automorphisms. As an example of an application of this technique, we construct Frobenius maps on black box groups of untwisted Lie type in odd characteristic (Section 6) and inverse-transpose automorphisms on black box groups encrypting (P)SL_n(F_q). One of the advantages of our approach is that it allows us to work in black box groups over finite fields of large characteristic. Another advantage is the explanatory power of our methods; as an example, we explain Kantor's and Kassabov's construction of an involution in black box groups encrypting SL_2(2^n). Due to the nature of our work we also have to discuss a few methodological issues of black box group theory. The paper is a further development of our text "Fifty shades of black" [arXiv:1308.2487], and repeats parts of it, but under weaker axioms for black box groups. (Comment: arXiv admin note: substantial text overlap with arXiv:1308.2487)
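As a rough illustration of the underlying notion (not of the paper's categorical machinery or its Frobenius-map construction), a black box group is accessed only through an oracle interface of the kind sketched below, with group elements given as opaque strings. The interface and function names are assumptions made for the sketch.

```python
from typing import Protocol

class BlackBoxGroup(Protocol):
    """Oracle interface: elements are opaque byte strings, and only these calls are allowed."""
    def multiply(self, x: bytes, y: bytes) -> bytes: ...
    def invert(self, x: bytes) -> bytes: ...
    def is_identity(self, x: bytes) -> bool: ...
    def random_element(self) -> bytes: ...

def order_of(G: BlackBoxGroup, x: bytes, bound: int) -> int:
    """Brute-force order computation using only oracle calls (exponential in general)."""
    y = x
    for k in range(1, bound + 1):
        if G.is_identity(y):   # y = x^k at this point
            return k
        y = G.multiply(y, x)
    raise ValueError("order exceeds bound")
```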

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
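To give a flavor of the hashing techniques mentioned above (a generic illustration on an explicit set, not the paper's construction for NP-witnesses), the sketch below estimates the logarithm of a set's size with pairwise-independent affine hash functions over GF(2): the largest output length m for which a random hash still sends some element to the all-zero string is roughly log2 of the set size. Names and the trial count are illustrative assumptions.

```python
import random

def random_affine_hash(n_bits, m_bits):
    """Pairwise-independent hash h(x) = A*x + b over GF(2), with ints as bit vectors."""
    A = [random.getrandbits(n_bits) for _ in range(m_bits)]
    b = random.getrandbits(m_bits)
    def h(x):
        out = 0
        for i, row in enumerate(A):
            out |= (bin(row & x).count("1") & 1) << i   # parity of <row, x>
        return out ^ b
    return h

def estimate_log_size(S, n_bits, trials=24):
    """Largest m for which, in at least half the trials, some element of S hashes to 0^m."""
    best = 0
    for m in range(1, n_bits + 1):
        hits = 0
        for _ in range(trials):
            h = random_affine_hash(n_bits, m)
            if any(h(x) == 0 for x in S):
                hits += 1
        if hits >= trials // 2:
            best = m
    return best

if __name__ == "__main__":
    S = set(random.sample(range(2 ** 16), 500))   # |S| = 500, so log2|S| is about 9
    print(estimate_log_size(S, 16))
```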