    An in-principle super-polynomial quantum advantage for approximating combinatorial optimization problems via computational learning theory

    It is unclear to what extent quantum algorithms can outperform classical algorithms for problems of combinatorial optimization. In this work, by resorting to computational learning theory and cryptographic notions, we give a fully constructive proof that quantum computers feature a super-polynomial advantage over classical computers in approximating combinatorial optimization problems. Specifically, building on seminal work by Kearns and Valiant, we provide special instances that are hard for classical computers to approximate up to polynomial factors. Simultaneously, we give a quantum algorithm that can efficiently approximate the optimal solution within a polynomial factor. The quantum advantage in this work is ultimately borrowed from Shor's quantum algorithm for factoring. We introduce an explicit and comprehensive end-to-end construction for the advantage-bearing instances. For these instances, quantum computers have, in principle, the power to approximate combinatorial optimization solutions beyond the reach of efficient classical algorithms.
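
    To pin down the claim (an editorial gloss; the symbols f, OPT and α are not used in the abstract): for a maximization instance on inputs of size n with objective f and optimum OPT, an algorithm approximates the optimum within a factor α(n) if it outputs a feasible solution x with
        \[ f(x) \;\ge\; \frac{\mathrm{OPT}}{\alpha(n)}, \]
    and "within a polynomial factor" means α(n) ≤ poly(n). The separation stated above is that quantum polynomial-time algorithms achieve this for the constructed instances while, under the cryptographic assumptions used, no classical polynomial-time algorithm does.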

    On the Learnability of Shuffle Ideals

    PAC learning of unrestricted regular languages has long been known to be a difficult problem. The class of shuffle ideals is a very restricted subclass of regular languages, where the shuffle ideal generated by a string u is the collection of all strings containing u as a subsequence. This fundamental language family is of theoretical interest in its own right and provides the building blocks for other important language families. Despite its apparent simplicity, the class of shuffle ideals appears quite difficult to learn. In particular, just as for unrestricted regular languages, the class is not properly PAC learnable in polynomial time if RP ≠ NP, and PAC learning the class improperly in polynomial time would imply polynomial-time algorithms for certain fundamental problems in cryptography. In the positive direction, we give an efficient algorithm for properly learning shuffle ideals in the statistical query (and therefore also PAC) model under the uniform distribution.
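
    Membership in a shuffle ideal is simply a subsequence check, which makes the hardness-of-learning results all the more striking; a minimal sketch of the membership test (function names are illustrative, not from the paper):

        def in_shuffle_ideal(u: str, w: str) -> bool:
            """True iff w belongs to the shuffle ideal generated by u,
            i.e. u occurs in w as a (not necessarily contiguous) subsequence."""
            it = iter(w)
            return all(ch in it for ch in u)

        # The ideal generated by "ab" contains "acb" but not "ba".
        assert in_shuffle_ideal("ab", "acb")
        assert not in_shuffle_ideal("ab", "ba")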

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from NP-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning DNFs is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of ω(1) halfspaces is hard.
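
    For reference, a standard way to state item 2 (the notation below is not taken from the paper): an algorithm agnostically learns halfspaces with approximation ratio α if, given samples from an arbitrary distribution D over labelled examples, it outputs a hypothesis h (not necessarily a halfspace, since learning is improper) satisfying
        \[ \Pr_{(x,y)\sim D}\bigl[h(x)\neq y\bigr] \;\le\; \alpha \cdot \min_{c \in \mathcal{H}} \Pr_{(x,y)\sim D}\bigl[c(x)\neq y\bigr] \;+\; \varepsilon, \]
    where \mathcal{H} is the class of halfspaces; the result above says that achieving any constant α efficiently is hard under the stated assumption.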

    An in-principle super-polynomial quantum advantage for approximating combinatorial optimization problems

    Combinatorial optimization - a field of research addressing problems that feature strongly in a wealth of scientific and industrial contexts - has been identified as one of the core potential fields of applicability of quantum computers. It is still unclear, however, to what extent quantum algorithms can actually outperform classical algorithms for this type of problem. In this work, by resorting to computational learning theory and cryptographic notions, we prove that quantum computers feature an in-principle super-polynomial advantage over classical computers in approximating solutions to combinatorial optimization problems. Specifically, building on seminal work by Kearns and Valiant and introducing a new reduction, we identify special types of problems that are hard for classical computers to approximate up to polynomial factors. At the same time, we give a quantum algorithm that can efficiently approximate the optimal solution within a polynomial factor. The core of the quantum advantage discovered in this work is ultimately borrowed from Shor's quantum algorithm for factoring. Concretely, we prove a super-polynomial advantage for approximating special instances of the so-called integer programming problem. In doing so, we provide an explicit end-to-end construction for advantage-bearing instances. This result shows that quantum devices have, in principle, the power to approximate combinatorial optimization solutions beyond the reach of efficient classical algorithms. Our results also give clear guidance on how to construct such advantage-bearing problem instances.
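
    For reference, the host problem named in the abstract is integer programming in its generic form (the notation here is assumed, not spelled out in the abstract):
        \[ \max_{x \in \mathbb{Z}^n} \; c^{\mathsf{T}} x \quad \text{subject to} \quad A x \le b, \]
    and the claimed advantage concerns approximating the optimal value of specially constructed instances (A, b, c) within a polynomial factor: classically hard under cryptographic assumptions, yet quantumly feasible via a reduction to factoring.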

    Minimizing nfa's and regular expressions

    We show inapproximability results concerning minimization of nondeterministic finite automata (nfa's) as well as of regular expressions relative to given nfa's, regular expressions or deterministic finite automata (dfa's). We show that it is impossible to efficiently minimize a given nfa or regular expression with n states, transitions, or symbols, respectively, within the factor o(n), unless P=PSPACE. For the unary case, we show that for any δ>0 it is impossible to efficiently construct an approximately minimal nfa or regular expression within the factor n^{1-δ}, unless P=NP. Our inapproximability results for a given dfa with n states are based on cryptographic assumptions, and we show that any efficient algorithm will have an approximation factor of at least n/poly(log n). Our setup also allows us to analyze the minimum consistent dfa problem.
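
    Read "approximation factor" here in the usual sense for minimization (an editorial restatement; size is measured in states, transitions or symbols depending on the model):
        \[ \rho(n) \;=\; \frac{\text{size of the automaton or expression produced}}{\text{size of a minimal equivalent one}}, \]
    so the results say that no efficient algorithm guarantees ρ(n) = o(n) when the input is an nfa or regular expression (unless P=PSPACE), or n^{1-δ} in the unary case (unless P=NP), and that from a given dfa the achievable factor is at least n/poly(log n) under the cryptographic assumptions.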

    On Low-End Obfuscation and Learning

    Most recent works on cryptographic obfuscation focus on the high-end regime of obfuscating general circuits while guaranteeing computational indistinguishability between functionally equivalent circuits. Motivated by the goals of simplicity and efficiency, we initiate a systematic study of "low-end" obfuscation, focusing on simpler representation models and information-theoretic notions of security. We obtain the following results. - Positive results via "white-box" learning. We present a general technique for obtaining perfect indistinguishability obfuscation from exact learning algorithms that are given restricted access to the representation of the input function. We demonstrate the usefulness of this approach by obtaining simple obfuscation for decision trees and multilinear read-k arithmetic formulas. - Negative results via PAC learning. A proper obfuscation scheme obfuscates programs from a class C by programs from the same class. Assuming the existence of one-way functions, we show that there is no proper indistinguishability obfuscation scheme for k-CNF formulas for any constant k ≥ 3; in fact, even obfuscating 3-CNF by k-CNF is impossible. This result applies even to computationally secure obfuscation, and makes an unexpected use of PAC learning in the context of negative results for obfuscation. - Separations. We study the relations between different information-theoretic notions of indistinguishability obfuscation, giving cryptographic evidence for separations between them.
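
    The first item rests on a simple observation: if every program in a class can be mapped to a canonical representation of the function it computes, then functionally equivalent programs are sent to identical objects, which is exactly perfect indistinguishability. A toy sketch of that observation for small decision trees follows; it only illustrates the canonical-form idea (via an exponential-size truth table) and is not the paper's construction, which instead uses exact learning with restricted access to the representation:

        from itertools import product

        # A decision tree over boolean variables x[0..n-1] is either
        # ("leaf", value) or ("node", var_index, low_subtree, high_subtree).

        def evaluate(tree, x):
            """Evaluate a decision tree on an assignment x (a tuple of 0/1 values)."""
            if tree[0] == "leaf":
                return tree[1]
            _, var, low, high = tree
            return evaluate(high if x[var] else low, x)

        def canonical_form(tree, n):
            """Map a tree to the truth table of the function it computes.
            Functionally equivalent trees get identical images, so the map is a
            (grossly inefficient) perfectly indistinguishable obfuscation."""
            return tuple(evaluate(tree, x) for x in product((0, 1), repeat=n))

        # Two syntactically different trees that both compute x0 AND x1:
        t1 = ("node", 0, ("leaf", 0), ("node", 1, ("leaf", 0), ("leaf", 1)))
        t2 = ("node", 1, ("leaf", 0), ("node", 0, ("leaf", 0), ("leaf", 1)))
        assert canonical_form(t1, 2) == canonical_form(t2, 2)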