
    Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation

    We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for every arithmetic circuit $C(x_1,\ldots,x_n)$ of size $s$ and degree $d$ over a field $\mathbb{F}$, and any inputs $a_1,\ldots,a_K \in \mathbb{F}^n$,
    • the Prover sends the Verifier the values $C(a_1), \ldots, C(a_K) \in \mathbb{F}$ and a proof of $\tilde{O}(K \cdot d)$ length, and
    • the Verifier tosses $\mathrm{poly}(\log(dK|\mathbb{F}|/\varepsilon))$ coins and can check the proof in about $\tilde{O}(K \cdot (n + d) + s)$ time, with probability of error less than $\varepsilon$.
    For small degree $d$, this "Merlin-Arthur" proof system (a.k.a. MA-proof system) runs in nearly-linear time and has many applications. For example, we obtain MA-proof systems that run in $c^n$ time (for various $c < 2$) for the Permanent, #Circuit-SAT for all sublinear-depth circuits, counting Hamiltonian cycles, and infeasibility of 0-1 linear programs. In general, the value of any polynomial in Valiant's class VP can be certified faster than "exhaustive summation" over all possible assignments. These results strongly refute a Merlin-Arthur Strong ETH and Arthur-Merlin Strong ETH posed by Russell Impagliazzo and others. We also give a three-round (AMA) proof system for quantified Boolean formulas running in $2^{2n/3+o(n)}$ time, nearly-linear time MA-proof systems for counting orthogonal vectors in a collection and finding Closest Pairs in the Hamming metric, and an MA-proof system running in $n^{k/2+O(1)}$ time for counting $k$-cliques in graphs. We point to some potential future directions for refuting the Nondeterministic Strong ETH.
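    The abstract does not spell out the protocol; a minimal sketch of one standard idea behind such batch-evaluation proofs (restricting the circuit's polynomial to a low-degree curve through the $K$ query points, then spot-checking the prover's univariate polynomial at a random point) could look like the following. The field modulus, the toy circuit C, and the query points are made-up illustrative choices, not the paper's construction or parameters.

    ```python
    # Illustrative sketch only: batch evaluation via restriction to a curve.
    # The prover sends q(t) = C(l(t)) where l is a degree-(K-1) curve with
    # l(i) = a_i; the verifier reads the K claimed values off q and checks q
    # against a direct evaluation of C at one random point of the curve.
    import random

    P = 2**31 - 1  # toy prime field F_p (assumed parameter)

    def poly_add(f, g):
        n = max(len(f), len(g))
        return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % P
                for i in range(n)]

    def poly_mul(f, g):
        out = [0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                out[i + j] = (out[i + j] + fi * gj) % P
        return out

    def poly_eval(f, t):
        acc = 0
        for c in reversed(f):  # coefficients stored lowest-degree first
            acc = (acc * t + c) % P
        return acc

    def interpolate(pairs):
        """Lagrange interpolation: the unique polynomial through (x, y) pairs."""
        result = [0]
        for i, (xi, yi) in enumerate(pairs):
            term = [yi]
            for j, (xj, _) in enumerate(pairs):
                if i == j:
                    continue
                inv = pow((xi - xj) % P, P - 2, P)
                term = poly_mul(term, [(-xj * inv) % P, inv])  # (t - xj)/(xi - xj)
            result = poly_add(result, term)
        return result

    # Toy "circuit": C(x1, x2, x3) = x1*x2 + x3^2, degree d = 2 (assumption)
    def C_eval(x):
        return (x[0] * x[1] + x[2] * x[2]) % P

    def C_compose(ls):
        """Compose C symbolically with the curve's coordinate polynomials."""
        return poly_add(poly_mul(ls[0], ls[1]), poly_mul(ls[2], ls[2]))

    # K query points a_1..a_K in F^3 (made up)
    points = [[3, 1, 4], [1, 5, 9], [2, 6, 5], [3, 5, 8]]
    K, n = len(points), 3

    # Curve l(t) with l(i) = points[i]: one interpolated polynomial per coordinate
    curve = [interpolate([(i, points[i][c]) for i in range(K)]) for c in range(n)]

    # Prover: send q(t) = C(l(t)), a univariate polynomial of degree <= d*(K-1)
    q = C_compose(curve)

    # Verifier: read off the claimed values q(0..K-1), then spot-check at random t
    claimed = [poly_eval(q, i) for i in range(K)]
    r = random.randrange(P)
    assert poly_eval(q, r) == C_eval([poly_eval(lc, r) for lc in curve])
    print(claimed, [C_eval(p) for p in points])  # claimed values match true values
    ```

    A cheating prover who sends any q differing from C(l(t)) can pass the random spot-check with probability at most d(K-1)/|F|, which is the polynomial-identity-testing soundness this kind of protocol relies on.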

    On quantum backpropagation, information reuse, and cheating measurement collapse

    The success of modern deep learning hinges on the ability to train neural networks at scale. Through clever reuse of intermediate information, backpropagation facilitates training through gradient computation at a total cost roughly proportional to running the function, rather than incurring an additional factor proportional to the number of parameters, which can now be in the trillions. Naively, one expects that quantum measurement collapse entirely rules out the reuse of quantum information as in backpropagation. But recent developments in shadow tomography, which assumes access to multiple copies of a quantum state, have challenged that notion. Here, we investigate whether parameterized quantum models can train as efficiently as classical neural networks. We show that achieving backpropagation scaling is impossible without access to multiple copies of a state. With this added ability, we introduce an algorithm with foundations in shadow tomography that matches backpropagation scaling in quantum resources while reducing classical auxiliary computational costs to open problems in shadow tomography. These results highlight the nuance of reusing quantum information for practical purposes and clarify the unique difficulties in training large quantum models, which could alter the course of quantum machine learning.
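    To make the scaling claim concrete: reverse-mode differentiation reuses the intermediate activations of a single forward pass, while per-parameter schemes (finite differences, or the parameter-shift rule typical for quantum circuits) pay extra model runs for every parameter. Below is a minimal classical sketch of that contrast, with a made-up toy model and run counter; it is not the paper's quantum algorithm.

    ```python
    # Toy comparison of gradient costs: O(m) model runs for a per-parameter
    # scheme versus a single forward/backward sweep for backpropagation.
    import math

    def model(params, x):
        """Toy 'deep' model: a chain of scalar rotations, one parameter per layer."""
        for theta in params:
            x = math.cos(theta) * x
        return x

    def grad_per_parameter(params, x, eps=1e-6):
        """Naive gradient: perturb each parameter separately -> 2*m model runs."""
        runs, grads = 0, []
        for i in range(len(params)):
            up = params[:i] + [params[i] + eps] + params[i + 1:]
            dn = params[:i] + [params[i] - eps] + params[i + 1:]
            grads.append((model(up, x) - model(dn, x)) / (2 * eps))
            runs += 2
        return grads, runs

    def grad_backprop(params, x):
        """Reverse mode: one forward pass storing activations, one backward sweep."""
        acts = [x]
        for theta in params:                      # forward pass (information reuse)
            acts.append(math.cos(theta) * acts[-1])
        grads = [0.0] * len(params)
        upstream = 1.0                            # d(output)/d(output)
        for i in reversed(range(len(params))):    # backward pass
            grads[i] = upstream * (-math.sin(params[i])) * acts[i]
            upstream *= math.cos(params[i])
        return grads, 1  # cost ~ a constant multiple of one model run

    params = [k / 1000 for k in range(100)]       # m = 100 parameters (made up)
    g1, runs1 = grad_per_parameter(params, 2.0)
    g2, runs2 = grad_backprop(params, 2.0)
    print(runs1, runs2)                           # 200 model runs vs ~1
    print(max(abs(a - b) for a, b in zip(g1, g2)))  # gradients agree up to eps
    ```

    The abstract's point is that a quantum analogue of the second routine cannot exist for a single copy of the state; with multiple copies, shadow-tomography-style measurements can recover comparable scaling in quantum resources.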