
    Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis

    Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer complete within a few seconds, and data exploration is severely hampered. This article describes a novel computation paradigm, Progressive Computation for Data Analysis, or more concisely Progressive Analytics, which provides a low-latency guarantee at the programming-language level by performing computations in a progressive fashion. Moving progressive computation to the language level relieves the programmer of an exploratory data analysis system from implementing the whole analytics pipeline progressively from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis, and explains the requirements it implies through examples.
    Comment: 10 pages
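    The core loop of such a paradigm can be illustrated with a short sketch: a computation consumes data in chunks and emits a progressively refined partial result whenever a latency budget expires. This is an illustration only; `progressive_mean`, its parameters, and its generator interface are hypothetical and do not reflect ProgressiVis's actual API or scheduler.

```python
import time
import numpy as np

def progressive_mean(data, chunk_size=100_000, budget_s=0.05):
    """Progressively refine an estimate of the mean of `data`.

    Illustrative sketch of progressive computation only; ProgressiVis's
    actual API differs. Yields a partial result whenever the latency
    budget expires, then a final exact result.
    """
    total, count = 0.0, 0
    deadline = time.monotonic() + budget_s
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        total += chunk.sum()
        count += len(chunk)
        if time.monotonic() >= deadline:   # latency budget exhausted
            yield total / count            # emit a partial result
            deadline = time.monotonic() + budget_s
    yield total / count                    # final result over all data

data = np.random.rand(10_000_000)
for estimate in progressive_mean(data):
    print(f"mean so far: {estimate:.6f}")  # each print is an early answer
```

    Each yielded estimate is usable immediately, so the analyst's feedback loop stays under the latency budget even when the full computation takes much longer.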

    Learning with Errors is easy with quantum samples

    Learning with Errors is one of the fundamental problems in computational learning theory and has in recent years become the cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the Learning with Errors problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm shows a possible way of attacking LWE-based encryption by using classical samples to approximate the quantum sample state, since then using our quantum learning algorithm would solve LWE.
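    For readers unfamiliar with the setup, the sketch below generates classical LWE samples (a, <a, s> + e mod q) with rounded-Gaussian error, the kind of distribution used in cryptography; the quantum learner studied in the paper receives such samples in superposition instead. The parameter values here are toy choices for illustration, far too small for any real security.

```python
import numpy as np

# Toy LWE sample generator: emits pairs (a, <a, s> + e mod q).
# Parameters are illustrative placeholders, not secure choices.
n, q, sigma = 8, 97, 1.0
rng = np.random.default_rng(0)
s = rng.integers(0, q, size=n)              # secret vector

def lwe_sample():
    a = rng.integers(0, q, size=n)          # uniform public vector
    e = int(np.rint(rng.normal(0, sigma)))  # small rounded-Gaussian error
    b = (int(a @ s) + e) % q                # noisy inner product
    return a, b

# Classically, recovering s from such samples is believed hard; the
# paper's point is that superposition access to them makes it easy.
samples = [lwe_sample() for _ in range(5)]
```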

    A Nearly Optimal Lower Bound on the Approximate Degree of AC^0

    The approximate degree of a Boolean function $f \colon \{-1, 1\}^n \rightarrow \{-1, 1\}$ is the least degree of a real polynomial that approximates $f$ pointwise to error at most $1/3$. We introduce a generic method for increasing the approximate degree of a given function, while preserving its computability by constant-depth circuits. Specifically, we show how to transform any Boolean function $f$ with approximate degree $d$ into a function $F$ on $O(n \cdot \operatorname{polylog}(n))$ variables with approximate degree at least $D = \Omega(n^{1/3} \cdot d^{2/3})$. In particular, if $d = n^{1-\Omega(1)}$, then $D$ is polynomially larger than $d$. Moreover, if $f$ is computed by a polynomial-size Boolean circuit of constant depth, then so is $F$. By recursively applying our transformation, for any constant $\delta > 0$ we exhibit an AC^0 function of approximate degree $\Omega(n^{1-\delta})$. This improves over the best previous lower bound of $\Omega(n^{2/3})$ due to Aaronson and Shi (J. ACM 2004), and nearly matches the trivial upper bound of $n$ that holds for any function. Our lower bounds also apply to (quasipolynomial-size) DNFs of polylogarithmic width. We describe several applications of these results. We give:
    * For any constant $\delta > 0$, an $\Omega(n^{1-\delta})$ lower bound on the quantum communication complexity of a function in AC^0.
    * A Boolean function $f$ with approximate degree at least $C(f)^{2-o(1)}$, where $C(f)$ is the certificate complexity of $f$. This separation is optimal up to the $o(1)$ term in the exponent.
    * Improved secret sharing schemes with reconstruction procedures in AC^0.
    Comment: 40 pages, 1 figure
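    To see why recursion drives the exponent all the way toward 1, here is a back-of-the-envelope calculation (an added illustration, not taken from the paper): ignore the $\operatorname{polylog}(n)$ blow-up in the variable count and write the approximate degree after $k$ applications as $n^{a_k}$.

```latex
% Heuristic exponent bookkeeping for the recursive amplification,
% ignoring the polylog(n) blow-up in the number of variables.
% One application maps degree n^{a_k} to \Omega(n^{1/3 + 2 a_k / 3}):
\[
  a_{k+1} = \frac{1}{3} + \frac{2}{3}\,a_k
  \qquad\Longrightarrow\qquad
  1 - a_k = \Bigl(\frac{2}{3}\Bigr)^{k}\,(1 - a_0),
\]
% so each round shrinks the gap to exponent 1 by a factor 2/3, and
% k = O(\log(1/\delta)) depth-preserving applications push the exponent
% above 1 - \delta, matching the stated \Omega(n^{1-\delta}) bound.
```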

    Continuous-variable quantum neural networks

    We introduce a general method for building neural networks on quantum computers. The quantum neural network is a variational quantum circuit built in the continuous-variable (CV) architecture, which encodes quantum information in continuous degrees of freedom such as the amplitudes of the electromagnetic field. This circuit contains a layered structure of continuously parameterized gates which is universal for CV quantum computation. Affine transformations and nonlinear activation functions, two key elements in neural networks, are enacted in the quantum network using Gaussian and non-Gaussian gates, respectively. The non-Gaussian gates provide both the nonlinearity and the universality of the model. Due to the structure of the CV model, the CV quantum neural network can encode highly nonlinear transformations while remaining completely unitary. We show how a classical network can be embedded into the quantum formalism and propose quantum versions of various specialized models, such as convolutional, recurrent, and residual networks. Finally, we present numerous modeling experiments built with the Strawberry Fields software library. These experiments, including a classifier for fraud detection, a network which generates Tetris images, and a hybrid classical-quantum autoencoder, demonstrate the capability and adaptability of CV quantum neural networks.
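    As a concrete illustration of one such layer, the sketch below uses the Strawberry Fields library mentioned in the abstract: Gaussian gates (beamsplitters, squeezing, displacement) implement the affine part, and a non-Gaussian Kerr gate supplies the nonlinearity. The gate parameter values are arbitrary placeholders, and this is a minimal single-layer sketch under those assumptions, not one of the paper's experiments.

```python
import strawberryfields as sf
from strawberryfields.ops import BSgate, Dgate, Kgate, Sgate

# One CV neural-network layer on two modes: Gaussian gates play the
# role of the affine transformation, the Kerr gate the nonlinearity.
# All parameter values below are arbitrary placeholders.
prog = sf.Program(2)
with prog.context as q:
    BSgate(0.4, 0.1) | (q[0], q[1])   # interferometer (rotation part)
    Sgate(0.1) | q[0]                 # squeezing (scaling)
    Sgate(0.2) | q[1]
    BSgate(0.6, 0.2) | (q[0], q[1])   # second interferometer
    Dgate(0.3) | q[0]                 # displacement (bias term)
    Dgate(0.1) | q[1]
    Kgate(0.05) | q[0]                # non-Gaussian Kerr "activation"
    Kgate(0.05) | q[1]

# Simulate in the Fock backend (truncated photon-number basis).
eng = sf.Engine("fock", backend_options={"cutoff_dim": 6})
state = eng.run(prog).state
print(state.mean_photon(0))           # (mean, variance) of photon number
```

    In a training loop, the gate arguments would become trainable parameters and several such layers would be stacked, mirroring the layered structure described in the abstract.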