7,283 research outputs found

    Advances in quantum machine learning

    Here we discuss advances in the field of quantum machine learning. The document offers a hybrid discussion, both reviewing the field as it currently stands and suggesting directions for further research. We include both algorithms and experimental implementations in the discussion. The field's outlook is generally positive, showing significant promise. However, we believe there are appreciable hurdles to overcome before one can claim that it is a primary application of quantum computation. (Comment: 38 pages, 17 figures)

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication, alongside the increasing size of datasets, is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed. (Comment: v3 33 pages; typos corrected and references added)
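    On the data-uploading question raised above, the following is a minimal sketch of amplitude encoding, given as an illustration rather than a procedure taken from this review; the function name amplitude_encode and the use of a plain NumPy array as a stand-in for the quantum state vector are assumptions made for the example.

        import numpy as np

        def amplitude_encode(x):
            """Write a length-d classical vector (d a power of two) into the
            amplitudes of a log2(d)-qubit state; a NumPy array stands in for
            the state vector here."""
            x = np.asarray(x, dtype=float)
            k = int(np.log2(x.size))
            if 2 ** k != x.size:
                raise ValueError("vector length must be a power of two")
            # Normalise so the squared amplitudes sum to one, as a quantum state requires.
            return x / np.linalg.norm(x)

        # Four classical values become the amplitudes of a 2-qubit state.
        state = amplitude_encode([3.0, 1.0, 2.0, 1.0])
        print(state, np.sum(state ** 2))  # amplitudes and their total probability (1.0)

    Actually preparing such a state on hardware is itself nontrivial, which is precisely the kind of practical question the review addresses.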

    Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics

    Quantum computing is powerful because unitary operators describing the time-evolution of a quantum system have exponential size in terms of the number of qubits present in the system. We develop a new "singular value transformation" algorithm capable of harnessing this exponential advantage, which can apply polynomial transformations to the singular values of a block of a unitary, generalizing the optimal Hamiltonian simulation results of Low and Chuang. The proposed quantum circuits have a very simple structure, often give rise to optimal algorithms and have appealing constant factors, while usually using only a constant number of ancilla qubits. We show that singular value transformation leads to novel algorithms. We give an efficient solution to a certain "non-commutative" measurement problem and propose a new method for singular value estimation. We also show how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum. Finally, as a quantum machine learning application we show how to efficiently implement principal component regression. Singular value transformation is conceptually simple and efficient, and leads to a unified framework of quantum algorithms incorporating a variety of quantum speed-ups. We illustrate this by showing how it generalizes a number of prominent quantum algorithms, including: optimal Hamiltonian simulation, implementing the Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude amplification, robust oblivious amplitude amplification, fast QMA amplification, fast quantum OR lemma, certain quantum walk results and several quantum machine learning algorithms. In order to exploit the strengths of the presented method it is useful to know its limitations too; therefore we also prove a lower bound on the efficiency of singular value transformation, which often gives optimal bounds. (Comment: 67 pages, 1 figure)
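    For orientation, the central object can be sketched in generic notation (not quoted from the paper): a unitary U with a ancilla qubits block-encodes a matrix A when A sits in its top-left block, and singular value transformation applies a fixed-parity polynomial p to the singular values of that block,

        A = \left(\langle 0|^{\otimes a} \otimes I\right) U \left(|0\rangle^{\otimes a} \otimes I\right),
        \qquad
        A = \sum_i \sigma_i\, |w_i\rangle\!\langle v_i|
        \;\longmapsto\;
        p^{(SV)}(A) = \sum_i p(\sigma_i)\, |w_i\rangle\!\langle v_i|
        \quad \text{(for odd } p\text{)}.

    The transformed block is realised by alternating U and its inverse with single-qubit phase rotations, roughly deg(p) times, which is the source of the simple circuit structure and small ancilla count noted above.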

    Quantum Hopfield neural network

    Quantum computing holds the potential for significant advances in both the speed and the capacity of widely used machine learning techniques. Here we employ quantum algorithms for the Hopfield network, which can be used for pattern recognition, reconstruction, and optimization as a realization of a content-addressable memory system. We show that an exponentially large network can be stored in a polynomial number of quantum bits by encoding the network into the amplitudes of quantum states. By introducing a classical technique for operating the Hopfield network, we can leverage quantum algorithms to obtain a quantum computational complexity that is logarithmic in the dimension of the data. We also present an application of our method as a genetic sequence recognizer. (Comment: 13 pages, 3 figures, final version)
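    To make the content-addressable-memory picture concrete, here is a purely classical sketch, not the paper's quantum algorithm: Hebbian storage of binary patterns, synchronous recall from a noisy probe, and the normalisation that would let a d-dimensional pattern occupy the amplitudes of log2(d) qubits. The function names and the toy patterns are illustrative.

        import numpy as np

        def hebbian_weights(patterns):
            """Hebbian rule W = (1/d) * sum_mu x_mu x_mu^T with zero diagonal,
            for patterns with +/-1 entries."""
            patterns = np.asarray(patterns, dtype=float)
            d = patterns.shape[1]
            W = patterns.T @ patterns / d
            np.fill_diagonal(W, 0.0)
            return W

        def recall(W, probe, steps=5):
            """Synchronous update x <- sign(W x): a noisy probe relaxes toward
            a stored pattern, i.e. content-addressable memory."""
            x = np.asarray(probe, dtype=float)
            for _ in range(steps):
                x = np.sign(W @ x)
            return x

        p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
        p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
        W = hebbian_weights(np.stack([p1, p2]))
        probe = p1.copy(); probe[-1] *= -1             # p1 with one bit flipped
        print(recall(W, probe))                        # recovers p1
        print(p1 / np.linalg.norm(p1))                 # amplitude-encoded p1 (3 qubits)

    The paper's point is that the exponentially many amplitudes of such an encoded state can be processed with quantum routines whose cost is only logarithmic in the dimension of the data, which this classical sketch cannot match.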

    Time Evolution and Deterministic Optimisation of Correlator Product States

    We study a restricted class of correlator product states (CPS) for a spin-half chain in which each spin is contained in just two overlapping plaquettes. This class is also a restriction upon matrix product states (MPS) with local dimension 2^n (n being the size of the overlapping regions of plaquettes) equal to the bond dimension. We investigate the trade-off between gains in efficiency due to this restriction against losses in fidelity. The time-dependent variational principle formulated for these states is numerically very stable. Moreover, it shows significant gains in efficiency compared to the naively related matrix product states: the evolution or optimisation scales as 2^(3n) for the correlator product states versus 2^(4n) for the unrestricted matrix product state. However, much of this advantage is offset by a significant reduction in fidelity. Correlator product states break the local Hilbert space symmetry by the explicit selection of a local basis. We investigate this dependence in detail and formulate the broad principles under which correlator product states may be a useful tool. In particular, we find that scaling with overlap/bond order may be more stable with correlator product states, allowing a more efficient extraction of critical exponents; we present an example in which the use of correlator product states is several orders of magnitude quicker than matrix product states. (Comment: 19 pages, 14 figures)
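    As a reminder of the ansatz being restricted here, a generic correlator product state attaches an amplitude tensor C^P to each plaquette P and multiplies the entries selected by a given spin configuration. The expression below is the standard form, given as a sketch rather than quoted from the paper, with i_1(P), ..., i_k(P) labelling the spins contained in plaquette P:

        |\Psi\rangle \;=\; \sum_{s_1,\dots,s_N} \Bigg( \prod_{P} C^{P}_{\,s_{i_1(P)} \cdots \, s_{i_k(P)}} \Bigg)\, |s_1 s_2 \cdots s_N\rangle .

    Loosely speaking, grouping the n spins shared by neighbouring plaquettes into a single index of dimension 2^n is one way to see the stated relation to a matrix product state whose bond dimension equals this local dimension.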