
    Expressibility-Enhancing Strategies for Quantum Neural Networks

    Quantum neural networks (QNNs), represented by parameterized quantum circuits, can be trained in the paradigm of supervised learning to map input data to predictions. Much work has focused on theoretically analyzing the expressive power of QNNs. However, in almost all of the literature, QNNs' expressive power is validated numerically using only simple univariate functions. We surprisingly discover that state-of-the-art QNNs with strong expressive power can perform poorly in approximating even a simple sinusoidal function. To fill this gap, we propose four expressibility-enhancing strategies for QNNs: sinusoidal-friendly embedding, redundant measurement, post-measurement functions, and random training data. We analyze the effectiveness of these strategies via mathematical analysis and/or numerical studies, including learning complex sinusoidal-based functions. Our results from comparative experiments validate that the four strategies can significantly increase the QNNs' performance in approximating complex multivariable functions and reduce the quantum circuit depth and number of qubits required. Comment: 16 pages, 11 figures.
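The gap the abstract highlights is easy to reproduce classically: a parameterized single-qubit circuit with rotation encoding outputs a truncated Fourier series of its input, so its ability to fit sinusoids depends heavily on the embedding. Below is a minimal stdlib sketch of such a model, under our own simplified circuit layout (alternating RY/RZ layers), not the authors' QNNs:

```python
import cmath
import math

def ry(t):
    """Single-qubit Y-rotation matrix."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def rz(t):
    """Single-qubit Z-rotation matrix, used here to encode the input x."""
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def qnn_output(x, thetas):
    """Alternate trainable RY layers with RZ(x) encodings on |0>, return <Z>."""
    u = ry(thetas[0])
    for th in thetas[1:]:
        u = matmul(ry(th), matmul(rz(x), u))
    psi = [u[0][0], u[1][0]]          # circuit unitary applied to |0>
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2
```

With all angles zero the RY layers are identities and the output is constant in x (the RZ encodings only add a global phase to |0>), illustrating how a poorly matched embedding can make even a sinusoid unreachable.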

    Efficient Online Quantum Generative Adversarial Learning Algorithms with Applications

    The exploration of quantum algorithms that possess quantum advantages is a central topic in quantum computation and quantum information processing. One potential candidate in this area is quantum generative adversarial learning (QuGAL), which conceptually offers exponential advantages over classical adversarial networks. However, the corresponding learning algorithm has remained obscure. In this paper, we propose the first quantum generative adversarial learning algorithm, the quantum multiplicative matrix weight algorithm (QMMW), which enables the efficient processing of fundamental tasks. The computational complexity of QMMW is polynomial in the number of training rounds and logarithmic in the input size. The core concept of the proposed algorithm combines QuGAL with online learning. We exploit the implementation of QuGAL with parameterized quantum circuits, and numerical experiments on the task of entanglement testing for pure states are provided to support our claims.
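For intuition about the "multiplicative matrix weight" ingredient: in the commuting (diagonal) special case, the matrix multiplicative weights method reduces to the classical multiplicative weights update over a probability vector. A stdlib sketch of that special case (an illustration of the classical ancestor, not the quantum algorithm):

```python
import math

def mmw_diagonal(loss_vectors, eta=0.1):
    """Multiplicative-weights update in the commuting (diagonal) case:
    w_i <- w_i * exp(-eta * loss_i), renormalized to a distribution
    after each round of online feedback."""
    n = len(loss_vectors[0])
    w = [1.0] * n
    for loss in loss_vectors:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w
```

In the full (non-commuting) setting the weights become a density matrix updated via a matrix exponential, which is where the quantum formulation enters.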

    The effect of the processing and measurement operators on the expressive power of quantum models

    There is increasing interest in Quantum Machine Learning (QML) models, how they work, and for which applications they could be useful. There have been many different proposals on how classical data can be encoded and on which circuit ansätze and measurement operators should be used to process the encoded data and measure the output state of an ansatz. The choice of these operators plays a determining role in the expressive power of the QML model. In this work we investigate how certain changes in the circuit structure change this expressivity. We introduce both numerical and analytical tools to explore the effect that these operators have on the overall performance of the QML model. These tools are based on previous work on the teacher-student scheme, the partial Fourier series, and the averaged operator size. We focus our analysis on simple QML models with two and three qubits and observe that increasing the number of parameterized and entangling gates leads to a more expressive model for certain circuit structures. The qubit on which the measurement is performed also affects the type of functions that QML models can learn. This work sketches the determining role that the processing and measurement operators have on the expressive power of simple quantum circuits.
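The partial-Fourier-series picture invoked here can be checked numerically: with a single RZ(x) encoding, a single-qubit model's output can only contain frequencies {-1, 0, 1}, regardless of the trainable parameters or the (Pauli-Z) measurement used. A stdlib sketch with our own minimal model, purely illustrative:

```python
import cmath
import math

def model(x, a, b):
    """f(x) = <psi|Z|psi> with |psi> = RY(b) RZ(x) RY(a) |0>."""
    v = [math.cos(a / 2) * cmath.exp(-1j * x / 2),
         math.sin(a / 2) * cmath.exp(1j * x / 2)]
    cb, sb = math.cos(b / 2), math.sin(b / 2)
    psi = [cb * v[0] - sb * v[1], sb * v[0] + cb * v[1]]
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

def fourier_coeff(f, n, samples=8):
    """n-th Fourier coefficient of a 2*pi-periodic f via a discrete transform."""
    return sum(f(2 * math.pi * k / samples)
               * cmath.exp(-1j * n * 2 * math.pi * k / samples)
               for k in range(samples)) / samples
```

For this model f(x) = cos(a)cos(b) - sin(a)sin(b)cos(x), so the frequency-2 coefficient vanishes identically; richer spectra require more encoding layers, and which observable or qubit is measured shapes which coefficients are reachable.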

    Hierarchical quantum classifiers

    Quantum circuits with hierarchical structure have been used to perform binary classification of classical data encoded in a quantum state. We demonstrate that more expressive circuits in the same family achieve better accuracy and can be used to classify highly entangled quantum states, for which there is no known efficient classical method. We compare performance for several different parameterizations on two classical machine learning datasets, Iris and MNIST, and on a synthetic dataset of quantum states. Finally, we demonstrate that performance is robust to noise and deploy an Iris-dataset classifier on the ibmqx4 quantum computer.
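The basic pattern of such classifiers, encode features as rotation angles, apply trainable rotations, entangle, and read out a single qubit, can be simulated directly for two qubits. A stdlib sketch under our own gate choices (not the specific circuits benchmarked in the paper):

```python
import math

def apply_ry(state, q, theta, n=2):
    """Apply RY(theta) to qubit q of an n-qubit real state vector."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    out = state[:]
    for i in range(len(state)):
        if not (i >> (n - 1 - q)) & 1:       # qubit q is |0> at index i
            j = i | (1 << (n - 1 - q))
            out[i] = c * state[i] - s * state[j]
            out[j] = s * state[i] + c * state[j]
    return out

def apply_cnot(state, ctrl, tgt, n=2):
    """Swap amplitude pairs where the control qubit is |1>."""
    out = state[:]
    for i in range(len(state)):
        if (i >> (n - 1 - ctrl)) & 1:
            j = i ^ (1 << (n - 1 - tgt))
            if i < j:
                out[i], out[j] = state[j], state[i]
    return out

def classify(x, thetas):
    """Tree-like two-qubit classifier: encode, rotate, entangle, read qubit 0."""
    s = [1.0, 0.0, 0.0, 0.0]
    s = apply_ry(s, 0, x[0])
    s = apply_ry(s, 1, x[1])                 # angle encoding of the features
    s = apply_ry(s, 0, thetas[0])
    s = apply_ry(s, 1, thetas[1])
    s = apply_cnot(s, 1, 0)
    s = apply_ry(s, 0, thetas[2])
    z0 = sum((1 if not (i >> 1) & 1 else -1) * a * a for i, a in enumerate(s))
    return 1 if z0 >= 0 else -1              # sign of <Z> on the readout qubit
```

Training would tune `thetas` to separate the two classes; the hierarchical circuits in the paper extend this funnel-shaped readout to more qubits.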

    Are Quantum Circuits Better than Neural Networks at Learning Multi-dimensional Discrete Data? An Investigation into Practical Quantum Circuit Generative Models

    Are multi-layer parameterized quantum circuits (MPQCs) more expressive than classical neural networks (NNs)? How, why, and in what respects? In this work, we survey and develop intuitive insights into the expressive power of MPQCs in relation to classical NNs. We organize available sources into a systematic proof of why MPQCs are able to generate probability distributions that cannot be efficiently simulated classically. We first show that instantaneous quantum polynomial circuits (IQPCs) are unlikely to be simulated classically to within a multiplicative error, and then show that MPQCs efficiently generalize IQPCs. We support the surveyed claims with numerical simulations: with the MPQC as the core architecture, we build different versions of quantum generative models to learn a given multi-dimensional, multi-modal discrete data distribution, and show their superior performance over a classical Generative Adversarial Network (GAN) equipped with the Gumbel-Softmax for generating discrete data. In addition, we address practical issues such as how to efficiently train a quantum circuit with only limited samples, how to efficiently calculate the (quantum) gradient, and how to alleviate mode collapse. We propose and experimentally verify an efficient training-and-fine-tuning scheme for lowering the output noise and decreasing mode collapse. As an original contribution, we develop a novel loss function (MCR loss) inspired by an information-theoretic measure, the coding rate reduction metric, which yields more expressive and geometrically meaningful latent-space representations, beneficial both for model selection and for alleviating mode collapse. We derive the gradient of our MCR loss with respect to the circuit parameters under two settings, with the radial basis function (RBF) kernel and with an NN discriminator, and conduct experiments to showcase its effectiveness.
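The classical baseline mentioned here relies on the Gumbel-Softmax trick to obtain approximately differentiable samples of discrete data. A stdlib sketch of that standard sampling step (the baseline's trick, not the paper's MCR loss):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Draw one relaxed (soft one-hot) sample from a categorical distribution:
    add Gumbel noise to the logits, then apply a temperature-tau softmax."""
    g = [-math.log(-math.log(random.random())) for _ in logits]  # Gumbel(0,1)
    y = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(y)                                 # shift for numerical stability
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]
```

As tau goes to 0 the samples approach exact one-hot vectors; at larger tau they stay smooth enough for gradients to flow through the generator.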