A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models
Generative modeling has seen a rising interest in both classical and quantum
machine learning, and it represents a promising candidate to obtain a practical
quantum advantage in the near term. In this study, we build on a previously
proposed framework for evaluating the generalization performance of generative models,
and we establish the first quantitative comparative race towards practical
quantum advantage (PQA) between classical and quantum generative models, namely
Quantum Circuit Born Machines (QCBMs), Transformers (TFs), Recurrent Neural
Networks (RNNs), Variational Autoencoders (VAEs), and Wasserstein Generative
Adversarial Networks (WGANs). After defining four types of PQA scenarios, we
focus on what we refer to as potential PQA, aiming to compare quantum models
with the best-known classical algorithms for the task at hand. We let the
models race in a well-defined, application-relevant competition setting,
demonstrating our framework on a 20-variable (qubit) generative modeling task.
Our results suggest that QCBMs are more efficient in
the data-limited regime than the other state-of-the-art classical generative
models. Such a feature is highly desirable in a wide range of real-world
applications where the available data is scarce.
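The QCBM at the center of this race can be sketched compactly. The following is a minimal illustration in PennyLane, not the authors' code: a parameterized circuit whose Born-rule output probabilities are fit to an empirical target distribution (here a random stand-in) by gradient descent on a divergence. The paper's actual setup uses 20 qubits and its own generalization-aware evaluation.

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits, n_layers = 4, 2   # the paper uses 20 qubits; 4 keeps the sketch cheap
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qcbm(weights):
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.probs(wires=range(n_qubits))   # Born probabilities p_theta(x)

    # Hypothetical target: an empirical distribution estimated from scarce data.
    target = np.random.rand(2 ** n_qubits, requires_grad=False)
    target = target / target.sum()

    def loss(weights):
        p = qcbm(weights)
        return np.sum(target * np.log((target + 1e-9) / (p + 1e-9)))  # KL divergence

    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
    weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
    opt = qml.GradientDescentOptimizer(stepsize=0.1)
    for _ in range(100):
        weights = opt.step(loss, weights)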
Introducing Non-Linear Activations into Quantum Generative Models
Due to the linearity of quantum mechanics, it remains a challenge to design
quantum generative machine learning models that embed non-linear activations
into the evolution of the statevector. However, some of the most successful
classical generative models, such as those based on neural networks, involve
highly non-linear dynamics for quality training. In this paper, we explore the
effect of these dynamics in quantum generative modeling by introducing a model
that adds non-linear activations via a neural network structure onto the
standard Born Machine framework - the Quantum Neuron Born Machine (QNBM). To
achieve this, we utilize a previously introduced Quantum Neuron subroutine,
which is a repeat-until-success circuit with mid-circuit measurements and
classical control. After introducing the QNBM, we investigate how its
performance depends on network size by training a 3-layer QNBM with 4 output
neurons and various input and hidden layer sizes. We then compare our
non-linear QNBM to the linear Quantum Circuit Born Machine (QCBM). We allocate
similar time and memory resources to each model, such that the only major
difference is the qubit overhead required by the QNBM. With gradient-based
training, we show that while both models can easily learn a trivial uniform
probability distribution, on a more challenging class of distributions, the
QNBM achieves an almost 3x smaller error rate than a QCBM with a similar number
of tunable parameters. We therefore provide evidence suggesting that
non-linearity is a useful resource in quantum generative models, and we put
forth the QNBM as a new model with good generative performance and potential
for quantum advantage.
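The quantum neuron subroutine can be hard to picture from prose alone. Below is a hedged, single-round PennyLane sketch of the idea, simplified from the construction the abstract references (the full QNBM repeats the round until success): a weighted rotation is accumulated on an ancilla, kicked back onto an output qubit, and a mid-circuit measurement with classical control either accepts the non-linear activation or applies a recovery rotation.

    import numpy as np
    import pennylane as qml

    dev = qml.device("default.qubit", wires=3)   # input, ancilla, output

    @qml.qnode(dev)
    def quantum_neuron(w, b):
        qml.Hadamard(wires=0)                    # toy input state
        qml.CRY(2 * w, wires=[0, 1])             # weighted pre-activation on ancilla
        qml.RY(2 * b, wires=1)                   # bias term
        qml.CY(wires=[1, 2])                     # kick back onto the output qubit
        qml.RY(-2 * b, wires=1)                  # uncompute the ancilla
        qml.CRY(-2 * w, wires=[0, 1])
        m = qml.measure(1)                       # mid-circuit measurement
        qml.cond(m, qml.RY)(np.pi / 2, wires=2)  # classically controlled recovery
        return qml.probs(wires=2)

    print(quantum_neuron(0.7, 0.1))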
A performance characterization of quantum generative models
Quantum generative modeling is a growing area of interest for
industry-relevant applications. With the field still in its infancy, there are
many competing techniques. This work is an attempt to systematically compare a
broad range of these techniques to guide quantum computing practitioners when
deciding which models and techniques to use in their applications. We compare
fundamentally different architectural ansatzes of parametric quantum circuits
used for quantum generative modeling: 1. A continuous architecture, which
produces continuous-valued data samples, and 2. a discrete architecture, which
samples on a discrete grid. We compare the performance of different data
transformations: normalization by the min-max transform or by the probability
integral transform. We learn the underlying probability distribution of the
data sets via two popular training methods: 1. quantum circuit Born machines
(QCBM), and 2. quantum generative adversarial networks (QGAN). We study their
performance and trade-offs as the number of model parameters increases, with
the baseline of similarly trained classical neural networks. The study is
performed on six low-dimensional synthetic and two real financial data sets.
Our two key findings are that: 1. For all data sets, our quantum models require
similar or fewer parameters than their classical counterparts. In the extreme
case, the quantum models require two orders of magnitude fewer parameters. 2.
We empirically find that a variant of the discrete architecture, which learns
the copula of the probability distribution, outperforms all other methods.
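The two data transformations being compared are classical preprocessing steps and are easy to sketch. The NumPy code below is illustrative, not the authors' pipeline; the probability integral transform is the one underlying the copula-learning variant.

    import numpy as np

    def min_max(x):
        # Rescale each feature (column) to the unit interval [0, 1].
        return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

    def probability_integral_transform(x):
        # Push each feature through its empirical CDF, giving approximately
        # uniform marginals so that only the copula structure remains.
        ranks = np.argsort(np.argsort(x, axis=0), axis=0)
        return (ranks + 1) / (x.shape[0] + 1)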
Quantum Convolutional Neural Networks for Multi-Channel Supervised Learning
As the rapidly evolving field of machine learning continues to produce
incredibly useful tools and models, the potential for quantum computing to
provide speed-ups for machine learning algorithms is becoming increasingly
desirable. In particular, quantum circuits in place of classical convolutional
filters for image detection-based tasks are being investigated for their
potential to exploit a quantum advantage. However, these attempts, referred to as quantum
convolutional neural networks (QCNNs), lack the ability to efficiently process
data with multiple channels and therefore are limited to relatively simple
inputs. In this work, we present a variety of hardware-adaptable quantum
circuit ansatzes for use as convolutional kernels, and demonstrate that the
quantum neural networks we report outperform existing QCNNs on classification
tasks involving multi-channel data. We envision that the ability of these
implementations to effectively learn inter-channel information will allow
quantum machine learning methods to operate with more complex data. This work
is available as open source at
https://github.com/anthonysmaldone/QCNN-Multi-Channel-Supervised-Learning
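As a hedged illustration of the idea (not one of the paper's specific ansatzes), a multi-channel quantum convolutional kernel can be sketched in PennyLane by giving each channel its own qubit, angle-encoding that channel's patch, and entangling across channels so the kernel can pick up inter-channel structure:

    import pennylane as qml
    from pennylane import numpy as np

    n_channels, patch_size = 3, 4    # e.g. an RGB image and 2x2 patches
    dev = qml.device("default.qubit", wires=n_channels)

    @qml.qnode(dev)
    def quantum_kernel(pixels, theta):
        # pixels: (n_channels, patch_size) patch; theta: trainable kernel weights.
        for c in range(n_channels):
            for p in range(patch_size):
                qml.RY(pixels[c, p] * theta[c, p], wires=c)
        # Cross-channel entanglers carry inter-channel information.
        for c in range(n_channels):
            qml.CNOT(wires=[c, (c + 1) % n_channels])
        return qml.expval(qml.PauliZ(0))  # one scalar output of the feature map

    patch = np.random.rand(n_channels, patch_size, requires_grad=False)
    theta = np.random.uniform(0, np.pi, size=(n_channels, patch_size), requires_grad=True)
    print(quantum_kernel(patch, theta))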
Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition
Binary neural networks, i.e., neural networks whose parameters and
activations are constrained to only two possible values, offer a compelling
avenue for the deployment of deep learning models on energy- and memory-limited
devices. However, their training, architectural design, and hyperparameter
tuning remain challenging as these involve multiple computationally expensive
combinatorial optimization problems. Here we introduce quantum hypernetworks as
a mechanism to train binary neural networks on quantum computers, which unify
the search over parameters, hyperparameters, and architectures in a single
optimization loop. Through classical simulations, we demonstrate that our
approach effectively finds optimal parameters, hyperparameters and
architectural choices with high probability on classification problems
including a two-dimensional Gaussian dataset and a scaled-down version of the
MNIST handwritten digits. We represent our quantum hypernetworks as variational
quantum circuits, and find that an optimal circuit depth maximizes the
probability of finding performant binary neural networks. Our unified approach
provides an immense scope for other applications in the field of machine
learning. A minimal implementation is available at
https://github.com/carrasqu/binncod
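A hedged, toy-scale sketch of the mechanism (the names and decoding here are assumptions, not the paper's code): a variational circuit is sampled, and each measured bitstring is decoded into the binary weights of a candidate network, so optimizing the circuit amounts to searching over parameters, hyperparameters, and architectures in superposition.

    import pennylane as qml
    from pennylane import numpy as np

    n_bits = 6   # each qubit encodes one binary parameter of a tiny network
    dev = qml.device("default.qubit", wires=n_bits, shots=100)

    @qml.qnode(dev)
    def hypernetwork(phi):
        qml.StronglyEntanglingLayers(phi, wires=range(n_bits))
        return qml.sample()   # bitstrings, each decoding to a candidate network

    def decode(bits):
        # Map measured {0, 1} outcomes to {-1, +1} binary weights.
        return 2 * np.array(bits) - 1

    shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_bits)
    phi = np.random.uniform(0, 2 * np.pi, size=shape)
    candidates = [decode(s) for s in hypernetwork(phi)]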
Reservoir Computing via Quantum Recurrent Neural Networks
Recent developments in quantum computing and machine learning have propelled
the interdisciplinary study of quantum machine learning. Sequential modeling is
an important task with high scientific and commercial value. Existing methods
based on variational quantum circuits (VQCs) or quantum neural networks (QNNs)
require significant computational resources to perform gradient-based
optimization of a large number of quantum circuit parameters.
The major drawback is that such quantum gradient calculations require a large
number of circuit evaluations, posing challenges for current near-term quantum
hardware and simulation software. In this work, we approach sequential modeling
by applying a reservoir computing (RC) framework to quantum recurrent neural
networks (QRNN-RC) based on the classical RNN, LSTM, and GRU architectures. The main idea
of this RC approach is that the QRNN with randomly initialized weights is
treated as a dynamical system and only the final classical linear layer is
trained. Our numerical simulations show that the QRNN-RC can reach results
comparable to fully trained QRNN models for several function approximation and
time series prediction tasks. Since the QRNN training complexity is
significantly reduced, the proposed model trains notably faster. In this work
we also compare to corresponding classical RNN-based RC implementations and
show that the quantum version learns faster by requiring fewer training epochs
in most cases. Our results demonstrate a new possibility of utilizing quantum
neural networks for sequential modeling with greater quantum hardware
efficiency, an important design consideration for noisy intermediate-scale
quantum (NISQ) computers.
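The reservoir idea translates into a compact sketch: freeze a randomly initialized quantum circuit as the dynamical system and fit only a classical linear readout on its measured features. The PennyLane code below is a generic stand-in for the QRNN cell, with a toy regression task; all names are illustrative.

    import numpy as np
    import pennylane as qml

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)
    frozen = np.random.uniform(0, 2 * np.pi, size=n_qubits)  # untrained reservoir weights

    @qml.qnode(dev)
    def reservoir(window):
        for i in range(n_qubits):
            qml.RY(window[i], wires=i)     # encode a length-4 input window
            qml.RZ(frozen[i], wires=i)     # fixed random dynamics
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
        return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

    # Only the linear readout is trained, here by ordinary least squares.
    xs = np.random.uniform(0, np.pi, size=(50, n_qubits))   # toy input windows
    ys = np.sin(xs.sum(axis=1))                             # toy target function
    features = np.array([reservoir(x) for x in xs])
    readout, *_ = np.linalg.lstsq(features, ys, rcond=None)
    predictions = features @ readout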
Variational Quantum Neural Networks (VQNNS) in Image Classification
Quantum machine learning has emerged as an interdisciplinary field that aims
to overcome the limitations of classical machine learning and neural networks.
Research in this field explores whether quantum computers can solve problems
with complex correlations between inputs that are hard for classical
computers, suggesting that learning models built on quantum computers may be
more powerful in applications, potentially offering faster computation and
better generalization from less data. The objective of this paper is to
investigate how quantum neural networks (QNNs) can be trained using quantum
optimization algorithms to improve their performance and time
complexity. A classical neural network can be partially quantized to
create a hybrid quantum-classical neural network which is used mainly in
classification and image recognition. In this paper, a QNN structure is built
in which a variational parameterized circuit is incorporated as the input
layer, termed the Variational Quantum Neural Network (VQNN). We encode the cost
function of QNNs onto relative phases of a superposition state in the Hilbert
space of the network parameters. The parameters are tuned iteratively with the
mixer and problem Hamiltonians of the quantum approximate optimization
algorithm (QAOA). The VQNN is evaluated on MNIST digit recognition (less
complex) and crack image classification (more complex) datasets, converging in
less time than a standard QNN while achieving decent training accuracy.
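The QAOA-style tuning loop the abstract describes can be sketched as alternating problem and mixer layers over a register that indexes candidate parameters. In this hedged PennyLane illustration, the problem Hamiltonian is a toy diagonal stand-in for the encoded QNN cost, not the paper's actual encoding:

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits, p_layers = 4, 2
    dev = qml.device("default.qubit", wires=n_qubits)

    # Toy diagonal cost Hamiltonian standing in for the encoded QNN loss.
    H_cost = qml.Hamiltonian([0.5, 0.8, 0.3, 0.6],
                             [qml.PauliZ(w) for w in range(n_qubits)])
    H_mix = qml.Hamiltonian([1.0] * n_qubits,
                            [qml.PauliX(w) for w in range(n_qubits)])

    @qml.qnode(dev)
    def qaoa_circuit(params):
        gammas, betas = params[0], params[1]
        for w in range(n_qubits):
            qml.Hadamard(wires=w)                       # superpose candidate parameters
        for layer in range(p_layers):
            qml.qaoa.cost_layer(gammas[layer], H_cost)  # problem Hamiltonian phases
            qml.qaoa.mixer_layer(betas[layer], H_mix)   # mixer Hamiltonian
        return qml.expval(H_cost)

    params = np.random.uniform(0, np.pi, size=(2, p_layers), requires_grad=True)
    opt = qml.GradientDescentOptimizer(stepsize=0.1)
    for _ in range(30):
        params = opt.step(qaoa_circuit, params)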