Structure Learning of Quantum Embeddings
The representation of data is of paramount importance for machine learning
methods. Kernel methods are used to enrich the feature representation, allowing
better generalization. Quantum kernels efficiently implement complex transformations that encode classical data in the Hilbert space of a quantum system, in some cases yielding an exponential speedup. However, we need prior knowledge
of the data to choose an appropriate parametric quantum circuit that can be
used as quantum embedding. We propose an algorithm that automatically selects
the best quantum embedding through a combinatorial optimization procedure that
modifies the structure of the circuit, changing the generators of the gates,
their angles (which depend on the data points), and the qubits on which the
various gates act. Since combinatorial optimization is computationally
expensive, we have introduced a criterion based on the exponential
concentration of kernel matrix coefficients around the mean to immediately
discard an arbitrarily large portion of solutions that are believed to perform
poorly. Unlike gradient-based optimization (e.g., trainable quantum
kernels), our approach is by construction unaffected by barren plateaus.
We have used both artificial and real-world datasets to demonstrate the
improved performance of our approach relative to randomly generated PQCs.
We have also compared the effect of different optimization algorithms,
including greedy local search, simulated annealing, and genetic algorithms,
showing that the choice of algorithm strongly affects the results.
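One way to read the concentration criterion is as a cheap variance check on a candidate kernel matrix: if the off-diagonal entries cluster tightly around their mean, the kernel is unlikely to generalize and the candidate circuit can be discarded without a full evaluation. A minimal sketch, assuming a simple variance threshold as the test (the function name, the threshold, and the classical stand-in kernels are illustrative, not the paper's exact procedure):

```python
import numpy as np

def is_concentrated(K, tol=1e-3):
    """Flag a candidate kernel whose off-diagonal entries concentrate
    around their mean (tiny variance), the symptom the criterion screens
    for. The threshold `tol` is a hypothetical tuning knob."""
    off_diag = K[~np.eye(K.shape[0], dtype=bool)]
    return bool(off_diag.var() < tol)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
sqdist = ((X[:, None] - X[None, :]) ** 2).sum(-1)
informative_K = np.exp(-0.25 * sqdist)            # classical RBF stand-in
flat_K = np.full((8, 8), 0.5) + 0.5 * np.eye(8)   # concentrated kernel

# The flat kernel is discarded; the informative one survives the filter.
kept = [K for K in (flat_K, informative_K) if not is_concentrated(K)]
```

A check like this costs one pass over the matrix, which is why it can prune a large portion of the combinatorial search space before any expensive model selection is run.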
Amplitude-assisted tagging of longitudinally polarised bosons using wide neural networks
Extracting longitudinal modes of weak bosons in LHC processes is essential to
understand the electroweak-symmetry-breaking mechanism. To that end, we propose
a general method, based on wide neural networks, to properly model
longitudinal-boson signals and hence enable the event-by-event tagging of
longitudinal bosons. It combines experimentally accessible kinematic
information and genuine theoretical inputs provided by amplitudes in
perturbation theory. As an application we consider the production of a Z boson
in association with a jet at the LHC, both at leading order and in the presence
of parton-shower effects. The devised neural networks reliably extract
the longitudinal contribution to the unpolarised process. The proposed
method is very general and can be systematically extended to other processes
and problems.
Comment: 29 pages, 10 figures, 4 tables
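The "wide network" ingredient can be caricatured with random features: freeze a large hidden layer and fit only the linear readout to a per-event target. Everything below is an invented stand-in (the two kinematic features, the synthetic amplitude weight, and the ridge fit are not the paper's setup); it only illustrates the idea of regressing an amplitude-derived weight and thresholding it for event-by-event tagging:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: two kinematic features per event, and a synthetic
# per-event "amplitude weight" in [0, 1] playing the role of the theory-side
# input that grades how longitudinal-like each event is.
n = 400
X = rng.normal(size=(n, 2))
w_amp = 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))

# A wide network in the random-features sense: a large fixed hidden layer,
# with only the linear readout trained (here in closed form by ridge).
H = 512
W1 = rng.normal(size=(2, H))
b1 = rng.uniform(-1, 1, H)
Phi = np.tanh(X @ W1 + b1)

lam = 1e-3
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(H), Phi.T @ w_amp)
pred = Phi @ beta

# Event-by-event tag: threshold the regressed amplitude weight.
acc = ((pred > 0.5) == (w_amp > 0.5)).mean()
```

Training only the readout of a very wide layer is one tractable corner of the wide-network regime; the actual tagger in the paper is trained end to end.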
Quantum Advantage Seeker with Kernels (QuASK): a software framework to speed up the research in quantum machine learning
Exploiting the properties of quantum information to the benefit of machine
learning models is perhaps the most active field of research in quantum
computation. This interest has supported the development of a multitude of
software frameworks (e.g. Qiskit, Pennylane, Braket) to implement, simulate,
and execute quantum algorithms. Most of them allow us to define quantum
circuits, run basic quantum algorithms, and access low-level primitives
depending on the hardware on which the software is supposed to run. For most
experiments, these frameworks have to be manually integrated within a larger
machine learning software pipeline. The researcher is in charge of knowing
different software packages, integrating them through the development of long
code scripts, analyzing the results, and generating the plots. Long code often
leads to erroneous applications, since the average number of bugs grows in
proportion to program length. Moreover, other researchers
will struggle to understand and reproduce the experiment, due to the need to be
familiar with all the different software frameworks involved in the code
script. We propose QuASK, an open-source quantum machine learning framework
written in Python that aids the researcher in performing their experiments,
with particular attention to quantum kernel techniques. QuASK can be used as a
command-line tool to download datasets, pre-process them, run quantum machine
learning routines, and analyze and visualize the results. QuASK implements most
state-of-the-art algorithms to analyze the data through quantum kernels, with
the possibility to use projected kernels, (gradient-descent) trainable quantum
kernels, and structure-optimized quantum kernels. Our framework can also be
used as a library and integrated into pre-existing software, maximizing code
reuse.
Comment: Close to the published version
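Frameworks like QuASK automate routines such as evaluating a fidelity quantum kernel over a dataset. The sketch below reproduces that core computation in plain NumPy with a toy one-rotation-per-qubit embedding; it is not QuASK's actual API, just a picture of what such a routine computes:

```python
import numpy as np

def feature_map(x):
    """Toy angle-encoding embedding of a 2-feature point into a 2-qubit
    product state (hypothetical circuit: one RY rotation per qubit)."""
    def ry_on_zero(theta):
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.kron(ry_on_zero(x[0]), ry_on_zero(x[1]))  # |phi(x)>

def quantum_kernel(X):
    states = np.array([feature_map(x) for x in X])
    overlaps = states @ states.T        # <phi(x)|phi(y)> (real here)
    return overlaps ** 2                # fidelity kernel |<phi(x)|phi(y)>|^2

X = np.array([[0.1, 0.4], [1.2, -0.3], [0.1, 0.4]])
K = quantum_kernel(X)                   # 3x3 Gram matrix, unit diagonal
```

The resulting Gram matrix can be passed directly to any classical kernel method (e.g. an SVM), which is the pipeline-integration step the framework is meant to streamline.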
Resource Saving via Ensemble Techniques for Quantum Neural Networks
Quantum neural networks hold significant promise for numerous applications,
particularly as they can be executed on the current generation of quantum
hardware. However, due to limited qubits or hardware noise, conducting
large-scale experiments often requires significant resources. Moreover, the
output of the model is susceptible to corruption by quantum hardware noise. To
address this issue, we propose the use of ensemble techniques, which involve
constructing a single machine learning model based on multiple instances of
quantum neural networks. In particular, we implement bagging and AdaBoost
techniques, with different data loading configurations, and evaluate their
performance on both synthetic and real-world classification and regression
tasks. To assess the potential performance improvement under different
environments, we conduct experiments both on noiseless simulators and on IBM
superconducting-based QPUs, and the results suggest that these techniques can
mitigate quantum hardware noise. Additionally, we quantify the amount of resources saved
using these ensemble techniques. Our findings indicate that these methods
enable the construction of large, powerful models even on relatively small
quantum devices.
Comment: Extended paper of the work presented at QTML 2022. Close to the published version
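The bagging idea itself is hardware-agnostic: train each member on a bootstrap resample and combine by majority vote. The sketch below replaces each quantum neural network with a classical nearest-centroid stand-in whose output is perturbed by injected noise (a crude proxy for hardware noise); the data, base learner, and noise model are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_base(X, y):
    # Nearest-centroid stand-in for one small QNN instance.
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict_base(model, X, noise):
    c0, c1 = model
    score = ((X - c0) ** 2).sum(1) - ((X - c1) ** 2).sum(1)
    score = score + noise * rng.normal(size=len(X))  # emulate hardware noise
    return (score > 0).astype(int)

# Synthetic two-blob classification task.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.repeat([0, 1], 100)

def bagging_predict(X_tr, y_tr, X_te, n_models=15, noise=4.0):
    votes = np.zeros(len(X_te))
    for _ in range(n_models):
        idx = rng.integers(0, len(X_tr), len(X_tr))  # bootstrap resample
        votes += predict_base(fit_base(X_tr[idx], y_tr[idx]), X_te, noise)
    return (votes > n_models / 2).astype(int)

acc_single = (predict_base(fit_base(X, y), X, noise=4.0) == y).mean()
acc_bagged = (bagging_predict(X, y, X) == y).mean()
```

Because each vote draws independent noise, the majority vote typically recovers accuracy that a single noisy model loses, which is the resource-saving argument in miniature.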
On the construction of useful quantum kernels
The representation of data is of paramount importance for machine learning methods. Kernel methods are used to enrich the feature representation, allowing better generalization. Quantum kernels efficiently implement complex transformations that encode classical data in the Hilbert space of a quantum system, in some cases yielding an exponential speedup. However, we need prior knowledge of the data to choose an appropriate parametric quantum circuit that can be used as quantum embedding.

We propose an algorithm that automatically selects the best quantum embedding through a combinatorial optimization procedure that modifies the structure of the circuit, changing the generators of the gates, their angles (which depend on the data points), and the qubits on which the various gates act. Since combinatorial optimization is computationally expensive, we have introduced a criterion based on the exponential concentration of kernel matrix coefficients around the mean to immediately discard an arbitrarily large portion of solutions that are believed to perform poorly.

Unlike gradient-based optimization (e.g., trainable quantum kernels), our approach is by construction unaffected by barren plateaus. We have used both artificial and real-world datasets to demonstrate the improved performance of our approach relative to randomly generated PQCs. We have also compared the effect of different optimization algorithms, including greedy local search, simulated annealing, and genetic algorithms, showing that the choice of algorithm strongly affects the results.

About the speaker: Massimiliano Incudini is a PhD student at the Department of Computer Science, University of Verona. His research focuses on the development of quantum kernels for real-world applications.

Collaborators: The presentation is based on joint work with Alessandra Di Pierro and Francesco Martini, available at arxiv.org/abs/2209.11144.
Quantum Machine Learning and Fraud Detection
One of the most common problems in cybersecurity is related to fraudulent activities, which are performed in various settings and predominantly through the Internet. Securing online card transactions is a tough nut to crack for the banking sector, for which fraud detection is an essential measure. Fraud detection problems involve huge data sets and require fast and efficient algorithms. In this paper, we report on the use of a quantum machine learning algorithm for dealing with this problem and present the results of experiments on a case study. By enhancing statistical models with the computational power of quantum computing, quantum machine learning promises great advantages for cybersecurity.
Facial expression recognition on a quantum computer
We address the problem of facial expression recognition and show a possible solution using a quantum machine learning approach. In order to define an efficient classifier for a given dataset, our approach substantially exploits quantum interference. By representing face expressions via graphs, we define a classifier as a quantum circuit that manipulates the graphs' adjacency matrices encoded into the amplitudes of some appropriately defined quantum states. We discuss the accuracy of the quantum classifier evaluated on the quantum simulator available on the IBM Quantum Experience cloud platform, and compare it with the accuracy of one of the best classical classifiers.
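The encoding step described above has a simple classical shadow: flatten and normalize an adjacency matrix so it could be loaded into the amplitudes of a quantum state, then compare two graphs by the squared overlap that a swap test would estimate. A minimal NumPy sketch (the graphs are toy examples; on hardware the vector would additionally be zero-padded to a power-of-two length):

```python
import numpy as np

def amplitude_encode(adj):
    """Flatten a graph adjacency matrix and L2-normalize it, the classical
    counterpart of loading it into the amplitudes of a quantum state."""
    v = adj.astype(float).ravel()
    return v / np.linalg.norm(v)

def swap_test_similarity(a, b):
    """|<a|b>|^2, the overlap a swap-test circuit would estimate."""
    return float(np.dot(amplitude_encode(a), amplitude_encode(b)) ** 2)

path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])       # 3-node path graph
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # 3-node cycle

s_same = swap_test_similarity(path, path)
s_diff = swap_test_similarity(path, triangle)
```

Identical graphs give overlap 1, and structurally different graphs give a strictly smaller overlap, which is the raw signal a quantum classifier built on interference can exploit.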
Toward Useful Quantum Kernels
Supervised machine learning is a popular approach to the solution of many real-life problems. This approach is characterized by the use of labeled datasets to train algorithms for classifying data or predicting outcomes accurately. The question of the extent to which quantum computation can help improve existing classical supervised learning methods is the subject of intense research in the area of quantum machine learning. The debate centers on whether an advantage can be achieved already with current noisy quantum computer prototypes or whether it strictly depends on the full power of a fault-tolerant quantum computer. The current proposals can be classified into methods that can be suitably implemented on near-term quantum computers but are essentially empirical, and methods that use quantum algorithms with a provable advantage over their classical counterparts but only when implemented on the still unavailable fault-tolerant quantum computer. It turns out that, for the latter class, the benefit offered by quantum computation can be shown rigorously using quantum kernels, whereas the approach based on near-term quantum computers is very unlikely to bring any advantage if implemented in the form of hybrid algorithms that delegate the hard part (optimization) to the far more powerful classical computer.
Higher-order topological kernels via quantum computation
Topological data analysis (TDA) has emerged as a powerful tool for extracting
meaningful insights from complex data. TDA enhances the analysis of objects by
embedding them into a simplicial complex and extracting useful global
properties such as the Betti numbers, i.e. the number of multidimensional
holes, which can be used to define kernel methods that are easily integrated
with existing machine-learning algorithms. These kernel methods have found
broad applications, as they rely on powerful mathematical frameworks which
provide theoretical guarantees on their performance. However, the computation
of higher-dimensional Betti numbers can be prohibitively expensive on classical
hardware, while quantum algorithms can approximate them in polynomial time in
the instance size. In this work, we propose a quantum approach to defining
topological kernels, based on constructing Betti curves, i.e. topological
fingerprints of filtrations of increasing order. We exhibit a
working prototype of our approach implemented on a noiseless simulator and show
its robustness by means of some empirical results suggesting that topological
approaches may offer an advantage in quantum machine learning.
Comment: To appear in the Proceedings of the 2023 IEEE International Conference on Quantum Computing and Engineering (QCE)
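A Betti curve is easy to picture in the lowest order: for each filtration threshold, count the connected components (the 0-th Betti number) of the graph whose edges are all point pairs closer than that threshold. The sketch below computes a Betti-0 curve classically with a union-find sweep and pairs two curves with a toy inner-product kernel; the quantum contribution in the paper is approximating the higher-order Betti numbers, which this classical sketch does not attempt:

```python
import numpy as np

def betti0_curve(points, thresholds):
    """Betti-0 curve of a Vietoris-Rips-style filtration: the number of
    connected components once all edges shorter than each threshold are
    added, computed with a union-find sweep over sorted edges."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    curve, k, comps = [], 0, n
    for t in thresholds:
        while k < len(edges) and edges[k][0] <= t:
            _, i, j = edges[k]; k += 1
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                comps -= 1
        curve.append(comps)
    return np.array(curve)

def betti_kernel(c1, c2):
    """Toy topological kernel: inner product of two Betti curves."""
    return float(np.dot(c1, c2))

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
curve = betti0_curve(pts, thresholds=[0.05, 0.5, 10.0])
```

Two nearby points merge early and the far-away point merges late, so the curve decreases from 3 components down to 1 as the threshold grows; such curves are the vectors the proposed kernel compares.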