Towards quantum enhanced adversarial robustness in machine learning
Machine learning algorithms are powerful tools for data-driven tasks such as
image classification and feature detection; however, their vulnerability to
adversarial examples - input samples manipulated to fool the algorithm -
remains a serious challenge. The integration of machine learning with quantum
computing has the potential to yield tools offering not only better accuracy
and computational efficiency, but also superior robustness against adversarial
attacks. Indeed, recent work has employed quantum mechanical phenomena to
defend against adversarial attacks, spurring the rapid development of the field
of quantum adversarial machine learning (QAML) and potentially yielding a new
source of quantum advantage. Despite promising early results, there remain
challenges towards building robust real-world QAML tools. In this review we
discuss recent progress in QAML and identify key challenges. We also suggest
future research directions which could determine the route to practicality for
QAML approaches as quantum computing hardware scales up and noise levels are
reduced.
Universal adversarial perturbations for multiple classification tasks with quantum classifiers
Quantum adversarial machine learning is an emerging field that studies the
vulnerability of quantum learning systems against adversarial perturbations and
develops possible defense strategies. Quantum universal adversarial
perturbations are small perturbations, which can make different input samples
into adversarial examples that deceive a given quantum classifier. This area
has rarely been studied but is worth investigating, because universal
perturbations could greatly simplify malicious attacks and cause unexpected
damage to quantum machine learning models. In this
paper, we take a step forward and explore the quantum universal perturbations
in the context of heterogeneous classification tasks. In particular, we find
that quantum classifiers achieving near state-of-the-art accuracy on two
different classification tasks can both be deceived by a single carefully
crafted universal perturbation. We demonstrate this explicitly with
well-designed quantum continual learning models that use the elastic weight
consolidation method to avoid catastrophic forgetting, as well as
real-life heterogeneous datasets from hand-written digits and medical MRI
images. Our results provide a simple and efficient way to generate universal
perturbations for heterogeneous classification tasks and thus offer
valuable guidance for future quantum learning technologies.
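As a rough classical analogue of the idea, the sketch below replaces the quantum classifiers with toy linear models and builds one shared perturbation from both decision normals. This is an illustrative assumption, not the authors' algorithm: the perturbation simply points opposite to both (unit-normalized) weight vectors, so it pushes inputs across both decision boundaries at once.

```python
# Toy universal adversarial perturbation shared across two classification
# tasks. The quantum classifiers of the paper are replaced by linear models
# (an assumption for illustration only).
import numpy as np

rng = np.random.default_rng(0)

def make_task(d=8, n=40):
    """Linearly separable toy task: label = sign(w . x)."""
    w = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w)
    return w, X, y

tasks = [make_task() for _ in range(2)]

# One shared perturbation, anti-aligned with both (unit) decision normals:
# it pushes every input toward the negative side of both classifiers.
# eps is deliberately exaggerated so the effect is visible on a toy scale.
u = sum(w / np.linalg.norm(w) for w, _, _ in tasks)
eps = 3.0
delta = -eps * u

results = []
for i, (w, X, y) in enumerate(tasks):
    clean = np.mean(np.sign(X @ w) == y)
    adv = np.mean(np.sign((X + delta) @ w) == y)
    results.append((clean, adv))
    print(f"task {i}: clean accuracy {clean:.2f} -> perturbed accuracy {adv:.2f}")
```

The same `delta` is applied to every sample of both tasks, mirroring the "universal across heterogeneous tasks" property the abstract describes.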
Unveiling Advanced Computational Applications in Quantum Computing: A Comprehensive Review
Quantum computing is a rapidly developing field with the potential to revolutionize numerous areas of science and technology, and it could have a significant impact on advanced computing applications. In this review, we explore the various ways in which quantum computers could tackle complex computational problems, including machine learning, optimization, and simulation. One potential application is machine learning, where quantum computers could improve the accuracy and efficiency of algorithms. Complex optimization problems, such as those encountered in logistics and finance, could also be addressed with quantum computers. Furthermore, quantum computers could enable the simulation of intricate systems, such as molecules and materials, with significant applications in fields like physics and materials science. Although quantum computers are currently in the early stages of development, they have the potential to advance numerous areas of science and technology in a significant manner. Further research and development are needed to fully realize the potential of quantum computing for advanced computing applications.
Hybrid quantum-classical unsupervised data clustering based on the Self-Organizing Feature Map
Unsupervised machine learning is one of the main techniques employed in
artificial intelligence. Quantum computers offer opportunities to speed up such
machine learning techniques. Here, we introduce an algorithm for quantum
assisted unsupervised data clustering using the self-organizing feature map, a
type of artificial neural network. We make a proof-of-concept realization of
one of the central components on the IBM Q Experience, and show that it
reduces the number of calculations required as the number of clusters grows.
We compare the results with the classical algorithm on a toy example of
unsupervised text clustering.
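The self-organizing feature map at the heart of the algorithm can be sketched classically. The version below is a minimal 1-D SOM in plain numpy; the quantum-assisted component the paper realizes is replaced here by ordinary distance arithmetic, so this is an illustrative stand-in, not the paper's implementation.

```python
# Minimal classical self-organizing feature map (SOM). Nodes compete for
# each sample; the winner and its grid neighbours move toward the sample.
import numpy as np

rng = np.random.default_rng(1)

def train_som(X, n_nodes=4, epochs=50, lr0=0.5, sigma0=1.0):
    """1-D SOM with linearly decaying learning rate and neighbourhood."""
    d = X.shape[1]
    W = rng.normal(size=(n_nodes, d))           # codebook vectors
    grid = np.arange(n_nodes)                   # 1-D node coordinates
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in X:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best match
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))
            W += lr * h[:, None] * (x - W)      # pull nodes toward x
    return W

# Two well-separated blobs; the SOM nodes should settle near them.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
W = train_som(X)
labels = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])
print("cluster assignments:", labels)
```

In the hybrid scheme described in the abstract, the per-node distance evaluations inside the winner search are the natural place for quantum assistance, since they dominate the cost as the number of clusters grows.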
Experimental quantum adversarial learning with programmable superconducting qubits
Quantum computing promises to enhance machine learning and artificial
intelligence. Different quantum algorithms have been proposed to improve a wide
spectrum of machine learning tasks. Yet, recent theoretical works show that,
similar to traditional classifiers based on deep classical neural networks,
quantum classifiers would suffer from the vulnerability problem: adding tiny
carefully-crafted perturbations to the legitimate original data samples would
facilitate incorrect predictions at a notably high confidence level. This will
pose serious problems for future quantum machine learning applications in
safety and security-critical scenarios. Here, we report the first experimental
demonstration of quantum adversarial learning with programmable superconducting
qubits. We train quantum classifiers, which are built upon variational quantum
circuits consisting of ten transmon qubits featuring average lifetimes of 150
μs and average fidelities of simultaneous single- and two-qubit gates
above 99.94% and 99.4%, respectively, with both real-life images (e.g., medical
magnetic resonance imaging scans) and quantum data. We demonstrate that these
well-trained classifiers (with testing accuracy up to 99%) can be practically
deceived by small adversarial perturbations, whereas an adversarial training
process would significantly enhance their robustness to such perturbations. Our
results reveal experimentally a crucial vulnerability aspect of quantum
learning systems under adversarial scenarios and demonstrate an effective
defense strategy against adversarial attacks, providing a valuable guide
for quantum artificial intelligence applications with both near-term and future
quantum devices.
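The attack-then-defend loop reported in the experiment can be sketched with a classical stand-in. Below, the ten-qubit variational classifier is replaced by a logistic regression, and the perturbations are generated with the FGSM sign-gradient rule; both substitutions are assumptions for illustration, not the paper's exact circuits or attack.

```python
# Sketch: small adversarial perturbations degrade a well-trained classifier,
# while adversarial training (training on perturbed inputs) hardens it.
# Classical logistic regression stands in for the quantum classifier.
import numpy as np

rng = np.random.default_rng(2)
d, n = 10, 200
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fgsm(X, y, w, eps):
    """Worst-case l-inf perturbation for a linear model (sign of loss gradient)."""
    p = sigmoid(X @ w)
    return X + eps * np.sign(np.outer(p - y, w))

def train(adversarial=False, eps=0.2, steps=300, lr=0.1):
    w = np.zeros(d)
    for _ in range(steps):
        Xb = fgsm(X, y, w, eps) if adversarial else X
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / n       # logistic-loss gradient step
    return w

def accuracy(w, eps=0.0):
    Xe = fgsm(X, y, w, eps) if eps else X
    return np.mean(((Xe @ w) > 0) == y)

w_std, w_adv = train(False), train(True)
acc_clean_std = accuracy(w_std)
acc_att_std = accuracy(w_std, eps=0.2)
acc_att_adv = accuracy(w_adv, eps=0.2)
print(f"standard model : clean {acc_clean_std:.2f}, attacked {acc_att_std:.2f}")
print(f"adv-trained    : attacked {acc_att_adv:.2f}")
```

For a linear model the FGSM step is the exact worst-case bounded perturbation, which is why the attacked accuracy drops sharply even though the clean accuracy is near perfect.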
Training robust and generalizable quantum models
Adversarial robustness and generalization are both crucial properties of
reliable machine learning models. In this paper, we study these properties in
the context of quantum machine learning based on Lipschitz bounds. We derive
tailored, parameter-dependent Lipschitz bounds for quantum models with
trainable encoding, showing that the norm of the data encoding has a crucial
impact on the robustness against perturbations in the input data. Further, we
derive a bound on the generalization error which explicitly depends on the
parameters of the data encoding. Our theoretical findings give rise to a
practical strategy for training robust and generalizable quantum models by
regularizing the Lipschitz bound in the cost. Further, we show that, for fixed
and non-trainable encodings as frequently employed in quantum machine learning,
the Lipschitz bound cannot be influenced by tuning the parameters. Thus,
trainable encodings are crucial for systematically adapting robustness and
generalization during training. With numerical results, we demonstrate that,
indeed, Lipschitz bound regularization leads to substantially more robust and
generalizable quantum models.
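The regularization strategy can be illustrated classically. In the sketch below, a one-layer tanh network stands in for the quantum model with trainable encoding (an assumption for illustration): the input weight matrix plays the role of the data encoding, the product of weight norms bounds the model's Lipschitz constant (tanh is 1-Lipschitz), and penalizing those norms in the cost shrinks the bound, mirroring the paper's training strategy.

```python
# Regularizing a Lipschitz bound in the training cost. The trainable
# "encoding" is W1; ||w2||_2 * ||W1||_2 upper-bounds the Lipschitz
# constant of x -> w2 . tanh(W1 x).
import numpy as np

rng = np.random.default_rng(3)
n, d, h = 100, 4, 8
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                        # simple regression target

def train(reg=0.0, steps=500, lr=0.05):
    init = np.random.default_rng(7)        # identical init for both runs
    W1 = init.normal(size=(h, d)) * 0.5    # trainable "encoding" weights
    w2 = init.normal(size=h) * 0.5
    for _ in range(steps):
        Z = np.tanh(X @ W1.T)              # hidden activations
        r = Z @ w2 - y                     # residuals of squared loss
        gw2 = Z.T @ r / n + reg * w2       # gradient + norm penalty
        gZ = np.outer(r, w2) * (1 - Z**2)  # back-prop through tanh
        gW1 = gZ.T @ X / n + reg * W1
        w2 -= lr * gw2
        W1 -= lr * gW1
    return np.linalg.norm(w2) * np.linalg.norm(W1, 2)

bound_plain = train(reg=0.0)
bound_reg = train(reg=0.1)
print(f"Lipschitz bound, unregularized: {bound_plain:.2f}")
print(f"Lipschitz bound, regularized  : {bound_reg:.2f}")
```

A smaller Lipschitz bound directly caps how much any bounded input perturbation can change the model's output, which is the mechanism connecting this regularizer to adversarial robustness.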
A review of spam email detection: analysis of spammer strategies and the dataset shift problem
Spam emails have traditionally been seen as merely annoying, unsolicited messages containing advertisements, but they increasingly carry scams, malware, or phishing. To ensure the security and integrity of users, organisations and researchers aim to develop robust filters for spam email detection. Recently, most spam filters based on machine learning algorithms published in academic journals report very high performance, yet users still report a rising number of frauds and attacks via spam emails. Two main challenges can be found in this field: (a) it is a very dynamic environment prone to the dataset shift problem, and (b) it suffers from the presence of an adversarial figure, i.e. the spammer. Unlike classical spam email reviews, this one focuses particularly on the problems posed by this constantly changing environment. Moreover, we analyse the different spammer strategies used for contaminating emails, and we review the state-of-the-art techniques for developing filters based on machine learning. Finally, we empirically evaluate and present the consequences of ignoring dataset shift in this practical field. Experimental results show that this shift may lead to severe degradation in the estimated generalisation performance, with error rates reaching values up to 48.81%.
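The dataset-shift pitfall the abstract warns about can be shown in a few lines: evaluating with a random split mixes past and future messages, while a chronological split exposes the drift a deployed filter actually faces. The data and classifier below are synthetic stand-ins (a single drifting feature and a nearest-centroid threshold), not the paper's corpora or models.

```python
# Random-split evaluation overestimates performance under dataset shift.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
t = np.linspace(0, 1, n)          # arrival time of each email
latent = rng.normal(size=n)
y = latent > 0                    # True = spam
x = latent + 3 * t                # spammer strategy drifts the feature upward

def fit_eval(train, test):
    """Nearest-centroid threshold on the single feature."""
    thr = (x[train][y[train]].mean() + x[train][~y[train]].mean()) / 2
    return np.mean((x[test] > thr) == y[test])

perm = rng.permutation(n)
acc_random = fit_eval(perm[: n // 2], perm[n // 2 :])            # random split
acc_chrono = fit_eval(np.arange(n // 2), np.arange(n // 2, n))   # past -> future
print(f"random split accuracy       : {acc_random:.2f}")
print(f"chronological split accuracy: {acc_chrono:.2f}")
```

The chronological estimate is the honest one for a deployed filter; the gap between the two numbers is exactly the kind of degradation the review quantifies.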