Quantum noise protects quantum classifiers against adversaries
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies. However, noise has often played beneficial roles, from enhancing weak signals in stochastic resonance to protecting the privacy of data in differential privacy. It is then natural to ask: can we harness quantum noise in ways that benefit quantum computing? An important current direction for quantum computing is its application to machine learning, such as classification problems. One outstanding problem in machine learning for classification is its sensitivity to adversarial examples: small, imperceptible perturbations of the original data that cause otherwise highly accurate classifiers to misclassify the perturbed inputs. They can also be regarded as worst-case perturbations by unknown noise sources. We show that by taking advantage of depolarization noise in quantum circuits for classification, a robustness bound against adversaries can be derived, where the robustness improves with increasing noise. This robustness property is intimately connected with an important security concept called differential privacy, which can be extended to quantum differential privacy. For the protection of quantum data, this quantum protocol can be used against the most general adversaries. Furthermore, we show that robustness in the classical case can be sensitive to the details of the classification model, whereas in the quantum case those details are absent, providing a potential quantum advantage for classical data as well. This opens the opportunity to explore other ways in which quantum noise can be used in our favor, and to identify ways quantum algorithms can be helpful that are distinct from quantum speedups.
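To make the mechanism concrete, here is a minimal numpy sketch (not the authors' implementation) of why depolarization noise confers robustness: the channel contracts every pair of input states toward the maximally mixed state, so the classifier's output gap between a clean and a perturbed input shrinks as the noise strength p grows.

```python
import numpy as np

def depolarize(rho, p):
    """Depolarizing channel of strength p acting on density matrix rho."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

def bloch_state(theta):
    """Pure single-qubit state at polar angle theta on the Bloch sphere."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(psi, psi)

rho, sigma = bloch_state(0.30), bloch_state(0.35)  # clean vs. perturbed input
proj = np.diag([1.0, 0.0])                         # measurement deciding the class

for p in (0.0, 0.5, 0.9):
    gap = abs(np.trace(proj @ depolarize(rho, p))
              - np.trace(proj @ depolarize(sigma, p)))
    print(f"p={p:.1f}  output gap = {gap:.4f}")    # gap scales as (1 - p)
```

The shrinking gap is the differential-privacy-style guarantee in miniature: no measurement can distinguish neighbouring inputs much better than the noise allows.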
Towards quantum enhanced adversarial robustness in machine learning
Machine learning algorithms are powerful tools for data-driven tasks such as
image classification and feature detection; however, their vulnerability to
adversarial examples - input samples manipulated to fool the algorithm -
remains a serious challenge. The integration of machine learning with quantum
computing has the potential to yield tools offering not only better accuracy
and computational efficiency, but also superior robustness against adversarial
attacks. Indeed, recent work has employed quantum mechanical phenomena to
defend against adversarial attacks, spurring the rapid development of the field
of quantum adversarial machine learning (QAML) and potentially yielding a new
source of quantum advantage. Despite promising early results, there remain
challenges towards building robust real-world QAML tools. In this review we
discuss recent progress in QAML and identify key challenges. We also suggest
future research directions which could determine the route to practicality for
QAML approaches as quantum computing hardware scales up and noise levels are
reduced.
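As a point of reference for the attacks this review surveys, below is a toy sketch of the classical fast gradient sign method (FGSM), one standard way adversarial examples are generated. The logistic model and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM on a logistic model: perturb x by eps in the direction that
    increases the log-loss for the true label y in {+1, -1}."""
    margin = y * (w @ x + b)
    grad_x = -y * w / (1 + np.exp(margin))  # gradient of log-loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1
x, y = rng.normal(size=8), 1.0
x_adv = fgsm(x, w, b, y, eps=0.3)
print("clean score:", w @ x + b, " adversarial score:", w @ x_adv + b)
```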
Benchmarking Adversarially Robust Quantum Machine Learning at Scale
Machine learning (ML) methods such as artificial neural networks are rapidly
becoming ubiquitous in modern science, technology and industry. Despite their
accuracy and sophistication, neural networks can be easily fooled by carefully
designed malicious inputs known as adversarial attacks. While such
vulnerabilities remain a serious challenge for classical neural networks, the
extent of their existence is not fully understood in the quantum ML setting. In
this work, we benchmark the robustness of quantum ML networks, such as quantum
variational classifiers (QVC), at scale by performing rigorous training for
both simple and complex image datasets and through a variety of high-end
adversarial attacks. Our results show that QVCs offer a notably enhanced
robustness against classical adversarial attacks by learning features which are
not detected by the classical neural networks, indicating a possible quantum
advantage for ML tasks. Remarkably, the converse is not true: attacks on
quantum networks are also capable of deceiving classical neural networks. By
combining quantum and classical network outcomes, we propose a novel
adversarial attack detection technology. Traditionally, quantum advantage
in ML systems has been sought through increased accuracy or algorithmic
speed-up, but our work has revealed the potential for a new kind of quantum
advantage through superior robustness of ML models, whose practical realisation
will address serious security concerns and reliability issues of ML algorithms
employed in a myriad of applications including autonomous vehicles,
cybersecurity, and surveillance robotic systems.
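The detection idea can be paraphrased as a simple decision rule; the sketch below is a hypothetical rendering (the function name and threshold are mine, not the authors'): since classical attacks transfer poorly to QVCs, a large disagreement between the classical and quantum classifiers' output distributions is flagged as evidence of an adversarial input.

```python
def flag_adversarial(classical_probs, quantum_probs, threshold=0.5):
    """Flag an input when the two classifiers' predicted class distributions
    differ by more than `threshold` in total variation distance."""
    total_variation = 0.5 * sum(abs(c - q)
                                for c, q in zip(classical_probs, quantum_probs))
    return total_variation > threshold

# The classifiers agree on a clean input but diverge on a perturbed one.
print(flag_adversarial([0.90, 0.10], [0.85, 0.15]))  # False
print(flag_adversarial([0.05, 0.95], [0.80, 0.20]))  # True
```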
Universal adversarial perturbations for multiple classification tasks with quantum classifiers
Quantum adversarial machine learning is an emerging field that studies the
vulnerability of quantum learning systems against adversarial perturbations and
develops possible defense strategies. Quantum universal adversarial
perturbations are small perturbations, which can make different input samples
into adversarial examples that may deceive a given quantum classifier. This
aspect has rarely been explored but is worth investigating, because universal
perturbations might greatly simplify malicious attacks,
causing unexpected devastation to quantum machine learning models. In this
paper, we take a step forward and explore the quantum universal perturbations
in the context of heterogeneous classification tasks. In particular, we find
that quantum classifiers that achieve almost state-of-the-art accuracy on two
different classification tasks can both be conclusively deceived by a single
carefully crafted universal perturbation. This result is explicitly
demonstrated with well-designed quantum continual learning models that use the
elastic weight consolidation method to avoid catastrophic forgetting, as well as
real-life heterogeneous datasets from hand-written digits and medical MRI
images. Our results provide a simple and efficient way to generate universal
perturbations on heterogeneous classification tasks and thus would provide
valuable guidance for future quantum learning technologies.
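For intuition, a universal perturbation can be built by ascending the loss summed over many samples while staying inside an eps-ball. The numpy sketch below is a simplified classical analogue of such a procedure, assuming a caller-supplied gradient oracle; it is not the paper's quantum construction.

```python
import numpy as np

def universal_perturbation(xs, grad_fn, eps, steps=50, lr=0.1):
    """Build one perturbation v that (approximately) misleads the model on
    every sample: ascend the summed loss gradient, then clip to an eps-ball.
    grad_fn(x) must return the loss gradient with respect to the input."""
    v = np.zeros_like(xs[0])
    for _ in range(steps):
        g = sum(grad_fn(x + v) for x in xs)
        v = np.clip(v + lr * np.sign(g), -eps, eps)
    return v

# Toy usage: a fixed linear "model" whose loss gradient is constant.
w = np.array([0.5, -1.0, 0.25])
xs = [np.zeros(3), np.ones(3)]
v = universal_perturbation(xs, grad_fn=lambda x: w, eps=0.2)
print(v)  # each coordinate saturates at +/-0.2, following the sign of w
```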
Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning
Quantum machine learning (QML) is emerging as an application of quantum
computing with the potential to deliver quantum advantage, but its realisation
for practical applications remains impeded by challenges. Amongst those, a key
barrier is the computationally expensive task of encoding classical data into a
quantum state, which could erase any prospective speed-ups over classical
algorithms. In this work, we implement methods for the efficient preparation of
quantum states representing encoded image data using variational, genetic and
matrix product state based algorithms. Our results show that these methods can
approximately prepare states to a level suitable for QML using circuits two
orders of magnitude shallower than a standard state preparation implementation,
obtaining drastic savings in circuit depth and gate count without unduly
sacrificing classification accuracy. Additionally, the QML models trained and
evaluated on approximately encoded data display an increased robustness to
adversarially generated input data perturbations. This partial alleviation of
adversarial vulnerability, made possible by the "drowning out" of adversarial
perturbations while retaining the meaningful large-scale features of the data,
constitutes a considerable benefit for approximate state preparation in
addition to lessening the requirements of the quantum hardware. Our results,
based on simulations and experiments on IBM quantum devices, highlight a
promising pathway for the future implementation of accurate and robust QML
models on complex datasets relevant for practical applications, bringing the
possibility of NISQ-era QML advantage closer to reality.
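The essence of the matrix-product-state route to approximate encoding is that small Schmidt coefficients can be discarded with little loss of fidelity, which in turn reduces the entanglement a preparation circuit must generate. As a stand-in for a full MPS compression across every bond, the sketch below truncates a single bipartition with numpy's SVD; the data and bond dimensions are illustrative assumptions.

```python
import numpy as np

def truncated_state(amplitudes, chi):
    """Approximate an n-qubit amplitude-encoded state by keeping only the chi
    largest Schmidt coefficients across the middle bipartition."""
    n = int(np.log2(amplitudes.size))
    m = amplitudes.reshape(2 ** (n // 2), -1)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    s[chi:] = 0.0                        # drop small entanglement contributions
    approx = (u * s) @ vt
    approx /= np.linalg.norm(approx)     # renormalise the state
    return approx.ravel()

data = np.random.default_rng(1).random(256)  # stand-in for 16x16 image pixels
psi = data / np.linalg.norm(data)
for chi in (1, 2, 8):
    print(chi, abs(psi @ truncated_state(psi, chi)) ** 2)  # fidelity vs. chi
```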
Quantum Fair Machine Learning
In this paper, we inaugurate the field of quantum fair machine learning. We
undertake a comparative analysis of differences and similarities between
classical and quantum fair machine learning algorithms, specifying how the
unique features of quantum computation alter measures, metrics and remediation
strategies when quantum algorithms are subject to fairness constraints. We
present the first results in quantum fair machine learning by demonstrating the
use of Grover's search algorithm to satisfy statistical parity constraints
imposed on quantum algorithms. We provide lower bounds on the iterations needed
to achieve such statistical parity within epsilon-tolerance. We extend
canonical Lipschitz-conditioned individual fairness criteria to the quantum
setting using quantum metrics. We examine the consequences for typical measures
of fairness in the machine learning context when quantum information processing and
quantum data are involved. Finally, we propose open questions and research
programmes for this new field of interest to researchers in computer science,
ethics and quantum computation.
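For orientation, the standard Grover iteration count for amplifying M marked items out of N scales as (pi/4)*sqrt(N/M). The paper derives its own lower bound for reaching statistical parity within epsilon-tolerance, so the snippet below only illustrates the generic scaling, not that bound.

```python
import math

def grover_iterations(n_items, n_marked):
    """Near-optimal Grover iteration count: k = round(pi/(4*theta) - 1/2),
    where theta = asin(sqrt(M/N))."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return max(1, round(math.pi / (4 * theta) - 0.5))

for n in (16, 256, 4096):
    print(n, grover_iterations(n, n_marked=1))  # grows like sqrt(N)
```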
A Privacy-Preserving Outsourced Data Model in Cloud Environment
Nowadays, more and more machine learning services, such as medical diagnosis,
online fraud detection, and email spam filtering, are provided by cloud
computing. The cloud service provider collects the data from
the various owners to train or classify the machine learning system in the
cloud environment. However, multiple data owners may not fully trust the cloud
platform, which is operated by a third party. Therefore, data security and privacy
problems are among the critical hindrances to using machine learning tools,
particularly with multiple data owners. In addition, unauthorized entities can
observe statistics of the input data and infer the machine learning model
parameters. Therefore, a privacy-preserving model is proposed, which protects
the privacy of the data without compromising machine learning efficiency. In
order to protect the data of data owners, the epsilon-differential privacy is
used, and fog nodes are used to address the problem of the lower bandwidth and
latency in this proposed scheme. The noise, produced by the
epsilon-differential privacy mechanism, is injected into the data at the data
owner's site so that the owner's data is protected before it leaves. Fog nodes
collect the noise-added data from the data owners and transfer it to the cloud
platform for storage, computation, and classification tasks.
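A minimal sketch of the Laplace mechanism as the abstract describes it, with noise added at the data owner's site before the record reaches a fog node; the sensitivity and epsilon values here are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Epsilon-differentially-private release: add Laplace noise with scale
    sensitivity/epsilon before the record leaves the data owner's site."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
record = 57.0                                  # e.g. a patient's age
noisy = laplace_mechanism(record, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # the fog node only ever sees this perturbed value
```

Smaller epsilon means larger noise and stronger privacy, at the cost of classification accuracy, which is the trade-off the scheme balances.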
Verifying Fairness in Quantum Machine Learning
Due to the beyond-classical capability of quantum computing, quantum machine
learning is applied independently or embedded in classical models for decision
making, especially in the field of finance. Fairness and other ethical issues
are often one of the main concerns in decision making. In this work, we define
a formal framework for the fairness verification and analysis of quantum
machine learning decision models, where we adopt one of the most popular
notions of fairness in the literature, based on the intuition that any two
similar individuals must be treated similarly, so the decision is unbiased. We show that
quantum noise can improve fairness and develop an algorithm to check whether a
(noisy) quantum machine learning model is fair. In particular, this algorithm
can find bias kernels of quantum data (encoding individuals) during checking.
These bias kernels generate infinitely many bias pairs for investigating the
unfairness of the model. Our algorithm is designed based on a highly efficient
data structure -- Tensor Networks -- and implemented on Google's TensorFlow
Quantum. The utility and effectiveness of our algorithm are confirmed by the
experimental results, including income prediction and credit scoring on
real-world data, for a class of random (noisy) quantum decision models with 27
qubits (a 2^27-dimensional state space), tripling the qubit count (2^18 times
the state-space dimension) of the models handled by the state-of-the-art
algorithms for verifying quantum machine learning models.
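The fairness notion being verified can be phrased as a pairwise condition: inputs within trace distance delta must receive outcome distributions within total variation eps. The brute-force numpy check below illustrates that condition on a single pair; the paper's actual algorithm searches for violating bias kernels with tensor networks rather than testing pairs directly, and the function names here are mine.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = half the sum of singular values of (rho - sigma)."""
    return 0.5 * np.linalg.norm(np.linalg.svd(rho - sigma, compute_uv=False), 1)

def is_fair_on_pair(measure_probs, rho, sigma, delta, eps):
    """Check the pairwise fairness condition: states within trace distance
    delta must yield outcome distributions within total variation eps."""
    if trace_distance(rho, sigma) > delta:
        return True  # the pair is not 'similar', so no constraint applies
    p, q = measure_probs(rho), measure_probs(sigma)
    return 0.5 * np.abs(p - q).sum() <= eps

# Example decision model: measure in the computational basis.
probs = lambda rho: np.real(np.diag(rho))
rho, sigma = np.diag([0.70, 0.30]), np.diag([0.65, 0.35])
print(is_fair_on_pair(probs, rho, sigma, delta=0.1, eps=0.1))  # True
```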