6 research outputs found
Coreset Clustering on Small Quantum Computers
Many quantum algorithms for machine learning require access to classical data
in superposition. However, for many natural data sets and algorithms, the
overhead required to load the data set in superposition can erase any potential
quantum speedup over classical algorithms. Recent work by Harrow introduces a
new paradigm in hybrid quantum-classical computing to address this issue,
relying on coresets to minimize the data loading overhead of quantum
algorithms. We investigate using this paradigm to perform k-means clustering
on near-term quantum computers, by casting it as a QAOA optimization instance
over a small coreset. We compare the performance of this approach to classical
k-means clustering both numerically and experimentally on IBM Q hardware. We
are able to find data sets where coresets work well relative to random sampling
and where QAOA could potentially outperform standard k-means on a coreset.
However, finding data sets where both coresets and QAOA work well--which is
necessary for a quantum advantage over k-means on the entire data
set--appears to be challenging.
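To make the optimization target concrete: for a 2-means instance, clustering a weighted coreset amounts to finding the binary partition of the coreset points that minimizes the weighted within-cluster sum of squares, and it is this discrete objective that a QAOA circuit would approximate. The sketch below (illustrative only; function names and the brute-force search are my own, not the paper's code) evaluates that objective classically over all partitions of a small coreset:

```python
import itertools
import numpy as np

def weighted_2means_cost(points, weights, assignment):
    """Weighted 2-means cost of a binary partition of coreset points."""
    cost = 0.0
    for label in (0, 1):
        mask = assignment == label
        if not mask.any():
            continue
        w, x = weights[mask], points[mask]
        centroid = (w[:, None] * x).sum(axis=0) / w.sum()  # weighted mean
        cost += (w * ((x - centroid) ** 2).sum(axis=1)).sum()
    return cost

def brute_force_2means(points, weights):
    """Exhaustively search all 2^m partitions of an m-point coreset --
    the same discrete optimization a QAOA circuit would approximate."""
    best_assignment, best_cost = None, np.inf
    for bits in itertools.product([0, 1], repeat=len(points)):
        a = np.array(bits)
        c = weighted_2means_cost(points, weights, a)
        if c < best_cost:
            best_assignment, best_cost = a, c
    return best_assignment, best_cost
```

Because a coreset has only a handful of points, the exhaustive search is feasible here and serves as a classical baseline against which a QAOA solution of the same objective could be compared.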
Assembly of a Coreset of Earth Observation Images on a Small Quantum Computer
Satellite instruments monitor the Earth's surface day and night, and, as a result, the size of Earth observation (EO) data is dramatically increasing. Machine Learning (ML) techniques are employed routinely to analyze and process these big EO data, and one well-known ML technique is a Support Vector Machine (SVM). An SVM poses a quadratic programming problem, and quantum computers, including quantum annealers (QA) and gate-based quantum computers, promise to solve an SVM more efficiently than a conventional computer; training the SVM on a quantum computer yields a quantum SVM (qSVM) application, while training it on a conventional computer yields a classical SVM (cSVM) application. However, quantum computers cannot tackle many practical EO problems with a qSVM due to their small number of input qubits. Hence, we assembled a coreset (core of a dataset) of given EO data for training a weighted SVM on a small quantum computer, a D-Wave quantum annealer with around 5000 input quantum bits. A coreset is a small, representative weighted subset of an original dataset, small enough that, unlike the original dataset, it can be analyzed with the proposed weighted SVM on a small quantum computer. As practical data, we use synthetic data, Iris data, a Hyperspectral Image (HSI) of Indian Pine, and a Polarimetric Synthetic Aperture Radar (PolSAR) image of San Francisco. We measured the closeness between an original dataset and its coreset with a Kullback–Leibler (KL) divergence test, and, in addition, we trained a weighted SVM on our coreset data using both a D-Wave quantum annealer (D-Wave QA) and a conventional computer. Our findings show that the coreset approximates the original dataset with very small KL divergence (smaller is better), and the weighted qSVM even outperforms the weighted cSVM on the coresets for a few instances of our experiments.
As a by-product, we also present our KL divergence findings demonstrating the closeness between our original data (i.e., our synthetic data, Iris data, hyperspectral image, and PolSAR image) and the assembled coreset.
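The KL divergence test described above can be illustrated with a minimal sketch: bin a feature of the original dataset and of the weighted coreset on a shared grid, normalize both histograms into distributions, and compute D(P||Q). This is an assumption about the general form of such a test, not the paper's actual implementation; all function names here are illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(P||Q); smaller means the coreset's
    distribution is closer to the original data's distribution."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def histogram_kl(original, coreset, coreset_weights, bins=10):
    """Bin one feature of both datasets on a shared grid (edges taken
    from the original data) and compare the resulting distributions."""
    edges = np.histogram_bin_edges(original, bins=bins)
    p, _ = np.histogram(original, bins=edges)
    q, _ = np.histogram(coreset, bins=edges, weights=coreset_weights)
    return kl_divergence(p.astype(float), q.astype(float))
```

A well-assembled coreset should produce a weighted histogram close to the original data's, giving a KL divergence near zero, which matches the paper's "smaller is better" reading.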
Superstaq: Deep Optimization of Quantum Programs
We describe Superstaq, a quantum software platform that optimizes the
execution of quantum programs by tailoring to underlying hardware primitives.
For benchmarks such as the Bernstein-Vazirani algorithm and the Qubit Coupled
Cluster chemistry method, we find that deep optimization can improve program
execution performance by at least 10x compared to prevailing state-of-the-art
compilers. To highlight the versatility of our approach, we present results
from several hardware platforms: superconducting qubits (AQT @ LBNL, IBM
Quantum, Rigetti), trapped ions (QSCOUT), and neutral atoms (Infleqtion).
Across all platforms, we demonstrate new levels of performance and new
capabilities that are enabled by deeper integration between quantum programs
and the device physics of hardware.
Comment: Appearing in IEEE QCE 2023 (Quantum Week) conference
Quantum-centric Supercomputing for Materials Science: A Perspective on Challenges and Future Directions
Computational models are an essential tool for the design, characterization,
and discovery of novel materials. Hard computational tasks in materials science
stretch the limits of existing high-performance supercomputing centers,
consuming much of their simulation, analysis, and data resources. Quantum
computing, on the other hand, is an emerging technology with the potential to
accelerate many of the computational tasks needed for materials science. In
order to do that, the quantum technology must interact with conventional
high-performance computing in several ways: approximate results validation,
identification of hard problems, and synergies in quantum-centric
supercomputing. In this paper, we provide a perspective on how quantum-centric
supercomputing can help address critical computational problems in materials
science, the challenges to face in order to solve representative use cases, and
new suggested directions.
Comment: 60 pages, 14 figures; comments welcome
Coreset Clustering on Small Quantum Computers
Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets with which coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well—which is necessary for a quantum advantage over k-means on the entire data set—appears to be challenging.
Funding: National Science Foundation (U.S.), Expedition in Computing (Grants CCF-1730082/1730449); United States Department of Energy (Grants DE-SC0020289 and DE-SC0020331); National Science Foundation (U.S.) (Grant OMA-2016136 and the Q-NEXT DOE NQI Center); National Science Foundation (U.S.) (Grants Phy-1818914, 2110860); National Science Foundation (U.S.) Graduate Research Fellowship Program (Grant number 4000063445); Lester Wolfe Fellowship; Henry W. Kendall Fellowship Fund