String Comparison on a Quantum Computer Using Hamming Distance
The Hamming distance is ubiquitous in computing. Its computation gets
expensive when one needs to compare a string against many strings. Quantum
computers (QCs) may speed up the comparison.
In this paper, we extend an existing algorithm for computing the Hamming
distance. The extension can compare strings whose symbols are drawn from an
arbitrarily large alphabet (which the original algorithm could not). We implement
our extended algorithm using the QisKit framework so that it can be run by a
programmer with no prior knowledge of QCs (the code is publicly available). We
then provide four pedagogical examples: two from the field of bioinformatics
and two from the field of software engineering. We finish by discussing
resource requirements and the time horizon for QCs to become practical for
string comparison.
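The authors' QisKit implementation is released separately; as a rough illustration of the core idea only, the sketch below computes the Hamming distance between two classical bitstrings on a quantum circuit by XOR-ing paired qubits with CNOT gates and counting the 1s in the measured register. The register names, the restriction to a binary alphabet, and the AerSimulator backend are assumptions of this sketch; it does not cover the paper's extension to larger alphabets.

```python
# A minimal sketch (not the authors' released code): Hamming distance between
# two bitstrings on a quantum circuit. CNOT writes a XOR b into register b;
# measuring b and counting the 1s gives the distance.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit_aer import AerSimulator

def hamming_distance(s1: str, s2: str) -> int:
    n = len(s1)
    a = QuantumRegister(n, "a")
    b = QuantumRegister(n, "b")
    c = ClassicalRegister(n, "c")
    qc = QuantumCircuit(a, b, c)

    # Encode the classical bitstrings with X gates.
    for i, bit in enumerate(s1):
        if bit == "1":
            qc.x(a[i])
    for i, bit in enumerate(s2):
        if bit == "1":
            qc.x(b[i])

    # XOR the registers qubit by qubit, then measure the result.
    for i in range(n):
        qc.cx(a[i], b[i])
    qc.measure(b, c)

    counts = AerSimulator().run(qc, shots=1).result().get_counts()
    return next(iter(counts)).count("1")

print(hamming_distance("10110", "11100"))  # 2
```

A comparison against many strings at once would store the database in superposition rather than encoding one string at a time; that is the setting in which the potential speed-up mentioned in the abstract would apply.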
EP-PQM: Efficient Parametric Probabilistic Quantum Memory with Fewer Qubits and Gates
Machine learning (ML) classification tasks can be carried out on a quantum
computer (QC) using Probabilistic Quantum Memory (PQM) and its extension,
Parametric PQM (P-PQM), by calculating the Hamming distance between an input
pattern and a database of $r$ patterns, each containing $z$ features with $a$
distinct attributes.
For accurate computations, the features must be encoded using one-hot
encoding, which is memory-intensive for multi-attribute datasets with $a > 2$. We
can easily represent multi-attribute data more compactly on a classical
computer by replacing one-hot encoding with label encoding. However, replacing
these encoding schemes on a QC is not straightforward as PQM and P-PQM operate
at the quantum bit level.
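To make the memory trade-off concrete, the classical sketch below (hypothetical values, not the paper's quantum-bit-level implementation) contrasts one-hot and label encoding for a single feature with four possible attribute values.

```python
# Hypothetical illustration of the two encodings for one feature with a = 4
# attribute values. One-hot uses a bits; label encoding uses ceil(log2(a)) bits.
from math import ceil, log2

attributes = ["A", "C", "G", "T"]   # a = 4 distinct attribute values (example)
a = len(attributes)

def one_hot(value: str) -> str:
    # One bit per attribute value, with a single 1 marking the value present.
    return "".join("1" if v == value else "0" for v in attributes)

def label(value: str) -> str:
    # The value's index written in binary, using ceil(log2(a)) bits.
    width = ceil(log2(a))
    return format(attributes.index(value), f"0{width}b")

print(one_hot("G"))  # 0010 -> a = 4 bits
print(label("G"))    # 10   -> ceil(log2(4)) = 2 bits
```

The same gap, one qubit per attribute value versus a logarithmic number of qubits per feature, is what EP-PQM exploits at the quantum level.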
We present an enhanced P-PQM, called EP-PQM, that allows label encoding of
data stored in a PQM data structure and reduces the circuit depth of the data
storage and retrieval procedures. We show implementations for an ideal QC and a
noisy intermediate-scale quantum (NISQ) device.
Our complexity analysis shows that the EP-PQM approach requires
$O(z \log_2 a)$ qubits as opposed to $O(za)$ qubits for P-PQM. EP-PQM also
requires fewer gates, reducing gate count from $O(rza)$ to $O(rz \log_2 a)$.
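The following back-of-the-envelope sketch plugs a hypothetical dataset size into these bounds, ignoring constant factors and ancilla qubits hidden by the big-O notation; $z$, $a$, and $r$ are the feature, attribute, and pattern counts as above, and the concrete numbers are illustrative only.

```python
# Illustrative qubit scaling from the complexity bounds quoted above
# (hypothetical dataset: z = 20 features, a = 16 attribute values each).
from math import ceil, log2

def qubits_p_pqm(z: int, a: int) -> int:
    # One-hot encoding: O(za) qubits.
    return z * a

def qubits_ep_pqm(z: int, a: int) -> int:
    # Label encoding: O(z * log2(a)) qubits.
    return z * ceil(log2(a))

z, a = 20, 16
print(qubits_p_pqm(z, a))   # 320
print(qubits_ep_pqm(z, a))  # 80, i.e. 75% fewer qubits
```

The 75% figure for this made-up dataset falls inside the 48% to 77% range reported below for the five real datasets.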
For five datasets, we demonstrate that training an ML classification model
using EP-PQM requires 48% to 77% fewer qubits than P-PQM for datasets with
$a > 2$. EP-PQM reduces circuit depth in the range of 60% to 96%, depending on
the dataset. With a decomposed circuit, the depth reduction is even larger,
ranging between 94% and 99%.
EP-PQM requires less space; thus, it can train on and classify larger
datasets than previous PQM implementations on NISQ devices. Furthermore,
reducing the number of gates speeds up the classification and reduces the noise
associated with deep quantum circuits. Thus, EP-PQM brings us closer to
scalable ML on a NISQ device.