2 research outputs found

    String Comparison on a Quantum Computer Using Hamming Distance

    The Hamming distance is ubiquitous in computing. Its computation gets expensive when one needs to compare a string against many strings. Quantum computers (QCs) may speed up the comparison. In this paper, we extend an existing algorithm for computing the Hamming distance. The extension can compare strings with symbols drawn from an arbitrarily long alphabet (which the original algorithm could not). We implement our extended algorithm using the Qiskit framework so that it can be executed by a programmer without knowledge of QCs (the code is publicly available). We then provide four pedagogical examples: two from the field of bioinformatics and two from the field of software engineering. We finish by discussing resource requirements and the time horizon for QCs becoming practical for string comparison.
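    To give a flavor of the idea, here is a minimal Qiskit sketch, not the authors' extended algorithm: two bit strings are encoded into qubit registers, CNOT gates write their bitwise XOR into the second register, and the number of 1s in the measured result is the Hamming distance. The function name and the use of the Aer simulator are illustrative assumptions.

    ```python
    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator  # assumes qiskit-aer is installed

    def hamming_distance_circuit(s: str, t: str) -> QuantumCircuit:
        """Illustrative sketch: XOR two basis-state-encoded bit strings."""
        n = len(s)
        qc = QuantumCircuit(2 * n, n)
        # Encode s into qubits 0..n-1 and t into qubits n..2n-1 via X gates.
        for i, bit in enumerate(s):
            if bit == "1":
                qc.x(i)
        for i, bit in enumerate(t):
            if bit == "1":
                qc.x(n + i)
        # CNOT flips each target qubit exactly where the strings differ,
        # so the second register ends up holding s XOR t.
        for i in range(n):
            qc.cx(i, n + i)
        qc.measure(range(n, 2 * n), range(n))
        return qc

    qc = hamming_distance_circuit("1011", "1001")
    counts = AerSimulator().run(qc, shots=1).result().get_counts()
    xor_string = next(iter(counts))
    print("Hamming distance:", xor_string.count("1"))  # -> 1
    ```

    The quantum advantage discussed in the paper comes from comparing one string against many in superposition; the sketch above only shows the XOR-based distance primitive on classical basis states.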

    EP-PQM: Efficient Parametric Probabilistic Quantum Memory with Fewer Qubits and Gates

    Machine learning (ML) classification tasks can be carried out on a quantum computer (QC) using Probabilistic Quantum Memory (PQM) and its extension, Parametric PQM (P-PQM), by calculating the Hamming distance between an input pattern and a database of $r$ patterns containing $z$ features with $a$ distinct attributes. For accurate computations, the features must be encoded using one-hot encoding, which is memory-intensive for multi-attribute datasets with $a>2$. We can easily represent multi-attribute data more compactly on a classical computer by replacing one-hot encoding with label encoding. However, replacing these encoding schemes on a QC is not straightforward, as PQM and P-PQM operate at the quantum bit level. We present an enhanced P-PQM, called EP-PQM, that allows label encoding of data stored in a PQM data structure and reduces the circuit depth of the data storage and retrieval procedures. We show implementations for an ideal QC and a noisy intermediate-scale quantum (NISQ) device. Our complexity analysis shows that the EP-PQM approach requires $O\left(z \log_2(a)\right)$ qubits, as opposed to $O(za)$ qubits for P-PQM. EP-PQM also requires fewer gates, reducing the gate count from $O\left(rza\right)$ to $O\left(rz\log_2(a)\right)$. For five datasets, we demonstrate that training an ML classification model using EP-PQM requires 48% to 77% fewer qubits than P-PQM for datasets with $a>2$. EP-PQM reduces circuit depth by 60% to 96%, depending on the dataset. The depth decreases further with a decomposed circuit, with reductions ranging between 94% and 99%. EP-PQM requires less space; thus, it can train on and classify larger datasets than previous PQM implementations on NISQ devices. Furthermore, reducing the number of gates speeds up classification and reduces the noise associated with deep quantum circuits. Thus, EP-PQM brings us closer to scalable ML on a NISQ device.
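    The qubit savings follow directly from the two encodings: one-hot encoding spends $a$ qubits per feature, while label encoding spends only $\lceil \log_2(a) \rceil$. A back-of-the-envelope sketch of the counts quoted in the abstract, where the function names and the feature count $z=10$ are hypothetical choices for illustration, not values from the paper:

    ```python
    import math

    def p_pqm_qubits(z: int, a: int) -> int:
        """One-hot encoding: O(za) qubits for z features, a attributes."""
        return z * a

    def ep_pqm_qubits(z: int, a: int) -> int:
        """Label encoding: O(z log2(a)) qubits."""
        return z * math.ceil(math.log2(a))

    z = 10  # hypothetical feature count
    for a in (2, 4, 16, 64):
        saved = 1 - ep_pqm_qubits(z, a) / p_pqm_qubits(z, a)
        print(f"a={a:3d}: P-PQM {p_pqm_qubits(z, a):4d} qubits, "
              f"EP-PQM {ep_pqm_qubits(z, a):3d} ({saved:.0%} fewer)")
    ```

    For $a=2$ the two encodings coincide (one bit per feature), which is why the reported 48% to 77% savings apply only to datasets with $a>2$.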