    Quantum Bootstrap Aggregation

    We set out a strategy for quantizing attribute bootstrap aggregation to enable variance-resilient quantum machine learning. To do so, we utilise the linear decomposability of decision boundary parameters in the Rebentrost et al. Support Vector Machine to guarantee that stochastic measurement of the output quantum state will give rise to an ensemble decision without destroying the superposition over projective feature subsets induced within the chosen SVM implementation. We achieve a linear performance advantage, O(d), in addition to the existing O(log(n)) advantages of quantization as applied to Support Vector Machines. The approach extends to any form of quantum learning that gives rise to linear decision boundaries.
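    The classical idea being quantized here is attribute bagging over linear classifiers: because each base learner's decision boundary is linear, the ensemble decision can be folded into a single averaged weight vector, which is the "linear decomposability" the abstract refers to. A minimal classical sketch follows; the function name, subset fraction, and the use of scikit-learn's LinearSVC are illustrative assumptions, not the paper's implementation.

```python
# Attribute (feature-subset) bagging with linear SVMs. Averaging the linear
# boundaries yields a single ensemble boundary -- the classical analogue of
# the superposed ensemble decision described in the abstract.
import numpy as np
from sklearn.svm import LinearSVC

def attribute_bagged_svm(X, y, n_estimators=10, subset_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    k = max(1, int(subset_frac * d))
    w_sum, b_sum = np.zeros(d), 0.0
    for _ in range(n_estimators):
        feats = rng.choice(d, size=k, replace=False)  # projective feature subset
        clf = LinearSVC().fit(X[:, feats], y)
        w = np.zeros(d)
        w[feats] = clf.coef_.ravel()                  # embed back into full space
        w_sum += w
        b_sum += clf.intercept_[0]
    # Mean of the linear boundaries acts as the ensemble decision boundary.
    return w_sum / n_estimators, b_sum / n_estimators

# Prediction with the averaged boundary: np.sign(X @ w + b)
```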

    Quantum error-correcting output codes

    Quantum machine learning is the aspect of quantum computing concerned with the design of algorithms capable of generalized learning from labeled training data by effectively exploiting quantum effects. Error-correcting output codes (ECOC) are a standard setting in machine learning for efficiently rendering the collective outputs of a binary classifier, such as the support vector machine, as a multi-class decision procedure. Appropriate choice of error-correcting codes further enables incorrect individual classification decisions to be effectively corrected in the composite output. In this paper, we propose an appropriate quantization of the ECOC process, based on the quantum support vector machine. We show that, in addition to the usual benefits of quantizing machine learning, this technique leads to an exponential reduction in the number of logic gates required for effective correction of classification error.
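    For readers unfamiliar with ECOC, a minimal classical sketch clarifies the process the paper quantizes: a codeword matrix turns a K-class problem into L binary problems, and decoding picks the class whose codeword is nearest in Hamming distance, which is what corrects individual classifier errors. The function names and the linear-kernel SVC are illustrative assumptions.

```python
# Classical ECOC with binary SVMs: train one classifier per code bit,
# decode by nearest codeword in Hamming distance.
import numpy as np
from sklearn.svm import SVC

def train_ecoc(X, y, codebook):
    # codebook: (K, L) array of +/-1; row k is class k's codeword.
    # y: integer class indices in 0..K-1.
    clfs = []
    for bit in range(codebook.shape[1]):
        target = codebook[y, bit]          # relabel each sample by its code bit
        clfs.append(SVC(kernel="linear").fit(X, target))
    return clfs

def predict_ecoc(X, clfs, codebook):
    bits = np.column_stack([c.predict(X) for c in clfs])      # (n, L) of +/-1
    ham = (bits[:, None, :] != codebook[None, :, :]).sum(-1)  # Hamming distances
    return ham.argmin(axis=1)                                 # nearest codeword
```

    Codes with larger minimum Hamming distance between rows tolerate more individual misclassifications, which is the property the paper's quantization exploits.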

    Quantum computing for pattern classification

    It is well known that for certain tasks, quantum computing outperforms classical computing. A growing number of contributions try to use this advantage to improve or extend classical machine learning algorithms with methods from quantum information theory. This paper gives a brief introduction to quantum machine learning using the example of pattern classification. We introduce a quantum pattern classification algorithm that draws on Trugenberger's proposal for measuring the Hamming distance on a quantum computer (C. A. Trugenberger, Phys. Rev. Lett. 87, 2001) and discuss its advantages using handwritten digit recognition on the MNIST database.
    Comment: 14 pages, 3 figures, presented at the 13th Pacific Rim International Conference on Artificial Intelligence
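    A classical counterpart of the Hamming-distance classifier helps fix ideas: each stored pattern votes for its class with a weight that decays with its Hamming distance to the input. The cosine-squared weighting below loosely mirrors the measurement probabilities in Trugenberger's scheme; treat it as an analogy, not the paper's algorithm.

```python
# Distance-weighted voting over stored binary patterns; the cos^2 weight
# is a classical stand-in for the amplitude-based weighting obtained when
# the Hamming distance is measured on a quantum register.
import numpy as np

def hamming_classify(x, patterns, labels, n_classes):
    # patterns: (m, n) binary array; x: (n,) binary vector; labels: (m,) ints.
    n = patterns.shape[1]
    dist = (patterns != x).sum(axis=1)              # Hamming distances
    weight = np.cos(np.pi * dist / (2 * n)) ** 2    # distance-decaying vote
    scores = np.bincount(labels, weights=weight, minlength=n_classes)
    return scores.argmax()
```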

    Quantum K-nearest neighbor classification algorithm based on Hamming distance

    The K-nearest neighbor classification algorithm is one of the most basic algorithms in machine learning; it determines a sample's category from the similarity between samples. In this paper, we propose a quantum K-nearest neighbor classification algorithm based on Hamming distance. In this algorithm, quantum computation is first used to obtain the Hamming distances in parallel. Then, a core sub-algorithm for finding the minimum of an unordered integer sequence is presented to locate the minimum distance. Based on these two sub-algorithms, the whole quantum framework of the K-nearest neighbor classification algorithm is presented. Finally, a brief analysis of its time complexity shows that the proposed algorithm achieves a quadratic speedup.
    Comment: 8 pages, 5 figures
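    The two sub-algorithms have direct classical skeletons, sketched below: computing all distances (vectorized here; done in superposition in the quantum case) and searching for the smallest ones (a Grover-style minimum search in the quantum case, which is where the quadratic speedup arises). Names and the choice of k are illustrative.

```python
# Classical skeleton of KNN over Hamming distance.
import numpy as np

def knn_hamming(x, train, labels, k=3):
    # train: (m, n) binary array; x: (n,) binary vector; labels: (m,) ints.
    dist = (train != x).sum(axis=1)            # sub-algorithm 1: all distances
    nearest = np.argpartition(dist, k)[:k]     # sub-algorithm 2: k smallest
    return np.bincount(labels[nearest]).argmax()
```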

    A quantum-inspired version of the nearest mean classifier

    We introduce a framework for describing standard classification problems in the mathematical language of quantum states. In particular, we provide a one-to-one correspondence between real objects and pure density operators. This correspondence enables us (1) to represent the nearest mean classifier (NMC) in terms of quantum objects, and (2) to introduce a quantum-inspired version of the NMC, called the quantum classifier (QC). By comparing the QC with the NMC on different datasets, we show how the former, even when run on a classical computer, can provide beneficial additional information relative to the latter.
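    A hedged sketch of the general idea: encode each real point as a pure density operator (here via inverse stereographic projection onto the Bloch sphere, one encoding used in this line of work, assumed rather than taken from the paper), average the encodings per class into a "quantum centroid" (generally a mixed state), and classify by trace distance. The mixedness of the centroid is the kind of extra information a plain mean vector lacks.

```python
# Quantum-inspired nearest mean classification of 2-D points.
import numpy as np

def encode(p):
    # Inverse stereographic projection of (x, y) to a Bloch vector, then to
    # the pure density operator rho = (I + b . sigma) / 2.
    x, y = p
    r2 = x * x + y * y
    bx, by, bz = 2 * x / (r2 + 1), 2 * y / (r2 + 1), (r2 - 1) / (r2 + 1)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2, dtype=complex) + bx * sx + by * sy + bz * sz)

def trace_distance(a, b):
    # Half the sum of absolute eigenvalues of the Hermitian difference.
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

def qc_classify(p, class_points):
    # class_points: list of (m_k, 2) arrays, one per class.
    centroids = [np.mean([encode(q) for q in pts], axis=0) for pts in class_points]
    rho = encode(p)
    return int(np.argmin([trace_distance(rho, c) for c in centroids]))
```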

    Des-q: a quantum algorithm to construct and efficiently retrain decision trees for regression and binary classification

    Decision trees are widely used in machine learning due to their simplicity of construction and their interpretability. However, as data sizes grow, traditional methods for constructing and retraining decision trees become increasingly slow, scaling polynomially with the number of training examples. In this work, we introduce a novel quantum algorithm, named Des-q, for constructing and retraining decision trees in regression and binary classification tasks. Assuming the data stream produces small increments of new training examples, we demonstrate that our Des-q algorithm significantly reduces the time required for tree retraining, achieving poly-logarithmic time complexity in the number of training examples, even accounting for the time needed to load the new examples into quantum-accessible memory. Our approach involves building a decision tree algorithm that performs k-piecewise linear tree splits at each internal node. These splits simultaneously generate multiple hyperplanes, dividing the feature space into k distinct regions. To determine the k suitable anchor points for these splits, we develop an efficient quantum-supervised clustering method, building upon the q-means algorithm of Kerenidis et al. Des-q first efficiently estimates each feature weight using a novel quantum technique to estimate the Pearson correlation. Subsequently, we employ weighted distance estimation to cluster the training examples into k disjoint regions and then proceed to expand the tree using the same procedure. We benchmark the performance of the simulated version of our algorithm against state-of-the-art classical decision trees for regression and binary classification on multiple data sets with numerical features. Further, we show that the proposed algorithm exhibits similar performance to the state-of-the-art decision tree while significantly speeding up the periodic tree retraining.
    Comment: 48 pages, 4 figures, 4 tables
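    A classical analogue of one Des-q node split clarifies the pipeline the abstract describes: weight each feature by its Pearson correlation with the label, then cluster the examples into k regions under a feature-weighted distance (the paper performs both steps with quantum routines: correlation estimation and q-means-style clustering). The function and parameter names below are illustrative assumptions.

```python
# One node split: correlation-based feature weights + weighted k-means
# to find k anchor points, i.e. the k child regions of the node.
import numpy as np

def node_split(X, y, k=3, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    # Feature weights from |Pearson correlation| with the target.
    w = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    w = np.nan_to_num(w) + 1e-12
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):                       # weighted k-means iterations
        d = ((X[:, None, :] - centers[None]) ** 2 * w).sum(-1)
        assign = d.argmin(axis=1)                # nearest anchor per example
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return centers, assign   # k anchor points and the region of each example
```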