328 research outputs found

    Decoding Small Surface Codes with Feedforward Neural Networks

    Full text link
    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural networks can generalize to inputs not provided during training and can reach decoding performance similar to or better than that of previous algorithms. We conclude by discussing the time a feedforward neural network decoder would require in hardware.
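
    To make the reduction to classification concrete, the sketch below trains a small feedforward network (plain numpy, a hypothetical toy setup) to map syndromes to corrections for a distance-3 repetition code; the paper's surface-code decoder and training setup are of course more involved.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample(p=0.1, n=4096):
        """Bit-flip errors on 3 data qubits and the two resulting parity checks."""
        errors = (rng.random((n, 3)) < p).astype(float)
        syndromes = np.stack([errors[:, 0] != errors[:, 1],
                              errors[:, 1] != errors[:, 2]], axis=1).astype(float)
        # Class 0..2 = "flip that qubit", class 3 = "do nothing".
        labels = np.where(errors.sum(axis=1) == 0, 3, errors.argmax(axis=1))
        return syndromes, labels

    X, y = sample()
    Y = np.eye(4)[y]                                   # one-hot targets

    # Two-layer feedforward classifier trained with batch gradient descent.
    W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)
    for step in range(2000):
        h = np.tanh(X @ W1 + b1)
        logits = h @ W2 + b2
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        g_logits = (probs - Y) / len(X)                # softmax cross-entropy gradient
        g_h = g_logits @ W2.T * (1 - h ** 2)
        W2 -= 0.5 * h.T @ g_logits; b2 -= 0.5 * g_logits.sum(axis=0)
        W1 -= 0.5 * X.T @ g_h;      b1 -= 0.5 * g_h.sum(axis=0)

    Xt, yt = sample()
    pred = (np.tanh(Xt @ W1 + b1) @ W2 + b2).argmax(axis=1)
    print("held-out decoding accuracy:", (pred == yt).mean())
    ```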

    Advances in quantum machine learning

    Get PDF
    Here we discuss advances in the field of quantum machine learning. The following document offers a hybrid discussion, both reviewing the field as it currently stands and suggesting directions for further research. We include both algorithms and experimental implementations in the discussion. The field's outlook is generally positive, showing significant promise. However, we believe there are appreciable hurdles to overcome before one can claim that it is a primary application of quantum computation. (38 pages, 17 figures)

    Supervised learning with a quantum classifier using a multi-level system

    Full text link
    We propose a quantum classifier that can classify data under the supervised learning scheme using a quantum feature space. The input feature vectors are encoded in a single quNit (an N-level quantum system), as opposed to the more commonly used entangled multi-qubit systems. For training we use the widely used quantum variational algorithm, a hybrid quantum-classical algorithm in which the forward part of the computation is performed on quantum hardware while the feedback part is carried out on a classical computer. We introduce "single-shot training" in our scheme, in which all input samples belonging to the same class are used to train the classifier simultaneously. This significantly speeds up the training procedure and provides an advantage over classical machine learning classifiers. We demonstrate successful classification of popular benchmark datasets with our quantum classifier and compare its performance with that of some classical machine learning classifiers. We also show that the number of training parameters in our classifier is significantly smaller than in the classical classifiers. (Preliminary version; comments are welcome.)
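
    As a rough illustration of the hybrid loop described above, the following numpy sketch optimizes a parameterized unitary on a single 4-level system ("quNit") against a toy dataset; the ansatz, amplitude encoding, and finite-difference optimizer are assumptions made for illustration, not the circuit or training scheme of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d = 4                                              # levels of the quNit

    def expm_herm(t, G):
        """exp(-1j * t * G) for Hermitian G via eigendecomposition."""
        w, V = np.linalg.eigh(G)
        return (V * np.exp(-1j * t * w)) @ V.conj().T

    def encode(x):
        """Amplitude-encode a length-d feature vector as a normalized qudit state."""
        v = np.asarray(x, dtype=complex)
        return v / np.linalg.norm(v)

    # Three random Hermitian generators define a toy variational ansatz.
    gens = [(g + g.T) / 2 for g in rng.normal(size=(3, d, d))]

    def predict(theta, x):
        """Probability of measuring level 0 after the parameterized evolution."""
        psi = encode(x)
        for t, G in zip(theta, gens):
            psi = expm_herm(t, G) @ psi
        return np.abs(psi[0]) ** 2

    def loss(theta, X, y):
        return np.mean([(predict(theta, xi) - yi) ** 2 for xi, yi in zip(X, y)])

    # Toy binary data: class 1 if the first half of the feature vector dominates.
    X = np.abs(rng.normal(size=(40, d))) + 0.1
    y = (X[:, :2].sum(axis=1) > X[:, 2:].sum(axis=1)).astype(float)

    theta = rng.normal(size=3)
    print("initial loss:", round(loss(theta, X, y), 4))
    for step in range(200):                            # finite-difference gradients
        grad = np.array([(loss(theta + 0.01 * e, X, y) - loss(theta - 0.01 * e, X, y)) / 0.02
                         for e in np.eye(3)])
        theta -= 0.2 * grad
    print("trained loss:", round(loss(theta, X, y), 4))
    ```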

    Self-Supervised Graph Transformer on Large-Scale Molecular Data

    Full text link
    How to obtain informative representations of molecules is a crucial prerequisite in AI-driven drug design and discovery. Recent research abstracts molecules as graphs and employs Graph Neural Networks (GNNs) for molecular representation learning. Nevertheless, two issues impede the use of GNNs in real scenarios: (1) insufficient labeled molecules for supervised training; (2) poor generalization capability to newly synthesized molecules. To address both, we propose a novel framework, GROVER, which stands for Graph Representation frOm self-superVised mEssage passing tRansformer. With carefully designed self-supervised tasks at the node, edge, and graph levels, GROVER can learn rich structural and semantic information about molecules from enormous amounts of unlabelled molecular data. Moreover, to encode such complex information, GROVER integrates Message Passing Networks into a Transformer-style architecture to deliver a class of more expressive molecular encoders. The flexibility of GROVER allows it to be trained efficiently on large-scale molecular datasets without requiring any supervision, thus sidestepping the two issues mentioned above. We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning to date. We then leverage the pre-trained GROVER for molecular property prediction with task-specific fine-tuning, where we observe a large improvement (more than 6% on average) over the current state-of-the-art methods on 11 challenging benchmarks. The insight we gained is that well-designed self-supervision losses and highly expressive pre-trained models hold significant potential for boosting performance. (17 pages, 7 figures)
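
    A stripped-down illustration of the node-level self-supervised idea (mask an atom and recover its type from message-passing embeddings) is sketched below in plain numpy; the toy molecule, two-round mean-aggregation encoder, and readout-only training are assumptions made for brevity and are far simpler than GROVER's Transformer-style architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy molecule (the heavy atoms of ethanol, C-C-O), one-hot atom types [C, O].
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    X = np.array([[1., 0.],
                  [1., 0.],
                  [0., 1.]])

    def encode(A, X, W1, W2):
        """Two rounds of mean-aggregation message passing with ReLU."""
        A_hat = A + np.eye(len(A))
        A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
        H = np.maximum(A_norm @ X @ W1, 0)
        return np.maximum(A_norm @ H @ W2, 0)

    def softmax(z):
        e = np.exp(z - z.max()); return e / e.sum()

    W1 = rng.normal(0, 0.5, (2, 8))
    W2 = rng.normal(0, 0.5, (8, 8))
    W_out = rng.normal(0, 0.5, (8, 2))                 # readout predicting atom type

    masked = 2                                         # hide the oxygen's type
    X_masked = X.copy(); X_masked[masked] = 0.0
    h = encode(A, X_masked, W1, W2)[masked]            # embedding of the masked atom

    for step in range(500):                            # train only the readout, for brevity
        probs = softmax(h @ W_out)
        W_out -= 0.5 * np.outer(h, probs - X[masked])  # cross-entropy gradient step

    print("recovered atom-type distribution [C, O]:", np.round(softmax(h @ W_out), 3))
    ```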

    Neural ensemble decoding for topological quantum error-correcting codes

    Full text link
    Topological quantum error-correcting codes are a promising candidate for building fault-tolerant quantum computers. Decoding topological codes optimally, however, is known to be a computationally hard problem. Various decoders have been proposed that achieve approximately optimal error thresholds, and given practical constraints, no single decoder is an obvious best choice. In this paper, we introduce a framework that can combine arbitrary decoders for any given code to significantly reduce the logical error rates. We rely on the crucial observation that two different decoding techniques, while possibly having similar logical error rates, can perform differently on the same error syndrome. We use machine learning techniques to assign a given error syndrome to the decoder that is most likely to decode it correctly. We apply our framework to an ensemble of Minimum-Weight Perfect Matching (MWPM) and Hard-Decision Re-normalization Group (HDRG) decoders for the surface code under the depolarizing noise model. Our simulations show improvements of 38.4%, 14.6%, and 7.1% over the pseudo-threshold of MWPM for distance-5, -7, and -9 codes, respectively. Lastly, we discuss the advantages and limitations of our framework and its applicability to other error-correcting codes. Our framework can provide a significant boost to error correction by combining the strengths of various decoders. In particular, it may allow very fast decoders with moderate error-correcting capability to be combined into a very fast ensemble decoder with high error-correcting capability. (Replaced with the published version; comments welcome.)
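
    The routing idea can be caricatured in a few lines: train a classifier to predict, from the syndrome alone, which decoder to trust. In the hypothetical sketch below, the "decoder B tends to win on dense syndromes" rule used to generate labels is purely an assumption so that there is something to learn; it is not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 5000, 12                                    # training samples, syndrome bits

    syndromes = (rng.random((n, m)) < 0.15).astype(float)
    weight = syndromes.sum(axis=1)
    # Assumed ground truth: decoder B is more likely to succeed on dense syndromes.
    labels = (rng.random(n) < np.clip(weight / m * 2.5, 0, 1)).astype(float)

    w, b = np.zeros(m), 0.0                            # logistic-regression router
    for step in range(300):
        p = 1.0 / (1.0 + np.exp(-(syndromes @ w + b)))
        w -= syndromes.T @ (p - labels) / n
        b -= (p - labels).mean()

    route_to_B = 1.0 / (1.0 + np.exp(-(syndromes @ w + b))) > 0.5
    print("fraction of syndromes routed to decoder B:", route_to_B.mean())
    ```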

    Dynamic Portfolio Selection to Counter Terrorism by using Quantum Neural Network Approach

    Get PDF
    Not only Pakistan but the whole world faces the problem of prevailing terrorist activities and attacks in many forms. Terrorism has diverse aspects, and to address this growing problem a hybrid model of quantum and classical neurons is suggested for predicting the risk and return of investments in recommended areas so as to minimize terrorism. These areas are recommended on the basis of the findings of crime analysts and professionals from other related domains after a deep analysis of the situation in the country and of terrorist activities. Identifying the areas that give rise to terrorism is a core step towards countering it. A Hopfield neural network is used to predict the best possible portfolio from the available resources. The recommended multilayer hybrid Quantum Neural Network has a hidden layer of quantum neurons, while the visible layer consists of classical neurons. With the help of the QNN, a portfolio can be selected whose risk is minimized and whose return from investments in the identified areas is maximized. Keywords: Quantum neural network, Portfolio selection, Resource allocation, Quantum back propagation, Quantum computation
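
    For the classical part of the idea, a Hopfield-style network can select a binary portfolio by minimizing a mean-variance energy with asynchronous updates. The sketch below uses made-up toy returns and covariances and a purely classical update rule; the paper's hybrid quantum-classical layers are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 8                                        # candidate investment areas
    returns = rng.uniform(0.05, 0.20, n)         # expected returns (toy numbers)
    B = rng.normal(size=(n, n)) * 0.05
    cov = B @ B.T + 0.01 * np.eye(n)             # positive-definite risk matrix (toy)
    lam = 2.0                                    # risk-aversion weight

    def energy(x):
        """Mean-variance energy: risk penalty minus expected return."""
        return lam * x @ cov @ x - returns @ x

    x = rng.integers(0, 2, n).astype(float)
    changed = True
    while changed:                               # asynchronous updates until stable
        changed = False
        for i in range(n):
            # Energy difference between x[i] = 1 and x[i] = 0, all else fixed.
            others = x.copy(); others[i] = 0.0
            delta = lam * (cov[i, i] + 2 * cov[i] @ others) - returns[i]
            new_val = 1.0 if delta < 0 else 0.0
            if new_val != x[i]:
                x[i] = new_val
                changed = True

    print("selected areas:", np.nonzero(x)[0], " energy:", round(energy(x), 4))
    ```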

    Quantum perceptron over a field and neural network architecture selection in a quantum computer

    Full text link
    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality, and the models and algorithms proposed in this work cannot be efficiently simulated on actual (classical) computers. QPF is a direct generalization of the classical perceptron and resolves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named the Superposition-based Architecture Learning algorithm (SAL), which optimizes both the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures in time linear in the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained through the use of quantum parallelism and a non-linear quantum operator.
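
    For reference, the classical perceptron that QPF is said to directly generalize fits in a few lines of numpy; nothing quantum (the superposition-based architecture search, the non-linear quantum operator) appears in this baseline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Linearly separable toy data: label = sign of (x0 + x1 - 1).
    X = rng.uniform(0, 1, (200, 2))
    y = np.where(X.sum(axis=1) > 1.0, 1, -1)

    w, b = np.zeros(2), 0.0
    for epoch in range(50):                      # classical perceptron rule
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:           # misclassified -> update
                w += yi * xi
                b += yi

    pred = np.where(X @ w + b > 0, 1, -1)
    print("training accuracy:", (pred == y).mean())
    ```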

    Learn molecular representations from large-scale unlabeled molecules for drug discovery

    Full text link
    How to produce expressive molecular representations is a fundamental challenge in AI-driven drug discovery. Graph neural networks (GNNs) have emerged as a powerful technique for modeling molecular data. However, previous supervised approaches usually suffer from a scarcity of labeled data and have poor generalization capability. Here, we propose a novel Molecular Pre-training Graph-based deep learning framework, named MPG, that learns molecular representations from large-scale unlabeled molecules. In MPG, we propose a powerful MolGNet model and an effective self-supervised strategy for pre-training the model at both the node and graph levels. After pre-training on 11 million unlabeled molecules, we find that MolGNet captures valuable chemical insights and produces interpretable representations. The pre-trained MolGNet can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of drug discovery tasks, including molecular property prediction, drug-drug interaction, and drug-target interaction, spanning 13 benchmark datasets. Our work demonstrates that MPG is a promising new approach for the drug discovery pipeline.
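
    The "one additional output layer" step can be pictured as fitting a single linear head on frozen pre-trained embeddings. In the sketch below the embeddings are random placeholders standing in for MolGNet's output and the target property is synthetic; only the mechanics of the added layer are illustrated.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_molecules, dim = 500, 64

    embeddings = rng.normal(size=(n_molecules, dim))   # stand-in for MolGNet output
    true_w = rng.normal(size=dim)
    property_values = embeddings @ true_w + 0.1 * rng.normal(size=n_molecules)

    # Ridge-regression output layer (closed form) on the frozen representations.
    lam = 1.0
    A = embeddings.T @ embeddings + lam * np.eye(dim)
    w_head = np.linalg.solve(A, embeddings.T @ property_values)

    pred = embeddings @ w_head
    rmse = np.sqrt(np.mean((pred - property_values) ** 2))
    print("training RMSE of the added output layer:", round(rmse, 4))
    ```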

    Quantum phase recognition via unsupervised machine learning

    Full text link
    The application of state-of-the-art machine learning techniques to statistical physics problems has seen a surge of interest due to their ability to discriminate phases of matter by extracting essential features of the many-body wavefunction or of the ensemble of correlators sampled in Monte Carlo simulations. Here we introduce a generalization of supervised machine learning approaches that allows phase diagrams of interacting many-body systems to be mapped out accurately without any prior knowledge, e.g. of their general topology or the number of distinct phases. To substantiate the versatility of this approach, which combines convolutional neural networks with quantum Monte Carlo sampling, we map out the phase diagrams of interacting boson and fermion models at both zero and finite temperature and show that first-order, second-order, and Kosterlitz-Thouless phase transitions can all be identified. We explicitly demonstrate that our approach is capable of identifying, without supervision, the transition to non-trivial many-body phases such as superfluids or topologically ordered phases.
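
    One concrete way to locate a transition without labelling the phases in advance is the "learning by confusion" recipe: train a classifier for every trial critical point and look for the accuracy peak. The sketch below applies that related (and much simpler) idea to toy 1D snapshots with a decision stump; it is not the convolutional network plus quantum Monte Carlo pipeline used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def snapshots(g, n=200, L=32):
        """Toy spin snapshots: ordered below g = 0.5, disordered above."""
        p_up = 0.95 if g < 0.5 else 0.5
        return np.where(rng.random((n, L)) < p_up, 1, -1)

    gs = np.linspace(0.1, 0.9, 17)
    mags = np.concatenate([snapshots(g).mean(axis=1) for g in gs])   # magnetization feature
    g_of_sample = np.repeat(gs, 200)

    def stump_accuracy(labels):
        """Best threshold classifier on the magnetization for the given labels."""
        best = 0.0
        for t in np.linspace(-1, 1, 101):
            acc = max(((mags > t) == labels).mean(), ((mags <= t) == labels).mean())
            best = max(best, acc)
        return best

    trial_points = gs[2:-2]                      # avoid trivially unbalanced splits
    accs = [stump_accuracy(g_of_sample > g_star) for g_star in trial_points]
    print("estimated transition point:",
          round(float(trial_points[int(np.argmax(accs))]), 2))
    ```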