21,766 research outputs found

    Neural Networks Architecture Evaluation in a Quantum Computer

    Full text link
    In this work, we propose a quantum algorithm to evaluate neural network architectures, named Quantum Neural Network Architecture Evaluation (QNNAE). The proposed algorithm is based on a quantum associative memory and the learning algorithm for artificial neural networks. Unlike conventional algorithms for evaluating neural network architectures, QNNAE does not depend on the initialization of weights. The algorithm has a binary output, returning 0 with probability proportional to the performance of the network, and its computational cost is equal to the computational cost of training a neural network.
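    As an illustration only (this is not the quantum procedure itself), the sketch below simulates the measurement statistics the abstract describes: a binary readout whose probability of returning 0 tracks the network's performance. The performance score, taken directly as a probability, and the shot count are hypothetical.

```python
import random

def simulate_qnnae_readout(performance: float, shots: int = 10_000) -> float:
    """Classically simulate QNNAE's binary output: each 'measurement'
    returns 0 with probability proportional to the network's performance
    (here taken directly as a probability in [0, 1])."""
    zeros = sum(1 for _ in range(shots) if random.random() < performance)
    return zeros / shots  # empirical estimate of P(outcome = 0)

# Two hypothetical architectures: the better one reads 0 more often.
print(simulate_qnnae_readout(0.92))  # ~0.92
print(simulate_qnnae_readout(0.55))  # ~0.55
```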

    SchNet: A continuous-filter convolutional neural network for modeling quantum interactions

    Get PDF
    Deep learning has the potential to revolutionize quantum chemistry, as it is ideally suited to learning representations for structured data and speeding up the exploration of chemical space. While convolutional neural networks have proven to be the first choice for image, audio and video data, the atoms in molecules are not restricted to a grid. Instead, their precise locations contain essential physical information that would be lost if discretized. Thus, we propose continuous-filter convolutional layers that model local correlations without requiring the data to lie on a grid. We apply these layers in SchNet: a novel deep learning architecture modeling quantum interactions in molecules. We obtain a joint model for the total energy and interatomic forces that follows fundamental quantum-chemical principles, including rotationally invariant energy predictions and a smooth, differentiable potential energy surface. Our architecture achieves state-of-the-art performance on benchmarks of equilibrium molecules and molecular dynamics trajectories. Finally, we introduce a more challenging benchmark with chemical and structural variations that suggests a path for further work.
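    A minimal NumPy sketch of the continuous-filter convolution idea (not the authors' released implementation): pairwise distances are expanded in radial basis functions, a small filter-generating network maps them to per-pair filters, and each atom aggregates its neighbors' features weighted by those filters. All dimensions, weights, and the basis range are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_expand(d, centers, gamma=10.0):
    """Expand scalar distances into smooth radial basis features."""
    return np.exp(-gamma * (d[..., None] - centers) ** 2)

def cfconv(x, dists, w1, w2):
    """Continuous-filter convolution: each pairwise distance is mapped
    through a small filter-generating network; the resulting filter
    modulates neighbor features, which are then summed per atom."""
    centers = np.linspace(0.0, 5.0, w1.shape[0])
    rbf = rbf_expand(dists, centers)              # (n, n, n_rbf)
    filters = np.tanh(rbf @ w1) @ w2              # (n, n, n_feat)
    # x[j] weighted by the filter for pair (i, j), summed over neighbors j
    return np.einsum('ijf,jf->if', filters, x)

n_atoms, n_feat, n_rbf = 4, 8, 16
positions = rng.normal(size=(n_atoms, 3))
dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
x = rng.normal(size=(n_atoms, n_feat))            # per-atom features
w1 = rng.normal(size=(n_rbf, n_feat)) * 0.1       # hypothetical weights
w2 = rng.normal(size=(n_feat, n_feat)) * 0.1
print(cfconv(x, dists, w1, w2).shape)             # (4, 8)
```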

    A Quantum Many-body Wave Function Inspired Language Modeling Approach

    Full text link
    The recently proposed quantum language model (QLM) aims at a principled approach to modeling term dependency by applying quantum probability theory. The latest development toward a more effective QLM has adopted word embeddings as a kind of global dependency information and integrated the quantum-inspired idea into a neural network architecture. While these quantum-inspired LMs are theoretically more general and practically effective, they have two major limitations. First, they have not taken into account the interaction among words with multiple meanings, which is common and important in understanding natural language text. Second, the integration of the quantum-inspired LM with the neural network was mainly for effective training of parameters, yet it lacked a theoretical foundation accounting for such integration. To address these two issues, in this paper we propose a Quantum Many-body Wave Function (QMWF) inspired language modeling approach. The QMWF-inspired LM can adopt the tensor product to model the aforementioned interaction among words. It also enables us to reveal the inherent necessity of using a Convolutional Neural Network (CNN) in QMWF language modeling. Furthermore, our approach delivers a simple algorithm to represent and match text/sentence pairs. Systematic evaluation shows the effectiveness of the proposed QMWF-LM algorithm, in comparison with state-of-the-art quantum-inspired LMs and a couple of CNN-based methods, on three typical Question Answering (QA) datasets.
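    A minimal NumPy sketch of the tensor-product idea (a loose illustration, not the paper's QMWF formulation): each word is a superposition over a few senses, and the tensor product yields one joint amplitude per combination of senses, capturing the interaction among words with multiple meanings. The sense vocabulary and vectors are hypothetical.

```python
import numpy as np

# Hypothetical: two words, each a normalized superposition over 3 senses.
word_a = np.array([0.6, 0.8, 0.0])   # e.g., "bank": river / finance / unused
word_b = np.array([0.0, 1.0, 0.0])   # e.g., "interest": emotion / finance / legal
word_a = word_a / np.linalg.norm(word_a)
word_b = word_b / np.linalg.norm(word_b)

# Tensor product: joint[i, j] is the amplitude of sense i of word A
# co-occurring with sense j of word B, capturing their interaction.
joint = np.tensordot(word_a, word_b, axes=0)   # shape (3, 3)
print(joint)

# Projecting onto a basis state reads off one interaction, e.g. the
# finance/finance combination:
print(joint[1, 1])  # largest amplitude: both words in their finance sense
```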

    A general guide to applying machine learning to computer architecture

    Get PDF
    The resurgence of machine learning since the late 1990s has been enabled by significant advances in computing performance and the growth of big data. The ability of these algorithms to detect complex patterns in data that would be extremely difficult to identify manually helps to produce effective predictive models. Whilst computer architects have been accelerating the performance of machine learning algorithms with GPUs and custom hardware, there have been few implementations leveraging these algorithms to improve computer system performance. The work that has been conducted, however, has produced considerably promising results. The purpose of this paper is to serve as a foundational base and guide for future computer architecture research seeking to make use of machine learning models to improve system efficiency. We describe a method that highlights when, why, and how to utilize machine learning models for improving system performance, and we provide a relevant example showcasing the effectiveness of applying machine learning in computer architecture. We describe a process of generating data at every execution quantum and of parameter engineering. This is followed by a survey of a set of popular machine learning models. We discuss their strengths and weaknesses and evaluate implementations for the purpose of creating a workload performance predictor for different core types in an x86 processor. The predictions can then be exploited by a scheduler for heterogeneous processors to improve system throughput. The algorithms of focus are stochastic-gradient-descent-based linear regression, decision trees, random forests, artificial neural networks, and k-nearest neighbors. This work has been supported by the European Research Council (ERC) Advanced Grant RoMoL (Grant Agreement 321253) and by the Spanish Ministry of Science and Innovation (contract TIN 2015-65316P).
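    A minimal scikit-learn sketch of one model family the paper surveys (SGD-based linear regression) applied to the workload-performance-prediction task it describes. The hardware-counter features, the synthetic IPC target, and the train/test split are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-quantum samples: hardware-counter features
# (e.g., cache miss rate, branch mispredict rate, memory bandwidth)
# and the observed performance (e.g., IPC) on a target core type.
X = rng.uniform(size=(500, 3))
ipc = 2.0 - 1.2 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

# SGD-based linear regression, one of the model families surveyed.
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3))
model.fit(X[:400], ipc[:400])

# A heterogeneous scheduler could compare predicted IPC per core type
# and place each workload on the core where it is expected to run best.
print(model.predict(X[400:405]))
```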

    QFlow lite dataset: A machine-learning approach to the charge states in quantum dot experiments

    Full text link
    Over the past decade, machine learning techniques have revolutionized how research is done, from designing new materials and predicting their properties to assisting drug discovery and advancing cybersecurity. Recently, we added to this list by showing how a machine learning algorithm (a so-called learner) combined with an optimization routine can assist experimental efforts in tuning semiconductor quantum dot (QD) devices. Among other applications, semiconductor QDs are a candidate system for building quantum computers. Present-day tuning techniques, which rely on heuristics to bring QD devices into a configuration suitable for quantum computing, do not scale with the increasing size of the quantum dot arrays required for even near-term quantum computing demonstrations. Establishing a reliable tuning protocol that does not rely on the gross-scale heuristics developed by experimentalists is thus of great importance. To implement the machine learning-based approach, we constructed a dataset of simulated QD device characteristics, such as the conductance and the charge sensor response versus the applied electrostatic gate voltages. Here, we describe the methodology for generating the dataset, as well as its validation in training convolutional neural networks. We show that the learner's accuracy in recognizing the state of a device is ~96.5% in both current- and charge-sensor-based training. We also introduce a tool that enables other researchers to use this approach for further research: QFlow lite, a Python-based mini-software suite that uses the dataset to train neural networks to recognize the state of a device and differentiate between states in experimental data. This work provides the definitive reference for the new dataset and will help enable researchers to use it in their experiments or to develop new machine learning approaches and concepts.
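    A minimal PyTorch sketch (not QFlow lite itself) of the kind of classifier the abstract describes: a small convolutional network mapping a simulated 2D charge-sensor map to a device-state label. The input size, the number of states, and the architecture are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical: 32x32 charge-sensor maps, 3 device states
# (e.g., no-dot, single-dot, and double-dot regimes).
class StateClassifier(nn.Module):
    def __init__(self, n_states: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.head = nn.Linear(16 * 8 * 8, n_states)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = StateClassifier()
fake_maps = torch.randn(4, 1, 32, 32)             # stand-in for simulated data
logits = model(fake_maps)
print(logits.argmax(dim=1))                       # predicted state per map
```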