    Study of Condensed Matter Systems with Monte Carlo Simulation on Heterogeneous Computing Systems

    We study the Edwards-Anderson model on a simple cubic lattice with a finite constant external field. We employ an indicator composed of a ratio of susceptibilities at finite momenta, recently proposed to capture the spin glass phase transition while avoiding the difficulties of a zero-momentum quantity. Unfortunately, this new indicator is fairly noisy, so a large pool of samples at low temperature and small external field is needed to generate results with a statistical error small enough for analysis. We therefore implement the Monte Carlo method on graphics processing units to drastically speed up the simulation. We confirm previous findings that conventional indicators for the spin glass transition, including the Binder ratio and the correlation length, show no indication of a transition down to rather low temperatures. However, the ratio of spin glass susceptibilities does show crossing behavior, although a systematic analysis is beyond the reach of the present data. This reveals the difficulty of studying this problem with current numerical methods and computing capability. One of the fundamental challenges of theoretical condensed matter physics is the accurate solution of quantum impurity models. By taking an expansion in the hybridization about an exactly solved local limit, one can formulate a quantum impurity solver. We implement the hybridization expansion quantum impurity solver on Intel Xeon Phi accelerators and aim to apply this approach to dynamic Hubbard models.
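
    A minimal single-spin-flip Metropolis sketch of this model in plain Python/NumPy is given below. It assumes +/-1 spins on an L x L x L periodic lattice, Gaussian bond couplings, and a uniform field h; all function names and parameters are illustrative, and the GPU implementation described above parallelizes the updates far more aggressively (e.g. with a checkerboard decomposition).

        import numpy as np

        def metropolis_sweep(spins, Jx, Jy, Jz, h, beta, rng):
            """One Metropolis sweep of the 3D Edwards-Anderson model.

            spins      : (L, L, L) array of +/-1 spins
            Jx, Jy, Jz : (L, L, L) couplings to the +x, +y, +z neighbour
            h          : uniform external field
            beta       : inverse temperature
            """
            L = spins.shape[0]
            for _ in range(spins.size):
                i, j, k = rng.integers(0, L, size=3)
                # Local field from the six neighbours (periodic boundaries) plus h.
                local = (Jx[i, j, k] * spins[(i + 1) % L, j, k]
                         + Jx[(i - 1) % L, j, k] * spins[(i - 1) % L, j, k]
                         + Jy[i, j, k] * spins[i, (j + 1) % L, k]
                         + Jy[i, (j - 1) % L, k] * spins[i, (j - 1) % L, k]
                         + Jz[i, j, k] * spins[i, j, (k + 1) % L]
                         + Jz[i, j, (k - 1) % L] * spins[i, j, (k - 1) % L]
                         + h)
                # Energy change of flipping this spin under H = -sum J s s - h sum s.
                dE = 2.0 * spins[i, j, k] * local
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j, k] *= -1

        # Illustrative usage: L = 8 lattice, Gaussian couplings, small field.
        rng = np.random.default_rng(0)
        L, h, beta = 8, 0.1, 2.0
        spins = rng.choice([-1, 1], size=(L, L, L))
        Jx, Jy, Jz = (rng.normal(size=(L, L, L)) for _ in range(3))
        for _ in range(100):
            metropolis_sweep(spins, Jx, Jy, Jz, h, beta, rng)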

    Stochastic Simulated Quantum Annealing for Fast Solution of Combinatorial Optimization Problems

    In this paper, we introduce stochastic simulated quantum annealing (SSQA) for large-scale combinatorial optimization problems. SSQA is based on stochastic computing and quantum Monte Carlo, and simulates quantum annealing (QA) in classical computing by using multiple replicas of spins (probabilistic bits). The use of stochastic computing leads to an efficient parallel spin-state update algorithm, enabling a quick search for a solution around the global minimum energy. SSQA therefore realizes quantum-like annealing for large-scale problems and, unlike QA, can handle fully connected models in combinatorial optimization. The proposed method is evaluated in MATLAB on graph isomorphism problems, which are typical combinatorial optimization problems. It achieves a convergence speed an order of magnitude faster than a conventional stochastic simulated annealing method. Additionally, it can handle a problem size 100 times larger than QA and 25 times larger than a traditional SA method, for similar convergence probabilities. Comment: 14 pages, 8 figures.
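
    The replica construction that SSQA builds on is conventional path-integral (quantum Monte Carlo) simulated quantum annealing, in which a transverse field is mapped onto a coupling between Trotter replicas of the classical spins. The Python/NumPy sketch below illustrates that baseline with ordinary sequential Metropolis updates; the stochastic-computing parallel update that distinguishes SSQA is not reproduced here, and every function name, parameter, and schedule is an illustrative assumption rather than the paper's implementation.

        import numpy as np

        def simulated_quantum_annealing(J, h, n_replicas=20, n_sweeps=500,
                                        beta=2.0, gamma0=3.0, seed=0):
            """Path-integral simulated quantum annealing sketch.

            J : (N, N) symmetric coupling matrix with zero diagonal
            h : (N,) local fields
            Returns the lowest-energy configuration seen and its energy.
            """
            rng = np.random.default_rng(seed)
            N = len(h)
            spins = rng.choice([-1, 1], size=(n_replicas, N))
            best, best_E = None, np.inf

            def energy(s):
                return -0.5 * s @ J @ s - h @ s

            for sweep in range(n_sweeps):
                # Transverse field is annealed from gamma0 towards zero.
                gamma = gamma0 * (1.0 - sweep / n_sweeps) + 1e-6
                # Coupling between neighbouring Trotter replicas; it grows as
                # gamma -> 0, forcing the replicas to agree on one configuration.
                J_perp = -0.5 / beta * np.log(np.tanh(beta * gamma / n_replicas))
                for r in range(n_replicas):
                    up, dn = (r + 1) % n_replicas, (r - 1) % n_replicas
                    for i in rng.permutation(N):
                        local = (J[i] @ spins[r] + h[i]
                                 + J_perp * (spins[up, i] + spins[dn, i]))
                        dE = 2.0 * spins[r, i] * local
                        if dE <= 0 or rng.random() < np.exp(-beta * dE / n_replicas):
                            spins[r, i] *= -1
                    E = energy(spins[r])
                    if E < best_E:
                        best, best_E = spins[r].copy(), E
            return best, best_E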

    Spintronics-based Reconfigurable Ising Model Architecture

    Published in the International Symposium on Quality Electronic Design (ISQED), March 2020. The Ising model has been explored as a framework for modeling NP-hard problems, with several diverse systems proposed to solve it. The Magnetic Tunnel Junction (MTJ)-based Magnetic RAM is capable of replacing CMOS in memory chips. In this paper, we propose the use of MTJs for representing the units of an Ising model and leveraging their intrinsic physics to find the ground state of the system through annealing. We design the structure of a basic MTJ-based Ising cell capable of performing the functions essential to an Ising solver. A technique to use the basic Ising cell for scaling to large problems is described. We then propose Ising-FPGA, a parallel and reconfigurable architecture onto which a large class of NP-hard problems can be mapped, and show how a standard Place and Route tool can be utilized to program the Ising-FPGA. The effects of this hardware platform on our proposed design are characterized, and methods to overcome these effects are prescribed. We discuss how two representative NP-hard problems can be mapped to the Ising model. Simulation results show the effectiveness of MTJs as Ising units by producing solutions close or comparable to the optimum, and demonstrate that our design methodology can account for the effects of the hardware. This work was supported by the National Science Foundation (NSF) under Grant 164242.
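
    As a concrete illustration of how an NP-hard problem is expressed in the Ising form that such an annealing solver accepts, the sketch below maps a small MaxCut instance to Ising couplings and anneals it with plain software simulated annealing. It is hardware-agnostic: the MTJ cell physics, the Ising-FPGA architecture, and the Place and Route flow from the paper are not modeled, and the example graph, schedule, and function names are illustrative assumptions.

        import numpy as np

        def maxcut_to_ising(edges, n_nodes):
            """Map MaxCut to Ising couplings under H = -1/2 * sum_ij J_ij s_i s_j.

            A cut edge (i, j) is one with s_i != s_j, so an antiferromagnetic
            coupling J[i, j] = -w_ij makes the ground state a maximum cut.
            """
            J = np.zeros((n_nodes, n_nodes))
            for i, j, w in edges:
                J[i, j] -= w
                J[j, i] -= w
            return J

        def simulated_annealing(J, n_sweeps=1000, T0=5.0, T1=0.05, seed=0):
            """Single-spin-flip simulated annealing with a geometric schedule."""
            rng = np.random.default_rng(seed)
            N = J.shape[0]
            s = rng.choice([-1, 1], size=N)
            for t in range(n_sweeps):
                T = T0 * (T1 / T0) ** (t / max(n_sweeps - 1, 1))
                for i in rng.permutation(N):
                    dE = 2.0 * s[i] * (J[i] @ s)   # diagonal of J is zero
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        s[i] *= -1
            return s

        # Illustrative 5-node graph with unit edge weights.
        edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0),
                 (3, 4, 1.0), (4, 0, 1.0), (0, 2, 1.0)]
        partition = simulated_annealing(maxcut_to_ising(edges, 5))
        cut_size = sum(w for i, j, w in edges if partition[i] != partition[j])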

    Demonstration of a scaling advantage for a quantum annealer over simulated annealing

    The observation of an unequivocal quantum speedup remains an elusive objective for quantum computing. The D-Wave quantum annealing processors have been at the forefront of experimental attempts to address this goal, given their relatively large numbers of qubits and programmability. A complete determination of the optimal time-to-solution (TTS) using these processors has not been possible to date, preventing definitive conclusions about the presence of a scaling advantage. The main technical obstacle has been the inability to verify an optimal annealing time within the available range. Here we overcome this obstacle and present a class of problem instances for which we observe an optimal annealing time using a D-Wave 2000Q processor over a range spanning up to more than 2000 qubits. This allows us to perform an optimal TTS benchmarking analysis and a comparison to several classical algorithms, including simulated annealing, spin-vector Monte Carlo, and discrete-time simulated quantum annealing. We establish the first example of a scaling advantage for an experimental quantum annealer over classical simulated annealing: we find that the D-Wave device exhibits certifiably better scaling than simulated annealing, with 95% confidence, over the range of problem sizes that we can test. However, we do not find evidence for a quantum speedup: simulated quantum annealing exhibits the best scaling by a significant margin. Our construction of instance classes with verifiably optimal annealing times opens up the possibility of generating many new such classes, paving the way for further definitive assessments of scaling advantages using current and future quantum annealing devices. Comment: 26 pages, 22 figures. v2: Updated benchmarking results with additional analysis. v3: Updated to published version.
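
    The time-to-solution metric at the centre of this benchmark has a standard closed form: if a single anneal of duration t_a finds the ground state with probability p, then TTS(t_a) = t_a * ln(1 - 0.99) / ln(1 - p) is the total time needed to find it at least once with 99% confidence, and the quantity whose scaling is compared is the minimum of TTS over t_a. The sketch below computes this for made-up success probabilities; the 99% target is the usual convention, and all names and numbers are illustrative, not data from the paper.

        import numpy as np

        def time_to_solution(anneal_time, p_success, target=0.99):
            """Time to find the ground state at least once with probability
            `target`, given per-run success probability p_success."""
            if p_success >= target:
                return anneal_time
            return anneal_time * np.log(1.0 - target) / np.log(1.0 - p_success)

        def optimal_tts(anneal_times, success_probs, target=0.99):
            """Minimise TTS over the annealing times that were actually run."""
            tts = [time_to_solution(t, p, target)
                   for t, p in zip(anneal_times, success_probs)]
            best = int(np.argmin(tts))
            return anneal_times[best], tts[best]

        # Illustrative data: success probability grows with annealing time.
        anneal_times = np.array([1.0, 5.0, 20.0, 100.0])    # microseconds
        success_probs = np.array([0.002, 0.02, 0.15, 0.5])
        t_opt, tts_opt = optimal_tts(anneal_times, success_probs)

    As the abstract stresses, the minimizing annealing time must be observed inside the experimentally accessible range; otherwise the measured TTS does not support definitive conclusions about scaling.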

    Spintronics-based Architectures for non-von Neumann Computing

    The scaling of transistor technology in the last few decades has significantly impacted our lives. It has given birth to different kinds of computational workloads that are becoming increasingly relevant. Some of the most prominent examples are Machine Learning tasks such as image classification and pattern recognition, which use Deep Neural Networks that are highly computation- and memory-intensive. The traditional, general-purpose architectures that we use today typically exhibit high energy and latency on such computations. This, together with the apparent end of Moore's law of scaling, has led researchers to look for devices beyond CMOS and for non-conventional computational paradigms. In this dissertation, we focus on a spintronic device, the Magnetic Tunnel Junction (MTJ), which has demonstrated potential as cache and embedded memory. We examine how the MTJ can be used beyond memory and deployed in various non-conventional and non-von Neumann architectures to accelerate computations or make them energy efficient. First, we investigate Stochastic Computing (SC) and show how MTJs can be used to build energy-efficient Neural Network (NN) hardware in this domain. SC is primarily bit-serial computing that requires only simple logic gates for arithmetic operations. We explore the use of MTJs as Stochastic Number Generators (SNGs) by exploiting their probabilistic switching characteristics and propose an energy-efficient MTJ-SNG, which is deployed as part of an NN hardware implementation in the SC domain. Its characteristics allow further energy efficiency to be achieved through NN weight approximation, for which we formulate an optimization problem. Next, we turn our attention to analog computing and propose a method for training analog Neural Network hardware. We consider a resistive MTJ crossbar architecture for representing an NN layer, since it is capable of in-memory computing and performs matrix-vector multiplications with O(1) time complexity. We propose on-chip training of the NN crossbar because, first, it can leverage the parallelism in the crossbar to perform weight updates; second, it takes device variations into account; and third, it avoids the large sneak currents in transistor-less crossbars that can cause undesired weight changes. Lastly, we propose an MTJ-based non-von Neumann hardware platform for solving combinatorial optimization problems, which are NP-hard. We adopt the Ising model for encoding such problems and solve them with simulated annealing. We let MTJs represent Ising units, design a scalable circuit capable of performing Ising computations, and develop a reconfigurable architecture to which any NP-hard problem can be mapped. We also suggest methods to account for the non-idealities present in the proposed hardware.
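
    Two of the building blocks in this dissertation lend themselves to compact numerical sketches: unipolar stochastic-computing multiplication, where the proposed MTJ-SNG would supply the random bitstreams, and the idealised crossbar matrix-vector product. The Python/NumPy code below emulates both digitally; it ignores MTJ device physics, variations, and sneak currents, and all names and parameters are illustrative assumptions rather than the dissertation's designs.

        import numpy as np

        def to_bitstream(x, length, rng):
            """Unipolar stochastic encoding: P(bit = 1) = x for x in [0, 1].
            In hardware this role would be played by an MTJ biased so that its
            probabilistic switching probability equals x."""
            return (rng.random(length) < x).astype(np.uint8)

        def sc_multiply(a, b, length=4096, seed=0):
            """Unipolar stochastic multiplication: the AND of two independent
            streams has P(1) = a * b, so averaging the output bits recovers
            the product, with error shrinking as the stream grows."""
            rng = np.random.default_rng(seed)
            return np.mean(to_bitstream(a, length, rng) & to_bitstream(b, length, rng))

        def crossbar_mvm(G, v):
            """Idealised resistive-crossbar layer: driving row voltages v into
            columns with conductances G yields currents I = G.T @ v in one
            analogue step (emulated here as a digital matrix-vector product)."""
            return G.T @ v

        # Illustrative usage.
        approx_product = sc_multiply(0.6, 0.5)                       # ~0.3
        G = np.abs(np.random.default_rng(1).normal(size=(4, 3)))     # conductances >= 0
        currents = crossbar_mvm(G, np.array([0.2, 0.5, 0.1, 0.7]))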