
    Investigating ultrafast quantum magnetism with machine learning

    We investigate the efficiency of the recently proposed Restricted Boltzmann Machine (RBM) representation of quantum many-body states for studying both the static properties and the quantum spin dynamics of the two-dimensional Heisenberg model on a square lattice. For static properties we find close agreement with numerically exact Quantum Monte Carlo results in the thermodynamic limit. For dynamics in small systems we find excellent agreement with exact diagonalization, while for systems of up to N=256 spins we obtain close consistency with interacting spin-wave theory. In all cases the accuracy converges quickly with the number of network parameters, giving access to much larger systems than previously feasible. This suggests great potential for investigating the quantum many-body dynamics of the large-scale spin systems relevant to describing magnetic materials strongly out of equilibrium.
    Comment: 18 pages, 5 figures, data up to N=256 spins added, minor changes
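
    As background, the RBM ansatz referenced here (in the Carleo-Troyer form) writes the log-amplitude of a spin configuration as a sum of visible-bias terms plus one log-cosh term per hidden unit. The sketch below evaluates that log-amplitude; the parameter shapes, the toy values, and the restriction to real (rather than complex) network parameters are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def rbm_log_amplitude(spins, a, b, W):
            """Log-amplitude of an RBM ansatz for spins s in {-1, +1}^N:
            ln psi(s) = sum_i a_i s_i + sum_j ln(2 cosh(b_j + sum_i s_i W_ij)).
            a: (N,) visible biases; b: (M,) hidden biases; W: (N, M) couplings."""
            theta = b + spins @ W  # effective field on each hidden unit
            return spins @ a + np.sum(np.log(2.0 * np.cosh(theta)))

        # Toy usage with random parameters: N=16 spins, M=32 hidden units.
        rng = np.random.default_rng(0)
        N, M = 16, 32
        a = 0.01 * rng.standard_normal(N)
        b = 0.01 * rng.standard_normal(M)
        W = 0.01 * rng.standard_normal((N, M))
        s = rng.choice([-1.0, 1.0], size=N)
        print(rbm_log_amplitude(s, a, b, W))

    Increasing M (the hidden-unit count) enlarges the variational family, which is the sense in which the accuracy above converges with the number of network parameters.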

    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a forward (or free) phase, in which the data are fed to the network, and a backward (or clamped) phase, in which the target signals are clamped to the output layer and the feedback signals are transformed through the transposed synaptic weight matrices. This implies a symmetry at the synaptic level for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses fixed random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order nonlinear differential equations. The algorithm is experimentally verified on a Boolean logic task, on classification tasks (handwritten digits and letters), and on an autoencoding task. We also show how the parameters, especially the random matrices, affect learning, and we use pseudospectra analysis to investigate further how the random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
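
    To make the two-phase structure concrete, here is a minimal sketch of one random-CHL update for a single-hidden-layer network. It illustrates the idea under stated assumptions rather than reproducing the paper's formulation: the Euler relaxation loop stands in for the first-order nonlinear ODEs, tanh is an arbitrary choice of nonlinearity, and all names, shapes, and constants (eta, gamma, steps, dt) are hypothetical.

        import numpy as np

        def rchl_step(x, target, W1, W2, G, eta=0.05, gamma=0.5, steps=50, dt=0.1):
            """One random contrastive Hebbian learning update (sketch).
            Free phase: hidden units settle, driven by the input alone.
            Clamped phase: the output is clamped to the target, which is fed
            back through a fixed random matrix G instead of W2.T, so no
            symmetric (transposed) weights are required.
            Update: Hebbian difference of clamped and free co-activations."""
            f = np.tanh
            # Free phase: relax the hidden state toward f(x @ W1).
            h_free = np.zeros(W1.shape[1])
            for _ in range(steps):
                h_free += dt * (-h_free + f(x @ W1))
            y_free = f(h_free @ W2)
            # Clamped phase: same relaxation, plus random feedback of the target.
            h_clamp = np.zeros(W1.shape[1])
            for _ in range(steps):
                h_clamp += dt * (-h_clamp + f(x @ W1 + gamma * (target @ G)))
            # Contrastive Hebbian updates: clamped minus free statistics.
            W1 += eta * (np.outer(x, h_clamp) - np.outer(x, h_free))
            W2 += eta * (np.outer(h_clamp, target) - np.outer(h_free, y_free))
            return y_free  # free-phase prediction, useful for monitoring error

    In this sketch, training loops over (x, target) pairs calling rchl_step repeatedly; G is drawn once at initialization and never updated, which is the defining departure from standard contrastive Hebbian learning.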