
    A Parallel Tensor Network Contraction Algorithm and Its Applications in Quantum Computation

    Tensors are a natural generalization of matrices, and tensor networks are a natural generalization of matrix products. Despite the simple definition of tensor networks, they are versatile enough to represent many different kinds of "products" that arise in various theoretical and practical problems. In particular, the powerful computational model of quantum computation can be defined almost entirely in terms of matrix products and tensor products, both of which are special cases of tensor networks. As such, (classical) algorithms for evaluating tensor networks have profound importance in the study of quantum computation. In this thesis, we design and implement a parallel algorithm for tensor network contraction. In addition to finding efficient contraction orders for a tensor network, we also dynamically slice it into multiple sub-tasks with lower space and time costs, in order to evaluate the tensor network in parallel. We refer to such an evaluation strategy as a contraction scheme for the tensor network. In addition, we introduce a local optimization procedure that improves the efficiency of the contraction schemes we find. We also investigate the applications of our parallel tensor network contraction algorithm in quantum computation. The most immediate application is the simulation of random quantum supremacy circuits, where we benchmark our algorithm to demonstrate its advantage over other similar tensor-network-based simulators. Other applications include evaluating the energy function of the Quantum Approximate Optimization Algorithm (QAOA) and simulating surface codes under a realistic error model with crosstalk.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163098/1/fangzh_1.pd
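    The slicing idea is easy to see in miniature: fixing a shared index turns one large contraction into many smaller, independent contractions whose results are summed, and each sub-task can run on a different worker. Below is a minimal NumPy sketch of this technique; the three-tensor network, names, and shapes are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of index slicing for parallel tensor network contraction.
# The network A[i,k] -- B[k,j] -- C[j,l] is illustrative, not from the thesis.
import numpy as np

rng = np.random.default_rng(0)
D = 8
A = rng.normal(size=(D, D))
B = rng.normal(size=(D, D))
C = rng.normal(size=(D, D))

# Full contraction over the internal indices k and j in one shot.
full = np.einsum('ik,kj,jl->il', A, B, C)

# Slice the shared index k: fix k to one value per sub-task, contract the
# smaller network, and sum the partial results. Each term is independent,
# so the loop can be distributed across parallel workers.
sliced = sum(
    np.einsum('i,j,jl->il', A[:, k], B[k, :], C)
    for k in range(D)
)

assert np.allclose(full, sliced)
```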

    Differentiable Programming Tensor Networks

    Differentiable programming is an emerging programming paradigm that composes parameterized algorithmic components and trains them using automatic differentiation (AD). The concept emerged from deep learning but is not limited to training neural networks. We present the theory and practice of programming tensor network algorithms in a fully differentiable way. By formulating a tensor network algorithm as a computation graph, one can compute higher-order derivatives of the program accurately and efficiently using AD. We present essential techniques for differentiating through tensor network contractions, including stable AD for tensor decompositions and efficient backpropagation through fixed-point iterations. As a demonstration, we compute the specific heat of the Ising model directly by taking the second-order derivative of the free energy obtained in the tensor renormalization group calculation. Next, we perform gradient-based variational optimization of infinite projected entangled pair states for the quantum antiferromagnetic Heisenberg model and obtain state-of-the-art variational energy and magnetization with moderate effort. Differentiable programming removes the laborious human effort of deriving and implementing analytical gradients for tensor network programs, which opens the door to more innovations in tensor network algorithms and applications.
    Comment: Typos corrected, discussion and refs added; revised version accepted for publication in PRX. Source code available at https://github.com/wangleiphy/tensorgra
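    The specific-heat demonstration can be miniaturized: write a program that returns a free energy, then differentiate it twice with AD. The sketch below uses JAX on the exactly solvable 1D Ising chain (a transfer-matrix free energy) rather than the paper's 2D TRG calculation, so the model and numbers are illustrative assumptions.

```python
# Specific heat as a second derivative of an AD-traced free-energy program.
# 1D Ising chain (J = 1, h = 0, k_B = 1); assumed stand-in for the paper's TRG.
import jax
import jax.numpy as jnp

def log_partition_per_site(beta):
    # Transfer matrix of the 1D Ising model.
    T = jnp.array([[jnp.exp(beta),  jnp.exp(-beta)],
                   [jnp.exp(-beta), jnp.exp(beta)]])
    # Largest eigenvalue via the closed 2x2 formula (smooth, so it is safe
    # to differentiate through twice); per-site ln Z = ln(lambda_max).
    tr, det = jnp.trace(T), jnp.linalg.det(T)
    lam_max = (tr + jnp.sqrt(tr**2 - 4.0 * det)) / 2.0
    return jnp.log(lam_max)

# Internal energy u = -d(ln Z)/d(beta); specific heat C = -beta^2 du/dbeta.
u = jax.grad(lambda b: -log_partition_per_site(b))
specific_heat = lambda b: -b**2 * jax.grad(u)(b)

beta = 0.7
print(specific_heat(beta))           # AD result
print(beta**2 / jnp.cosh(beta)**2)   # exact answer: beta^2 sech^2(beta)
```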

    Improving the efficiency of variational tensor network algorithms

    We present several results relating to the contraction of generic tensor networks and discuss their application to the simulation of quantum many-body systems using variational approaches based upon tensor network states. Given a closed tensor network $\mathcal{T}$, we prove that if the environment of a single tensor from the network can be evaluated with computational cost $\kappa$, then the environment of any other tensor from $\mathcal{T}$ can be evaluated with identical cost $\kappa$. Moreover, we describe how the set of all single tensor environments from $\mathcal{T}$ can be simultaneously evaluated with fixed cost $3\kappa$. The usefulness of these results, which are applicable to a variety of tensor network methods, is demonstrated for the optimization of a Multi-scale Entanglement Renormalization Ansatz (MERA) for the ground state of a 1D quantum system, where they are shown to substantially reduce the computation time.
    Comment: 12 pages, 8 figures, RevTex 4.1, includes reference implementation. Software updated to v1.02: Resolved two scenarios in which multienv would generate errors for valid input
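    The notion of a single-tensor environment is concrete: contract every tensor in the closed network except the chosen one, leaving its indices open, so that pairing the tensor with its environment recovers the full scalar contraction. A minimal NumPy illustration on a three-tensor ring (an assumed toy network, not one from the paper):

```python
# Single-tensor environment in a closed network: contract everything except
# the chosen tensor, leaving its indices open. Toy three-tensor ring.
import numpy as np

rng = np.random.default_rng(1)
D = 6
A = rng.normal(size=(D, D))
B = rng.normal(size=(D, D))
C = rng.normal(size=(D, D))

# Closed network: the ring A[i,j] B[j,k] C[k,i] contracts to a scalar.
scalar = np.einsum('ij,jk,ki->', A, B, C)

# Environment of A: contract B and C, keeping A's indices i, j open.
env_A = np.einsum('jk,ki->ij', B, C)

# Pairing a tensor with its environment recovers the full contraction.
assert np.allclose(scalar, np.sum(A * env_A))
```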

    Perfect Sampling with Unitary Tensor Networks

    Tensor network states are powerful variational ansätze for many-body ground states of quantum lattice models. The use of Monte Carlo sampling techniques in tensor network approaches significantly reduces the cost of tensor contractions, potentially leading to a substantial increase in computational efficiency. Previous proposals are based on a Markov chain Monte Carlo scheme generated by locally updating configurations and, as such, must deal with equilibration and autocorrelation times, which result in a reduction of efficiency. Here we propose a perfect sampling scheme, with vanishing equilibration and autocorrelation times, for unitary tensor networks -- namely tensor networks based on efficiently contractible, unitary quantum circuits, such as unitary versions of the matrix product state (MPS) and tree tensor network (TTN), and the multi-scale entanglement renormalization ansatz (MERA). Configurations are directly sampled according to their probabilities in the wavefunction, without resorting to a Markov chain process. We also describe a partial sampling scheme that can result in a dramatic (basis-dependent) reduction of sampling error.
    Comment: 11 pages, 9 figures, renamed partial sampling to incomplete sampling for clarity, extra references, plus a variety of minor changes
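    Direct sampling is simplest to see for an MPS: precompute right environments once, then draw each site from its exact conditional Born probability given the sites already fixed, with no Markov chain at all. The NumPy sketch below does this for a small random MPS; the shapes, bond dimension, and (non-unitary) tensors are illustrative assumptions, not the unitary circuits of the paper.

```python
# Direct ("perfect") sampling from a small MPS: each site is drawn from its
# exact conditional probability, so there is no equilibration/autocorrelation.
import numpy as np

rng = np.random.default_rng(2)
n_sites, d, chi = 6, 2, 4

# Random MPS tensors A[k][s] of shape (chi_left, chi_right); 1-dim boundaries.
shapes = [(1, chi)] + [(chi, chi)] * (n_sites - 2) + [(chi, 1)]
mps = [rng.normal(size=(d, *sh)) for sh in shapes]

# Right environments R[k]: the <psi|psi> contraction of sites k..N-1.
R = [None] * (n_sites + 1)
R[n_sites] = np.ones((1, 1))
for k in range(n_sites - 1, -1, -1):
    R[k] = sum(mps[k][s] @ R[k + 1] @ mps[k][s].T for s in range(d))

def sample():
    v = np.ones((1, 1))   # left boundary vector, conditioned on sampled prefix
    config = []
    for k in range(n_sites):
        cand = [v @ mps[k][s] for s in range(d)]
        # Unnormalized Born weights of each local outcome given the prefix.
        w = np.array([(c @ R[k + 1] @ c.T).item() for c in cand])
        s = rng.choice(d, p=w / w.sum())
        config.append(int(s))
        v = cand[s]
    return config

print(sample())
```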

    Liouville Action as Path-Integral Complexity: From Continuous Tensor Networks to AdS/CFT

    We propose an optimization procedure for Euclidean path-integrals that evaluate CFT wave functionals in arbitrary dimensions. The optimization is performed by minimizing a certain functional, which can be interpreted as a measure of computational complexity, with respect to background metrics for the path-integrals. In two-dimensional CFTs, this functional is given by the Liouville action. We also formulate the optimization for higher-dimensional CFTs and, in various examples, find that the optimized hyperbolic metrics coincide with the time slices of the expected gravity duals. Moreover, if we optimize a reduced density matrix, the geometry becomes two copies of the entanglement wedge and reproduces the holographic entanglement entropy. Our approach resembles a continuous tensor network renormalization and provides a concrete realization of the proposed interpretation of AdS/CFT as tensor networks. The present paper is an extended version of our earlier report arXiv:1703.00456 and includes many new results, such as evaluations of complexity functionals, the energy stress tensor, higher-dimensional extensions, and time evolutions of thermofield double states.
    Comment: 63 pages, 10 figures
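    For reference, the two-dimensional complexity functional named above is, up to normalization conventions that may differ from the paper's, the standard Liouville action for the Weyl factor $\phi$ of the path-integral metric $ds^2 = e^{2\phi(x,\epsilon)}(dx^2 + d\epsilon^2)$:

```latex
% Standard 2D Liouville action (conventions assumed, not copied from the paper);
% c is the central charge and \mu the coefficient of the potential term.
S_L[\phi] = \frac{c}{24\pi} \int dx\, d\epsilon\,
            \left[ (\partial_x \phi)^2 + (\partial_\epsilon \phi)^2 + \mu\, e^{2\phi} \right]
```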

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.
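    As a taste of the techniques covered by such surveys, the sketch below computes a truncated higher-order SVD (HOSVD), one standard route to a low-rank Tucker approximation, in NumPy; the tensor, ranks, and shapes are illustrative assumptions.

```python
# Truncated HOSVD: compress a tensor mode by mode into Tucker format.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 30, 40))
ranks = (5, 6, 7)

# Factor matrices: leading left singular vectors of each mode unfolding.
U = []
for mode, r in enumerate(ranks):
    unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    U.append(np.linalg.svd(unfold, full_matrices=False)[0][:, :r])

# Core tensor: project X onto the factor subspaces.
G = np.einsum('ijk,ia,jb,kc->abc', X, U[0], U[1], U[2])

# Reconstruction from the low-rank representation; print relative error.
X_approx = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
print(np.linalg.norm(X - X_approx) / np.linalg.norm(X))
```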

    Unsupervised Generative Modeling Using Matrix Product States

    Generative modeling, which learns a joint probability distribution from data and generates samples according to it, is an important task in machine learning and artificial intelligence. Inspired by the probabilistic interpretation of quantum physics, we propose a generative model using matrix product states, a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states. Our model enjoys efficient learning analogous to the density matrix renormalization group method, which allows the dimensions of the tensors to be adjusted dynamically and offers an efficient direct sampling approach for generative tasks. We apply our method to generative modeling of several standard datasets, including Bars and Stripes, random binary patterns, and the MNIST handwritten digits, to illustrate the abilities, features, and drawbacks of our model relative to popular generative models such as the Hopfield model, Boltzmann machines, and generative adversarial networks. Our work sheds light on many interesting directions for future exploration in the development of quantum-inspired algorithms for unsupervised machine learning, which may plausibly be realized on quantum devices.
    Comment: 11 pages, 12 figures (not including the TNs). GitHub Page: https://congzlwag.github.io/UnsupGenModbyMPS
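    The Born-rule reading of such a model is compact: the probability of a configuration x is |psi(x)|^2 / Z, where psi(x) is a single left-to-right matrix product and the normalization Z is a transfer-matrix contraction. A minimal NumPy sketch with an illustrative random MPS (not the trained model of the paper):

```python
# Born-machine probability of a configuration under an MPS model.
import numpy as np

rng = np.random.default_rng(4)
n, d, chi = 8, 2, 5
shapes = [(1, chi)] + [(chi, chi)] * (n - 2) + [(chi, 1)]
mps = [rng.normal(size=(d, *sh)) for sh in shapes]

def amplitude(x):
    """psi(x): multiply the matrices selected by each symbol of x."""
    v = np.ones((1, 1))
    for k, s in enumerate(x):
        v = v @ mps[k][s]
    return v.item()

# Normalization Z = sum_x |psi(x)|^2 via the transfer-matrix contraction.
E = np.ones((1, 1))
for k in range(n):
    E = sum(mps[k][s].T @ E @ mps[k][s] for s in range(d))
Z = E.item()

x = rng.integers(0, d, size=n)
print(amplitude(x) ** 2 / Z)   # model probability of configuration x
```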