
    Tensor network states and algorithms in the presence of a global SU(2) symmetry

    The benefits of exploiting the presence of symmetries in tensor network algorithms have been extensively demonstrated in the context of matrix product states (MPSs). These include the ability to select a specific symmetry sector (e.g. with a given particle number or spin), to ensure the exact preservation of total charge, and to significantly reduce computational costs. Compared to the case of a generic tensor network, the practical implementation of symmetries in the MPS is simplified by the fact that tensors only have three indices (they are trivalent, just as the Clebsch-Gordan coefficients of the symmetry group) and are organized as a one-dimensional array of tensors, without closed loops. By contrast, a more complex tensor network, one where tensors have a larger number of indices and/or a more elaborate network structure, requires a more general treatment. In two recent papers, namely (i) [Phys. Rev. A 82, 050301 (2010)] and (ii) [Phys. Rev. B 83, 115125 (2011)], we described how to incorporate a global internal symmetry into a generic tensor network algorithm based on decomposing and manipulating tensors that are invariant under the symmetry. In (i) we considered a generic symmetry group G that is compact, completely reducible and multiplicity free, acting as a global internal symmetry. Then in (ii) we described the practical implementation of Abelian group symmetries. In this paper we describe the implementation of non-Abelian group symmetries in great detail and for concreteness consider an SU(2) symmetry. Our formalism can be readily extended to more exotic symmetries associated with conservation of total fermionic or anyonic charge. As a practical demonstration, we describe the SU(2)-invariant version of the multi-scale entanglement renormalization ansatz and apply it to study the low energy spectrum of a quantum spin chain with a global SU(2) symmetry.

    Comment: 32 pages, 37 figures
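
    As a minimal illustration of the block structure such a decomposition implies, the sketch below stores a trivalent SU(2)-invariant tensor as a dictionary of degeneracy blocks keyed by the spin sectors of its three indices. The sector labels, degeneracy dimensions, and the helper allowed are toy assumptions of ours, not the paper's code, and the Clebsch-Gordan structural tensors fixed by the symmetry are left implicit.

        import numpy as np

        def allowed(j1, j2, j):
            # SU(2) fusion rule: j must lie in |j1 - j2| .. j1 + j2 in integer steps.
            return abs(j1 - j2) <= j <= j1 + j2 and (j1 + j2 - j) == int(j1 + j2 - j)

        # Degeneracy dimension of each spin sector on every index (toy values).
        dims = {0.0: 2, 0.5: 3, 1.0: 1}

        # blocks[(j1, j2, j)] holds the free parameters of one fusion channel;
        # only channels permitted by the fusion rules are ever stored.
        blocks = {
            (j1, j2, j): np.random.rand(dims[j1], dims[j2], dims[j])
            for j1 in dims for j2 in dims for j in dims
            if allowed(j1, j2, j)
        }

        for sector, block in sorted(blocks.items()):
            print(sector, block.shape)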

    Tensor network states and algorithms in the presence of a global U(1) symmetry

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. In a recent paper [arXiv:0907.2994v1] we discussed how to incorporate a global internal symmetry, given by a compact, completely reducible group G, into tensor network decompositions and algorithms. Here we specialize to the case of Abelian groups and, for concreteness, to a U(1) symmetry, often associated with particle number conservation. We consider tensor networks made of tensors that are invariant (or covariant) under the symmetry, and explain how to decompose and manipulate such tensors in order to exploit their symmetry. In numerical calculations, the use of U(1) symmetric tensors allows selection of a specific number of particles, ensures the exact preservation of particle number, and significantly reduces computational costs. We illustrate all these points in the context of the multi-scale entanglement renormalization ansatz.

    Comment: 22 pages, 25 figures, RevTeX
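
    The computational savings described above come from the fact that a U(1)-invariant matrix is block-diagonal in the charge (particle-number) sectors. The sketch below, with toy sector dimensions and names of our own choosing rather than the paper's code, stores such a matrix as one dense block per charge and multiplies two of them sector by sector.

        import numpy as np

        dims = {0: 2, 1: 3, 2: 1}   # degeneracy of each particle-number sector (toy values)
        A = {n: np.random.rand(d, d) for n, d in dims.items()}
        B = {n: np.random.rand(d, d) for n, d in dims.items()}

        # Charge conservation forbids mixing sectors, so the product is a set of
        # small block products instead of one dense matrix product.
        C = {n: A[n] @ B[n] for n in dims}
        print({n: block.shape for n, block in C.items()})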

    The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems

    We present a compendium of numerical simulation techniques, based on tensor network methods, aiming to address problems of many-body quantum mechanics on a classical computer. The core setting of this anthology is lattice problems in low spatial dimension at finite size, a physical scenario where tensor network methods, both Density Matrix Renormalization Group and beyond, have long proven to be winning strategies. Here we explore in detail the numerical frameworks and methods employed to deal with low-dimensional physical setups, from a computational physics perspective. We focus on symmetries and closed-system simulations in arbitrary boundary conditions, while discussing the numerical data structures and linear algebra manipulation routines involved, which form the core libraries of any tensor network code. At a higher level, we put the spotlight on loop-free network geometries, discussing their advantages, and presenting in detail algorithms to simulate low-energy equilibrium states. Accompanied by discussions of data structures, numerical techniques and performance, this anthology serves as a programmer's companion, as well as a self-contained introduction and review of the basic and selected advanced concepts in tensor networks, including examples of their applications.

    Comment: 115 pages, 56 figures
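
    To make the kind of data structure discussed here concrete, the following sketch (with toy dimensions, and open boundary conditions assumed) represents a matrix product state as a list of rank-3 arrays with indices (left bond, physical, right bond) and evaluates its squared norm by a left-to-right transfer-matrix contraction.

        import numpy as np

        L, d, chi = 6, 2, 4   # sites, physical dimension, bond dimension
        shapes = [(1, d, chi)] + [(chi, d, chi)] * (L - 2) + [(chi, d, 1)]
        mps = [np.random.rand(*s) for s in shapes]

        # E starts as a 1x1 environment and absorbs one site (ket and bra copies)
        # per step; the final 1x1 array is <psi|psi>.
        E = np.ones((1, 1))
        for A in mps:
            E = np.einsum('ab,apc,bpd->cd', E, A, A)
        print(E.item())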

    Recovering Structured Low-rank Operators Using Nuclear Norms

    This work considers the problem of recovering matrices and operators from limited and/or noisy observations. Whereas matrices result from summing tensor products of vectors, operators result from summing tensor products of matrices. These constructions lead to viewing both matrices and operators as the sum of "simple" rank-1 factors. A popular line of work in this direction is low-rank matrix recovery, i.e., using linear measurements of a matrix to reconstruct it as the sum of few rank-1 factors. Rank minimization problems are hard in general, and a popular approach to avoid them is convex relaxation. Using the trace norm as a surrogate for rank, the low-rank matrix recovery problem becomes convex. While the trace norm has received much attention in the literature, other convexifications are possible. This thesis focuses on the class of nuclear norms, a class that includes the trace norm itself. Much as the trace norm is a convex surrogate for the matrix rank, other nuclear norms provide convex complexity measures for additional matrix structure. Namely, nuclear norms measure the structure of the factors used to construct the matrix. Transitioning to the operator framework allows for novel uses of nuclear norms in recovering these structured matrices. In particular, this thesis shows how to lift structured matrix factorization problems to rank-1 operator recovery problems. This new viewpoint allows nuclear norms to measure richer types of structures present in matrix factorizations. This work also includes a Python software package to model and solve structured operator recovery problems. Systematic numerical experiments in operator denoising demonstrate the effectiveness of nuclear norms in recovering structured operators. In particular, choosing a specific nuclear norm that corresponds to the underlying factor structure of the operator improves the performance of the recovery procedures when compared, for instance, to the trace norm. Applications in hyperspectral imaging and self-calibration demonstrate the additional flexibility gained by utilizing operator (as opposed to matrix) factorization models.
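
    A minimal sketch of the trace-norm special case, written here with CVXPY rather than the thesis's own package: a noisy low-rank matrix is denoised by trading the nuclear norm off against data fidelity. The test matrix and the weight lam are arbitrary choices for illustration.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        Y = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))  # rank-5 signal
        Y = Y + 0.1 * rng.standard_normal(Y.shape)                       # additive noise

        X = cp.Variable(Y.shape)
        lam = 1.0
        # Nuclear norm as a convex surrogate for rank, plus a quadratic fit term.
        cp.Problem(cp.Minimize(cp.normNuc(X) + lam * cp.sum_squares(X - Y))).solve()
        print(np.linalg.matrix_rank(X.value, tol=1e-3))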

    Algorithms for Large-Scale Sparse Tensor Factorization

    University of Minnesota Ph.D. dissertation. April 2019. Major: Computer Science. Advisor: George Karypis. 1 computer file (PDF); xiv, 153 pages.

    Tensor factorization is a technique for analyzing data that features interactions of data along three or more axes, or modes. Many fields such as retail, health analytics, and cybersecurity utilize tensor factorization to gain useful insights and make better decisions. The tensors that arise in these domains are increasingly large, sparse, and high dimensional. Factoring these tensors is computationally expensive, if not infeasible. The ubiquity of multi-core processors and large-scale clusters motivates the development of scalable parallel algorithms to facilitate these computations. However, sparse tensor factorizations often achieve only a small fraction of potential performance due to challenges including data-dependent parallelism and memory accesses, high memory consumption, and frequent fine-grained synchronizations among compute cores. This thesis presents a collection of algorithms for factoring sparse tensors on modern parallel architectures. This work is focused on developing algorithms that are scalable while being memory- and operation-efficient. We address a number of challenges across various forms of tensor factorizations and emphasize results on large, real-world datasets.
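
    As one concrete example of the kernels such algorithms optimize, the sketch below (with illustrative shapes and names, not the thesis's code) evaluates a mode-0 MTTKRP, the dominant operation in CP-ALS, directly on a COO-format sparse tensor so that work scales with the number of nonzeros rather than with the dense tensor size.

        import numpy as np

        I, J, K, R = 100, 80, 60, 8          # tensor dimensions and CP rank
        nnz = 500
        rng = np.random.default_rng(0)
        coords = np.stack([rng.integers(0, n, nnz) for n in (I, J, K)])  # COO indices
        vals = rng.standard_normal(nnz)
        B = rng.standard_normal((J, R))      # factor matrix for mode 1
        C = rng.standard_normal((K, R))      # factor matrix for mode 2

        # M[i, :] += val * (B[j, :] * C[k, :]) for every nonzero (i, j, k);
        # np.add.at accumulates correctly when row indices repeat.
        M = np.zeros((I, R))
        np.add.at(M, coords[0], vals[:, None] * B[coords[1]] * C[coords[2]])
        print(M.shape)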