Exploring corner transfer matrices and corner tensors for the classical simulation of quantum lattice systems
In this paper we explore the practical use of the corner transfer matrix and
its higher-dimensional generalization, the corner tensor, to develop tensor
network algorithms for the classical simulation of quantum lattice systems of
infinite size. This exploration is done mainly in one and two spatial
dimensions (1d and 2d). We describe a number of numerical algorithms based on
corner matrices and tensors to approximate different ground-state properties
of these systems. The proposed methods also make use of matrix product
operators and projected entangled pair operators, and naturally preserve
spatial symmetries of the system such as translation invariance. In order to
assess the validity of our algorithms, we provide preliminary benchmarking
calculations for the spin-1/2 quantum Ising model in a transverse field in both
1d and 2d. Our methods are a plausible alternative to other well-established
tensor network approaches such as iDMRG and iTEBD in 1d, and iPEPS and TERG in
2d. The computational complexity of the proposed algorithms is also considered
and, in 2d, important differences are found depending on the chosen simulation
scheme. We also discuss further possibilities, such as 3d quantum lattice
systems, periodic boundary conditions, and real time evolution. This discussion
leads us to reinterpret the standard iTEBD and iPEPS algorithms in terms of
corner transfer matrices and corner tensors. Our paper also offers a
perspective on many properties of the corner transfer matrix and its
higher-dimensional generalizations in the light of novel tensor network
methods.
Comment: 25 pages, 32 figures, 2 tables. Revised version. Technical details on
some of the algorithms have been moved to appendices. To appear in PR
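The corner-growth-and-truncation idea underlying corner transfer matrix methods can be sketched numerically. The following is a minimal, generic CTMRG-style step with hypothetical index conventions, not the paper's specific algorithms: the corner matrix absorbs one row and one column of the lattice, and the enlarged corner is truncated back to bond dimension chi by singular value decomposition.

```python
import numpy as np

def grow_and_truncate_corner(C, T, a, chi):
    """One illustrative CTMRG-style growth step (generic sketch with
    assumed index conventions, not the paper's exact algorithms).

    C : (chi, chi)      corner matrix
    T : (chi, d, chi)   half-row / half-column transfer tensor
    a : (d, d, d, d)    local bulk tensor, legs (up, left, down, right)
    """
    # Absorb one row and one column into the corner:
    # C'[(i,d),(j,r)] = sum_{x,y,u,l} C[x,y] T[x,u,i] T[y,l,j] a[u,l,d,r]
    Cg = np.einsum('xy,xui,ylj,uldr->idjr', C, T, T, a)
    chi0, d = C.shape[0], a.shape[0]
    M = Cg.reshape(chi0 * d, chi0 * d)
    # Truncate back to bond dimension chi via SVD, keeping the largest
    # singular values (the renormalised corner spectrum).
    U, S, Vh = np.linalg.svd(M)
    P = U[:, :chi]                      # isometry used for renormalisation
    C_new = P.T @ M @ Vh.T[:, :chi]    # (chi, chi) truncated corner
    return C_new / np.linalg.norm(C_new), P

# Toy input: random positive tensors (only shapes and the truncation
# mechanics matter here; the renormalisation of T itself is omitted).
chi, d = 8, 2
rng = np.random.default_rng(0)
C = rng.random((chi, chi)); C = C + C.T
T = rng.random((chi, d, chi))
a = rng.random((d, d, d, d))
C2, P = grow_and_truncate_corner(C, T, a, chi)
print(C2.shape)  # (8, 8)
```

In a full scheme the isometry `P` would also be applied to the transfer tensors so that corner and edges stay consistent across iterations; that step is left out of this sketch.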
Sparse Tensor Transpositions
We present a new algorithm for transposing sparse tensors called Quesadilla.
The algorithm converts the sparse tensor data structure to a list of
coordinates and sorts it with a fast multi-pass radix algorithm that exploits
knowledge of the requested transposition and the tensor's input partial
coordinate ordering to provably minimize the number of parallel partial sorting
passes. We evaluate both a serial and a parallel implementation of Quesadilla
on a set of 19 tensors from the FROSTT collection, a set of tensors taken from
scientific and data analytic applications. We compare Quesadilla and a
generalization, Top-2-sadilla, to several state-of-the-art approaches, including
the tensor transposition routine used in the SPLATT tensor factorization
library. In serial tests, Quesadilla was the best strategy for 60% of all
tensor and transposition combinations and improved over SPLATT by at least 19%
in half of the combinations. In parallel tests, at least one of Quesadilla or
Top-2-sadilla was the best strategy for 52% of all tensor and transposition
combinations.
Comment: This work will be the subject of a brief announcement at the 32nd ACM
Symposium on Parallelism in Algorithms and Architectures (SPAA '20
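The core idea of transposing a sparse tensor by sorting its coordinate list can be illustrated in a few lines. This is a minimal sketch of coordinate-sorting transposition, not the Quesadilla algorithm itself; in particular, Quesadilla's key optimisation of skipping sort passes on modes whose relative order is already implied by the input ordering is omitted here.

```python
import numpy as np

def transpose_coo(coords, vals, perm):
    """Transpose a sparse tensor stored as a COO coordinate list.

    coords : (nnz, nmodes) integer coordinates
    vals   : (nnz,) values
    perm   : tuple, the requested mode permutation
    """
    coords = coords[:, perm]              # reorder the mode columns
    # np.lexsort treats its LAST key as primary, so feed the modes in
    # reverse to obtain lexicographic (row-major) coordinate order.
    order = np.lexsort(coords.T[::-1])
    return coords[order], vals[order]

# A 3-way tensor with 4 non-zeros; transpose modes (0,1,2) -> (2,0,1).
coords = np.array([[0, 1, 2],
                   [1, 0, 0],
                   [0, 0, 1],
                   [1, 1, 2]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
c2, v2 = transpose_coo(coords, vals, (2, 0, 1))
print(c2[0], v2[0])  # first entry after sorting: coordinate [0 1 0], value 2.0
```

A multi-pass radix sort over the mode columns, as used by Quesadilla, achieves the same lexicographic ordering in linear passes instead of a comparison sort.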
The Galois Complexity of Graph Drawing: Why Numerical Solutions are Ubiquitous for Force-Directed, Spectral, and Circle Packing Drawings
Many well-known graph drawing techniques, including force-directed drawings,
spectral graph layouts, multidimensional scaling, and circle packings, have
algebraic formulations. However, practical methods for producing such drawings
ubiquitously use iterative numerical approximations rather than constructing
and then solving algebraic expressions representing their exact solutions. To
explain this phenomenon, we use Galois theory to show that many variants of
these problems have solutions that cannot be expressed by nested radicals or
nested roots of low-degree polynomials. Hence, such solutions cannot be
computed exactly even in extended computational models that include such
operations.
Comment: Graph Drawing 201
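The non-solvability phenomenon this abstract invokes has a classical textbook illustration (our example, not taken from the paper):

```latex
% A quintic whose roots cannot be written in nested radicals,
% because its Galois group over Q is the full symmetric group:
\[
  p(x) = x^{5} - x - 1, \qquad \operatorname{Gal}(p/\mathbb{Q}) \cong S_{5},
\]
% S_5 is not a solvable group, so by Galois theory no root of p(x)
% admits a radical expression.
```

The paper's contribution is to establish analogous obstructions for the algebraic systems arising from force-directed, spectral, and circle-packing layouts.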
Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation
Despite the importance of sparse matrices in numerous fields of science,
software implementations remain difficult for non-expert users to use,
generally requiring an understanding of the underlying details of the chosen
sparse matrix storage format. In addition, to achieve good performance, several
formats may need to be used in one program, requiring explicit selection and
conversion between the formats. This can be both tedious and error-prone,
especially for non-expert users. Motivated by these issues, we present a
user-friendly and open-source sparse matrix class for the C++ language, with a
high-level application programming interface deliberately similar to the widely
used MATLAB language. This facilitates prototyping directly in C++ and aids the
conversion of research code into production environments. The class internally
uses two main approaches to achieve efficient execution: (i) a hybrid storage
framework, which automatically and seamlessly switches between three underlying
storage formats (compressed sparse column, Red-Black tree, coordinate list)
depending on which format is best suited and/or available for specific
operations, and (ii) a template-based meta-programming framework to
automatically detect and optimise execution of common expression patterns.
Empirical evaluations on large sparse matrices with various densities of
non-zero elements demonstrate the advantages of the hybrid storage framework
and the expression optimisation mechanism.
Comment: extended and revised version of an earlier conference paper,
arXiv:1805.0338
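The hybrid-storage idea can be sketched in a few dozen lines. The following is a toy Python analogue (not the paper's C++ class): element insertion goes into a dictionary, playing the role of the cheap-insertion format, and the matrix is lazily converted to compressed sparse column (CSC) the first time an operation that benefits from CSC is requested; any write invalidates the cached CSC copy.

```python
class HybridSparse:
    """Toy sketch of hybrid sparse storage: dict-of-keys for writes,
    CSC built on demand for reads (illustrative, not the paper's API)."""

    def __init__(self, n_rows, n_cols):
        self.shape = (n_rows, n_cols)
        self._dok = {}     # (row, col) -> value: cheap random insertion
        self._csc = None   # CSC cache, built lazily, invalidated on writes

    def __setitem__(self, rc, v):
        self._dok[rc] = v
        self._csc = None   # any write invalidates the compressed copy

    def _to_csc(self):
        # CSC layout: values and row indices grouped column by column,
        # with col_ptr[j]:col_ptr[j+1] delimiting column j.
        entries = sorted(self._dok.items(), key=lambda kv: (kv[0][1], kv[0][0]))
        vals = [v for _, v in entries]
        rows = [r for (r, _), _ in entries]
        col_ptr = [0] * (self.shape[1] + 1)
        for (_, c), _ in entries:
            col_ptr[c + 1] += 1
        for j in range(self.shape[1]):
            col_ptr[j + 1] += col_ptr[j]
        return vals, rows, col_ptr

    def matvec(self, x):
        # Matrix-vector product prefers the compressed format.
        if self._csc is None:
            self._csc = self._to_csc()
        vals, rows, col_ptr = self._csc
        y = [0.0] * self.shape[0]
        for j, xj in enumerate(x):
            if xj:
                for k in range(col_ptr[j], col_ptr[j + 1]):
                    y[rows[k]] += vals[k] * xj
        return y

A = HybridSparse(2, 3)
A[0, 0] = 1.0
A[1, 2] = 2.0
print(A.matvec([1.0, 0.0, 3.0]))  # [1.0, 6.0]
```

The paper's class additionally switches among three formats (CSC, Red-Black tree, coordinate list) and uses C++ template metaprogramming to fuse common expression patterns at compile time; neither is attempted in this sketch.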