On Stability of Tensor Networks and Canonical Forms
Tensor networks such as matrix product states (MPS) and projected entangled
pair states (PEPS) are commonly used to approximate quantum systems. These
networks are optimized in methods such as DMRG or evolved by local operators.
We provide bounds on the conditioning of tensor network representations with
respect to sitewise perturbations. These bounds characterize the extent to
which local approximation error in the site tensors of a tensor network can be
amplified into error in the tensor it represents. Known tensor network methods
use canonical forms to minimize such error amplification. However,
canonical forms are difficult to obtain for many tensor networks of interest.
We quantify the extent to which error can be amplified in general tensor
networks, yielding estimates of the benefit of using canonical forms. For
the MPS and PEPS tensor networks, we provide simple bounds on the worst-case
error amplification. Beyond theoretical error bounds, we experimentally study
the dependence of the error on the size of the network for perturbed random MPS
tensor networks.
Comment: 24 pages, 7 figures, comments welcome
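As a rough illustration of the setting (not the paper's actual bounds), the sketch below builds a random MPS, perturbs a single site tensor, and measures how much the relative error in the represented state exceeds the relative error injected at the site. All function names here are illustrative; the left-canonicalization is the standard QR sweep.

```python
import numpy as np

def random_mps(n_sites, phys=2, chi=4, seed=0):
    """Random MPS: a list of (left_bond, physical, right_bond) tensors."""
    rng = np.random.default_rng(seed)
    dims = [1] + [chi] * (n_sites - 1) + [1]
    return [rng.standard_normal((dims[i], phys, dims[i + 1]))
            for i in range(n_sites)]

def contract(mps):
    """Contract all site tensors into the full state vector."""
    psi = mps[0]
    for t in mps[1:]:
        psi = np.tensordot(psi, t, axes=([-1], [0]))
    return psi.reshape(-1)

def left_canonicalize(mps):
    """QR sweep left-to-right: every site but the last becomes a left
    isometry, pushing all norm into the final (center) tensor."""
    out = [t.copy() for t in mps]
    for i in range(len(out) - 1):
        l, p, r = out[i].shape
        q, rmat = np.linalg.qr(out[i].reshape(l * p, r))
        out[i] = q.reshape(l, p, q.shape[1])
        out[i + 1] = np.tensordot(rmat, out[i + 1], axes=([1], [0]))
    return out

def amplification(mps, site, eps=1e-6, seed=1):
    """Relative error in the represented state per unit of relative
    perturbation applied to one site tensor."""
    rng = np.random.default_rng(seed)
    psi = contract(mps)
    pert = [t.copy() for t in mps]
    noise = rng.standard_normal(pert[site].shape)
    noise *= eps * np.linalg.norm(pert[site]) / np.linalg.norm(noise)
    pert[site] += noise
    rel_err = np.linalg.norm(contract(pert) - psi) / np.linalg.norm(psi)
    return rel_err / eps
```

Because the state is exactly linear in each individual site tensor, perturbing the orthogonality center of a left-canonical MPS gives amplification exactly 1: the isometries to its left preserve norms. In a generic (non-canonical) gauge the same perturbation can be amplified, which is the effect the bounds above quantify.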
Distributed-Memory DMRG via Sparse and Dense Parallel Tensor Contractions
The Density Matrix Renormalization Group (DMRG) algorithm is a powerful tool
for solving eigenvalue problems to model quantum systems. DMRG relies on tensor
contractions and dense linear algebra to compute properties of condensed matter
physics systems. However, its efficient parallel implementation is challenging
due to limited concurrency, large memory footprint, and tensor sparsity. We
mitigate these problems by implementing two new parallel approaches that handle
block sparsity arising in DMRG, via Cyclops, a distributed memory tensor
contraction library. We benchmark their performance on two physical systems
using the Blue Waters and Stampede2 supercomputers. Our DMRG performance is
improved by up to 5.9X in runtime and 99X in processing rate over ITensor, at
roughly comparable computational resource use. This enables higher accuracy
calculations via larger tensors for quantum state approximation. We demonstrate
that, despite its limited concurrency, DMRG is weakly scalable when efficient
parallel tensor contraction mechanisms are used.
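The block sparsity referred to above arises from conserved quantum numbers: a symmetric tensor decomposes into blocks labeled by symmetry sectors, and only sector-compatible blocks contribute to a contraction. A minimal serial sketch of that idea follows; the dict-of-blocks layout and function names are illustrative, not the Cyclops or ITensor API.

```python
import numpy as np

def block_matmul(a_blocks, b_blocks):
    """Contract two block-sparse matrices stored as {(q_row, q_col): block}.
    Only blocks sharing a middle sector label contribute, so the dense
    zero structure is never touched."""
    out = {}
    for (qi, qk), a in a_blocks.items():
        for (qk2, qj), b in b_blocks.items():
            if qk != qk2:
                continue  # sector mismatch: contribution is exactly zero
            out[(qi, qj)] = out.get((qi, qj), 0) + a @ b
    return out

def densify(blocks, row_sizes, col_sizes):
    """Assemble the equivalent dense matrix; missing blocks are zero."""
    roff, o = {}, 0
    for q in sorted(row_sizes):
        roff[q] = o
        o += row_sizes[q]
    nrows = o
    coff, o = {}, 0
    for q in sorted(col_sizes):
        coff[q] = o
        o += col_sizes[q]
    m = np.zeros((nrows, o))
    for (qr, qc), blk in blocks.items():
        m[roff[qr]:roff[qr] + row_sizes[qr],
          coff[qc]:coff[qc] + col_sizes[qc]] = blk
    return m
```

Distributing loops like these is hard precisely because block sizes are irregular and the compatible-block pattern is sparse, which is what motivates delegating the contraction to a library that handles both sparsity and distributed-memory parallelism.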