5 research outputs found
Parameterization of Tensor Network Contraction
We present a conceptually clear and algorithmically useful framework for parameterizing the costs of tensor network contraction. Our framework is completely general, applying to tensor networks with arbitrary bond dimensions, open legs, and hyperedges. The fundamental objects of our framework are rooted and unrooted contraction trees, which represent classes of contraction orders. Properties of a contraction tree correspond directly and precisely to the time and space costs of tensor network contraction. The properties of rooted contraction trees give the costs of parallelized contraction algorithms. We show how contraction trees relate to existing tree-like objects in the graph theory literature, bringing a wide range of graph algorithms and tools to bear on tensor network contraction. Independent of tensor networks, we show that the edge congestion of a graph is almost equal to the branchwidth of its line graph.
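To make the time/space trade-off concrete, here is a toy NumPy example (our illustration, not the paper's framework): two contraction trees for the same three-tensor chain network produce the same result but intermediates of very different sizes.

```python
import numpy as np

# Toy illustration: the cost of contracting a fixed tensor network
# depends on the contraction tree (contraction order) chosen.
# Chain network: A[i,j] -- B[j,k] -- C[k,l] with the bond dimensions below.
di, dj, dk, dl = 2, 100, 2, 100
A = np.random.rand(di, dj)
B = np.random.rand(dj, dk)
C = np.random.rand(dk, dl)

# Contraction tree 1: contract (A, B) first, then the result with C.
AB = np.einsum('ij,jk->ik', A, B)   # intermediate with di*dk = 4 entries
T1 = np.einsum('ik,kl->il', AB, C)

# Contraction tree 2: contract (B, C) first, then A with the result.
BC = np.einsum('jk,kl->jl', B, C)   # intermediate with dj*dl = 10000 entries
T2 = np.einsum('ij,jl->il', A, BC)

# Both trees yield the same tensor, but their space costs differ by 2500x.
assert np.allclose(T1, T2)
```

Choosing between such trees is exactly the optimization problem the framework parameterizes.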
Simulating quantum computations with Tutte polynomials
We establish a classical heuristic algorithm for exactly computing quantum probability amplitudes. Our algorithm is based on mapping output probability amplitudes of quantum circuits to evaluations of the Tutte polynomial of graphic matroids. The algorithm evaluates the Tutte polynomial recursively using the deletion–contraction property while attempting to exploit structural properties of the matroid. We consider several variations of our algorithm and present experimental results comparing their performance on two classes of random quantum circuits. Further, we obtain an explicit form for Clifford circuit amplitudes in terms of matroid invariants and an alternative efficient classical algorithm for computing the output probability amplitudes of Clifford circuits.
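The deletion–contraction recursion at the heart of the approach can be sketched in a few lines. The following is our minimal illustration, not the paper's optimized algorithm (which exploits matroid structure): it evaluates the Tutte polynomial T(G; x, y) of a multigraph given as an edge list, in exponential time.

```python
# Deletion-contraction evaluation of the Tutte polynomial T(G; x, y).

def connected(u, v, edges):
    """Return True if v is reachable from u in the undirected edge list."""
    seen, stack = {u}, [u]
    while stack:
        a = stack.pop()
        if a == v:
            return True
        for p, q in edges:
            for nxt in ((q,) if p == a else ()) + ((p,) if q == a else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return False

def tutte(edges, x, y):
    """Evaluate T(G; x, y) via the deletion-contraction recursion."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                          # loop e:   T(G) = y * T(G - e)
        return y * tutte(rest, x, y)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if not connected(u, v, rest):       # bridge e: T(G) = x * T(G / e)
        return x * tutte(contracted, x, y)
    # ordinary edge e:                    T(G) = T(G - e) + T(G / e)
    return tutte(rest, x, y) + tutte(contracted, x, y)

# Sanity check: T(G; 1, 1) counts the spanning trees of a connected graph.
assert tutte([(1, 2), (2, 3), (1, 3)], 1, 1) == 3   # the triangle has 3
```

The order in which edges are processed strongly affects how much the recursion tree can be pruned, which is the kind of structural choice the paper's variations explore.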
Low Rank Approximation for General Tensor Networks
We study the problem of approximating a given tensor A with q modes by an arbitrary tensor network of rank k -- that is, a graph G = (V, E) with |V| = q, together with a collection of tensors {U_v : v ∈ V} which are contracted in the manner specified by G to obtain a tensor T. For each mode of U_v corresponding to an edge incident to v, the dimension is k, and we wish to find tensors U_v such that the Frobenius norm distance between T and A is minimized. This generalizes a number of well-known tensor network decompositions, such as the Tensor Train, Tensor Ring, Tucker, and PEPS decompositions. We approximate A by a binary tree network with O(q) cores, in which the dimension on each edge is bounded in terms of k, the maximum degree of G, and the treewidth of G, and whose Frobenius norm distance to A is within a (1 + ε) factor of that of the best rank-k network. The running time of our algorithm scales with nnz(A), the number of nonzero entries of A. Our algorithm is based on a new dimensionality reduction technique for tensor decomposition which may be of independent interest.
We also develop fixed-parameter tractable (1 + ε)-approximation algorithms for Tensor Train and Tucker decompositions, improving on the running time of Song, Woodruff and Zhong (SODA, 2019) and avoiding the use of generic polynomial system solvers. We show that our algorithms have a nearly optimal dependence on 1/ε, assuming that there is no O(1)-approximation algorithm for the 2 → 4 norm with better running time than brute force. Finally, we give additional results for Tucker decomposition with robust loss functions, and a fixed-parameter tractable algorithm for CP decomposition.
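The Tensor Train decomposition named above admits a simple construction by sequential truncated SVDs. The following TT-SVD sketch is our illustration of that special case, not the paper's algorithm (which handles arbitrary network graphs with a different, sketching-based approach): it splits a q-mode tensor into a chain of q cores.

```python
import numpy as np

def tt_svd(A, rank):
    """TT-SVD sketch: decompose a q-mode tensor into a Tensor Train
    (chain-shaped network) by sequential truncated SVDs."""
    dims = A.shape
    cores, r_prev, M = [], 1, A
    for d in dims[:-1]:
        M = M.reshape(r_prev * d, -1)          # unfold the current mode
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(rank, len(S))                  # truncate to the target rank
        cores.append(U[:, :r].reshape(r_prev, d, r))
        M = S[:r, None] * Vt[:r]               # carry the remainder forward
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract the train back into a dense tensor."""
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=(T.ndim - 1, 0))
    return T.reshape(T.shape[1:-1])

# A sum of two rank-1 tensors has TT-rank 2, so rank-2 TT-SVD recovers it.
rng = np.random.default_rng(0)
u = [rng.standard_normal(n) for n in (3, 4, 5)]
w = [rng.standard_normal(n) for n in (3, 4, 5)]
A = np.einsum('i,j,k->ijk', *u) + np.einsum('i,j,k->ijk', *w)
cores = tt_svd(A, 2)
assert np.allclose(tt_contract(cores), A)
```

Each core here has a mode per incident edge of the chain graph, matching the general graph formulation in the abstract, where the chain is replaced by an arbitrary G.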