
    Parameterization of Tensor Network Contraction

    We present a conceptually clear and algorithmically useful framework for parameterizing the costs of tensor network contraction. Our framework is completely general, applying to tensor networks with arbitrary bond dimensions, open legs, and hyperedges. The fundamental objects of our framework are rooted and unrooted contraction trees, which represent classes of contraction orders. Properties of a contraction tree correspond directly and precisely to the time and space costs of tensor network contraction. The properties of rooted contraction trees give the costs of parallelized contraction algorithms. We show how contraction trees relate to existing tree-like objects in the graph theory literature, bringing a wide range of graph algorithms and tools to bear on tensor network contraction. Independent of tensor networks, we show that the edge congestion of a graph is almost equal to the branchwidth of its line graph.
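    As a rough illustration of how a rooted contraction tree pins down contraction costs, the sketch below walks a binary contraction tree over einsum-style index sets and accumulates a multiplication count and the size of the largest intermediate tensor. The bond dimensions, index labels, and cost conventions here are illustrative assumptions, not the paper's exact definitions.

    # Minimal sketch (Python): costs of contracting a tensor network along a
    # rooted binary contraction tree.  Leaves are tensors given by their index
    # sets; internal nodes are pairs (left, right).
    # Assumed conventions (illustrative, not from the paper):
    #   - contracting two tensors multiplies over the union of their indices
    #   - the result keeps indices that are open or still used outside the subtree
    from math import prod

    dims = {"a": 4, "b": 4, "c": 4, "d": 4, "e": 4}    # bond dimensions
    open_legs = {"a", "e"}                              # open legs of the network

    # Contract T_ab, T_bc, T_cd, T_de as ((T_ab * T_bc) * (T_cd * T_de)).
    tree = ((frozenset("ab"), frozenset("bc")),
            (frozenset("cd"), frozenset("de")))

    def leaves(node):
        return [node] if isinstance(node, frozenset) else leaves(node[0]) + leaves(node[1])

    def cost(node, outside):
        """Return (result indices, total multiplications, largest tensor size)
        for the subtree `node`; `outside` holds indices used outside it."""
        if isinstance(node, frozenset):                 # leaf tensor
            return node, 0, prod(dims[i] for i in node)
        left, right = node
        l_idx = set().union(*leaves(left))
        r_idx = set().union(*leaves(right))
        li, lf, lm = cost(left, outside | r_idx)
        ri, rf, rm = cost(right, outside | l_idx)
        flops = prod(dims[i] for i in li | ri)          # pairwise contraction cost
        kept = (li | ri) & (outside | open_legs)        # legs the result must keep
        size = prod(dims[i] for i in kept)
        return kept, lf + rf + flops, max(lm, rm, size)

    result_idx, flops, peak = cost(tree, set())
    print(result_idx, flops, peak)   # indices {'a','e'}, 192 multiplications, peak size 16

    Choosing a different tree changes both numbers, which is the sense in which properties of the contraction tree parameterize the time and space costs of the contraction.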

    Simulating quantum computations with Tutte polynomials

    We establish a classical heuristic algorithm for exactly computing quantum probability amplitudes. Our algorithm is based on mapping output probability amplitudes of quantum circuits to evaluations of the Tutte polynomial of graphic matroids. The algorithm evaluates the Tutte polynomial recursively using the deletion–contraction property while attempting to exploit structural properties of the matroid. We consider several variations of our algorithm and present experimental results comparing their performance on two classes of random quantum circuits. Further, we obtain an explicit form for Clifford circuit amplitudes in terms of matroid invariants and an alternative efficient classical algorithm for computing the output probability amplitudes of Clifford circuits.
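    For context, the deletion–contraction recurrence mentioned above can be sketched directly on a multigraph. The following is a minimal, unoptimized evaluation of the Tutte polynomial at a numeric point; the edge-list representation and helper names are illustrative, and none of the matroid-level structure exploited by the paper's algorithm is included.

    # Minimal sketch (Python): Tutte polynomial T(G; x, y) of a multigraph by
    # naive deletion-contraction (exponential time, no pruning).
    #   T(G) = 1                     if G has no edges
    #   T(G) = y * T(G - e)          if e is a loop
    #   T(G) = x * T(G / e)          if e is a bridge
    #   T(G) = T(G - e) + T(G / e)   otherwise

    def connected(u, v, edges):
        """True if u and v are joined by a path in the given edge list."""
        seen, stack = {u}, [u]
        while stack:
            w = stack.pop()
            if w == v:
                return True
            for a, b in edges:
                for p, q in ((a, b), (b, a)):
                    if p == w and q not in seen:
                        seen.add(q)
                        stack.append(q)
        return False

    def tutte(edges, x, y):
        if not edges:
            return 1
        (u, v), rest = edges[0], edges[1:]
        if u == v:                                          # loop
            return y * tutte(rest, x, y)
        merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
        if not connected(u, v, rest):                       # bridge
            return x * tutte(merged, x, y)
        return tutte(rest, x, y) + tutte(merged, x, y)      # delete + contract

    # Triangle: T(K3; x, y) = x^2 + x + y, so T(K3; 1, 1) = 3 spanning trees.
    print(tutte([(0, 1), (1, 2), (2, 0)], 1, 1))            # -> 3

    The algorithm described in the abstract improves on this naive recursion by attempting to exploit structural properties of the underlying matroid during the recursion.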

    Low Rank Approximation for General Tensor Networks

    We study the problem of approximating a given tensor with $q$ modes $A \in \mathbb{R}^{n \times \ldots \times n}$ by an arbitrary tensor network of rank $k$ -- that is, a graph $G = (V, E)$ with $|V| = q$, together with a collection of tensors $\{U_v \mid v \in V\}$ which are contracted in the manner specified by $G$ to obtain a tensor $T$. For each mode of $U_v$ corresponding to an edge incident to $v$, the dimension is $k$, and we wish to find the $U_v$ such that the Frobenius norm distance between $T$ and $A$ is minimized. This generalizes a number of well-known tensor network decompositions, such as the Tensor Train, Tensor Ring, Tucker, and PEPS decompositions. We approximate $A$ by a binary tree network $T'$ with $O(q)$ cores, such that the dimension on each edge of this network is at most $\widetilde{O}(k^{O(dt)} \cdot q/\varepsilon)$, where $d$ is the maximum degree of $G$ and $t$ is its treewidth, and such that $\|A - T'\|_F^2 \leq (1 + \varepsilon) \|A - T\|_F^2$. The running time of our algorithm is $O(q \cdot \mathrm{nnz}(A)) + n \cdot \mathrm{poly}(k^{dt} q / \varepsilon)$, where $\mathrm{nnz}(A)$ is the number of nonzero entries of $A$. Our algorithm is based on a new dimensionality reduction technique for tensor decomposition which may be of independent interest. We also develop fixed-parameter tractable $(1 + \varepsilon)$-approximation algorithms for Tensor Train and Tucker decompositions, improving the running time of Song, Woodruff and Zhong (SODA, 2019) and avoiding the use of generic polynomial system solvers. We show that our algorithms have a nearly optimal dependence on $1/\varepsilon$, assuming that there is no $O(1)$-approximation algorithm for the $2 \to 4$ norm with running time better than brute force. Finally, we give additional results for Tucker decomposition with robust loss functions, and for fixed-parameter tractable CP decomposition.
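    To make the setup concrete, the small sketch below builds a rank-$k$ tensor network on a toy graph $G$ (one core per vertex, one shared index of dimension $k$ per edge, one open mode of dimension $n$ per vertex), contracts it, and measures the squared Frobenius distance to a target tensor. The graph, sizes, and variable names are illustrative only; this shows the objective being approximated, not the paper's algorithm.

    # Illustrative sketch (Python/NumPy) of the tensor network objective:
    # cores {U_v} on a graph G, contracted into a q-mode tensor T, compared
    # to a target A in squared Frobenius norm.  Toy sizes, made-up graph.
    import numpy as np

    n, k, q = 5, 3, 4
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # G is a 4-cycle, so every
                                                  # vertex has degree 2
    rng = np.random.default_rng(0)
    # U_v: one open mode of dimension n, plus one mode of dimension k per
    # incident edge (degree 2 here, hence shape (n, k, k)).
    cores = [rng.standard_normal((n, k, k)) for _ in range(q)]

    # einsum subscripts: 'a'..'d' label the open modes, 'e'.. label the edges.
    open_labels = ["a", "b", "c", "d"]
    edge_labels = {e: chr(ord("e") + i) for i, e in enumerate(edges)}
    subscripts = [open_labels[v] + "".join(edge_labels[e] for e in edges if v in e)
                  for v in range(q)]
    expr = ",".join(subscripts) + "->" + "".join(open_labels)   # "aeh,bef,cfg,dgh->abcd"

    T = np.einsum(expr, *cores)                   # contract the network
    A = rng.standard_normal((n,) * q)             # target tensor to approximate
    err = np.linalg.norm(T - A) ** 2              # ||A - T||_F^2
    print(expr, T.shape, err)

    In the paper's notation, $q = |V|$, each edge carries a mode of dimension $k$, and the goal is to choose the cores $U_v$ so that this error is minimized.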