
    Scalable Task-Based Algorithm for Multiplication of Block-Rank-Sparse Matrices

    A task-based formulation of the Scalable Universal Matrix Multiplication Algorithm (SUMMA), a popular algorithm for matrix multiplication (MM), is applied to the multiplication of hierarchy-free, rank-structured matrices that appear in the domain of quantum chemistry (QC). The novel features of our formulation are: (1) concurrent scheduling of multiple SUMMA iterations, and (2) fine-grained task-based composition. These features make it tolerant of the load imbalance due to the irregular matrix structure and eliminate all artifactual sources of global synchronization. Scalability of iterative computation of the square-root inverse of block-rank-sparse QC matrices is demonstrated; for full-rank (dense) matrices the performance of our SUMMA formulation usually exceeds that of the state-of-the-art dense MM implementations (ScaLAPACK and Cyclops Tensor Framework).
    Comment: 8 pages, 6 figures, accepted to IA3 2015. arXiv admin note: text overlap with arXiv:1504.0504
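The outer-product structure that SUMMA distributes can be illustrated serially: the multiplication proceeds as a loop of rank-`nb` updates, one per block column of A and block row of B. In the distributed algorithm each iteration's panels are broadcast along process rows and columns, and the paper's contribution is to schedule several such iterations concurrently as fine-grained tasks. A minimal serial sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def summa_blocked(A, B, nb):
    """Serial sketch of SUMMA's outer-product loop.

    Each iteration performs a rank-nb update C += A_panel @ B_panel;
    in distributed SUMMA these panels are broadcast along process
    rows/columns instead of sliced locally.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for p in range(0, k, nb):
        # One "SUMMA iteration": a rank-nb outer-product update.
        C += A[:, p:p + nb] @ B[p:p + nb, :]
    return C
```

Because the iterations commute (each adds an independent rank-`nb` term), several of them can be in flight at once, which is what makes the concurrent task-based scheduling described above legitimate.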

    Tensor network representations from the geometry of entangled states

    Tensor network states provide successful descriptions of strongly correlated quantum systems, with applications ranging from condensed matter physics to cosmology. Any family of tensor network states possesses an underlying entanglement structure given by a graph of maximally entangled states along the edges that identify the indices of the tensors to be contracted. Recently, more general tensor networks have been considered, where the maximally entangled states on edges are replaced by multipartite entangled states on plaquettes. Both the structure of the underlying graph and the dimensionality of the entangled states influence the computational cost of contracting these networks. Using the geometrical properties of entangled states, we provide a method to construct tensor network representations with smaller effective bond dimension. We illustrate our method with the resonating valence bond state on the kagome lattice.
    Comment: 35 pages, 9 figures
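The edge construction described above can be checked on the smallest case: a maximally entangled pair |φ⟩ = Σ_i |i,i⟩ of bond dimension D placed on an edge, with a local map applied at each endpoint, reproduces exactly the contraction of the two tensors over their shared index. A toy verification (all names and dimensions here are illustrative, not the paper's construction):

```python
import numpy as np

D, dA, dB = 3, 4, 5           # bond dimension and two physical dimensions (toy values)
MA = np.random.rand(dA, D)    # local map acting on site A's half of the pair
MB = np.random.rand(dB, D)    # local map acting on site B's half

# Maximally entangled state on the edge, vectorized: |phi> = sum_i |i>|i>.
phi = np.eye(D).reshape(D * D)

# State obtained by applying (MA ⊗ MB) to the entangled pair ...
psi_via_pair = (np.kron(MA, MB) @ phi).reshape(dA, dB)

# ... equals the tensor network contraction over the shared bond index i.
psi_via_contraction = np.einsum('ai,bi->ab', MA, MB)

assert np.allclose(psi_via_pair, psi_via_contraction)
```

The cost of such contractions scales with powers of D, which is why constructions that lower the effective bond dimension, as the paper proposes, translate directly into cheaper contractions.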

    Unifying Projected Entangled Pair States contractions

    The approximate contraction of a Projected Entangled Pair States (PEPS) tensor network is a fundamental ingredient of any PEPS algorithm, required for the optimization of the tensors in ground state search or time evolution, as well as for the evaluation of expectation values. An exact contraction is in general impossible, and the choice of the approximating procedure determines the efficiency and accuracy of the algorithm. We analyze different previous proposals for this approximation, and show that they can be understood via the form of their environment, i.e. the operator that results from contracting part of the network. This provides physical insight into the limitations of various approaches, and allows us to introduce a new strategy, based on the idea of clusters, that unifies previous methods. The resulting contraction algorithm interpolates naturally between the cheapest and most imprecise method and the most costly and most precise one. We benchmark the different algorithms with finite PEPS, and show how the cluster strategy can be used for both the tensor optimization and the calculation of expectation values. Additionally, we discuss its applicability to the parallelization of PEPS and to infinite systems (iPEPS).
    Comment: 28 pages, 15 figures, accepted version
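The "environment" of a tensor, i.e. the operator obtained by contracting the rest of the network, is easiest to see in one dimension, where it can be computed exactly. A toy 1D analogue for the norm ⟨ψ|ψ⟩ of a matrix product state (all function names and conventions here are illustrative, not the paper's, and the genuine 2D PEPS case is precisely where this exact contraction becomes intractable and must be approximated):

```python
import numpy as np

def environment(tensors, k):
    """Exact environment of site k in <psi|psi> for an MPS.

    Each tensor has shape (left_bond, physical, right_bond).
    Returns (L, R): contractions of everything to the left and to
    the right of site k, so that
        <psi|psi> = einsum('ab,aic,bid,cd->', L, A_k, A_k.conj(), R).
    """
    L = np.ones((1, 1))
    for A in tensors[:k]:                      # sweep in from the left edge
        L = np.einsum('ab,aic,bid->cd', L, A, A.conj())
    R = np.ones((1, 1))
    for A in reversed(tensors[k + 1:]):        # sweep in from the right edge
        R = np.einsum('aic,bid,cd->ab', A, A.conj(), R)
    return L, R
```

In 2D the analogous partial contraction cannot be done exactly at polynomial cost, and the various approximations of this environment are exactly what the cluster strategy above organizes.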

    RosneT: A block tensor algebra library for out-of-core quantum computing simulation

    With the advent of more powerful quantum computers, the need for larger quantum simulations has grown. Because the required resources grow exponentially with the size of the target system, tensor networks emerge as an optimal framework for representing quantum states as tensor factorizations. As the extent of a tensor network increases, so does the size of the intermediate tensors, requiring HPC tools for their manipulation. Simulations of medium-sized circuits cannot fit in local memory, and solutions for distributed contraction of tensors are scarce. In this work we present RosneT, a library for distributed, out-of-core block tensor algebra. We use the PyCOMPSs programming model to transform tensor operations into a collection of tasks handled by the COMPSs runtime, targeting executions on existing and upcoming Exascale supercomputers. We report results validating our approach, showing good scalability in simulations of quantum circuits of up to 53 qubits.
    We acknowledge support from project QuantumCAT (ref. 001-P-001644), co-funded by the Generalitat de Catalunya and the European Union Regional Development Fund within the ERDF Operational Program of Catalunya, and the European Union's Horizon 2020 research and innovation programme under grant agreement No 951911 (AI4Media). This work has also been partially supported by the Spanish Government (PID2019-107255GB) and by the Generalitat de Catalunya (contract 2014-SGR-1051). This work is co-funded by the European Regional Development Fund under the framework of the ERDF Operative Programme for Catalunya 2014-2020, with 1.527.637,88 €. Anna Queralt is a Serra Hunter Fellow.
    Peer Reviewed. Postprint (author's final draft)
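The core pattern, decomposing a block tensor operation into independent per-block tasks that a runtime can schedule and spill out of core, can be sketched with a plain thread pool standing in for the COMPSs runtime. This is an illustrative sketch of the task decomposition only, not RosneT's API:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def block_matmul(Ablocks, Bblocks):
    """Blocked matrix multiply as a collection of independent tasks.

    Ablocks: dict (i, k) -> ndarray block of A
    Bblocks: dict (k, j) -> ndarray block of B
    Each output block C[i, j] = sum_k A[i, k] @ B[k, j] is one task;
    a runtime like COMPSs would schedule these across nodes and move
    blocks in and out of core as needed.
    """
    I = 1 + max(i for i, _ in Ablocks)
    K = 1 + max(k for _, k in Ablocks)
    J = 1 + max(j for _, j in Bblocks)

    def task(i, j):
        # One independent task: accumulate the (i, j) output block.
        return sum(Ablocks[i, k] @ Bblocks[k, j] for k in range(K))

    with ThreadPoolExecutor() as pool:
        futures = {(i, j): pool.submit(task, i, j)
                   for i in range(I) for j in range(J)}
    return {ij: f.result() for ij, f in futures.items()}
```

Because each output block depends only on one block row of A and one block column of B, the tasks are embarrassingly parallel at this level, which is what makes the task-based formulation scale across distributed memory.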