37,263 research outputs found

    Random Quantum Circuits and Pseudo-Random Operators: Theory and Applications

    Pseudo-random operators consist of sets of operators that exhibit many of the important statistical features of uniformly distributed random operators. Such pseudo-random sets of operators are most useful when they can be parameterized and generated on a quantum processor in a way that requires exponentially fewer resources than direct implementation of the uniformly random set. Efficient pseudo-random operators can overcome the exponential cost of the random operators required for quantum communication tasks such as super-dense coding of quantum states and approximately secure quantum data-hiding, and they enable efficient stochastic methods for noise estimation on prototype quantum processors. This paper summarizes recently published work demonstrating a random circuit method for implementing pseudo-random unitary operators on a quantum processor [Emerson et al., Science 302:2098 (Dec. 19, 2003)], and further elaborates the theory and applications of pseudo-random states and operators.
    Comment: This paper is a synopsis of Emerson et al., Science 302:2098 (Dec 19, 2003) and some related unpublished work; it is based on a talk given at QCMC04; 4 pages, 1 figure, aipproc.st
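    The random-circuit idea summarized above can be pictured as alternating layers of independent random single-qubit rotations and fixed entangling gates, with the circuit unitary approaching Haar-like statistics as the depth grows. The NumPy sketch below is a minimal illustration under that generic assumption only; it is not the specific two-qubit coupling used by Emerson et al., and the function names (haar_unitary, embed, pseudo_random_unitary) are ours.

        import numpy as np

        def haar_unitary(dim):
            # Haar-random unitary from the QR decomposition of a complex Gaussian matrix
            z = (np.random.randn(dim, dim) + 1j * np.random.randn(dim, dim)) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            d = np.diagonal(r)
            return q * (d / np.abs(d))  # fix column phases so the result is Haar-distributed

        def embed(gate, pos, n_qubits):
            # place a gate acting on contiguous qubits starting at `pos` into the full 2^n space
            k = int(np.log2(gate.shape[0]))
            left = np.eye(2 ** pos)
            right = np.eye(2 ** (n_qubits - pos - k))
            return np.kron(np.kron(left, gate), right)

        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=complex)

        def pseudo_random_unitary(n_qubits, depth):
            # alternate layers of random single-qubit rotations and nearest-neighbour
            # entangling gates; deeper circuits mimic Haar statistics more closely
            U = np.eye(2 ** n_qubits, dtype=complex)
            for _ in range(depth):
                for q in range(n_qubits):
                    U = embed(haar_unitary(2), q, n_qubits) @ U
                for q in range(n_qubits - 1):
                    U = embed(CNOT, q, n_qubits) @ U
            return U

        U = pseudo_random_unitary(n_qubits=3, depth=8)
        print(np.allclose(U.conj().T @ U, np.eye(8)))  # unitarity check

    The point of such constructions is cost: the circuit is specified by a number of random parameters that grows only polynomially with the number of qubits, whereas directly sampling a uniformly random unitary requires exponentially many.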

    SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics

    We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE's goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows the use of high-resolution shock-capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non-)relativistic (magneto-)hydrodynamics, and we demonstrate its scalability, including strong scaling on the NCSA Blue Waters supercomputer up to the machine's full capacity of 22,380 nodes using 671,400 threads.
    Comment: 41 pages, 13 figures, and 7 tables. Ancillary data contains simulation input file
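    As a loose, shared-memory illustration of the task-per-element idea (not SpECTRE's actual implementation), the Python sketch below hands each element-local update to a pool as an independent task; local_dg_update and the toy element data are invented placeholders for this example.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def local_dg_update(element):
            # placeholder for element-local work (volume terms, face fluxes); the
            # per-element cost varies, standing in for the heterogeneous workload
            coeffs, n_inner_steps = element
            for _ in range(n_inner_steps):
                # dummy compute-bound update that stays numerically bounded
                coeffs = np.linalg.qr(coeffs @ coeffs.T + np.eye(len(coeffs)))[0]
            return coeffs

        rng = np.random.default_rng(0)
        # toy "elements": small coefficient blocks with varying amounts of work
        elements = [(rng.standard_normal((16, 16)), int(cost))
                    for cost in rng.integers(1, 50, size=256)]

        # one task per element: the pool schedules tasks onto free workers, so cheap
        # and expensive elements mix without a per-step global synchronization point
        # (NumPy releases the GIL inside the linear-algebra calls, so threads overlap)
        with ThreadPoolExecutor(max_workers=8) as pool:
            updated = list(pool.map(local_dg_update, elements))
        print(len(updated), "elements updated")

    The design point being illustrated is the locality the abstract refers to: discontinuous Galerkin elements exchange only face data with their neighbours, so per-element updates decompose naturally into many small tasks that a runtime can balance across cores.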

    Scalable Task-Based Algorithm for Multiplication of Block-Rank-Sparse Matrices

    A task-based formulation of the Scalable Universal Matrix Multiplication Algorithm (SUMMA), a popular algorithm for matrix multiplication (MM), is applied to the multiplication of hierarchy-free, rank-structured matrices that appear in the domain of quantum chemistry (QC). The novel features of our formulation are: (1) concurrent scheduling of multiple SUMMA iterations, and (2) fine-grained task-based composition. These features make it tolerant of the load imbalance due to the irregular matrix structure and eliminate all artifactual sources of global synchronization. Scalability of the iterative computation of the square-root inverse of block-rank-sparse QC matrices is demonstrated; for full-rank (dense) matrices the performance of our SUMMA formulation usually exceeds that of the state-of-the-art dense MM implementations (ScaLAPACK and Cyclops Tensor Framework).
    Comment: 8 pages, 6 figures, accepted to IA3 2015. arXiv admin note: text overlap with arXiv:1504.0504
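    To make the block structure concrete, here is a minimal serial sketch of a SUMMA-style accumulation over the shared block index that simply skips absent (zero) blocks. It shows where block sparsity creates irregular, data-dependent work; it is not the paper's distributed, task-based formulation (which overlaps multiple SUMMA iterations), and the names block_sparse_matmul, A_blocks, and B_blocks are ours.

        import numpy as np

        def block_sparse_matmul(A_blocks, B_blocks, nblk, blk):
            # A_blocks / B_blocks map (block_row, block_col) -> dense blk x blk arrays;
            # absent keys stand for zero blocks (the block sparsity).
            C_blocks = {}
            # SUMMA iterates over the shared block index k; iteration k is an outer
            # product of block column k of A with block row k of B. This serial loop
            # only shows the data flow that a task-based version would parallelize.
            for k in range(nblk):
                for i in range(nblk):
                    a = A_blocks.get((i, k))
                    if a is None:
                        continue  # zero block: no work here, hence irregular load
                    for j in range(nblk):
                        b = B_blocks.get((k, j))
                        if b is None:
                            continue
                        C_blocks[(i, j)] = C_blocks.get((i, j), np.zeros((blk, blk))) + a @ b
            return C_blocks

        # toy usage: random block-sparse operands with roughly 30% of blocks present
        rng = np.random.default_rng(1)
        nblk, blk = 6, 4
        A = {(i, k): rng.standard_normal((blk, blk))
             for i in range(nblk) for k in range(nblk) if rng.random() < 0.3}
        B = {(k, j): rng.standard_normal((blk, blk))
             for k in range(nblk) for j in range(nblk) if rng.random() < 0.3}
        C = block_sparse_matmul(A, B, nblk, blk)
        print(len(C), "nonzero result blocks")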