
    Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures

    Quantum computers have recently made great strides and are on a long-term path towards useful fault-tolerant computation. A dominant overhead in fault-tolerant quantum computation is the production of high-fidelity encoded qubits, called magic states, which enable reliable error-corrected computation. We present the first detailed designs of hardware functional units that implement space-time optimized magic-state factories for surface code error-corrected machines. Interactions among distant qubits require surface code braids (physical pathways on chip), which must be routed. Magic-state factories are circuits comprising a complex set of braids that is more difficult to route than the quantum circuits considered in previous work [1]. This paper explores the impact of scheduling techniques such as gate reordering and qubit renaming, and we propose two novel mapping techniques: braid repulsion and dipole moment braid rotation. We combine these techniques with graph partitioning and community detection algorithms, and further introduce a stitching algorithm for mapping subgraphs onto a physical machine. Our results show a factor of 5.64 reduction in space-time volume compared to the best-known previous designs for magic-state factories.
    Comment: 13 pages, 10 figures
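The scheduling side of the abstract (gate reordering so that non-conflicting operations share a timestep) can be illustrated with a toy ASAP (as-soon-as-possible) scheduler. Gates touching disjoint qubit sets are allowed to run in the same layer, which is only a crude stand-in for the braid-conflict and routing constraints the paper actually handles; the circuit fragment and gate labels below are hypothetical.

```python
# Toy ASAP scheduler: a crude stand-in for braid-conflict-aware
# scheduling. Gates touching disjoint qubit sets may share a timestep;
# real surface-code mapping must also route the braids themselves.

def asap_schedule(gates):
    """gates: list of (label, set_of_qubits), in program order."""
    last_step = {}   # qubit -> last timestep that used it
    layers = []      # layers[t] = labels executed at timestep t
    for label, qubits in gates:
        t = max((last_step.get(q, -1) for q in qubits), default=-1) + 1
        while len(layers) <= t:
            layers.append([])
        layers[t].append(label)
        for q in qubits:
            last_step[q] = t
    return layers

# Hypothetical 4-qubit circuit fragment.
circuit = [
    ("g1", {0, 1}),  # two-qubit gate on 0,1
    ("g2", {2, 3}),  # two-qubit gate on 2,3
    ("g3", {1, 2}),  # two-qubit gate on 1,2 (depends on g1 and g2)
    ("g4", {0}),     # single-qubit gate on 0
    ("g5", {3}),     # single-qubit gate on 3
]

print(asap_schedule(circuit))  # [['g1', 'g2'], ['g3', 'g4', 'g5']]
```

Reordering or renaming qubits changes which gate sets are disjoint, which is why the scheduling techniques in the paper affect the final space-time volume.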

    Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist

    Apache Spark is a popular system aimed at the analysis of large data sets, but recent studies have shown that certain computations, in particular many linear algebra computations that are the basis for solving common machine learning problems, are significantly slower in Spark than when done using libraries written in a high-performance computing framework such as the Message Passing Interface (MPI). To remedy this, we introduce Alchemist, a system designed to call MPI-based libraries from Apache Spark. Using Alchemist with Spark helps accelerate linear algebra, machine learning, and related computations, while still retaining the benefits of working within the Spark environment. We discuss the motivation behind the development of Alchemist, and we provide a brief overview of its design and implementation. We also compare the performance of pure Spark implementations with that of Spark implementations that leverage MPI-based codes via Alchemist. To do so, we use two data science case studies: a large-scale application of the conjugate gradient method to solve very large linear systems arising in a speech classification problem, where we see an improvement of an order of magnitude; and the truncated singular value decomposition (SVD) of a 400 GB three-dimensional ocean temperature data set, where we see a speedup of up to 7.9x. We also illustrate that the truncated SVD computation is easily scalable to terabyte-sized data by applying it to data sets of sizes up to 17.6 TB.
    Comment: Accepted for publication in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 2018
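The conjugate gradient (CG) iteration behind the speech-classification case study is itself short. Below is a minimal pure-Python sketch for a small symmetric positive-definite system; the 2x2 matrix is a made-up example, and the distributed MPI execution that Alchemist delegates to is not modelled here.

```python
# Minimal conjugate gradient for A x = b, A symmetric positive
# definite. Pure Python on tiny dense lists; the paper runs this same
# iteration at scale through MPI-based libraries called via Alchemist.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    x = [0.0] * len(b)
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # residual b - A x
    p = list(r)
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # made-up SPD example
b = [1.0, 2.0]
x = conjugate_gradient(A, b)    # exact solution: [1/11, 7/11]
```

Each iteration is dominated by one matrix-vector product plus a few dot products, which is exactly the kind of dense linear algebra kernel that is much faster in an MPI library than in pure Spark.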

    Design issues for the Generic Stream Encapsulation (GSE) of IP datagrams over DVB-S2

    The DVB-S2 standard has brought an unprecedented degree of novelty and flexibility to the way IP datagrams and other network-level packets can be transmitted over DVB satellite links, with the introduction of an IP-friendly link layer (the continuous Generic Streams) and the adaptive combination of advanced error coding, modulation, and spectrum management techniques. Recently approved by the DVB, the Generic Stream Encapsulation (GSE) used for carrying IP datagrams over DVB-S2 implements solutions stemming from a design rationale quite different from the one behind IP encapsulation schemes over its predecessor, DVB-S. This paper highlights GSE's original design choices from the perspective of DVB-S2's innovative features and possibilities.
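For a concrete flavour of the encapsulation, the sketch below packs a 2-byte GSE base header: Start and End indicators, a 2-bit Label Type, and a 12-bit GSE Length. This field layout is my recollection of the ETSI GSE specification (TS 102 606) and should be treated as an assumption, not as the paper's or the standard's definitive layout.

```python
# Simplified GSE base header packing: S (1 bit) | E (1 bit) |
# LT (2 bits) | GSE Length (12 bits), big-endian, 2 bytes total.
# Field layout assumed from ETSI TS 102 606; verify before relying on it.

def pack_gse_base_header(start: int, end: int, label_type: int,
                         gse_length: int) -> bytes:
    if not (0 <= gse_length < 4096 and 0 <= label_type < 4):
        raise ValueError("field out of range")
    word = (start << 15) | (end << 14) | (label_type << 12) | gse_length
    return word.to_bytes(2, "big")

# Unfragmented PDU (S=1, E=1), label type 0, GSE length 100 bytes:
header = pack_gse_base_header(1, 1, 0, 100)
print(header.hex())  # c064
```

Variable-length headers like this (fields present or absent depending on S/E and LT) are part of what distinguishes GSE's design from the fixed-size MPEG-2 TS cells used for IP over DVB-S.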

    Accurate ionic forces and geometry optimization in linear-scaling density-functional theory with local orbitals

    Linear scaling methods for density-functional theory (DFT) simulations are formulated in terms of localized orbitals in real space, rather than the delocalized eigenstates of conventional approaches. In local-orbital methods, relative to conventional DFT, desirable properties can be lost to some extent, such as the translational invariance of the total energy of a system with respect to small displacements and the smoothness of the potential-energy surface. This has repercussions for calculating accurate ionic forces and geometries. In this work we present results from ONETEP, our linear scaling method based on localized orbitals in real space. The use of psinc functions for the underlying basis set and on-the-fly optimization of the localized orbitals results in smooth potential-energy surfaces that are consistent with ionic forces calculated using the Hellmann-Feynman theorem. This enables accurate geometry optimization to be performed. Results for surface reconstructions in silicon are presented, along with three example systems demonstrating the performance of a quasi-Newton geometry optimization algorithm: an organic zwitterion, a point defect in an ionic crystal, and a semiconductor nanostructure.
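The quasi-Newton idea, building up curvature information from successive analytic gradients (the Hellmann-Feynman forces), can be sketched in one dimension, where the update reduces to a secant estimate of the second derivative. The Lennard-Jones pair potential below is a toy stand-in for a DFT potential-energy surface, not anything from the paper.

```python
# 1-D quasi-Newton (secant) geometry optimization of a toy
# Lennard-Jones bond, standing in for a DFT potential-energy surface.
# The analytic gradient plays the role of the Hellmann-Feynman force.

def lj_energy(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def lj_gradient(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (-12.0 * sr6 ** 2 + 6.0 * sr6) / r

def optimize_bond(r0, r1, tol=1e-10, max_iter=50):
    """Drive the gradient to zero from two starting bond lengths."""
    g0, g1 = lj_gradient(r0), lj_gradient(r1)
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        curvature = (g1 - g0) / (r1 - r0)  # secant Hessian estimate
        r0, g0 = r1, g1
        r1 = r1 - g1 / curvature           # quasi-Newton step
        g1 = lj_gradient(r1)
    return r1

r_min = optimize_bond(1.10, 1.15)
# Analytic minimum of the LJ potential is at r = 2**(1/6) * sigma.
```

The abstract's point is that this only works well when the potential-energy surface is smooth and consistent with the computed forces; a noisy surface would make the secant curvature estimate, and hence the steps, unreliable.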

    Distributed Space Time Coding for Wireless Two-way Relaying

    We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple-access interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by a proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes themselves, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that the deep fades occur when the channel fade coefficient vector falls in one of a finite number of vector subspaces of C^2, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained, along with a criterion to maximize the coding gain of the DSTC. Explicit low-decoding-complexity DSTC designs that satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high signal-to-noise ratio, the DSTC scheme provides large gains when compared to the conventional exclusive-OR network code and performs slightly better than the adaptive network coding scheme proposed by Koike-Akino et al.
    Comment: 27 pages, 4 figures. A mistake in the proof of Proposition 3 given in Appendix B corrected
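The singular fade subspaces have a simple computational description: with the relay receiving y = h_A x_A + h_B x_B, a deep fade occurs whenever two distinct symbol pairs collide, i.e. when the fade ratio satisfies h_B / h_A = -(x_A - x_A') / (x_B - x_B'). Enumerating these ratios for a given constellation takes a few lines; the sketch below is my own illustration (it omits the degenerate ratios 0 and infinity), not the paper's code.

```python
# Enumerate fade ratios z = h_B / h_A at which two distinct symbol
# pairs (x_A, x_B) != (x_A', x_B') become indistinguishable at the
# relay: h_A * dA + h_B * dB = 0  =>  z = -dA / dB.
# Illustration only; the degenerate ratios 0 and infinity are omitted.

def singular_fade_ratios(constellation):
    diffs = {a - b for a in constellation for b in constellation if a != b}
    ratios = set()
    for dA in diffs:
        for dB in diffs:
            z = -dA / dB
            # Round to merge floating-point duplicates of the same ratio.
            ratios.add(complex(round(z.real, 9), round(z.imag, 9)))
    return ratios

bpsk = [1, -1]
qpsk = [1, 1j, -1, -1j]
print(singular_fade_ratios(bpsk) == {1, -1})  # True
print(len(singular_fade_ratios(qpsk)))        # 12
```

Because this set is finite, a DSTC can be designed offline to avoid as many of these fade ratios as possible, which is what the singularity minimization criterion formalizes.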