
    Errors in Scalable Quantum Computers

    A functional quantum computer would outperform any classical machine exponentially in a number of important computational tasks. Its physical implementation therefore has to scale efficiently in the number of qubits, in particular in the treatment of external error sources. Due to the intrinsic complexity and limited accessibility of quantum systems, the validation of quantum gates is fundamentally difficult. Randomized Benchmarking is a protocol that efficiently assesses the average fidelity of Clifford group gates only. In this thesis we present a hybrid of Interleaved Randomized Benchmarking and Monte Carlo sampling for the validation of arbitrary gates. It improves upon the efficiency of current methods while preserving error amplification and robustness against imperfect measurement, but still requires exponential resources. To achieve polynomial scaling, we introduce a symmetry benchmarking protocol that validates the conservation of inherent symmetries of quantum algorithms instead of gate fidelities, and we demonstrate its efficiency on relevant examples. Adiabatic quantum computing is believed to be more robust against environmental effects, which we investigate in the typical regime of a scalable quantum computer using renormalization group theory. We show that a k-local Hamiltonian is indeed robust against environmental influence, but that multipartite entanglement exists only in the combined system-bath state; we conclude that this results in more classical behavior that is more susceptible to thermal noise.
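The decay-and-fit structure underlying randomized benchmarking can be sketched numerically. A minimal illustration, assuming the standard zeroth-order depolarizing model F(m) = A·p^m + B with perfect state preparation and measurement (A = B = 1/2); all parameter values are illustrative, not taken from the thesis:

```python
import numpy as np

def survival_probability(m, p, A=0.5, B=0.5):
    """Zeroth-order RB model F(m) = A * p**m + B for sequence length m."""
    return A * p**m + B

def fit_decay(ms, fs, B=0.5):
    """Recover p from a linear fit of log(F(m) - B); assumes the offset B
    is known, i.e. perfect SPAM -- a toy-model simplification."""
    slope, _ = np.polyfit(ms, np.log(np.asarray(fs) - B), 1)
    return np.exp(slope)

ms = np.arange(1, 50)
fs = survival_probability(ms, p=0.98)  # noiseless synthetic decay data
p_est = fit_decay(ms, fs)
r = (1 - p_est) * (2 - 1) / 2          # average error per gate, one qubit (d=2)
```

In interleaved RB the same fit is performed twice, with and without the gate of interest interleaved between random Cliffords, and the ratio of the two decay parameters bounds that gate's fidelity; in practice A and B absorb SPAM errors and are fitted alongside p, which is what makes the protocol robust against imperfect measurement.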

    Numerical Methods for Electronic Structure Calculations of Materials

    This is the published version. Copyright 2010 Society for Industrial and Applied Mathematics.
    The goal of this article is to give an overview of numerical problems encountered when determining the electronic structure of materials and the rich variety of techniques used to solve these problems. The paper is intended for a diverse scientific computing audience. For this reason, we assume the reader does not have an extensive background in the related physics. Our overview focuses on the nature of the numerical problems to be solved, their origin, and the methods used to solve the resulting linear algebra or nonlinear optimization problems. It is common knowledge that the behavior of matter at the nanoscale is, in principle, entirely determined by the Schrödinger equation. In practice, this equation in its original form is not tractable. Successful but approximate versions of this equation, which allow one to study nontrivial systems, took about five or six decades to develop. In particular, the last two decades saw a flurry of activity in developing effective software. One of the main practical variants of the Schrödinger equation is based on what is referred to as density functional theory (DFT). The combination of DFT with pseudopotentials allows one to obtain in an efficient way the ground state configuration for many materials. This article will emphasize pseudopotential-density functional theory, but other techniques will be discussed as well.
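The central computational kernel surveyed here, a large symmetric eigenvalue problem for a discretized Hamiltonian, can be sketched in one dimension. The grid size, the Gaussian model potential, and the dense solver below are illustrative choices only; production codes use iterative eigensolvers inside a self-consistent loop, since the potential depends on the computed density:

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on a uniform 1D grid (atomic units).
n, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# Smooth attractive Gaussian well, standing in for a pseudopotential.
V = -2.0 * np.exp(-x**2)

# Second-order finite-difference Laplacian with Dirichlet boundaries.
lap = (np.diag(np.full(n, -2.0))
       + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2

H = -0.5 * lap + np.diag(V)

# Dense symmetric solve; real DFT codes use iterative methods (Lanczos,
# Davidson, ...) and repeat this solve inside a self-consistent-field loop,
# because V depends on the electron density built from the eigenvectors.
eigvals, eigvecs = np.linalg.eigh(H)
ground_energy = eigvals[0]
```

The attractive well supports at least one bound state, so the lowest eigenvalue comes out negative; in a real calculation only the lowest few eigenpairs are needed, which is precisely why iterative methods dominate.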

    Spontaneous and induced dynamic correlations in glass-formers II: Model calculations and comparison to numerical simulations

    We study in detail the predictions of various theoretical approaches, in particular mode-coupling theory (MCT) and kinetically constrained models (KCMs), concerning the time, temperature, and wavevector dependence of multi-point correlation functions that quantify the strength of both induced and spontaneous dynamical fluctuations. We also discuss the precise predictions of MCT concerning the statistical ensemble and microscopic dynamics dependence of these multi-point correlation functions. These predictions are compared to simulations of model fragile and strong glass-forming liquids. Overall, MCT fares quite well in the fragile case, in particular explaining the observed crucial role of the statistical ensemble and microscopic dynamics, while MCT predictions do not seem to hold in the strong case. KCMs provide a simplified framework for understanding how these multi-point correlation functions may encode dynamic correlations in glassy materials. However, our analysis highlights important unresolved questions concerning the application of KCMs to supercooled liquids.
    Comment: 23 pages, 12 figures
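As a toy illustration of the kind of multi-point observable discussed here, a four-point susceptibility χ4(t) can be measured in a kinetically constrained model as N times the variance, over independent runs, of the global persistence overlap. The sketch below uses a small periodic East model with heat-bath dynamics; system size, temperature, and run counts are arbitrary illustrative choices:

```python
import numpy as np

def east_model_chi4(N=50, T=1.0, steps=80, runs=80, seed=0):
    """Toy periodic East model: site i may change state only if its left
    neighbour is excited (n[i-1] == 1).  chi4(t) is N times the variance
    of the global persistence overlap q(t) over independent runs."""
    rng = np.random.default_rng(seed)
    c = 1.0 / (1.0 + np.exp(1.0 / T))        # equilibrium excitation density
    overlaps = np.zeros((runs, steps))
    for r in range(runs):
        n = (rng.random(N) < c).astype(int)  # equilibrated initial state
        persist = np.ones(N, dtype=bool)     # True while site i has never flipped
        for t in range(steps):
            for _ in range(N):               # one Monte Carlo sweep
                i = int(rng.integers(N))
                if n[i - 1] == 1:            # kinetic constraint (periodic boundary)
                    new = 1 if rng.random() < c else 0   # heat-bath update
                    if new != n[i]:
                        n[i] = new
                        persist[i] = False
            overlaps[r, t] = persist.mean()  # persistence overlap q(t)
    q_mean = overlaps.mean(axis=0)
    chi4 = N * overlaps.var(axis=0)          # spontaneous dynamic fluctuations
    return q_mean, chi4

q, chi4 = east_model_chi4()
```

The variance-of-overlap estimator corresponds to the spontaneous fluctuations discussed in the abstract; the induced counterpart would instead measure the response of q(t) to a temperature or density perturbation.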

    From Dark Matter to the Earth's Deep Interior: There and Back Again

    This thesis is a two-way transfer of knowledge between cosmology and seismology, aiming to substantially advance imaging methods and uncertainty quantification in both fields. I develop a method using wavelets to simulate the uncertainty in a set of existing global seismic tomography images to assess the robustness of mantle plume-like structures. Several plumes are identified, including one that is rarely discussed in the seismological literature. I present a new classification of the most likely deep mantle plumes from my automated method, potentially resolving past discrepancies between deep mantle plumes inferred by visual analysis of tomography models and other geophysical data. Following on from this, I create new images of the uppermost mantle and their associated uncertainties using a sparsity-promoting wavelet prior and an advanced probabilistic inversion scheme. These new images exhibit the expected tectonic features, such as plate boundaries and continental cratons. Importantly, the uncertainties obtained are physically reasonable and informative, in that they reflect the heterogeneous data distribution and also highlight artefacts due to an incomplete forward model. These inversions are a first step towards building a fully probabilistic upper-mantle model in a sparse wavelet basis. I then apply the same advanced probabilistic method to the problem of full-sky cosmological mass-mapping. However, this is severely limited by the computational complexity of high-resolution spherical harmonic transforms. In response, I use, for the first time in cosmology, a trans-dimensional algorithm to build galaxy cluster-scale mass-maps. This new approach performs better than the standard mass-mapping method, with the added benefit that uncertainties are naturally recovered.
With more accurate mass-maps and uncertainties, this method will be a valuable tool for cosmological inference with the new high-resolution data expected from upcoming galaxy surveys, potentially providing new insights into the interactions of dark matter particles in colliding galaxy cluster systems.
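The mechanism by which a sparsity-promoting wavelet prior acts can be sketched in its simplest form: transform to an orthonormal wavelet basis, soft-threshold the coefficients (the proximal operator of an l1/Laplace prior), and transform back. The single-level Haar transform, the piecewise-constant test signal, and the threshold value below are illustrative choices, not the basis or inversion scheme used in the thesis:

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, lam):
    """Proximal operator of the l1 norm -- the workhorse of
    sparsity-promoting (Laplace) priors."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
signal = np.where(t < 0.5, 1.0, -1.0)            # piecewise-constant "model"
noisy = signal + 0.2 * rng.standard_normal(t.size)

a, d = haar_forward(noisy)
denoised = haar_inverse(a, soft_threshold(d, 0.3))

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

Because the test signal is sparse in the Haar basis (its detail coefficients are essentially zero away from the jump), thresholding suppresses noise while leaving the structure intact, which is the same intuition behind using a sparse wavelet prior for tomographic images.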

    Physics of Dense Emulsions via High-Performance Fully Resolved Simulations


    Basic Understanding of Condensed Phases of Matter via Packing Models

    Packing problems have been a source of fascination for millennia, and their study has produced a rich literature that spans numerous disciplines. Investigations of hard-particle packing models have provided basic insights into the structure and bulk properties of condensed phases of matter, including low-temperature states (e.g., molecular and colloidal liquids, crystals and glasses), multiphase heterogeneous media, granular media, and biological systems. The densest packings are of great interest in pure mathematics, including discrete geometry and number theory. This perspective reviews pertinent theoretical and computational literature concerning equilibrium, metastable, and nonequilibrium hard-particle packings in various Euclidean space dimensions. In the case of jammed packings, emphasis will be placed on the "geometric-structure" approach, which provides a powerful and unified means to quantitatively characterize individual packings via jamming categories and "order" maps. It incorporates extremal jammed states, including the densest packings, maximally random jammed states, and lowest-density jammed structures. Packings of identical spheres, spheres with a size distribution, and nonspherical particles are also surveyed. We close this review by identifying challenges and open questions for future research.
    Comment: 33 pages, 20 figures, Invited "Perspective" submitted to the Journal of Chemical Physics. arXiv admin note: text overlap with arXiv:1008.298
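One of the simplest nonequilibrium packing protocols in this literature, random sequential addition (RSA), can be sketched in a few lines: trial centres are drawn uniformly and a new hard disk is kept only if it overlaps no previously accepted disk. The radius and number of attempts below are illustrative choices:

```python
import numpy as np

def rsa_disks(radius=0.05, attempts=4000, seed=0):
    """Random sequential addition of equal hard disks in the unit square.
    RSA is irreversible and saturates near a coverage of about 0.547 for
    disks, well below the densest (triangular-lattice) fraction ~0.9069."""
    rng = np.random.default_rng(seed)
    centres = []
    for _ in range(attempts):
        p = radius + rng.random(2) * (1 - 2 * radius)  # keep disk inside the box
        if all(np.hypot(*(p - q)) >= 2 * radius for q in centres):
            centres.append(p)
    phi = len(centres) * np.pi * radius**2             # packing fraction
    return np.array(centres), phi

centres, phi = rsa_disks()
```

The gap between the RSA saturation density and the densest or maximally random jammed states is exactly the kind of protocol dependence that the geometric-structure approach discussed above is designed to organize.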