
    Understanding Novel Superconductors with Ab Initio Calculations

    This chapter gives an overview of the progress in the field of computational superconductivity. Following the MgB2 discovery (2001), there has been an impressive acceleration in the development of methods based on Density Functional Theory to compute the critical temperature and other physical properties of actual superconductors from first principles. State-of-the-art ab initio methods have reached predictive accuracy for conventional (phonon-mediated) superconductors, and substantial progress is also being made for unconventional superconductors. The aim of this chapter is to give an overview of the existing computational methods for superconductivity and to present selected examples of material discoveries that exemplify the main advancements.
    Comment: 38 pages, 10 figures; contribution to the Springer Handbook of Materials Modelling.
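
    As a concrete illustration of how such first-principles ingredients feed into a critical-temperature estimate, the semi-empirical McMillan formula in the Allen-Dynes form is the usual first step for phonon-mediated superconductors. The sketch below states it in LaTeX, assuming the standard notation (electron-phonon coupling lambda, logarithmic average phonon frequency omega_log, Coulomb pseudopotential mu*); it is not claimed to be the specific formulation used in the chapter.

```latex
% McMillan formula, Allen-Dynes form: a first estimate of T_c once
% lambda, omega_log and mu* have been obtained from first principles.
\begin{equation}
  T_c \simeq \frac{\omega_{\log}}{1.2}\,
  \exp\!\left[\frac{-1.04\,(1+\lambda)}{\lambda-\mu^{*}\,(1+0.62\,\lambda)}\right]
\end{equation}
```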

    Phase transitions in random circuit sampling

    Undesired coupling to the surrounding environment destroys long-range correlations in quantum processors and hinders coherent evolution in the nominally available computational space. This noise is an outstanding challenge when leveraging the computational power of near-term quantum processors [1]. It has been shown that benchmarking random circuit sampling with cross-entropy benchmarking can provide an estimate of the effective size of the Hilbert space coherently available [2-8]. Nevertheless, quantum algorithms' outputs can be trivialized by noise, making them susceptible to spoofing by classical computation. Here, by implementing an algorithm for random circuit sampling, we demonstrate experimentally that two phase transitions are observable with cross-entropy benchmarking, which we explain theoretically with a statistical model. The first is a dynamical transition as a function of the number of cycles and is the continuation of the anti-concentration point in the noiseless case. The second is a quantum phase transition controlled by the error per cycle; to identify it analytically and experimentally, we create a weak-link model, which allows us to vary the strength of the noise versus coherent evolution. Furthermore, by presenting a random circuit sampling experiment in the weak-noise phase with 67 qubits at 32 cycles, we demonstrate that the computational cost of our experiment is beyond the capabilities of existing classical supercomputers. Our experimental and theoretical work establishes the existence of transitions to a stable, computationally complex phase that is reachable with current quantum processors.
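
    To make the benchmarking quantity concrete, the sketch below computes the linear cross-entropy benchmarking (XEB) fidelity, 2^n <P(x_i)> - 1, from the ideal output probabilities of a circuit and the bitstrings actually sampled; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmarking (XEB) fidelity estimate.

    ideal_probs : length-2**n_qubits array of noiseless output probabilities
                  of the random circuit (obtained from classical simulation).
    samples     : integer bitstring indices measured on the processor.
    Returns 2**n * <P(x_i)> - 1: roughly 1 for ideal Porter-Thomas sampling
    and roughly 0 for fully depolarized (uniformly random) output.
    """
    d = 2 ** n_qubits
    p_sampled = np.asarray(ideal_probs)[np.asarray(samples)]
    return d * p_sampled.mean() - 1.0

# Toy check with a Porter-Thomas-like distribution on 10 qubits.
rng = np.random.default_rng(0)
d = 2 ** 10
probs = rng.exponential(size=d)
probs /= probs.sum()
ideal_samples = rng.choice(d, size=50_000, p=probs)   # "noiseless" sampler
noisy_samples = rng.integers(0, d, size=50_000)       # fully depolarized
print(linear_xeb_fidelity(probs, ideal_samples, 10))  # close to 1
print(linear_xeb_fidelity(probs, noisy_samples, 10))  # close to 0
```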

    Purification-based quantum error mitigation of pair-correlated electron simulations

    An important measure of the development of quantum computing platforms has been the simulation of increasingly complex physical systems. Until fault-tolerant quantum computing becomes available, robust error-mitigation strategies are necessary to continue this growth. Here, we validate recently introduced error-mitigation strategies that exploit the expectation that the ideal output of a quantum algorithm would be a pure state. We consider the task of simulating electron systems in the seniority-zero subspace, where all electrons are paired with their opposite spin. This affords a computational stepping stone to a fully correlated model. We compare the performance of error mitigations based on doubling quantum resources in time or in space, on up to 20 qubits of a superconducting-qubit quantum processor. We observe a reduction of error by one to two orders of magnitude below less sophisticated techniques such as postselection. We study how the gain from error mitigation scales with the system size and observe a polynomial suppression of error with increased resources. Extrapolation of our results indicates that substantial hardware improvements will be required for classically intractable variational chemistry simulations.
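
    The core idea behind such purification-based strategies can be illustrated with the second-order purified estimator <O> = Tr(rho^2 O) / Tr(rho^2), which the space-doubled (two-copy) variant targets. The sketch below is a minimal density-matrix illustration of that estimator under depolarizing noise; it assumes this generic form and is not the paper's specific circuits or implementation.

```python
import numpy as np

def purified_expectation(rho, obs):
    """Second-order purification estimate Tr(rho^2 O) / Tr(rho^2).

    Squaring rho suppresses small-weight error components relative to the
    dominant (near-ideal) eigenvector, mitigating incoherent errors.
    """
    rho2 = rho @ rho
    return (np.trace(rho2 @ obs) / np.trace(rho2)).real

# Toy example: |0> on one qubit under depolarizing noise of strength p.
Z = np.diag([1.0, -1.0])
ideal = np.diag([1.0, 0.0])                  # |0><0|
p = 0.3
rho = (1 - p) * ideal + p * np.eye(2) / 2    # noisy state

raw = np.trace(rho @ Z).real                 # unmitigated <Z> = 0.70
mitigated = purified_expectation(rho, Z)     # ~0.94, closer to the ideal 1.0
print(raw, mitigated)
```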

    The Space Physics Environment Data Analysis System (SPEDAS)

    With the advent of the Heliophysics/Geospace System Observatory (H/GSO), a complement of multi-spacecraft missions and ground-based observatories to study the space environment, the retrieval, analysis, and visualization of space physics data can be daunting. The Space Physics Environment Data Analysis System (SPEDAS), a grass-roots software development platform (www.spedas.org), is now officially supported by NASA Heliophysics as part of its data environment infrastructure. It serves more than a dozen space missions and ground observatories and can integrate the full complement of past and upcoming space physics missions with minimal resources, following clear, simple, and well-proven guidelines. Free, modular, and configurable to the needs of individual missions, it works in both a command-line mode (ideal for experienced users) and a Graphical User Interface (GUI) mode (reducing the learning curve for first-time users). Both options have “crib-sheets,” user-command sequences in ASCII format that can facilitate record-and-repeat actions, especially for complex operations and plotting. Crib-sheets enhance scientific interactions, as users can move rapidly and accurately from exchanges of technical information on data processing to efficient discussions regarding data interpretation and science. SPEDAS can readily query and ingest all International Solar Terrestrial Physics (ISTP)-compatible products from the Space Physics Data Facility (SPDF), enabling access to a vast collection of historic and current mission data. The planned incorporation of Heliophysics Application Programmer’s Interface (HAPI) standards will facilitate data ingestion from distributed datasets that adhere to these standards. Although SPEDAS is currently Interactive Data Language (IDL)-based (and interfaces to Java-based tools such as Autoplot), efforts are underway to expand it further to work with Python (first as an interface tool and potentially even as an under-the-hood replacement). We review the SPEDAS development history, goals, and current implementation. We explain its “modes of use” with examples geared for users and outline its technical implementation and requirements with software developers in mind. We also describe SPEDAS personnel and software management, interfaces with other organizations, resources and support structure available to the community, and future development plans.
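
    For readers coming from the Python side, a session equivalent to a minimal IDL crib-sheet might look like the sketch below, using the pyspedas/pytplot packages that provide the Python interface mentioned above; the specific load routine, keywords, and tplot variable name are assumptions drawn from the public pyspedas documentation rather than from this article.

```python
# Minimal "crib sheet"-style session (assumed pyspedas/pytplot API):
# load one day of THEMIS-A fluxgate magnetometer data and plot it.
import pyspedas
from pytplot import tplot

# Download (if needed) and load the spin-fit FGM product into tplot variables.
pyspedas.themis.fgm(trange=['2007-03-23', '2007-03-24'], probe='a')

# Plot the magnetic field in GSE coordinates (variable name is an assumption).
tplot('tha_fgs_gse')
```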