21 research outputs found

    Generalized Maximum Entropy Methods as Limits of the Average Spectrum Method

    We show that in the continuum limit, the average spectrum method (ASM) is equivalent to maximizing Rényi entropies of order η, of which Shannon entropy is the special case η = 1. The order of the Rényi entropy is determined by the way the spectra are sampled. Our derivation also suggests a modification of the Rényi entropy, giving it a non-trivial η → 0 limit. We show that the sharper peaks generally obtained in ASM are associated with entropies of order η < 1. Our work provides a generalization of the maximum entropy method that enables extracting more structure than the traditional method. (Comment: 5 pages, 1 figure)
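
    For reference, the standard (differential) Rényi entropy of a normalized spectrum A(ω) takes the form below, with the Shannon entropy recovered in the η → 1 limit; the paper's modified variant with a non-trivial η → 0 limit is not reproduced here.

```latex
S_\eta[A] \;=\; \frac{1}{1-\eta}\,\ln\!\int d\omega\, A(\omega)^{\eta},
\qquad
\lim_{\eta\to 1} S_\eta[A] \;=\; -\int d\omega\, A(\omega)\,\ln A(\omega).
```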

    The Average Spectrum Method for Analytic Continuation: Efficient Blocked Modes Sampling and Dependence on Discretization Grid

    The average spectrum method is a promising approach for the analytic continuation of imaginary time or frequency data to the real axis. It determines the analytic continuation of noisy data from a functional average over all admissible spectral functions, weighted by how well they fit the data. Its main advantage is the apparent lack of adjustable parameters and smoothness constraints, using instead the information on the statistical noise in the data. Its main disadvantage is the enormous computational cost of performing the functional integral. Here we introduce an efficient implementation, based on the singular value decomposition of the integral kernel, eliminating this problem. It allows us to analyze the behavior of the average spectrum method in detail. We find that the discretization of the real-frequency grid, on which the spectral function is represented, biases the results. The distribution of the grid points plays the role of a default model while the number of grid points acts as a regularization parameter. We give a quantitative explanation for this behavior, point out the crucial role of the default model and provide a practical method for choosing it, making the average spectrum method a reliable and efficient technique for analytic continuation. (Comment: 12 pages, 10 figures)
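
    As a rough illustration of why the singular value decomposition of the integral kernel makes such an implementation cheap, the sketch below builds a fermionic imaginary-time kernel on a discrete grid and counts the singular modes lying above an assumed noise level; the grid sizes, inverse temperature, and threshold are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Illustrative discretization (not the paper's actual grids).
beta = 10.0                        # inverse temperature
tau = np.linspace(0.0, beta, 64)   # imaginary-time points
omega = np.linspace(-8.0, 8.0, 400)  # real-frequency grid

# Fermionic kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)).
K = np.exp(-tau[:, None] * omega[None, :]) / (1.0 + np.exp(-beta * omega[None, :]))

# Singular value decomposition of the kernel.
U, s, Vt = np.linalg.svd(K, full_matrices=False)

# The singular values decay very fast; only modes above the noise level of
# the data carry information, so sampling can be restricted to a small block.
noise_level = 1e-6                 # assumed relative noise scale of the data
n_modes = int(np.sum(s / s[0] > noise_level))
print(f"informative singular modes: {n_modes} of {len(s)}")
```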

    Extending the average spectrum method: Grid points sampling and density averaging

    Analytic continuation of imaginary time or frequency data to the real axis is a crucial step in extracting dynamical properties from quantum Monte Carlo simulations. The average spectrum method provides an elegant solution by integrating over all non-negative spectra weighted by how well they fit the data. In a recent paper, we found that the functional integral, discretized as in Feynman's path integrals, does not have a well-defined continuum limit. Instead, the limit depends on the discretization grid, whose choice may strongly bias the results. In this paper, we demonstrate that sampling the grid points, instead of keeping them fixed, also changes the limit of the functional integral and helps to overcome the bias considerably. We provide an efficient algorithm for doing the sampling and show how the density of the grid points now acts as a default model with a significantly reduced biasing effect. The remaining bias depends mainly on the width of the grid density, so we go one step further and average also over densities of different widths. For a certain class of densities, including Gaussian and exponential ones, this width averaging can be done analytically, eliminating the need to specify this parameter without introducing any computational overhead. (Comment: 10 pages, 10 figures)
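
    A minimal sketch of the grid-sampling idea, assuming a Gaussian default density for the grid points; the width, point count, and averaging below are illustrative stand-ins, and the paper's analytic width averaging is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_grid(width, n_points):
    """Draw a real-frequency grid from a Gaussian density of the given width.

    The density of the drawn points plays the role of a default model;
    per the paper, mainly its width (not its shape) biases the result.
    """
    return np.sort(rng.normal(loc=0.0, scale=width, size=n_points))

# Instead of one fixed grid, each Monte Carlo configuration carries its own
# sampled grid; averaging spectra over many such grids reduces the grid bias.
width = 4.0        # illustrative width of the default density
n_points = 200     # illustrative number of grid points
grids = [draw_grid(width, n_points) for _ in range(10)]
```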

    Connecting Tikhonov regularization to the maximum entropy method for the analytic continuation of quantum Monte Carlo data

    Analytic continuation is an essential step in extracting information about the dynamical properties of physical systems from quantum Monte Carlo (QMC) simulations. Different methods for analytic continuation have been proposed and are still being developed. This paper explores a regularization method based on the repeated application of Tikhonov regularization under the discrepancy principle. The method can be readily implemented in any linear algebra package and gives results surprisingly close to the maximum entropy method (MaxEnt). We analyze the method in detail and demonstrate its connection to MaxEnt. In addition, we provide a straightforward method for estimating the noise level of QMC data, which is helpful for practical applications of the discrepancy principle when the noise level is not known reliably. (Comment: 12 pages, 10 figures)
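
    For concreteness, here is a minimal sketch of iterated Tikhonov regularization stopped by Morozov's discrepancy principle, the textbook building blocks the abstract names; the kernel K, data g, noise level sigma, and the fixed regularization strength alpha are assumed inputs, and the paper's specific scheme may differ in detail.

```python
import numpy as np

def iterated_tikhonov(K, g, sigma, alpha=1.0, max_iter=1000):
    """Repeatedly apply Tikhonov regularization, stopping once the residual
    has dropped to the noise level (Morozov's discrepancy principle).

    K     : (m, n) kernel matrix
    g     : (m,) data vector
    sigma : estimated noise level per data point
    alpha : fixed regularization strength (illustrative choice)
    """
    m, n = K.shape
    x = np.zeros(n)
    # Precompute the regularized normal-equations matrix once.
    A = K.T @ K + alpha * np.eye(n)
    target = sigma * np.sqrt(m)          # discrepancy-principle threshold
    for _ in range(max_iter):
        r = g - K @ x                    # current residual
        if np.linalg.norm(r) <= target:  # stop at the noise level
            break
        x = x + np.linalg.solve(A, K.T @ r)
    return x
```

    Each pass corrects the current solution by a Tikhonov-regularized step on the residual; stopping at the noise level, rather than iterating to convergence, is what provides the regularization.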

    Robust Extraction of Thermal Observables from State Sampling and Real-Time Dynamics on Quantum Computers

    Simulating properties of quantum materials is one of the most promising applications of quantum computation, both near- and long-term. While real-time dynamics can be straightforwardly implemented, the finite temperature ensemble involves non-unitary operators that render an implementation on a near-term quantum computer extremely challenging. Recently, Lu, Bañuls and Cirac [Lu2021] suggested a "time-series quantum Monte Carlo method" which circumvents this problem by extracting finite temperature properties from real-time simulations via Wick's rotation and Monte Carlo sampling of easily preparable states. In this paper, we address the challenges associated with the practical applications of this method, using the two-dimensional transverse field Ising model as a testbed. We demonstrate that estimating Boltzmann weights via Wick's rotation is very sensitive to time-domain truncation and statistical shot noise. To alleviate this problem, we introduce a technique that imposes constraints on the density of states, most notably its non-negativity, and show that this way, we can reliably extract Boltzmann weights from noisy time series. In addition, we show how to reduce the statistical errors of Monte Carlo sampling via a reweighted version of the Wolff cluster algorithm. Our work enables the implementation of the time-series algorithm on present-day quantum computers to study finite temperature properties of many-body quantum systems.
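
    A minimal sketch of how a non-negativity constraint can stabilize such a fit: model the time series as g(t) ≈ Σ_j D(E_j) e^{-iE_j t} and fit D ≥ 0 by non-negative least squares, then form Boltzmann-weighted sums. scipy's NNLS stands in here for the paper's constrained reconstruction; the time window and energy grid are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative setup: times t_k of the (noisy, truncated) measured signal
# g(t_k), and a trial energy grid E_j for the density of states D(E).
times = np.linspace(0.0, 5.0, 60)        # truncated time window
energies = np.linspace(-10.0, 10.0, 80)  # assumed energy grid

def fit_dos(g, times, energies):
    """Fit a non-negative density of states to g(t) ~ sum_j D_j exp(-i E_j t).

    Real and imaginary parts are stacked so that scipy's real-valued NNLS
    applies; non-negativity of D is the constraint that tames the noisy
    Wick rotation.
    """
    F = np.exp(-1j * np.outer(times, energies))  # Fourier design matrix
    A = np.vstack([F.real, F.imag])
    b = np.concatenate([g.real, g.imag])
    D, _ = nnls(A, b)
    return D

def partition_sum(D, energies, beta):
    """Boltzmann-weighted sum Z(beta) over the fitted density of states."""
    return np.sum(D * np.exp(-beta * energies))
```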

    Measuring the Loschmidt amplitude for finite-energy properties of the Fermi-Hubbard model on an ion-trap quantum computer

    Calculating the equilibrium properties of condensed matter systems is one of the promising applications of near-term quantum computing. Recently, hybrid quantum-classical time-series algorithms have been proposed to efficiently extract these properties from a measurement of the Loschmidt amplitude ⟨ψ|e^{-iĤt}|ψ⟩ from initial states |ψ⟩ and a time evolution under the Hamiltonian Ĥ up to short times t. In this work, we study the operation of this algorithm on a present-day quantum computer. Specifically, we measure the Loschmidt amplitude for the Fermi-Hubbard model on a 16-site ladder geometry (32 orbitals) on the Quantinuum H2-1 trapped-ion device. We assess the effect of noise on the Loschmidt amplitude and implement algorithm-specific error mitigation techniques. By using a thus-motivated error model, we numerically analyze the influence of noise on the full operation of the quantum-classical algorithm by measuring expectation values of local observables at finite energies. Finally, we estimate the resources needed for scaling up the algorithm. (Comment: 18 pages, 12 figures)
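
    As a classical cross-check of the quantity being measured, the Loschmidt amplitude of a small dense Hamiltonian can be computed exactly via the matrix exponential; the 2x2 Hamiltonian below is purely illustrative, and on hardware the amplitude is instead estimated from interferometric measurements.

```python
import numpy as np
from scipy.linalg import expm

def loschmidt_amplitude(H, psi, t):
    """Exact <psi| exp(-i H t) |psi> for a small dense Hamiltonian H."""
    return np.vdot(psi, expm(-1j * H * t) @ psi)  # vdot conjugates psi

# Illustrative example: a single-qubit Hamiltonian and a basis state.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi = np.array([1.0, 0.0])
for t in (0.1, 0.5, 1.0):
    print(t, loschmidt_amplitude(H, psi, t))
```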

    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures.
    Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain, measured on a numerical analogue scale from 0 to 100%, and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge.
    Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468), compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not.
    Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and with increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely.

    Stochastic Analytic Continuation: A Bayesian Approach

    The stochastic sampling method (StochS) is used for the analytic continuation of quantum Monte Carlo data from the imaginary axis to the real axis. Compared to the maximum entropy method, StochS has no explicit parameters, and one would expect its results to be unbiased. We present a very efficient algorithm for performing StochS and use it to study the effect of the discretization grid. Surprisingly, we find that the grid affects the results of StochS, acting as an implicit default model, and we provide a recipe for choosing a reliable StochS grid. To reduce the effect of the grid, we extend StochS into a gridless method (gStochS) by sampling the grid points from a default model instead of keeping them fixed. The effect of the default model is much reduced in gStochS compared to StochS and depends mainly on its width rather than its shape. The proper width can then be chosen using a simple recipe, as we did for StochS. Finally, to avoid fixing the width, we go one step further and extend gStochS to sample over a whole class of default models with different widths. The extended method (eStochS) is then able to automatically relocate the grid points and concentrate them in the important region. Test cases show that eStochS gives good results, resolving sharp features in the spectrum without the need for fine-tuning a default model.
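
    A bare-bones sketch of the Metropolis step at the heart of such stochastic sampling, assuming a kernel K, data g, and noise sigma are given: spectral weight is moved between two random grid points and the move is accepted with probability min(1, exp(-Δχ²/2)). The efficient blocked and gridless algorithms of the papers above are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def chi2(A, K, g, sigma):
    """Goodness of fit of spectrum A to data g through kernel K."""
    r = (K @ A - g) / sigma
    return float(r @ r)

def stochs_sweep(A, K, g, sigma, step=0.01):
    """One Metropolis sweep of a minimal StochS-like sampler: shift a small
    amount of spectral weight between two random grid points, keeping the
    spectrum non-negative."""
    c = chi2(A, K, g, sigma)
    for _ in range(A.size):
        i, j = rng.integers(A.size, size=2)
        dw = rng.uniform(0.0, step * A[i])   # weight moved from i to j
        trial = A.copy()
        trial[i] -= dw
        trial[j] += dw
        c_new = chi2(trial, K, g, sigma)
        # Accept with probability min(1, exp(-(c_new - c) / 2)).
        if c_new <= c or rng.random() < np.exp(-(c_new - c) / 2.0):
            A, c = trial, c_new
    return A
```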