
    Alternative sampling for variational quantum Monte Carlo

    Expectation values of physical quantities may be obtained accurately by evaluating integrals within many-body quantum mechanics, and these multi-dimensional integrals may be estimated using Monte Carlo methods. In a previous publication it was shown that, for the simplest and most commonly applied strategy in continuum Quantum Monte Carlo, the random error in the resulting estimates is not well controlled: at best the Central Limit Theorem is valid only in its weakest form, and at worst it is invalid and replaced by an alternative Generalised Central Limit Theorem with non-Normal random error. Here we consider a new `residual sampling strategy' that reintroduces the Central Limit Theorem in its strongest form and provides full control of the random error in estimates. Estimates of the total energy and of the variance of the local energy within Variational Monte Carlo are considered in detail, and the approach presented may be generalised to expectation values of other operators and to other variants of the Quantum Monte Carlo method.
    Comment: 14 pages, 9 figures
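The standard strategy the abstract critiques can be sketched in a few lines: sample configurations from |psi|^2 with a Metropolis walk and average the local energy. The sketch below uses a 1D harmonic oscillator (hbar = m = omega = 1) with trial wavefunction psi(x) = exp(-alpha x^2); the system, wavefunction, and all parameters are illustrative choices, not taken from the paper, and this is the conventional sampling scheme, not the proposed residual sampling strategy.

```python
import math
import random

def local_energy(x, alpha):
    # E_L = -psi''/(2 psi) + x^2/2 for psi = exp(-alpha x^2).
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_samples=20000, step=1.0, seed=0):
    """Estimate the variational energy by Metropolis sampling of |psi|^2."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance ratio for |psi|^2 = exp(-2 alpha x^2).
        if rng.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        total += local_energy(x, alpha)
    return total / n_samples
```

At alpha = 0.5 the trial wavefunction is exact, so the local energy is constant and the estimator's variance vanishes; away from that point the estimate carries the random error whose distribution the abstract discusses.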

    Conditional predictive inference post model selection

    We give a finite-sample analysis of predictive inference procedures after model selection in regression with random design. The analysis focuses on a statistically challenging scenario in which the number of potentially important explanatory variables can be infinite, no regularity conditions are imposed on the unknown parameters, the number of explanatory variables in a "good" model can be of the same order as the sample size, and the number of candidate models can be of larger order than the sample size. The performance of inference procedures is evaluated conditional on the training sample. Under weak conditions on only the number of candidate models and on their complexity, and uniformly over all data-generating processes under consideration, we show that a certain prediction interval is approximately valid and short with high probability in finite samples: its actual coverage probability is close to the nominal one, and its length is close to the length of an infeasible interval constructed by actually knowing the "best" candidate model. Similar results are shown to hold for predictive inference procedures other than prediction intervals, for example tests of whether a future response will lie above or below a given threshold.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS660 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
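One simple way to build a prediction interval after model selection is sample splitting: select a model on one half of the data and calibrate the interval width on the other half. The sketch below, with two made-up candidate models (intercept-only and simple linear regression), only illustrates this general split-sample idea; it is not the specific finite-sample procedure analysed in the paper.

```python
import random

def fit_ols(xs, ys):
    # Simple linear regression y = a + b x by least squares.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return lambda x: a + b * x

def fit_mean(xs, ys):
    # Intercept-only candidate model.
    m = sum(ys) / len(ys)
    return lambda x: m

def prediction_interval(xs, ys, x_new, alpha=0.1, seed=0):
    """Split-sample interval: select on one half, calibrate on the other."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    half = len(idx) // 2
    train, cal = idx[:half], idx[half:]
    tx, ty = [xs[i] for i in train], [ys[i] for i in train]
    models = [fit(tx, ty) for fit in (fit_ols, fit_mean)]
    # Select the candidate with the smaller training error; the
    # calibration half stays untouched by selection.
    best = min(models, key=lambda m: sum((y - m(x)) ** 2
                                         for x, y in zip(tx, ty)))
    # Interval half-width: (1 - alpha) quantile of holdout residuals.
    resids = sorted(abs(ys[i] - best(xs[i])) for i in cal)
    q = resids[min(int((1 - alpha) * len(resids)), len(resids) - 1)]
    return best(x_new) - q, best(x_new) + q
```

Because the calibration residuals are independent of the selection step, the interval's coverage does not depend on which candidate happened to win, which is the intuition behind conditional validity after selection.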

    Statistical topological data analysis using persistence landscapes

    We define a new topological summary for data that we call the persistence landscape. Since this summary lies in a vector space, it is easy to combine with tools from statistics and machine learning, in contrast to the standard topological summaries. Viewed as a random variable with values in a Banach space, this summary obeys a strong law of large numbers and a central limit theorem. We show how a number of standard statistical tests can be used for statistical inference using this summary. We also prove that this summary is stable and that it can be used to provide lower bounds for the bottleneck and Wasserstein distances.
    Comment: 26 pages, final version, to appear in the Journal of Machine Learning Research; includes two additional examples not in the journal version: random geometric complexes and Erdos-Renyi random clique complexes
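The landscape is straightforward to compute from a persistence diagram: for birth-death pairs {(b_i, d_i)}, the k-th landscape function at t is the k-th largest value of max(0, min(t - b_i, d_i - t)). A minimal sketch, in which the pairs and evaluation grid are made-up examples rather than data from the paper:

```python
def landscape(pairs, k, t):
    # k-th largest "tent" value max(0, min(t - b, d - t)) over the
    # birth-death pairs; zero when fewer than k pairs are alive at t.
    vals = sorted((max(0.0, min(t - b, d - t)) for b, d in pairs),
                  reverse=True)
    return vals[k - 1] if k <= len(vals) else 0.0

# Evaluating on a fixed grid turns the summary into a vector, which is
# what makes averaging and standard statistical tests applicable.
pairs = [(0.0, 4.0), (1.0, 3.0)]    # illustrative birth-death pairs
grid = [0.5 * i for i in range(9)]  # t = 0.0, 0.5, ..., 4.0
lam1 = [landscape(pairs, 1, t) for t in grid]
lam2 = [landscape(pairs, 2, t) for t in grid]
```

Averaging `lam1` vectors across samples gives the mean landscape to which the law of large numbers and central limit theorem in the abstract apply.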