
    Modelling conditional probabilities with Riemann-Theta Boltzmann Machines

    The probability density function for the visible sector of a Riemann-Theta Boltzmann machine can be taken conditional on a subset of the visible units. We show that the corresponding conditional density function is given by a reparameterization of the Riemann-Theta Boltzmann machine modelling the original probability density function, so the conditional densities can be inferred directly from the trained machine. Comment: 7 pages, 3 figures, in proceedings of the 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2019).
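The reparameterization property stated above has a familiar analogue in a much simpler model: conditioning a multivariate Gaussian on a subset of its variables yields another Gaussian whose parameters are obtained in closed form via the Schur complement. The sketch below illustrates only that analogy (it does not implement an RTBM); the numbers are arbitrary.

```python
import numpy as np

def gaussian_conditional(mu, sigma, idx_b, x_b):
    """Parameters of p(x_a | x_b) for x ~ N(mu, sigma): the conditional
    is again a Gaussian, with reparameterized mean and covariance."""
    n = len(mu)
    idx_a = [i for i in range(n) if i not in idx_b]
    mu_a, mu_b = mu[idx_a], mu[idx_b]
    s_aa = sigma[np.ix_(idx_a, idx_a)]
    s_ab = sigma[np.ix_(idx_a, idx_b)]
    s_bb = sigma[np.ix_(idx_b, idx_b)]
    k = s_ab @ np.linalg.inv(s_bb)          # regression coefficients
    mu_cond = mu_a + k @ (x_b - mu_b)
    sigma_cond = s_aa - k @ s_ab.T          # Schur complement
    return mu_cond, sigma_cond

mu = np.array([0.0, 1.0])
sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
m, s = gaussian_conditional(mu, sigma, [1], np.array([2.0]))
print(m, s)  # mean 0.8, variance 1.36
```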

    On computing high-dimensional Riemann theta functions

    Riemann theta functions play a crucial role in the field of nonlinear Fourier analysis, where they are used to realize inverse nonlinear Fourier transforms for periodic signals. The practical applicability of this approach has, however, been limited, since Riemann theta functions are multi-dimensional Fourier series whose computation suffers from the curse of dimensionality. In this paper, we investigate several new approaches to compute Riemann theta functions with the goal of unlocking their practical potential. Our first contributions are novel theoretical lower and upper bounds on the series truncation error. These bounds allow us to rule out several of the existing approaches for the high-dimension regime. We then propose to consider low-rank tensor and hyperbolic cross based techniques. We first examine a tensor-train based algorithm which utilizes the popular scaling and squaring approach. We show theoretically that this approach cannot break the curse of dimensionality. Finally, we investigate two other tensor-train based methods numerically and compare them to hyperbolic cross based methods. Using finite-genus solutions of the Korteweg–de Vries (KdV) and nonlinear Schrödinger (NLS) equations, we demonstrate the accuracy of the proposed algorithms. The tensor-train based algorithms are shown to work well for low-genus solutions with real arguments but are limited by memory for higher genera. The hyperbolic cross based algorithm also achieves high accuracy for low-genus solutions. Its novelty is the ability to feasibly compute moderately accurate solutions (a relative error of magnitude 0.01) for high dimensions (up to 60). It therefore enables the computation of complex inverse nonlinear Fourier transforms that were so far out of reach.
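To see why the curse of dimensionality bites, consider the direct definition: the genus-g Riemann theta function is the lattice sum θ(z|Ω) = Σ_{n∈Z^g} exp(iπ nᵀΩn + 2πi nᵀz), with Ω symmetric and Im(Ω) positive definite. A brute-force truncation (not one of the paper's proposed methods — just the baseline they improve on) sums over a box of (2R+1)^g integer vectors:

```python
import itertools
import numpy as np

def riemann_theta(z, omega, radius=10):
    """Truncated theta series over the box [-radius, radius]^g.
    Cost grows as (2*radius + 1)**g: the curse of dimensionality."""
    g = len(z)
    total = 0.0 + 0.0j
    for n in itertools.product(range(-radius, radius + 1), repeat=g):
        n = np.array(n)
        total += np.exp(1j * np.pi * (n @ omega @ n) + 2j * np.pi * (n @ z))
    return total

# Genus-1 sanity check: theta(0 | i) = sum_n exp(-pi n^2) ~ 1.0864348
tau = 1j  # Im(tau) > 0 is required for convergence
val = riemann_theta(np.array([0.0]), np.array([[tau]]))
print(val.real)
```

Already at genus 20 this box contains 21^20 ≈ 2.8 × 10^26 terms, which is why the paper turns to tensor-train and hyperbolic-cross truncations.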

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages. One of the USQCD whitepapers.

    Determining probability density functions with adiabatic quantum computing

    A reliable determination of probability density functions from data samples is still a relevant topic in scientific applications. In this work we investigate the possibility of defining an algorithm for density function estimation using adiabatic quantum computing. Starting from a sample of a one-dimensional distribution, we define a classical-to-quantum data embedding procedure which maps the empirical cumulative distribution function of the sample into a time-dependent Hamiltonian using adiabatic quantum evolution. The obtained Hamiltonian is then projected into a quantum circuit using the time evolution operator. Finally, the probability density function of the sample is obtained using quantum hardware differentiation through the parameter-shift rule. We present successful numerical results for predefined known distributions and high-energy physics Monte Carlo simulation samples. Comment: 7 pages, 3 figures.
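The parameter-shift rule mentioned in the abstract gives exact (not finite-difference) derivatives of a quantum expectation value from two extra circuit evaluations: f'(θ) = [f(θ + π/2) − f(θ − π/2)] / 2 for gates generated by operators with eigenvalues ±1/2. A minimal classical sketch, using the analytically known expectation ⟨Z⟩ = cos θ after an RY(θ) rotation on |0⟩ as a stand-in for a hardware measurement:

```python
import numpy as np

def expectation(theta):
    # <Z> after RY(theta)|0>; on hardware this would be a measured average
    return np.cos(theta)

def parameter_shift(f, theta, shift=np.pi / 2):
    """Exact gradient of a quantum expectation from two shifted evaluations."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift(expectation, theta)
print(grad, -np.sin(theta))  # identical up to floating point
```

For ⟨Z⟩ = cos θ the rule reproduces the analytic derivative −sin θ exactly, which is the property that makes it usable on noisy hardware where finite differences amplify shot noise.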

    A review of Monte Carlo simulations of polymers with PERM

    In this review, we describe applications of the pruned-enriched Rosenbluth method (PERM), a sequential Monte Carlo algorithm with resampling, to various problems in polymer physics. PERM produces samples according to any prescribed weight distribution, by growing configurations step by step with controlled bias and correcting "bad" configurations by "population control". The latter is implemented, in contrast to other population-based algorithms such as genetic algorithms, by depth-first recursion, which avoids storing all members of the population in computer memory at the same time. The problems we discuss all concern single polymers (with one exception), but under various conditions: homopolymers in good solvents and at the Θ point, semi-stiff polymers, polymers in confining geometries, stretched polymers undergoing a forced globule-linear transition, star polymers, bottle brushes, lattice animals as a model for randomly branched polymers, DNA melting, and finally, as the only system at low temperatures, lattice heteropolymers as simple models for protein folding. For some of these problems PERM is the method of choice, but it can also fail. We discuss how to recognize when a result is reliable, and we also discuss some types of bias that can be crucial in guiding the growth in the right directions. Comment: 29 pages, 26 figures, to be published in J. Stat. Phys. (2011).
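The growth-with-population-control loop described above can be sketched for the textbook case of self-avoiding walks on the square lattice. This is a deliberately minimal caricature: the pruning/enrichment thresholds, the weight normalization by the approximate SAW growth constant μ ≈ 2.64, and the chain length are illustrative choices, not the tuned schedules the review discusses.

```python
import random

NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def perm_grow(walk, occupied, weight, n_max, samples, w_lo=0.3, w_hi=3.0):
    """Depth-first PERM: grow a self-avoiding walk step by step, pruning
    low-weight configurations and enriching (branching) high-weight ones."""
    if len(walk) == n_max:
        samples.append((walk[-1], weight))   # endpoint and its weight
        return
    x, y = walk[-1]
    free = [(x + dx, y + dy) for dx, dy in NEIGH
            if (x + dx, y + dy) not in occupied]
    if not free:
        return                               # trapped: attrition
    w = weight * len(free) / 2.64            # Rosenbluth weight, normalized
    if w < w_lo:                             # prune: kill half of the walks,
        if random.random() < 0.5:            # double the survivors' weight
            return
        w *= 2.0
    copies = 2 if w > w_hi else 1            # enrich: branch, split the weight
    for _ in range(copies):
        nxt = random.choice(free)
        occupied.add(nxt)
        perm_grow(walk + [nxt], occupied, w / copies, n_max, samples)
        occupied.discard(nxt)

random.seed(1)
samples = []
for _ in range(200):                         # 200 independent tours
    perm_grow([(0, 0)], {(0, 0)}, 1.0, 20, samples)
print(len(samples), "weighted 20-step SAW samples")
```

Note how the depth-first recursion realizes the review's point about memory: only the current branch of the population ever exists at once, not the whole ensemble.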

    Restricted Boltzmann machine representation for the groundstate and excited states of Kitaev Honeycomb model

    In this work, the capability of restricted Boltzmann machines (RBMs) to find solutions for the Kitaev honeycomb model with periodic boundary conditions is investigated. The measured ground-state energy of the system is compared and, for small lattice sizes (e.g. 3 × 3 with 18 spinors), shown to agree with the analytically derived value of the energy up to a deviation of 0.09%. Moreover, the wave functions we find have 99.89% overlap with the exact ground-state wave functions. Furthermore, the possibility of realizing anyons in the RBM is discussed, and an algorithm is given to build these anyonic excitations and braid them for possible future applications in quantum computation. Using the correspondence between topological field theories in (2+1)d and 2d conformal field theories, we propose an identification between our RBM states with the Moore-Read state and conformal blocks of the 2d Ising model.
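For reference, the RBM wavefunction ansatz underlying such studies assigns each spin configuration s (s_i = ±1) the amplitude ψ(s) = exp(Σ_i a_i s_i) · Π_j 2cosh(b_j + Σ_i W_ji s_i), where the hidden units have been summed out analytically. The sketch below evaluates this ansatz with small random parameters (hypothetical values, not the trained Kitaev-model weights from the paper) and checks that the induced Born probabilities normalize:

```python
import itertools
import numpy as np

def rbm_amplitude(s, a, b, W):
    """RBM wavefunction amplitude with hidden units traced out."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4
a = rng.normal(scale=0.1, size=n_vis)          # visible biases
b = rng.normal(scale=0.1, size=n_hid)          # hidden biases
W = rng.normal(scale=0.1, size=(n_hid, n_vis)) # couplings

# Born probabilities over all 2^6 spin configurations
configs = [np.array(c) for c in itertools.product([-1, 1], repeat=n_vis)]
amps = np.array([rbm_amplitude(c, a, b, W) for c in configs])
probs = amps**2 / np.sum(amps**2)
print(probs.sum())  # 1.0
```

In variational Monte Carlo the parameters (a, b, W) are then optimized to minimize the energy of the target Hamiltonian; the exhaustive enumeration here is only feasible for toy sizes.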

    Thermodynamics of the QCD plasma and the large-N limit

    The equilibrium thermodynamic properties of the SU(N) plasma at finite temperature are studied non-perturbatively in the large-N limit, via lattice simulations. We present high-precision numerical results for the pressure, trace of the energy-momentum tensor, energy density and entropy density of SU(N) Yang-Mills theories with N = 3, 4, 5, 6 and 8 colors, in a temperature range from 0.8T_c to 3.4T_c (where T_c denotes the critical deconfinement temperature). The results, normalized according to the number of gluons, show a very mild dependence on N, supporting the idea that the dynamics of the strongly-interacting QCD plasma could admit a description based on large-N models. We compare our numerical data with general expectations about the thermal behavior of the deconfined gluon plasma and with various theoretical descriptions, including, in particular, the improved holographic QCD model recently proposed by Kiritsis and collaborators. We also comment on the relevance of an AdS/CFT description for the QCD plasma in a phenomenologically interesting temperature range where the system, while still strongly-coupled, approaches a 'quasi-conformal' regime characterized by approximate scale invariance. Finally, we perform an extrapolation of our results to the N → ∞ limit. Comment: 1+38 pages, 13 eps figures; v2: added reference.
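The quantities listed in the abstract are tied together by standard bulk thermodynamic identities: the entropy density s = (ε + p)/T and the trace anomaly Δ = ε − 3p. The sketch below evaluates them in the free (Stefan-Boltzmann) gluon-gas limit, p = (N² − 1) π²T⁴/45, which is only an illustrative reference point; the lattice results of the paper lie below these values in the 0.8–3.4 T_c range.

```python
import numpy as np

def su_n_stefan_boltzmann(N, T):
    """Free-gluon (Stefan-Boltzmann) thermodynamics in natural units:
    (N^2 - 1) gluons, two polarizations each."""
    p = (N**2 - 1) * np.pi**2 / 45 * T**4
    eps = 3 * p                 # conformal gas: epsilon = 3p
    s = (eps + p) / T           # entropy density s = (epsilon + p) / T
    delta = eps - 3 * p         # trace anomaly, vanishes for the free gas
    return p, eps, s, delta

for N in (3, 4, 5, 6, 8):
    p, eps, s, delta = su_n_stefan_boltzmann(N, T=1.0)
    # dividing by N^2 - 1 (the number of gluons) removes the leading
    # N-dependence, the same normalization used for the lattice data
    print(N, p / (N**2 - 1), s / (N**2 - 1), delta)
```

In the free limit p/(N² − 1) is N-independent by construction; the nontrivial lattice finding is that the interacting plasma retains this very mild N-dependence even close to T_c, where Δ is large.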