    Exponential Time Complexity of the Permanent and the Tutte Polynomial

    We show conditional lower bounds for well-studied #P-hard problems: (a) The number of satisfying assignments of a 2-CNF formula with n variables cannot be counted in time exp(o(n)), and the same is true for computing the number of all independent sets in an n-vertex graph. (b) The permanent of an n x n matrix with entries 0 and 1 cannot be computed in time exp(o(n)). (c) The Tutte polynomial of an n-vertex graph cannot be computed in time exp(o(n)) at most evaluation points (x,y) in the case of multigraphs, and it cannot be computed in time exp(o(n/polylog n)) in the case of simple graphs. Our lower bounds are relative to (variants of) the Exponential Time Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF formulas cannot be decided in time exp(o(n)). We relax this hypothesis by introducing its counting version #ETH, namely that the satisfying assignments of an n-variable 3-CNF formula cannot be counted in time exp(o(n)). In order to use #ETH for our lower bounds, we transfer the sparsification lemma for d-CNF formulas to the counting setting.
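
    To make the permanent bound in (b) concrete, the sketch below implements Ryser's inclusion-exclusion formula, the classical exp(O(n))-time algorithm whose exponent the conditional lower bound says cannot be improved to exp(o(n)). The function name and the brute-force subset loop are illustrative, not taken from the paper.

        from itertools import combinations

        def permanent_ryser(A):
            """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula:
            perm(A) = (-1)^n * sum over column subsets S of (-1)^|S| * prod_i sum_{j in S} a_ij.
            Runs in O(2^n * n^2) time, i.e. exp(O(n))."""
            n = len(A)
            total = 0
            for r in range(1, n + 1):               # non-empty column subsets only
                for S in combinations(range(n), r):
                    prod = 1
                    for i in range(n):
                        prod *= sum(A[i][j] for j in S)
                    total += (-1) ** r * prod
            return (-1) ** n * total

        # The permanent of the all-ones 3 x 3 matrix is 3! = 6.
        print(permanent_ryser([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))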

    A Hybrid Quantum-Classical Hamiltonian Learning Algorithm

    Hamiltonian learning is crucial to the certification of quantum devices and quantum simulators. In this paper, we propose a hybrid quantum-classical Hamiltonian learning algorithm to find the coefficients of the Pauli operator components of the Hamiltonian. Its main subroutine is a practical log-partition function estimation algorithm, which is based on the minimization of the free energy of the system. Concretely, we devise a stochastic variational quantum eigensolver (SVQE) to diagonalize the Hamiltonians and then exploit the obtained eigenvalues to compute the free energy's global minimum using convex optimization. Our approach not only avoids the challenge of estimating von Neumann entropy in free energy minimization, but also reduces the quantum resources via importance sampling in Hamiltonian diagonalization, facilitating the implementation of our method on near-term quantum devices. Finally, we demonstrate our approach's validity by conducting numerical experiments with Hamiltonians of interest in quantum many-body physics. Comment: 24 pages.
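
    The classical half of the free-energy step can be illustrated with a small sketch: given eigenvalues (here hard-coded, standing in for SVQE output), the free energy F(p) = <E>_p - S(p)/beta is minimized by the Gibbs distribution, and its minimum equals -log(Z)/beta, so the log-partition function falls out of a stable logsumexp. The function names and example eigenvalues are hypothetical, not the paper's code.

        import numpy as np
        from scipy.special import logsumexp

        def log_partition(eigvals, beta):
            """log Z = log sum_i exp(-beta * lambda_i), computed stably."""
            return logsumexp(-beta * np.asarray(eigvals))

        def free_energy(p, eigvals, beta):
            """F(p) = <E>_p - S(p)/beta over probability vectors p; minimized by the Gibbs distribution."""
            p = np.asarray(p, dtype=float)
            entropy = -np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p)))
            return float(p @ np.asarray(eigvals) - entropy / beta)

        eigvals = np.array([-1.2, -0.3, 0.5, 2.0])   # stand-in for eigenvalues from an SVQE-style diagonalization
        beta = 1.0
        gibbs = np.exp(-beta * eigvals - log_partition(eigvals, beta))
        # The minimum free energy equals -log(Z)/beta.
        print(free_energy(gibbs, eigvals, beta), -log_partition(eigvals, beta) / beta)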

    The Length of an SLE - Monte Carlo Studies

    The scaling limits of a variety of critical two-dimensional lattice models are equal to the Schramm-Loewner evolution (SLE) for a suitable value of the parameter kappa. These lattice models have a natural parametrization of their random curves given by the length of the curve. This parametrization (with suitable scaling) should provide a natural parametrization for the curves in the scaling limit. We conjecture that this parametrization is also given by a type of fractal variation along the curve, and present Monte Carlo simulations to support this conjecture. Then we show by simulations that if this fractal variation is used to parametrize the SLE, then the parametrized curves have the same distribution as the curves in the scaling limit of the lattice models with their natural parametrization. Comment: 18 pages, 10 figures. Version 2 replaced the use of "nu" for the "growth exponent" by 1/d_H, where d_H is the Hausdorff dimension. Various minor errors were also corrected.
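
    As a rough illustration of the fractal-variation idea, the sketch below coarse-grains a discrete curve into blocks and sums |increment|^{d_H} over the blocks; for a 2D simple random walk (d_H = 2, used here only as a stand-in for an SLE or lattice-model curve) the result stays close to the number of steps regardless of block size, i.e. it tracks the natural length parametrization. The coarse-graining scheme and function names are illustrative, not the paper's code.

        import numpy as np

        rng = np.random.default_rng(0)

        def random_walk_2d(n_steps):
            """A 2D simple random walk standing in for a lattice-model curve (d_H = 2)."""
            directions = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
            steps = directions[rng.integers(0, 4, size=n_steps)]
            return np.vstack([[0, 0], np.cumsum(steps, axis=0)])

        def fractal_variation(curve, block, d_H):
            """Sum of |gamma(t_{k+1}) - gamma(t_k)|^{d_H} over a coarse partition of the curve."""
            increments = np.diff(curve[::block], axis=0)
            return np.sum(np.linalg.norm(increments, axis=1) ** d_H)

        walk = random_walk_2d(200_000)
        for block in (10, 100, 1000):
            # Each estimate should be of order 200000, roughly independent of the block size.
            print(block, fractal_variation(walk, block, d_H=2.0))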

    Differentiable Programming Tensor Networks

    Differentiable programming is a fresh programming paradigm which composes parameterized algorithmic components and trains them using automatic differentiation (AD). The concept emerges from deep learning but is not limited to training neural networks. We present the theory and practice of programming tensor network algorithms in a fully differentiable way. By formulating a tensor network algorithm as a computation graph, one can compute higher-order derivatives of the program accurately and efficiently using AD. We present essential techniques for differentiating through tensor network contractions, including stable AD for tensor decompositions and efficient backpropagation through fixed point iterations. As a demonstration, we compute the specific heat of the Ising model directly by taking the second-order derivative of the free energy obtained in the tensor renormalization group calculation. Next, we perform gradient-based variational optimization of infinite projected entangled pair states for the quantum antiferromagnetic Heisenberg model and obtain state-of-the-art variational energy and magnetization with moderate effort. Differentiable programming removes laborious human effort in deriving and implementing analytical gradients for tensor network programs, which opens the door to more innovations in tensor network algorithms and applications. Comment: Typos corrected, discussion and refs added; revised version accepted for publication in PRX. Source code available at https://github.com/wangleiphy/tensorgra
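
    The "specific heat from a second derivative of the free energy" idea can be shown on a much smaller example than the paper's 2D TRG calculation: below, log Z of a periodic 1D Ising chain is obtained by contracting N copies of the 2x2 transfer matrix (with rescaling to avoid overflow, in the spirit of TRG bookkeeping), and jax differentiates it twice with respect to beta. The chain length, coupling, and function names are illustrative choices, not the paper's setup.

        import jax
        import jax.numpy as jnp

        jax.config.update("jax_enable_x64", True)

        N, J = 64, 1.0   # illustrative chain length and coupling

        def log_Z(beta):
            """log of the partition function of a periodic 1D Ising chain,
            computed by contracting N transfer matrices with rescaling."""
            T = jnp.array([[jnp.exp(beta * J), jnp.exp(-beta * J)],
                           [jnp.exp(-beta * J), jnp.exp(beta * J)]])
            M, log_norm = jnp.eye(2), 0.0
            for _ in range(N):
                M = M @ T
                s = jnp.max(jnp.abs(M))      # keep entries O(1), track the scale in log_norm
                M, log_norm = M / s, log_norm + jnp.log(s)
            return jnp.log(jnp.trace(M)) + log_norm

        # Energy and specific heat per site from first and second derivatives of log Z via AD.
        energy = lambda beta: -jax.grad(log_Z)(beta) / N          # approaches -J * tanh(beta * J) for large N
        specific_heat = lambda beta: beta**2 * jax.grad(jax.grad(log_Z))(beta) / N

        print(energy(0.8), specific_heat(0.8))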

    On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations

    In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs distributions by introducing low-dimensional perturbations and solving the corresponding MAP assignments. Our approach also leads to new ways to derive lower bounds on partition functions. We demonstrate empirically that our method excels in the typical "high signal - high coupling" regime. This setting results in ragged energy landscapes that are challenging for alternative approaches to sampling and/or lower bounds.
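
    The identity underlying perturb-and-MAP sampling is easy to check on a toy model: adding i.i.d. Gumbel noise to every configuration's log-potential and taking the argmax yields an exact Gibbs sample (the Gumbel-max trick). The paper's contribution is to replace this full, exponentially large perturbation with low-dimensional ones plus a MAP solver; the sketch below only demonstrates the full-perturbation identity, with hypothetical names and potentials.

        import numpy as np

        rng = np.random.default_rng(1)

        def gibbs_probabilities(theta):
            """Exact Gibbs distribution p(x) proportional to exp(theta(x)) on a finite state space."""
            w = np.exp(theta - theta.max())
            return w / w.sum()

        def sample_via_perturbed_map(theta, n_samples):
            """argmax_x [theta(x) + gamma(x)] with i.i.d. Gumbel noise gamma is an exact Gibbs sample."""
            gumbel = rng.gumbel(size=(n_samples, theta.size))
            return np.argmax(theta + gumbel, axis=1)

        theta = np.array([1.5, 0.2, -0.7, 0.9])      # toy unnormalized log-potentials
        samples = sample_via_perturbed_map(theta, 100_000)
        empirical = np.bincount(samples, minlength=theta.size) / samples.size
        print(np.round(gibbs_probabilities(theta), 3))
        print(np.round(empirical, 3))                # agrees up to Monte Carlo error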