Provably efficient machine learning for quantum many-body problems
Classical machine learning (ML) provides a potentially powerful approach to
solving challenging quantum many-body problems in physics and chemistry.
However, the advantages of ML over more traditional methods have not been
firmly established. In this work, we prove that classical ML algorithms can
efficiently predict ground state properties of gapped Hamiltonians in finite
spatial dimensions, after learning from data obtained by measuring other
Hamiltonians in the same quantum phase of matter. In contrast, under widely
accepted complexity theory assumptions, classical algorithms that do not learn
from data cannot achieve the same guarantee. We also prove that classical ML
algorithms can efficiently classify a wide range of quantum phases of matter.
Our arguments are based on the concept of a classical shadow, a succinct
classical description of a many-body quantum state that can be constructed in
feasible quantum experiments and be used to predict many properties of the
state. Extensive numerical experiments corroborate our theoretical results in a
variety of scenarios, including Rydberg atom systems, 2D random Heisenberg
models, symmetry-protected topological phases, and topologically ordered
phases.
Comment: 10 pages, 12 figures + 57-page appendix
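The classical-shadow idea behind these results can be illustrated in a few lines: measure a state in randomly chosen Pauli bases, invert the measurement channel classically, and average the resulting snapshots. Below is a minimal single-qubit sketch assuming NumPy; the paper's protocol handles many qubits and comes with rigorous sample-complexity guarantees, which this illustration does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

psi = np.array([1.0, 0.0], dtype=complex)          # state |0>, so <Z> = 1

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
I2 = np.eye(2, dtype=complex)
# Unitaries rotating the X-, Y-, and Z-eigenbases onto the computational basis.
BASES = [H, H @ Sdg, I2]

Z = np.array([[1, 0], [0, -1]], dtype=complex)

snapshots = []
for _ in range(20000):
    U = BASES[rng.integers(3)]                     # pick a random Pauli basis
    probs = np.abs(U @ psi) ** 2                   # Born-rule outcome probabilities
    b = rng.choice(2, p=probs / probs.sum())
    eb = np.zeros(2, dtype=complex)
    eb[b] = 1.0
    # Invert the measurement channel: rho_hat = 3 U^dag |b><b| U - I.
    rho_hat = 3 * (U.conj().T @ np.outer(eb, eb.conj()) @ U) - I2
    snapshots.append(rho_hat)

# Averaging Tr(O rho_hat) gives an unbiased estimate of <O>; here O = Z.
z_est = float(np.real(np.mean([np.trace(Z @ s) for s in snapshots])))
```

The same stored snapshots can be reused to estimate many different observables, which is what makes the classical description "succinct" in the sense of the abstract.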
Brittleness of Bayesian inference and new Selberg formulas
The incorporation of priors in the Optimal Uncertainty Quantification (OUQ)
framework \cite{OSSMO:2011} reveals brittleness in Bayesian inference; a model
may share an arbitrarily large number of finite-dimensional marginals with, or
be arbitrarily close (in Prokhorov or total variation metrics) to, the
data-generating distribution and still make the largest possible prediction
error after conditioning on an arbitrarily large number of samples. The initial
purpose of this paper is to unwrap this brittleness mechanism by providing (i)
a quantitative version of the Brittleness Theorem of \cite{BayesOUQ} and (ii) a
detailed and comprehensive analysis of its application to the revealing example
of estimating the mean of a random variable on the unit interval using
priors that exactly capture the distribution of an arbitrarily large number of
Hausdorff moments.
However, in doing so, we discovered that the free parameter associated with
Markov and Kre\u{\i}n's canonical representations of truncated Hausdorff
moments generates reproducing kernel identities corresponding to reproducing
kernel Hilbert spaces of polynomials.
Furthermore, these reproducing identities lead to biorthogonal systems of
Selberg integral formulas.
This process of discovery appears to be generic: whereas Karlin and Shapley
used Selberg's integral formula to first compute the volume of the Hausdorff
moment space (the polytope defined by the first $n$ moments of a probability
measure on the unit interval $[0,1]$), we observe that the computation of that
volume, along with higher-order moments of the uniform measure on the moment
space, using different finite-dimensional representations of subsets of the
infinite-dimensional set of probability measures on $[0,1]$ representing the
first $n$ moments, leads to families of equalities corresponding to classical
and new Selberg identities.
Comment: 73 pages. Keywords: Bayesian inference, misspecification, robustness,
uncertainty quantification, optimal uncertainty quantification, reproducing
kernel Hilbert spaces (RKHS), Selberg integral formula
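For reference, the classical Selberg integral underlying Karlin and Shapley's volume computation, which the identities discussed above generalize, reads:

```latex
S_n(\alpha,\beta,\gamma)
  = \int_0^1 \!\cdots\! \int_0^1 \prod_{i=1}^{n} t_i^{\alpha-1}(1-t_i)^{\beta-1}
    \prod_{1 \le i < j \le n} |t_i - t_j|^{2\gamma}\, dt_1 \cdots dt_n
  = \prod_{j=0}^{n-1}
    \frac{\Gamma(\alpha+j\gamma)\,\Gamma(\beta+j\gamma)\,\Gamma\!\bigl(1+(j+1)\gamma\bigr)}
         {\Gamma\!\bigl(\alpha+\beta+(n+j-1)\gamma\bigr)\,\Gamma(1+\gamma)}
```

Setting the parameters appropriately turns the left-hand side into the volume of the truncated Hausdorff moment space, which is how Selberg's formula enters the story.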
The Conformal Bootstrap: Theory, Numerical Techniques, and Applications
Conformal field theories have been long known to describe the fascinating
universal physics of scale invariant critical points. They describe continuous
phase transitions in fluids, magnets, and numerous other materials, while at
the same time sit at the heart of our modern understanding of quantum field
theory. For decades it has been a dream to study these intricate strongly
coupled theories nonperturbatively using symmetries and other consistency
conditions. This idea, called the conformal bootstrap, saw some successes in
two dimensions but it is only in the last ten years that it has been fully
realized in three, four, and other dimensions of interest. This renaissance has
been possible due both to significant analytical progress in understanding how
to set up the bootstrap equations and to the development of numerical techniques
for finding or constraining their solutions. These developments have led to a
number of groundbreaking results, including world-record determinations of
critical exponents and correlation function coefficients in the Ising and
$O(N)$ models in three dimensions. This article will review these exciting
developments for newcomers to the bootstrap, giving an introduction to
conformal field theories and the theory of conformal blocks, describing
numerical techniques for the bootstrap based on convex optimization, and
summarizing in detail their applications to fixed points in three and four
dimensions with no or minimal supersymmetry.
Comment: 81 pages, double column, 58 figures; v3: updated references, minor
typos corrected
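The convex-optimization step can be caricatured as a small linear program: crossing symmetry demands a relation $\sum_\Delta c_\Delta F_\Delta = 0$ with nonnegative coefficients, so exhibiting a linear functional $\alpha$ with $\alpha(F_{\mathrm{id}}) = 1$ and $\alpha(F_\Delta) \ge 0$ for every operator in a trial spectrum proves that spectrum inconsistent. The toy feasibility search below assumes SciPy, and the vectors are made up for illustration; real bootstrap problems use conformal blocks and semidefinite programming.

```python
import numpy as np
from scipy.optimize import linprog

# Toy "crossing vectors": F_id for the identity operator and F_i for two
# hypothetical operators in a trial spectrum (made-up numbers, not actual
# conformal-block data, which would be derivatives of crossing equations).
F_id = np.array([1.0, -1.0])
F_ops = np.array([[1.0, 2.0],
                  [2.0, 1.0]])

# Search for a functional alpha with alpha . F_id = 1 and alpha . F_i >= 0.
# Feasibility rules out the trial spectrum: no nonnegative combination
# c_id * F_id + sum_i c_i * F_i can then vanish with c_id > 0.
res = linprog(c=np.zeros(2),
              A_ub=-F_ops, b_ub=np.zeros(2),        # enforce alpha . F_i >= 0
              A_eq=F_id.reshape(1, -1), b_eq=[1.0],  # normalize alpha . F_id = 1
              bounds=[(None, None), (None, None)])
alpha = res.x
```

Scanning such exclusion tests over trial spectra is what carves out the allowed regions of critical exponents described in the review.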
Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation
Image annotation aims to annotate a given image with a variable number of
class labels corresponding to diverse visual concepts. In this paper, we
address two main issues in large-scale image annotation: 1) how to learn a rich
feature representation suitable for predicting a diverse set of visual concepts
ranging from objects and scenes to abstract concepts; 2) how to annotate an image
with the optimal number of class labels. To address the first issue, we propose
a novel multi-scale deep model for extracting rich and discriminative features
capable of representing a wide range of visual concepts. Specifically, a novel
two-branch deep neural network architecture is proposed, comprising a very
deep main network branch and a companion feature fusion network branch designed
for fusing the multi-scale features computed from the main branch. The deep
model is also made multi-modal by taking noisy user-provided tags as model
input to complement the image input. For tackling the second issue, we
introduce a label quantity prediction auxiliary task to the main label
prediction task to explicitly estimate the optimal label number for a given
image. Extensive experiments are carried out on two large-scale image
annotation benchmark datasets and the results show that our method
significantly outperforms the state of the art.
Comment: Submitted to IEEE TI
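At inference time, the label-quantity idea is simple: predict a per-label score vector and a label count, then keep the top-k labels instead of applying a fixed global threshold. A minimal sketch assuming NumPy, with illustrative names that are not the paper's code:

```python
import numpy as np

def annotate(label_scores, predicted_count):
    """Return the indices of the top-k labels, where k comes from the
    auxiliary label-quantity head rather than a fixed global threshold."""
    k = int(np.clip(round(predicted_count), 1, len(label_scores)))
    top = np.argsort(label_scores)[::-1][:k]      # highest-scoring labels first
    return sorted(top.tolist())

# Example: four candidate labels, quantity head predicts roughly 2 labels.
labels = annotate(np.array([0.9, 0.1, 0.8, 0.2]), predicted_count=2.3)
```

Letting k vary per image is what allows the model to emit few labels for simple images and many for cluttered ones.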