Sampling from Gibbs distributions p(x) ∝ exp(−V(x)/ε) and
computing their log-partition function are fundamental tasks in statistics,
machine learning, and statistical physics. However, while efficient algorithms
are known for convex potentials V, the situation is much more difficult in
the non-convex case, where algorithms necessarily suffer from the curse of
dimensionality in the worst case. For optimization, which can be seen as a
low-temperature limit of sampling, it is known that smooth functions V allow
faster convergence rates. Specifically, for m-times differentiable functions
in d dimensions, the optimal rate for algorithms with n function
evaluations is known to be O(n^{−m/d}), where the constant can potentially
depend on m, d, and the function to be optimized. Hence, the curse of
dimensionality can be alleviated for smooth functions at least in terms of the
convergence rate. Recently, it has been shown that similarly fast rates can
also be achieved with polynomial runtime O(n^{3.5}), where the exponent 3.5
is independent of m or d. Hence, it is natural to ask whether similar rates
for sampling and log-partition computation are possible, and whether they can
be realized in polynomial time with an exponent independent of m and d. We
show that the optimal rates for sampling and log-partition computation are
sometimes equal to and sometimes faster than those for optimization. We then analyze
various polynomial-time sampling algorithms, including an extension of a recent
promising optimization approach, and find that they sometimes exhibit
interesting behavior but no near-optimal rates. Our results also give further
insights into the relation between sampling, log-partition, and optimization
problems.

Comment: Plots can be reproduced using the code at
https://github.com/dholzmueller/sampling_experiment
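
To make the sampling task concrete, here is a minimal sketch (in Python, assuming NumPy) of unadjusted Langevin dynamics, a standard baseline sampler for targets of the form p(x) ∝ exp(−V(x)/ε). It is not one of the algorithms analyzed in the paper, and the double-well potential, step size, and iteration count are illustrative choices.

import numpy as np

def grad_potential(x):
    # Gradient of an illustrative non-convex double-well potential
    # V(x) = sum_i (x_i^4 - x_i^2).
    return 4.0 * x**3 - 2.0 * x

def ula_sample(d, eps=0.5, step=1e-3, n_steps=50_000, rng=None):
    # Unadjusted Langevin algorithm targeting p(x) ∝ exp(-V(x)/eps):
    # Euler-Maruyama discretization of the Langevin SDE
    #   dX_t = -grad V(X_t) dt + sqrt(2 * eps) dB_t,
    # whose stationary distribution is p. The finite step size
    # introduces a discretization bias.
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(d)
    for _ in range(n_steps):
        noise = rng.standard_normal(d)
        x = x - step * grad_potential(x) + np.sqrt(2.0 * eps * step) * noise
    return x

# Draw a few approximate samples in d = 2 dimensions.
samples = np.stack([ula_sample(d=2) for _ in range(5)])
print(samples)

As ε → 0, the Gibbs measure concentrates on the minimizers of V, which is the sense in which optimization is a low-temperature limit of sampling.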