Complexity of zigzag sampling algorithm for strongly log-concave distributions
We study the computational complexity of the zigzag sampling algorithm for
strongly log-concave distributions. The zigzag process has the advantages that
its implementation requires no time discretization, that each proposed
bouncing event requires only one evaluation of a partial derivative of the
potential, and that its convergence rate is dimension independent. Using these
properties, we prove that the zigzag sampling algorithm achieves $\varepsilon$
error in chi-square divergence with a computational cost equivalent to
$O\bigl(\kappa^2 d^{1/2} (\log \frac{1}{\varepsilon})^{3/2}\bigr)$ gradient
evaluations in the regime $\kappa \lesssim \frac{d^{1/2}}{\log d}$ under a warm
start assumption, where $\kappa$ is the condition number and $d$ is the
dimension.
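For intuition, here is a minimal Python sketch of a zigzag sampler implemented
with Poisson thinning. The `grad_i` and `rate_bound` interface is an
illustrative assumption, not the paper's implementation: a valid global bound
on the switching rates must hold on the region explored (practical samplers
use adaptive local bounds instead).

```python
import numpy as np

def zigzag_sample(grad_i, x0, T, rate_bound, rng=None):
    """Sketch of a zigzag sampler via Poisson thinning.

    grad_i(x, i) returns the i-th partial derivative of the potential U;
    rate_bound is an assumed upper bound on every switching rate
    (hypothetical interface). Returns the event skeleton of the trajectory.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    d = x.size
    v = rng.choice([-1.0, 1.0], size=d)
    t, skeleton = 0.0, [(0.0, x.copy(), v.copy())]
    while t < T:
        # Propose the next candidate bounce from d dominating Poisson clocks.
        taus = rng.exponential(1.0 / rate_bound, size=d)
        i = int(np.argmin(taus))
        x = x + taus[i] * v          # deterministic straight-line flight
        t = t + taus[i]
        # Thinning: accept the flip with probability true_rate / rate_bound.
        # Note each proposal costs a single partial-derivative evaluation.
        true_rate = max(0.0, v[i] * grad_i(x, i))
        if rng.random() < true_rate / rate_bound:
            v[i] = -v[i]
            skeleton.append((t, x.copy(), v.copy()))
    return skeleton

# Toy usage: standard Gaussian potential U(x) = |x|^2 / 2, on a region where
# the switching rates stay below the chosen bound (illustrative only).
events = zigzag_sample(lambda x, i: x[i], x0=np.zeros(2), T=50.0, rate_bound=5.0)
```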
Chain of Log-Concave Markov Chains
We introduce a theoretical framework for sampling from unnormalized densities
based on a smoothing scheme that uses an isotropic Gaussian kernel with a
single fixed noise scale. We prove one can decompose sampling from a density
(minimal assumptions made on the density) into a sequence of sampling from
log-concave conditional densities via accumulation of noisy measurements with
equal noise levels. Our construction is unique in that it keeps track of a
history of samples, making it non-Markovian as a whole, but it is lightweight
algorithmically as the history only shows up in the form of a running empirical
mean of samples. Our sampling algorithm generalizes walk-jump sampling (Saremi
& Hyvärinen, 2019). The "walk" phase becomes a (non-Markovian) chain of
(log-concave) Markov chains. The "jump" from the accumulated measurements is
obtained by empirical Bayes. We study our sampling algorithm quantitatively
using the 2-Wasserstein metric and compare it with various Langevin MCMC
algorithms. We also report a remarkable capacity of our algorithm to "tunnel"
between modes of a distribution.
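A minimal sketch of the scheme, under illustrative assumptions: measurements
y_t = x + N(0, σ²I) are accumulated, the history enters only through the
running empirical mean, each "walk" targets a log-concave conditional via
Langevin steps, and the final "jump" applies empirical Bayes (Tweedie). The
`smoothed_score(u, h)` oracle (score of the data density smoothed at scale h)
and all parameter names are assumptions, not the paper's interface.

```python
import numpy as np

def chain_of_chains(smoothed_score, d, sigma, n_meas, n_steps, step, rng=None):
    """Sketch of the accumulate-then-jump scheme (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng

    def posterior_mean(u, h):
        # Empirical-Bayes (Tweedie) estimate of x from a measurement with
        # effective noise level h: E[x | u] = u + h^2 * score(u).
        return u + h**2 * smoothed_score(u, h)

    y_bar = np.zeros(d)                      # running empirical mean: the only
    y = rng.normal(0.0, sigma, size=d)       # history the algorithm keeps
    for t in range(n_meas):
        # "Walk": unadjusted Langevin steps targeting the log-concave
        # conditional p(y_{t+1} | y_1..y_t); the t accumulated measurements
        # enter only via y_bar, with effective noise sigma / sqrt(t+1).
        for _ in range(n_steps):
            m = (t * y_bar + y) / (t + 1)
            h = sigma / np.sqrt(t + 1)
            score = (posterior_mean(m, h) - y) / sigma**2
            y = y + step * score + np.sqrt(2.0 * step) * rng.normal(size=d)
        y_bar = (t * y_bar + y) / (t + 1)    # fold the new measurement in
    # "Jump": empirical Bayes from all n_meas accumulated measurements.
    return posterior_mean(y_bar, sigma / np.sqrt(n_meas))
```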
On the Posterior Distribution in Denoising: Application to Uncertainty Quantification
Denoisers play a central role in many applications, from noise suppression in
low-grade imaging sensors, to empowering score-based generative models. The
latter category of methods makes use of Tweedie's formula, which links the
posterior mean in Gaussian denoising (i.e., the minimum MSE denoiser) with the
score of the data distribution. Here, we derive a fundamental relation between
the higher-order central moments of the posterior distribution, and the
higher-order derivatives of the posterior mean. We harness this result for
uncertainty quantification of pre-trained denoisers. Particularly, we show how
to efficiently compute the principal components of the posterior distribution
for any desired region of an image, as well as to approximate the full marginal
distribution along those (or any other) one-dimensional directions. Our method
is fast and memory efficient, as it does not explicitly compute or store the
high-order moment tensors, and it requires no training or fine-tuning of the
denoiser. Code and examples are available on the project's webpage at
https://hilamanor.github.io/GaussianDenoisingPosterior/
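The core relations admit a short numerical illustration. Tweedie's formula
gives E[x|y] = y + σ²∇log p(y), and its second-order analogue gives
Cov(x|y) = σ² J(y), where J is the Jacobian of the MMSE denoiser, so posterior
principal components can be extracted by subspace iteration using only
denoiser evaluations. Below is a minimal numpy sketch assuming a generic
`denoiser(y)` callable approximating the MMSE denoiser at noise level `sigma`;
finite differences stand in for exact Jacobian-vector products, and all names
are hypothetical rather than the paper's API.

```python
import numpy as np

def posterior_pcs(denoiser, y, sigma, k=3, iters=50, eps=1e-3, rng=None):
    """Subspace iteration for top posterior principal components (sketch).

    Uses second-order Tweedie: Cov(x|y) = sigma^2 * J(y), with J the Jacobian
    of the MMSE denoiser. Jacobian-vector products are approximated by finite
    differences of `denoiser`, so J is never formed or stored explicitly.
    """
    rng = np.random.default_rng() if rng is None else rng
    d0 = denoiser(y)
    V = rng.normal(size=(k, y.size))

    def jvp(v):
        # J(y) v  ~=  (D(y + eps*v) - D(y)) / eps
        return (denoiser(y + eps * v.reshape(y.shape)) - d0).ravel() / eps

    for _ in range(iters):
        V = np.stack([jvp(v) for v in V])
        Q, _ = np.linalg.qr(V.T)             # re-orthonormalize the subspace
        V = Q.T
    # Rayleigh quotients give the posterior variances along each component.
    variances = sigma**2 * np.array([v @ jvp(v) for v in V])
    return V, variances
```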