Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals
We consider the problem of sampling from a distribution governed by a
potential function. This work proposes an explicit score-based MCMC method that
is deterministic, evolving the particles deterministically rather than through a
stochastic differential equation. The score term is given in
closed form by a regularized Wasserstein proximal, using a kernel convolution
that is approximated by sampling. We demonstrate fast convergence on various
problems and show improved dimensional dependence of mixing time bounds for the
case of Gaussian distributions compared to the unadjusted Langevin algorithm
(ULA) and the Metropolis-adjusted Langevin algorithm (MALA). We additionally
derive closed-form expressions for the distributions at each iterate for
quadratic potential functions, characterizing the variance reduction. Empirical
results demonstrate that the particles behave in an organized manner, lying on
level-set contours of the potential. Moreover, in the context of Bayesian
logistic regression, the posterior mean estimator of the proposed method is
shown to be closer to the maximum a posteriori estimator than those of ULA and MALA.
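As a rough illustration of the kind of update described above, the sketch below evolves particles deterministically under a score-based drift. The score of the particle density is approximated here by a plain Gaussian kernel density estimate rather than the paper's closed-form regularized Wasserstein proximal, and the quadratic potential, bandwidth, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def grad_V(x):
    # Illustrative quadratic potential V(x) = ||x||^2 / 2 (standard Gaussian target).
    return x

def kde_score(X, bandwidth=0.5):
    # Approximate grad log rho at each particle from a Gaussian kernel density
    # estimate over the particles (a surrogate for the paper's closed-form score term).
    diffs = X[:, None, :] - X[None, :, :]              # (n, n, d) pairwise differences
    w = np.exp(-np.sum(diffs ** 2, axis=-1) / (2.0 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)                  # normalized kernel weights
    return -(diffs * w[:, :, None]).sum(axis=1) / bandwidth ** 2

def deterministic_step(X, step=0.05):
    # Probability-flow form of Langevin dynamics: no injected noise; the
    # diffusion term is replaced by the (estimated) score of the particle density.
    return X - step * (grad_V(X) + kde_score(X))

rng = np.random.default_rng(0)
X = 3.0 * rng.normal(size=(200, 2))                    # particles started far from the target
for _ in range(200):
    X = deterministic_step(X)
print("mean:", X.mean(axis=0), "marginal variances:", X.var(axis=0))
```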
Unsupervised approaches based on optimal transport and convex analysis for inverse problems in imaging
Unsupervised deep learning approaches have recently become one of the crucial
research areas in imaging owing to their ability to learn expressive and
powerful reconstruction operators even when paired high-quality training data
is scarce. In this chapter, we review theoretically principled
unsupervised learning schemes for solving imaging inverse problems, with a
particular focus on methods rooted in optimal transport and convex analysis. We
begin by reviewing the optimal transport-based unsupervised approaches such as
the cycle-consistency-based models and learned adversarial regularization
methods, which have clear probabilistic interpretations. Subsequently, we give
an overview of a recent line of works on provably convergent learned
optimization algorithms applied to accelerate the solution of imaging inverse
problems, alongside their dedicated unsupervised training schemes. We also
survey a number of provably convergent plug-and-play algorithms (based on
gradient-step deep denoisers), which are among the most important and widely
applied unsupervised approaches for imaging problems. At the end of this
survey, we provide an overview of a few related unsupervised learning
frameworks that complement the schemes we focus on. Alongside the detailed
survey, we summarize the key mathematical results underlying the methods
reviewed in the chapter, keeping the discussion self-contained.
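To make the reconstruction side of these schemes concrete, the sketch below shows how a learned regularizer is typically used once trained: it is inserted into a variational objective and minimized by gradient descent. The regularizer gradient here is a hand-coded smoothed l1 stand-in, not a trained adversarial network, and the forward operator, weights, and step size are illustrative assumptions.

```python
import numpy as np

def fidelity_grad(A, x, y):
    # Gradient of the data fidelity 0.5 * ||A x - y||^2.
    return A.T @ (A @ x - y)

def regularizer_grad(x, eps=1e-2):
    # Hand-coded stand-in for grad R_theta(x): a smoothed l1 (pseudo-Huber) penalty.
    return x / np.sqrt(x ** 2 + eps ** 2)

def variational_reconstruct(A, y, lam=0.1, step=0.05, n_iter=500):
    # Gradient descent on 0.5 * ||A x - y||^2 + lam * R(x), the variational
    # problem into which a trained adversarial regularizer would be plugged.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (fidelity_grad(A, x, y) + lam * regularizer_grad(x))
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)           # underdetermined forward operator
x_true = np.zeros(100); x_true[::10] = 1.0             # sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = variational_reconstruct(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```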
Provably Convergent Plug-and-Play Quasi-Newton Methods
Plug-and-Play (PnP) methods are a class of efficient iterative methods that
aim to combine data fidelity terms and deep denoisers using classical
optimization algorithms, such as ISTA or ADMM, with applications in inverse
problems and imaging. Provable PnP methods are a
subclass of PnP methods with convergence guarantees, such as fixed point
convergence or convergence to critical points of some energy function. Many
existing provable PnP methods impose heavy restrictions on the denoiser or
fidelity function, such as non-expansiveness or strict convexity, respectively.
In this work, we propose a novel algorithmic approach incorporating
quasi-Newton steps into a provable PnP framework based on proximal denoisers,
resulting in greatly accelerated convergence while retaining light assumptions
on the denoiser. By characterizing the denoiser as the proximal operator of a
weakly convex function, we show that the fixed points of the proposed
quasi-Newton PnP algorithm are critical points of a weakly convex function.
Numerical experiments on image deblurring and super-resolution demonstrate
2--8x faster convergence compared to other provable PnP methods with similar
reconstruction quality.
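For orientation, the following sketch shows the basic PnP-ISTA template that such methods build on: a gradient step on the data fidelity followed by a denoiser applied in place of a proximal operator. The soft-thresholding denoiser is a simple stand-in for a deep proximal denoiser, the quasi-Newton acceleration of the paper is omitted, and problem sizes and step sizes are illustrative.

```python
import numpy as np

def denoiser(x, tau=0.05):
    # Stand-in denoiser: soft-thresholding, which is exactly the proximal
    # operator of tau * ||.||_1, so this instance reduces to classical ISTA.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pnp_ista(A, y, step=None, n_iter=200):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L for the data fidelity
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # gradient of 0.5 * ||A x - y||^2
        x = denoiser(x - step * grad)                  # denoiser plays the prox role
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 120)) / np.sqrt(60)
x_true = np.zeros(120); x_true[rng.choice(120, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=60)
print("residual:", np.linalg.norm(A @ pnp_ista(A, y) - y))
```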
Data-Driven Mirror Descent with Input-Convex Neural Networks
Learning-to-optimize is an emerging framework that seeks to speed up the
solution of certain optimization problems by leveraging training data. Learned
optimization solvers have been shown to outperform classical optimization
algorithms in terms of convergence speed, especially for convex problems. Many
existing data-driven optimization methods are based on parameterizing the
update step and learning the optimal parameters (typically scalars) from the
available data. We propose a novel functional parameterization approach for
learned convex optimization solvers based on the classical mirror descent (MD)
algorithm. Specifically, we seek to learn the optimal Bregman distance in MD by
modeling the underlying convex function using an input-convex neural network
(ICNN). The parameters of the ICNN are learned by minimizing the target
objective function evaluated at the MD iterate after a predetermined number of
iterations. The inverse of the mirror map is modeled approximately using
another neural network, as the exact inverse is intractable to compute. We
derive convergence rate bounds for the proposed learned mirror descent (LMD)
approach with an approximate inverse mirror map and perform extensive numerical
evaluation on various convex problems such as image inpainting, denoising, and
training a two-class support vector machine (SVM) classifier and a multi-class
linear classifier on fixed features.
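As a structural reference point, the sketch below runs classical mirror descent with the fixed negative-entropy mirror map on the probability simplex (the exponentiated-gradient update). In LMD this hand-picked map would be replaced by an ICNN-parameterized convex function learned from data, with a second network approximating the inverse map; the quadratic test objective and step size here are illustrative.

```python
import numpy as np

def mirror_descent_simplex(grad_f, x0, step=0.1, n_iter=500):
    x = x0.copy()
    for _ in range(n_iter):
        # Forward mirror map + gradient step in the dual space, then inverse map:
        # for the negative-entropy map this collapses to a multiplicative update.
        x = x * np.exp(-step * grad_f(x))
        x /= x.sum()                                   # back onto the simplex
    return x

# Illustrative problem: minimise f(x) = 0.5 * ||Q x - b||^2 over the simplex.
rng = np.random.default_rng(2)
Q = rng.normal(size=(5, 5)); b = rng.normal(size=5)
grad_f = lambda x: Q.T @ (Q @ x - b)
x0 = np.full(5, 1.0 / 5)
x_star = mirror_descent_simplex(grad_f, x0)
print("solution:", np.round(x_star, 3), "objective:", 0.5 * np.sum((Q @ x_star - b) ** 2))
```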
Robust Data-Driven Accelerated Mirror Descent
Learning-to-optimize is an emerging framework that leverages training data to
speed up the solution of certain optimization problems. One such approach is
based on the classical mirror descent algorithm, where the mirror map is
modelled using input-convex neural networks. In this work, we extend this
functional parameterization approach by introducing momentum into the
iterations, based on the classical accelerated mirror descent. Our approach
combines short-time accelerated convergence with stable long-time behavior. We
empirically demonstrate additional robustness with respect to multiple
parameters on denoising and deconvolution experiments.
Comment: Note the inconsistency with the ICASSP paper for the step-size choice in (4c) and
the associated Alg. 1; this version is correct with step-size kt/
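To illustrate how momentum enters the iteration, the sketch below adds a Nesterov-style extrapolation step to the mirror descent template; with the Euclidean mirror map the accelerated scheme reduces to accelerated gradient descent, which is what is shown. The paper instead couples momentum with the learned ICNN mirror map and a specific step-size schedule, so function names, the test objective, and parameters here are illustrative assumptions only.

```python
import numpy as np

def accelerated_md_euclidean(grad_f, x0, step, n_iter=200):
    x, x_prev = x0.copy(), x0.copy()
    for k in range(1, n_iter + 1):
        momentum = (k - 1) / (k + 2)                   # standard Nesterov momentum weight
        y = x + momentum * (x - x_prev)                # extrapolation (momentum) step
        x_prev, x = x, y - step * grad_f(y)            # mirror/gradient step at the extrapolated point
    return x

# Illustrative unconstrained quadratic objective f(x) = 0.5 * ||Q x - b||^2.
rng = np.random.default_rng(3)
Q = rng.normal(size=(5, 5)); b = rng.normal(size=5)
grad_f = lambda x: Q.T @ (Q @ x - b)
step = 1.0 / np.linalg.norm(Q, 2) ** 2                 # 1/L for this objective
x_star = accelerated_md_euclidean(grad_f, np.zeros(5), step)
print("objective:", 0.5 * np.sum((Q @ x_star - b) ** 2))
```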