RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates
Proximal splitting algorithms are well suited to solving large-scale
nonsmooth optimization problems, in particular those arising in machine
learning. We propose a new primal-dual algorithm, in which the dual update is
randomized; equivalently, the proximity operator of one of the functions in the
problem is replaced by a stochastic oracle. For instance, only some randomly
chosen dual variables, instead of all of them, are updated at each iteration;
or the proximity operator of a function is called only with some small
probability. A
nonsmooth variance-reduction technique is implemented so that the algorithm
finds an exact minimizer of the general problem involving smooth and nonsmooth
functions, possibly composed with linear operators. We derive linear
convergence results in the presence of strong convexity; these results are new
even in the deterministic case, in which our algorithm reverts to the recently
proposed Primal-Dual Davis-Yin algorithm. Some randomized algorithms from the
literature are also recovered as particular cases (e.g., Point-SAGA), but our
randomization technique is general and encompasses many unbiased mechanisms
beyond sampling and probabilistic updates, including compression. Since the
convergence speed depends on the slowest among the primal and dual contraction
mechanisms, the iteration complexity might remain the same when randomness is
used. On the other hand, the computational complexity can be significantly
reduced. Overall, randomness helps in obtaining faster algorithms. This has
long been known for stochastic-gradient-type algorithms, and our work shows
that this fully applies in the more general primal-dual setting as well.
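As a toy illustration of where this randomization enters, the sketch below runs
a Chambolle-Pock-style primal-dual iteration for a small total-variation
denoising problem in which the dual proximal step is executed only with
probability p. The problem, operator, and step sizes are illustrative
assumptions, and this naive version omits the paper's variance-reduction
correction, so it is not the RandProx algorithm itself.

import numpy as np

rng = np.random.default_rng(0)

n = 100
b = np.cumsum(rng.standard_normal(n))   # noisy 1-D signal to denoise (toy data)
D = np.diff(np.eye(n), axis=0)          # finite-difference operator, (n-1) x n
lam = 0.5                               # weight of the l1 penalty (assumed)
p = 0.3                                 # probability of calling the dual prox
tau = sigma = 0.9 / np.linalg.norm(D, 2)  # steps with tau*sigma*||D||^2 < 1

x = np.zeros(n)                         # primal variable
y = np.zeros(n - 1)                     # dual variable
x_bar = x.copy()                        # extrapolated primal point

for _ in range(2000):
    if rng.random() < p:
        # Dual prox called only with probability p: projection onto
        # [-lam, lam]^(n-1), the prox of the conjugate of lam*||.||_1.
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
    # Primal prox of 0.5*||x - b||^2, available in closed form.
    x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)
    x_bar = 2 * x_new - x               # extrapolation step
    x = x_new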
Reflection methods for user-friendly submodular optimization
Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is a need for efficient optimization procedures for
submodular functions, especially for minimization problems. While general
submodular minimization is challenging, we propose a new method that exploits
existing decomposability of submodular functions. In contrast to previous
approaches, our method is neither approximate nor impractical, nor does it
require any cumbersome parameter tuning. Moreover, it is easy to implement and
parallelize. A key component of our method is a formulation of the discrete
submodular minimization problem as a continuous best-approximation problem
that is solved through a sequence of reflections; its solution can easily be
thresholded to obtain an optimal discrete solution. This method solves both the
continuous and discrete formulations of the problem, and therefore has
applications in learning, inference, and reconstruction. In our experiments, we
illustrate the benefits of our method on two image segmentation tasks.

Comment: Neural Information Processing Systems (NIPS), United States (2013)
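The reflection pattern can be sketched generically. Below is a minimal
Douglas-Rachford "reflect, reflect, average" iteration that finds a point in
the intersection of two convex sets using only their projections; the two toy
sets in R^2 are assumptions for illustration, whereas the paper applies such
reflections to polyhedra derived from the decomposition of the submodular
function and thresholds the resulting continuous solution.

import numpy as np

def project_ball(x, center, radius):
    # Euclidean projection onto a closed ball.
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + radius * d / nd

def project_halfspace(x, a, b):
    # Euclidean projection onto the half-space {z : <a, z> <= b}.
    viol = a @ x - b
    return x.copy() if viol <= 0 else x - viol * a / (a @ a)

def reflect(proj, x):
    # Reflection operator R = 2P - I associated with a projection P.
    return 2 * proj(x) - x

P1 = lambda z: project_ball(z, np.array([0.0, 0.0]), 1.0)       # toy set 1
P2 = lambda z: project_halfspace(z, np.array([1.0, 1.0]), 0.5)  # toy set 2

z = np.array([3.0, -2.0])
for _ in range(200):
    # Douglas-Rachford step: reflect in set 1, reflect in set 2, average.
    z = 0.5 * (z + reflect(P2, reflect(P1, z)))

x = P1(z)   # the "shadow" point lies in the intersection of the two sets
print(x)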