Adaptive Alternating Minimization Algorithms
The classical alternating minimization (or projection) algorithm has been
successful in the context of solving optimization problems over two variables.
The iterative nature and simplicity of the algorithm have led to its application
in many areas, such as signal processing, information theory, control, and
finance. A general set of sufficient conditions for the convergence and
correctness of the algorithm is quite well-known when the underlying problem
parameters are fixed. In many practical situations, however, the underlying
problem parameters are changing over time, and the use of an adaptive algorithm
is more appropriate. In this paper, we study such an adaptive version of the
alternating minimization algorithm. As a main result of this paper, we provide
a general set of sufficient conditions for the convergence and correctness of
the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the
minimal ones one would expect in such an adaptive setting. We present
applications of our results to adaptive decomposition of mixtures, adaptive
log-optimal portfolio selection, and adaptive filter design.
Comment: 12 pages, to appear in IEEE Transactions on Information Theory
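The two-block scheme described in this abstract can be illustrated with a minimal sketch. The objective below is a hypothetical toy function chosen for illustration (it is not from the paper): f(x, y) = (x - 1)^2 + (y - 2)^2 + (x - y)^2, for which each block minimization has a closed form.

```python
def alternating_minimization(n_iter=100):
    """Minimize f(x, y) = (x-1)^2 + (y-2)^2 + (x-y)^2 by alternating
    exact minimization over each variable with the other held fixed."""
    x, y = 0.0, 0.0
    for _ in range(n_iter):
        # df/dx = 2(x-1) + 2(x-y) = 0  =>  x = (1 + y) / 2
        x = (1 + y) / 2
        # df/dy = 2(y-2) + 2(y-x) = 0  =>  y = (2 + x) / 2
        y = (2 + x) / 2
    return x, y

x, y = alternating_minimization()  # converges to (4/3, 5/3)
```

Each sweep contracts toward the joint minimizer (here at a geometric rate), which is the behavior the paper's sufficient conditions guarantee in the fixed-parameter setting; the adaptive version lets the objective itself drift between sweeps.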
On accelerated alternating minimization
Alternating minimization (AM) optimization algorithms have been known for a long time and are of importance in machine learning problems, among which we are mostly motivated by approximating optimal transport distances. AM algorithms assume that the decision variable is divided into several blocks and minimization in each block can be done explicitly or cheaply with high accuracy. The ubiquitous Sinkhorn's algorithm can be seen as an alternating minimization algorithm for the dual to the entropy-regularized optimal transport problem. We introduce an accelerated alternating minimization method with a O(1/k^2) convergence rate, where k is the iteration counter. This improves over the known O(1/k) bound for general AM methods and for Sinkhorn's algorithm. Moreover, our algorithm converges faster than gradient-type methods in practice as it is free of the choice of the step-size and is adaptive to the local smoothness of the problem. We show that the proposed method is primal-dual, meaning that if we apply it to a dual problem, we can reconstruct the solution of the primal problem with the same convergence rate. We apply our method to the entropy-regularized optimal transport problem and show experimentally that it outperforms Sinkhorn's algorithm.
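Sinkhorn's algorithm, which this abstract casts as alternating minimization on the dual of the entropy-regularized problem, amounts to alternately rescaling the rows and columns of a kernel matrix to match the two marginals. A minimal sketch (a standard textbook formulation, not the paper's accelerated variant; the marginals `a`, `b` and cost matrix `C` below are illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    """Entropy-regularized OT via Sinkhorn iterations.
    Each half-step is an exact minimization over one dual block."""
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # match column marginals exactly
        u = a / (K @ v)           # match row marginals exactly
    return u[:, None] * K * v[None, :]  # transport plan

# Toy problem: uniform marginals, absolute-difference cost
a = np.ones(3) / 3
b = np.ones(3) / 3
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

After the final row update the plan's row sums equal `a` exactly, while the column sums converge to `b` as the iterations proceed; the paper's accelerated method improves the rate at which such dual alternations converge.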