Linear Programming with Inequality Constraints via Entropic Perturbation
A dual convex programming approach to solving linear programs with inequality constraints through entropic perturbation is derived. The amount of perturbation required depends on the desired accuracy of the optimum. The dual program contains only non-positivity constraints. An ϵ-optimal solution to the linear program can be obtained effortlessly from the optimal solution of the dual program. Since cross-entropy minimization subject to linear inequality constraints is a special case of the perturbed linear program, the duality result becomes readily applicable. Many standard constrained optimization techniques can be specialized to solve the dual program. Such specializations, made possible by the simplicity of the constraints, significantly reduce the computational effort usually incurred by these methods. Immediate applications of the theory developed include an entropic path-following approach to solving linear semi-infinite programs with an infinite number of inequality constraints and the widely used entropy optimization models with linear inequality and/or equality constraints.
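As a minimal sketch of the underlying idea (not the paper's dual algorithm): for a toy LP over the probability simplex, adding the entropic perturbation μ Σ x_j log x_j makes the problem strictly convex, and first-order conditions give a closed-form "softmin" solution that approaches the LP optimum as μ shrinks. The instance, the parameter name `mu`, and the simplex constraint are illustrative assumptions.

```python
import numpy as np

# Toy LP over the simplex: min c^T x  s.t.  sum(x) = 1, x >= 0.
# With the entropic perturbation  mu * sum(x_j * log(x_j))  added to the
# objective, the stationarity conditions  c_j + mu*(log x_j + 1) + lam = 0
# yield the closed form  x_j = exp(-c_j/mu) / sum_k exp(-c_k/mu),
# which tends to the LP vertex solution as mu -> 0.
c = np.array([1.0, 2.0, 3.0])

def perturbed_solution(mu):
    w = np.exp(-(c - c.min()) / mu)   # shift by min(c) for numerical stability
    return w / w.sum()

for mu in (1.0, 0.1, 0.01):
    x = perturbed_solution(mu)
    print(mu, x, c @ x)               # objective approaches min(c) = 1 as mu -> 0
```

For `mu = 0.01` the solution is ε-optimal: essentially all mass sits on the cheapest coordinate, matching the abstract's claim that an ϵ-optimal solution is recovered once the perturbation is small enough.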
On the finite termination of an entropy function based smoothing Newton method for vertical linear complementarity problems
By using a smooth entropy function to approximate the non-smooth max-type function, a vertical linear complementarity problem (VLCP) can be treated as a family of parameterized smooth equations. A Newton-type method with a testing procedure is proposed to solve such a system. We show that the proposed algorithm finds an exact solution of the VLCP in a finite number of iterations, under conditions milder than those assumed in the literature. Some computational results are included to illustrate the potential of this approach.
Keywords: Newton method; Finite termination; Entropy function; Smoothing approximation; Vertical linear complementarity problems
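The smoothing step can be illustrated with the standard log-sum-exp (entropy) approximation of the max function; the specific entropy function, parameter name `mu`, and sample values below are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def smooth_max(z, mu):
    """Entropy (log-sum-exp) smoothing of max(z).

    Satisfies  max(z) <= mu*log(sum_j exp(z_j/mu)) <= max(z) + mu*log(n),
    so the approximation error vanishes as the parameter mu -> 0.
    """
    z = np.asarray(z, dtype=float)
    m = z.max()                        # shift by max(z) for numerical stability
    return m + mu * np.log(np.exp((z - m) / mu).sum())

z = [0.3, -1.2, 0.29]
for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, smooth_max(z, mu))       # tends to max(z) = 0.3 as mu -> 0
```

Because `smooth_max` is infinitely differentiable for `mu > 0`, the non-smooth equations defining the VLCP become a family of smooth equations parameterized by `mu`, which is what makes a Newton-type method applicable.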
Convergence of Entropic Schemes for Optimal Transport and Gradient Flows
Replacing positivity constraints by an entropy barrier is popular to
approximate solutions of linear programs. In the special case of the optimal
transport problem, this technique dates back to the early work of
Schr\"odinger. This approach has recently been used successfully to solve
optimal transport related problems in several applied fields such as imaging
sciences, machine learning and social sciences. The main reason for this
success is that, in contrast to linear programming solvers, the resulting
algorithms are highly parallelizable and take advantage of the geometry of the
computational grid (e.g. an image or a triangulated mesh). The first
contribution of this article is the proof of the Γ-convergence of the
entropic regularized optimal transport problem towards the Monge-Kantorovich
problem for the squared Euclidean norm cost function. This implies in
particular the convergence of the optimal entropic regularized transport plan
towards an optimal transport plan as the entropy vanishes. Optimal transport
distances are also useful to define gradient flows as a limit of implicit Euler
steps according to the transportation distance. Our second contribution is a
proof that implicit steps according to the entropic regularized distance
converge towards the original gradient flow when both the step size and the
entropic penalty vanish (in some controlled way).
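A standard concrete instance of this entropy-barrier approach is Sinkhorn's matrix-scaling algorithm for the entropically regularized transport problem. The sketch below, on a toy 1-D instance with the squared Euclidean cost from the abstract, is illustrative only; the iteration counts and the regularization values are assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=2000):
    """Entropic regularization of optimal transport via Sinkhorn iterations.

    Approximately solves  min_P <P, C> + eps * H-penalty  subject to
    P having row marginals a and column marginals b, by alternately
    rescaling the rows and columns of the Gibbs kernel K = exp(-C/eps).
    """
    K = np.exp(-C / eps)                   # Gibbs kernel (entropy barrier)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)                    # match row marginals
        v = b / (K.T @ u)                  # match column marginals
    return u[:, None] * K * v[None, :]     # transport plan P = diag(u) K diag(v)

# Toy 1-D example: same source and target points, squared Euclidean cost,
# so the unregularized optimal plan is the identity coupling with cost 0.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2
a = b = np.full(5, 0.2)
for eps in (1.0, 0.1, 0.01):
    P = sinkhorn(a, b, C, eps)
    print(eps, np.sum(P * C))              # transport cost shrinks as eps -> 0
```

The shrinking cost as `eps` decreases mirrors the convergence result in the abstract: the entropic regularized plan approaches an optimal transport plan as the entropy vanishes. Each iteration is a pair of matrix-vector products, which is the source of the parallelizability the abstract mentions.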
On the convergence of mirror descent beyond stochastic convex programming
In this paper, we examine the convergence of mirror descent in a class of
stochastic optimization problems that are not necessarily convex (or even
quasi-convex), and which we call variationally coherent. Since the standard
technique of "ergodic averaging" offers no tangible benefits beyond convex
programming, we focus directly on the algorithm's last generated sample (its
"last iterate"), and we show that it converges with probability 1 if the
underlying problem is coherent. We further consider a localized version of
variational coherence which ensures local convergence of stochastic mirror
descent (SMD) with high probability. These results contribute to the landscape
of non-convex stochastic optimization by showing that (quasi-)convexity is not
essential for convergence to a global minimum: rather, variational coherence, a
much weaker requirement, suffices. Finally, building on the above, we reveal an
interesting insight regarding the convergence speed of SMD: in problems with
sharp minima (such as generic linear programs or concave minimization
problems), SMD reaches a minimum point in a finite number of steps (a.s.), even
in the presence of persistent gradient noise. This result is to be contrasted
with existing black-box convergence rate estimates that are only asymptotic.
Comment: 30 pages, 5 figures
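To make the sharp-minimum setting concrete: for a linear objective over the probability simplex (whose minimum sits at a vertex), mirror descent with the entropic mirror map reduces to exponentiated-gradient updates. The toy sketch below, with an assumed noise level, step size, and iteration count, illustrates the last iterate locking onto the minimizing vertex despite persistent gradient noise; it is not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# min <c, x> over the simplex: a linear program whose minimum is the
# sharp vertex e_1 (first coordinate has the smallest cost).
c = np.array([1.0, 2.0, 3.0])

x = np.full(3, 1.0 / 3.0)             # start at the barycenter
eta = 0.1                             # step size (illustrative choice)
for t in range(2000):
    g = c + rng.normal(0.0, 0.5, 3)   # noisy gradient: persistent noise
    x = x * np.exp(-eta * g)          # entropic-mirror (exponentiated-gradient) step
    x /= x.sum()                      # renormalize back onto the simplex

print(x)                              # last iterate concentrates on e_1
```

Even though every gradient sample is corrupted by noise, the multiplicative updates drive the log-ratios log(x_1/x_j) up linearly in t while the noise only contributes a random walk of order √t, so the last iterate, not an ergodic average, ends up at the sharp minimum.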