A population-based stochastic coordinate descent method
This paper addresses the problem of solving a bound-constrained global optimization problem by a population-based stochastic coordinate descent method. To improve efficiency, a small subpopulation of points is randomly selected from the original population at each iteration. The coordinate descent directions are based on the gradient computed at a special point of the subpopulation. This point can be the best point, the center point, or the point with the highest score. Preliminary numerical experiments are carried out to compare the performance of the tested variants. Based on the results obtained on the selected problems, we may conclude that the variants based on the point with the highest score are the most robust, while the variants based on the best point are the least robust, although the latter win on efficiency, but only on the simpler, easier-to-solve problems.

This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the Projects Scope: UID/CEC/00319/2019 and UID/MAT/00013/2013.
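As a concrete illustration of the loop described in the abstract, the following is a minimal sketch in Python. It assumes the "best point" variant of the special point, a forward-difference gradient, and hypothetical parameter names (pop_size, subpop_size, step); it is a sketch of the idea, not the authors' implementation.

```python
# Minimal sketch of the population-based stochastic coordinate descent idea,
# using the "best point" variant of the special point. Parameter names and
# the forward-difference gradient are illustrative assumptions.
import numpy as np

def finite_diff_grad(f, x, h=1e-6):
    """Forward-difference estimate of the gradient of f at x."""
    fx, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        xh = x.copy()
        xh[i] += h
        g[i] = (f(xh) - fx) / h
    return g

def pop_stochastic_cd(f, lb, ub, pop_size=30, subpop_size=5,
                      step=0.1, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(pop_size, len(lb)))  # population inside the box
    for _ in range(n_iters):
        idx = rng.choice(pop_size, size=subpop_size, replace=False)
        sub = pop[idx]                                   # random subpopulation
        best = sub[np.argmin([f(x) for x in sub])]       # special point: best in subpopulation
        g = finite_diff_grad(f, best)
        j = rng.integers(len(lb))                        # one random coordinate direction
        pop[idx, j] = np.clip(sub[:, j] - step * np.sign(g[j]), lb[j], ub[j])
    vals = np.array([f(x) for x in pop])
    return pop[np.argmin(vals)], vals.min()

# Example: minimize the sphere function on the box [-5, 5]^2
x_best, f_best = pop_stochastic_cd(lambda x: float(np.sum(x**2)),
                                   lb=np.array([-5.0, -5.0]),
                                   ub=np.array([5.0, 5.0]))
```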
Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks.

The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization.

On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these heuristics are often poorly understood in terms of their convergence and other properties.
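Projected gradient descent, one of the heuristics named above, alternates a gradient step with a projection back onto the constraint set. Below is a minimal sketch for a sparse least-squares instance, where the projection onto s-sparse vectors is hard thresholding; the function names, problem setup, and step-size choice are illustrative assumptions, not taken from the monograph.

```python
# Minimal sketch of projected gradient descent (PGD) for sparse least squares.
# Projection onto s-sparse vectors = hard thresholding (keep s largest entries).
# All names, the example data, and the step-size choice are illustrative assumptions.
import numpy as np

def project_sparse(x, s):
    """Euclidean projection onto the set of s-sparse vectors."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]  # indices of the s largest-magnitude entries
    z[keep] = x[keep]
    return z

def projected_gradient_descent(grad, x0, s, step, n_iters=200):
    x = project_sparse(x0, s)
    for _ in range(n_iters):
        x = project_sparse(x - step * grad(x), s)  # gradient step, then project
    return x

# Example: recover a 5-sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
grad = lambda x: A.T @ (A @ x - b)        # gradient of 0.5 * ||Ax - b||^2
step = 1.0 / np.linalg.norm(A, 2) ** 2    # stay below 1/L for a stable iteration
x_hat = projected_gradient_descent(grad, np.zeros(100), s=5, step=step)
```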
This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. It leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.

Comment: the official publication is available from now publishers via http://dx.doi.org/10.1561/220000005