(Global) Optimization: Historical notes and recent developments
This paper surveys recent developments in (Global) Optimization. We have collected and commented on a large number of recent references which, in our opinion, represent well the vitality, depth, and breadth of scope of current computational approaches and theoretical results on nonconvex optimization problems. Before presenting the recent developments, which are subdivided into two parts devoted to heuristic and exact approaches, respectively, we briefly sketch the origin of the discipline and observe what survived from the initial attempts, what was not considered at all, as well as a few approaches that have recently been rediscovered, mostly in connection with machine learning.
On globally solving nonconvex trust region subproblem via projected gradient method
The trust region subproblem (TRS) is to minimize a possibly nonconvex quadratic function over a Euclidean ball. There are typically two cases for (TRS), the so-called "easy case" and "hard case". Even in the "easy case", the sequence generated by the classical projected gradient method (PG) may converge to a saddle point at a sublinear local rate when the initial point is selected arbitrarily from a feasible set of nonzero measure. To our surprise, when (PG) is applied to a cheap and possibly nonconvex reformulation of (TRS), the generated sequence, initialized with any feasible point, almost always converges to the global minimizer. The local convergence rate is at least linear in the "easy case", without assuming prior knowledge that the "easy case" holds. We also consider how to use (PG) to globally solve the equality-constrained (TRS).
Comment: 19 pages, 3 figures
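As a concrete illustration of the classical (PG) iteration the abstract refers to, here is a minimal sketch in Python of projected gradient descent on a quadratic over a Euclidean ball. This is only the textbook baseline, not the paper's reformulation; the function name, the 1/L step-size choice, and the stopping rule are assumptions for illustration.

```python
import numpy as np

def projected_gradient_trs(A, b, radius, x0, step=None, iters=500, tol=1e-8):
    """Minimize f(x) = 0.5 x'Ax + b'x over ||x|| <= radius by projected gradient.

    Illustrative sketch of the classical (PG) method; step = 1/L with
    L = ||A||_2 (a Lipschitz constant of the gradient) is one standard choice.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)
    x = x0.astype(float).copy()
    for _ in range(iters):
        grad = A @ x + b              # gradient of the quadratic
        y = x - step * grad           # gradient step
        n = np.linalg.norm(y)
        if n > radius:                # project back onto the ball
            y *= radius / n
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    return x
```

On a convex instance the iterates reach the global minimizer, whether it lies in the interior of the ball or on its boundary; the nonconvex behavior discussed in the abstract arises when A has negative eigenvalues.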
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, together with information on the social program, the venue, special meetings, and more.
Mixed-integer Nonlinear Optimization: a hatchery for modern mathematics
The second MFO Oberwolfach Workshop on Mixed-Integer Nonlinear Programming (MINLP) took place from 2 to 8 June 2019. MINLP refers to one of the hardest Mathematical Programming (MP) problem classes, involving nonlinear functions as well as both continuous and integer decision variables. MP is a formal language for describing optimization problems, and is traditionally part of Operations Research (OR), which is itself at the intersection of mathematics, computer science, engineering, and econometrics. The scientific program covered the three announced areas (hierarchies of approximation, mixed-integer nonlinear optimal control, and dealing with uncertainties) with a variety of tutorials, talks, short research announcements, and a special "open problems" session.
An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
We propose a first-order method to solve the cubic regularization subproblem (CRS) based on a novel reformulation. The reformulation is a constrained convex optimization problem whose feasible region admits an easily computable projection. The reformulation requires computing the minimum eigenvalue of the Hessian; to avoid this expensive exact computation, we develop a surrogate problem in which the exact minimum eigenvalue is replaced with an approximate one. We then apply first-order methods such as Nesterov's accelerated projected gradient method (APG) and the projected Barzilai-Borwein method to solve the surrogate problem. As our main theoretical contribution, we show that when an ε-approximate minimum eigenvalue is computed by the Lanczos method and the surrogate problem is approximately solved by APG, our approach returns an ε-approximate solution to CRS in Õ(ε^(-1/2)) matrix-vector multiplications (where Õ hides the logarithmic factors). Numerical experiments show that our methods are comparable to and outperform the Krylov subspace method in the easy and hard cases, respectively. We further implement our methods as subproblem solvers of adaptive cubic regularization methods, and numerical results show that our algorithms are comparable to state-of-the-art algorithms.
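For reference, the (CRS) objective can be sketched together with a naive first-order baseline. The plain gradient descent below is only an illustrative stand-in, not the APG-on-reformulation scheme proposed in the abstract; the function names, step size, and iteration budget are assumptions for illustration.

```python
import numpy as np

def crs_objective(x, g, H, sigma):
    """Cubic regularization subproblem objective:
    m(x) = g'x + 0.5 x'Hx + (sigma/3) ||x||^3."""
    return g @ x + 0.5 * x @ H @ x + (sigma / 3.0) * np.linalg.norm(x) ** 3

def crs_gradient_descent(g, H, sigma, iters=2000, step=1e-2):
    """Plain gradient descent on m(x), as a simple baseline.

    Uses grad m(x) = g + Hx + sigma * ||x|| * x; a fixed small step size
    is assumed here purely for illustration.
    """
    x = np.zeros_like(g, dtype=float)
    for _ in range(iters):
        grad = g + H @ x + sigma * np.linalg.norm(x) * x
        x = x - step * grad
    return x
```

In one dimension with H = [[1]], g = [-1], and sigma = 1, the first-order condition x + x|x| = 1 gives the minimizer x* = (sqrt(5) - 1)/2, which the iteration above recovers; practical solvers such as the proposed method or the Krylov subspace method aim to reach comparable accuracy in far fewer Hessian-vector products.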
A dual framework for low-rank tensor completion
One of the popular approaches to low-rank tensor completion is latent trace norm regularization. However, most existing works in this direction learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that helps in learning a non-sparse combination of tensors. We develop a dual framework for solving the low-rank tensor completion problem. We first show a novel characterization of the dual solution space with an interesting factorization of the optimal solution. Overall, the optimal solution is shown to lie on a Cartesian product of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. The experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications.
Comment: Accepted to appear in Advances in Neural Information Processing Systems (NIPS), 2018. A shorter version appeared in the NIPS workshop on Synergies in Geometric Data Analysis 201