On the asymptotic optimality of error bounds for some linear complementarity problems
We introduce strong B-matrices and strong B-Nekrasov matrices, for which some error bounds for linear complementarity problems are analyzed. In particular, it is proved that the bounds of García-Esnaola and Peña (Appl. Math. Lett. 22, 1071–1075, 2009) and of (Numer. Algor. 72, 435–445, 2016) are asymptotically optimal for strong B-matrices and strong B-Nekrasov matrices, respectively. Other comparisons with a bound of Li and Li (Appl. Math. Lett. 57, 108–113, 2016) are performed.
A regularization method for ill-posed bilevel optimization problems
We present a regularization method to approach a solution of the pessimistic
formulation of ill-posed bilevel problems. This allows us to overcome the
difficulty arising from the non-uniqueness of the lower-level problems'
solutions and responses. We prove existence of approximate solutions and give
a convergence result using Hoffman-like assumptions. We end with
objective-value error estimates.
Non-Asymptotic Convergence Analysis of Inexact Gradient Methods for Machine Learning Without Strong Convexity
Many recent applications in machine learning and data fitting call for the
algorithmic solution of structured smooth convex optimization problems.
Although the gradient descent method is a natural choice for this task, it
requires exact gradient computations and hence can be inefficient when the
problem size is large or the gradient is difficult to evaluate. Therefore,
there has been much interest in inexact gradient methods (IGMs), in which an
efficiently computable approximate gradient is used to perform the update in
each iteration. Currently, non-asymptotic linear convergence results for IGMs
are typically established under the assumption that the objective function is
strongly convex, which is not satisfied in many applications of interest; while
linear convergence results that do not require the strong convexity assumption
are usually asymptotic in nature. In this paper, we combine the best of these
two types of results and establish---under the standard assumption that the
gradient approximation errors decrease linearly to zero---the non-asymptotic
linear convergence of IGMs when applied to a class of structured convex
optimization problems. Such a class covers settings where the objective
function is not necessarily strongly convex and includes the least squares and
logistic regression problems. We believe that our techniques will find further
applications in the non-asymptotic convergence analysis of other first-order
methods.
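To make the setting concrete, the following is a minimal sketch of an inexact gradient method on a least-squares problem, where each gradient is corrupted by an error whose norm decreases linearly (i.e., geometrically) to zero, as in the abstract's assumption. All names, the decay rate, and the noise model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Least-squares objective f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n)
L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
step = 1.0 / L
rho = 0.5                        # assumed geometric decay rate of the error
eps = 1.0                        # assumed initial gradient-error magnitude

for k in range(200):
    exact_grad = A.T @ (A @ x - b)
    # Perturb the gradient by an error with norm eps * rho**k -> 0.
    noise = rng.standard_normal(n)
    noise *= eps * rho**k / np.linalg.norm(noise)
    x = x - step * (exact_grad + noise)

# Compare against the exact least-squares solution.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_star))
```

Because the gradient errors are summable and vanish geometrically, the iterates still approach the least-squares solution, which is the kind of behavior the non-asymptotic analysis quantifies.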