A weakly convergent fully inexact Douglas-Rachford method with relative error tolerance
The Douglas-Rachford method is a splitting algorithm for finding a zero of the
sum of two maximal monotone operators. Each of its iterations requires the
sequential solution of two proximal subproblems. The aim of this work is to
present a fully inexact version of the Douglas-Rachford method wherein both
proximal subproblems are solved approximately within a relative error
tolerance. We also present a semi-inexact variant in which the first subproblem
is solved exactly and the second one inexactly. We prove that both methods
generate sequences that converge weakly to a solution of the underlying
inclusion problem, whenever one exists.
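The iteration described above can be sketched on the simplest special case, a convex feasibility problem, where both proximal subproblems reduce to projections. The sets and starting point below are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of Douglas-Rachford splitting for a convex feasibility
# problem (a special case in which both proximal operators are projections).
# Sets: A = [0, 2] and B = [1, 3]; their intersection is [1, 2].

def proj_A(x):          # proximal operator of the indicator of A = [0, 2]
    return min(max(x, 0.0), 2.0)

def proj_B(x):          # proximal operator of the indicator of B = [1, 3]
    return min(max(x, 1.0), 3.0)

def douglas_rachford(y, iters=100):
    for _ in range(iters):
        p = proj_A(y)                  # first proximal subproblem
        q = proj_B(2.0 * p - y)        # second subproblem, on the reflection
        y = y + q - p                  # governing-sequence update
    return proj_A(y)                   # shadow sequence yields the solution

x_star = douglas_rachford(5.0)         # lands in the intersection [1, 2]
```

In the inexact variants studied in the paper, the two projection/proximal evaluations would be replaced by approximate solves satisfying a relative error criterion; the exact-projection sketch only shows the skeleton of the iteration.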
Order preserving and order reversing operators on the class of convex functions in Banach spaces
A remarkable result by S. Artstein-Avidan and V. Milman states that, up to
pre-composition with affine operators, addition of affine functionals, and
multiplication by positive scalars, the only fully order preserving mapping
acting on the class of lower semicontinuous proper convex functions defined on
ℝⁿ is the identity operator, and the only fully order reversing one
acting on the same set is the Fenchel conjugation. Here fully order preserving
(reversing) mappings are understood to be those which preserve (reverse) the
pointwise order among convex functions, are invertible, and such that their
inverses also preserve (reverse) such order. In this paper we establish a
suitable extension of these results to order preserving and order reversing
operators acting on the class of lower semicontinuous proper convex functions
defined on arbitrary infinite-dimensional Banach spaces.
Comment: 19 pages; accepted for publication in the Journal of Functional Analysis.
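The order-reversing property of the Fenchel conjugation that this result characterizes can be checked numerically: if f ≤ g pointwise, then f* ≥ g* pointwise. The discretized conjugate and the particular functions below are illustrative choices, not constructions from the paper.

```python
# Numerical check (on a grid) that Fenchel conjugation reverses the pointwise
# order among convex functions: f <= g everywhere implies f* >= g* everywhere.

def conjugate(f, xs):
    """Discrete Fenchel conjugate f*(y) = sup_x (x*y - f(x)) over grid xs."""
    return lambda y: max(x * y - f(x) for x in xs)

xs = [i / 100.0 for i in range(-500, 501)]   # grid on [-5, 5]

f = lambda x: x * x            # f(x) = x^2; its conjugate is y^2 / 4
g = lambda x: x * x + 1.0      # g = f + 1 >= f; conjugate is y^2 / 4 - 1

f_star = conjugate(f, xs)
g_star = conjugate(g, xs)

ys = [i / 10.0 for i in range(-30, 31)]
order_reversed = all(f_star(y) >= g_star(y) for y in ys)
```

Adding a constant to f shifts the conjugate down by the same constant, so the reversal here is exact even on the grid; the paper's contribution is that, up to the affine normalizations it lists, conjugation is the *only* fully order reversing operator in the infinite-dimensional setting as well.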
Newton's method for multicriteria optimization
We propose an extension of Newton's method for unconstrained multiobjective optimization (multicriteria optimization). This method does not use a priori chosen weighting factors or any other form of a priori ranking or ordering information for the different objective functions. Newton's direction at each iterate is obtained by minimizing the max-ordering scalarization of the variations on the quadratic approximations of the objective functions. The objective functions are assumed to be twice continuously differentiable and locally strongly convex. Under these hypotheses, the method, as in the classical case, is locally superlinearly convergent to optimal points. Again as in the scalar case, if the second derivatives are Lipschitz continuous, the rate of convergence is quadratic. Our convergence analysis uses a Kantorovich-like technique. As a byproduct, existence of optima is obtained under semilocal assumptions.
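The direction-finding subproblem described above, minimizing the max-ordering scalarization of the quadratic models, can be sketched in one dimension. The crude grid search below stands in for a proper subproblem solver and the two objectives are illustrative choices, not examples from the paper.

```python
# Illustrative 1-D sketch of the Newton direction for multiobjective
# optimization: at each iterate, minimize over d the max-ordering
# scalarization  max_i [ f_i'(x) d + 0.5 f_i''(x) d^2 ].

# Two strongly convex objectives; their Pareto set is the interval [-1, 1].
objectives = [
    (lambda x: (x - 1.0) ** 2, lambda x: 2.0 * (x - 1.0), lambda x: 2.0),
    (lambda x: (x + 1.0) ** 2, lambda x: 2.0 * (x + 1.0), lambda x: 2.0),
]

def newton_direction(x):
    # Grid search replaces the exact subproblem solver (illustrative only).
    grid = [i / 1000.0 for i in range(-5000, 5001)]
    def model_max(d):
        return max(g(x) * d + 0.5 * h(x) * d * d for _, g, h in objectives)
    return min(grid, key=model_max)

x = 3.0
for _ in range(10):
    x += newton_direction(x)     # iterates enter the Pareto set [-1, 1]
```

Note that at a Pareto-optimal point the optimal value of the subproblem is zero and the direction vanishes, which is what stops the iteration in this sketch.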
Algebraic rules for computing the regularization parameter of the Levenberg-Marquardt method
This paper presents a class of Levenberg-Marquardt methods for solving the nonlinear least-squares problem. Explicit algebraic rules for computing the regularization parameter are devised. In addition, convergence properties of this class of methods are analyzed. We prove that all accumulation points of the generated sequence are stationary. Moreover, q-quadratic convergence for the zero-residual problem is obtained under an error bound condition. Illustrative numerical experiments with encouraging results are presented. (Funding: CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico; FAPESP - Fundação de Amparo à Pesquisa do Estado de São Paulo; PRONEX - Programa de Apoio a Núcleos de Excelência; FAPERJ - Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro.)
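A Levenberg-Marquardt step with an algebraic regularization rule can be sketched on a tiny zero-residual system. The rule μ_k = ‖F(x_k)‖ used below is a representative error-bound-style choice, not necessarily one of the paper's exact rules, and the test problem is an illustrative assumption.

```python
# Illustrative Levenberg-Marquardt sketch on the zero-residual problem
# F(x) = (x1^2 + x2^2 - 1, x1 - x2) = 0, with the algebraic rule
# mu_k = ||F(x_k)|| for the regularization parameter.
import math

def F(x):
    return [x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]]

def J(x):  # Jacobian of F
    return [[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]]

def lm_step(x):
    f, j = F(x), J(x)
    mu = math.hypot(*f)                  # algebraic rule: mu = ||F(x)||
    # Normal equations (J^T J + mu I) d = -J^T F, solved directly (2x2).
    a11 = j[0][0] ** 2 + j[1][0] ** 2 + mu
    a12 = j[0][0] * j[0][1] + j[1][0] * j[1][1]
    a22 = j[0][1] ** 2 + j[1][1] ** 2 + mu
    b1 = -(j[0][0] * f[0] + j[1][0] * f[1])
    b2 = -(j[0][1] * f[0] + j[1][1] * f[1])
    det = a11 * a22 - a12 * a12
    return [x[0] + (b1 * a22 - a12 * b2) / det,
            x[1] + (a11 * b2 - b1 * a12) / det]

x = [2.0, 1.0]
for _ in range(100):
    x = lm_step(x)
residual = math.hypot(*F(x))             # fast (q-quadratic) decrease near 0
```

Because the residual is zero at the solution, μ_k vanishes at the limit and the step approaches a Gauss-Newton step, which is the mechanism behind the fast local convergence described in the abstract.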
Algebraic rules for quadratic regularization of Newton's method
In this work we propose a class of quasi-Newton methods to minimize a twice differentiable function with Lipschitz continuous Hessian. These methods are based on the quadratic regularization of Newton's method, with algebraic explicit rules for computing the regularizing parameter. The convergence properties of this class of methods are analysed. We show that if the sequence generated by the algorithm converges, then its limit point is stationary. We also establish local quadratic convergence in a neighborhood of a stationary point with positive definite Hessian. Encouraging numerical experiments are presented. (Funding: CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico; FAPESP - Fundação de Amparo à Pesquisa do Estado de São Paulo; PRONEX - Programa de Apoio a Núcleos de Excelência; FAPERJ - Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro. CNPq grants 307714/2011-0, 477611/2013-3, 304032/2010-7, 302962/2011-5, 474996/2013-1.)
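A regularized Newton step of this kind can be sketched in one dimension: the update is x⁺ = x − f′(x)/(f″(x) + μ) with μ computed by an explicit rule. The rule μ = |f′(x)| below is a representative algebraic choice, not necessarily one of the paper's rules, and the objective is an illustrative assumption.

```python
# Illustrative 1-D sketch of quadratically regularized Newton:
# x+ = x - f'(x) / (f''(x) + mu), with the algebraic rule mu = |f'(x)|.

def fprime(x):
    return 4.0 * x ** 3 + 2.0 * x        # f(x) = x^4 + x^2, minimizer x = 0

def fsecond(x):
    return 12.0 * x ** 2 + 2.0           # Hessian, positive definite here

x = 2.0
for _ in range(100):
    g = fprime(x)
    mu = abs(g)                          # algebraic regularization rule
    x = x - g / (fsecond(x) + mu)        # regularized Newton step
```

Near the stationary point the gradient, and hence μ, tends to zero, so the iteration reduces to a pure Newton step there; this is consistent with the local quadratic convergence claimed in the abstract.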
A first-order block-decomposition method for solving two-easy-block structured semidefinite programs
Abstract In this paper, we consider a first-order block-decomposition method for minimizing the sum of a convex differentiable function with Lipschitz continuous gradient, and two other proper closed convex (possibly, nonsmooth) functions with easily computable resolvents. The method presented contains two important ingredients from a computational point of view, namely: an adaptive choice of stepsize for performing an extragradient step; and the use of a scaling factor to balance the blocks. We then specialize the method to the context of conic semidefinite programming (SDP) problems consisting of two easy blocks of constraints. Without putting them in standard form, we show that four important classes of graph-related conic SDP problems automatically possess the above two-easy-block structure, namely: SDPs for θ-functions and θ+-functions of graph stable set problems, and SDP relaxations of binary integer quadratic and frequency assignment problems. Finally, we present computational results on the aforementioned classes of SDPs showing that our method outperforms the three most competitive codes for large-scale conic semidefinite programs, namely: the boundary point (BP) method introduced by Povh et al., a Newton-CG augmented Lagrangian method, called SDPNAL, by Zhao et al., and a variant of the BP method, called the SPDAD method, by Wen et al.
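The extragradient step that the method builds on can be sketched in isolation on the simplest monotone problem, a bilinear saddle point. The stepsize and problem below are illustrative assumptions; none of the SDP machinery, adaptive stepsize, or block scaling of the paper is reproduced here.

```python
# Illustrative sketch of the extragradient step (one ingredient of the
# block-decomposition method) on the bilinear saddle problem
# min_x max_y x*y, whose unique saddle point is (0, 0).
import math

t = 0.5                             # fixed stepsize; must stay below 1/L (L = 1)
x, y = 1.0, 1.0
for _ in range(200):
    xt = x - t * y                  # prediction (gradient) step
    yt = y + t * x
    x, y = x - t * yt, y + t * xt   # correction (extragradient) step
dist = math.hypot(x, y)             # distance to the saddle point (0, 0)
```

Plain gradient descent-ascent diverges on this problem; the extra prediction step is exactly what restores convergence, which is why an extragradient step (with an adaptive, rather than fixed, stepsize) is a key ingredient of the method.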