
    The Boosted DC Algorithm for Linearly Constrained DC Programming

    The Boosted Difference of Convex functions Algorithm (BDCA) has been recently introduced to accelerate the performance of the classical Difference of Convex functions Algorithm (DCA). This acceleration is achieved thanks to an extrapolation step from the point computed by DCA via a line search procedure. In this work, we propose an extension of BDCA that can be applied to difference of convex functions programs with linear constraints, and prove that every cluster point of the sequence generated by this algorithm is a Karush–Kuhn–Tucker point of the problem if the feasible set has a Slater point. When the objective function is quadratic, we prove that any sequence generated by the algorithm is bounded and R-linearly (geometrically) convergent. Finally, we present some numerical experiments where we compare the performance of DCA and BDCA on some challenging problems: testing the copositivity of a given matrix, solving one-norm and infinity-norm trust-region subproblems, and solving piecewise quadratic problems with box constraints. Our numerical results demonstrate that this new extension of BDCA outperforms DCA. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. FJAA and RC were partially supported by the Ministry of Science, Innovation and Universities of Spain and the European Regional Development Fund (ERDF) of the European Commission (PGC2018-097960-B-C22), and by the Generalitat Valenciana (AICO/2021/165). PTV was supported by a Vietnam Ministry of Education and Training project hosted by the University of Technology and Education, Ho Chi Minh City, Vietnam (2023-2024).
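
    The sketch below illustrates the mechanism described above on an unconstrained toy DC function: one DCA step followed by the BDCA extrapolation via a backtracking line search. The objective, the subgradient choice, and the line-search constants are assumptions made for illustration, and the linearly constrained version studied in the paper would additionally have to keep the extrapolated point feasible.

        # Toy DC objective f(x) = g(x) - h(x); functions and constants are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        def g(x):
            return 0.5 * (x @ x) + 0.1 * np.sum(x**4)    # smooth convex part

        def h(x):
            return np.sum(np.abs(x))                     # convex part being subtracted

        def f(x):
            return g(x) - h(x)

        def bdca(x0, iters=50, alpha=0.1, beta=0.5, lam0=2.0):
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                s = np.sign(x)                           # a subgradient of h at x
                # DCA step: minimize the convex model obtained by linearizing h at x
                y = minimize(lambda z: g(z) - s @ z, x).x
                d = y - x
                # BDCA extrapolation: backtracking line search from y along d
                lam = lam0
                while lam > 1e-8 and f(y + lam * d) > f(y) - alpha * lam**2 * (d @ d):
                    lam *= beta
                x = y + lam * d if lam > 1e-8 else y
            return x

        print(bdca([3.0, -2.0, 0.5]))

    Plain DCA would simply set x = y at every iteration; the extra line search along d = y - x is what provides the acceleration.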

    Optimizing radial basis functions by D.C. programming and its use in direct search for global derivative-free optimization

    In this paper, we address the global optimization of functions subject to bound and linear constraints without using derivatives of the objective function. We investigate the use of derivative-free models based on radial basis functions (RBFs) in the search step of direct-search methods of directional type. We also study the application of algorithms based on difference of convex (d.c.) functions programming to solve the resulting subproblems, which consist of the minimization of the RBF models subject to simple bounds on the variables. Extensive numerical results are reported on a test set of bound and linearly constrained problems.
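
    As a rough illustration of the search step, the sketch below fits a cubic RBF interpolation model with a linear polynomial tail to previously evaluated points and then minimizes it over the bound constraints to obtain a trial point. The kernel, the sample data, and the use of a generic bound-constrained solver (instead of the d.c. programming approach studied in the paper) are assumptions made for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def fit_cubic_rbf(X, fvals):
            # Interpolation model m(x) = sum_j c_j ||x - x_j||^3 + b^T x + a
            n, dim = X.shape
            r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            P = np.hstack([X, np.ones((n, 1))])          # linear polynomial tail
            A = np.block([[r**3, P], [P.T, np.zeros((dim + 1, dim + 1))]])
            coef = np.linalg.solve(A, np.concatenate([fvals, np.zeros(dim + 1)]))
            c, b, a = coef[:n], coef[n:n + dim], coef[-1]
            return lambda x: np.linalg.norm(X - x, axis=1)**3 @ c + b @ x + a

        # Hypothetical sample set on the unit box and its objective values
        X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
        fvals = np.array([1.0, 0.2, 0.3, 0.6, 0.1])
        model = fit_cubic_rbf(X, fvals)
        trial = minimize(model, x0=np.array([0.4, 0.4]), bounds=[(0.0, 1.0)] * 2).x
        print(trial)   # candidate point produced by the search step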

    On the convergence analysis of DCA

    In this paper, we propose a clean and general proof framework to establish the convergence analysis of the Difference-of-Convex (DC) programming algorithm (DCA) for both the standard DC program and the convex constrained DC program. We first discuss suitable assumptions for the well-definedness of DCA. Then, we focus on the convergence analysis of DCA, in particular, the global convergence of the sequence {x^k} generated by DCA under the Łojasiewicz subgradient inequality and the Kurdyka–Łojasiewicz property, respectively. Moreover, the convergence rates of the sequences {f(x^k)} and {||x^k - x^*||} are also investigated. We hope that the proof framework presented in this article will be a useful tool to conveniently establish the convergence analysis of many variants of DCA and new DCA-type algorithms.
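
    For orientation, the objects referred to in the abstract can be written in the usual way (a standard textbook recap rather than the paper's exact statements): the DC program and the DCA iteration are

        \min_{x \in \mathbb{R}^n} \; f(x) = g(x) - h(x), \qquad g, h \ \text{proper, closed, convex},

        y^k \in \partial h(x^k), \qquad x^{k+1} \in \operatorname*{argmin}_{x} \bigl\{ g(x) - \langle y^k, x \rangle \bigr\},

    and when g is \rho-strongly convex the iteration is a descent method, since combining the optimality of x^{k+1} with y^k \in \partial h(x^k) gives

        f(x^k) - f(x^{k+1}) \ \ge \ \tfrac{\rho}{2}\, \|x^{k+1} - x^k\|^2 ,

    which is the basic inequality that, together with a Kurdyka–Łojasiewicz-type property, is typically used to obtain global convergence of the whole sequence.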

    Local convergence of a sequential quadratic programming method for a class of nonsmooth nonconvex objectives

    A sequential quadratic programming (SQP) algorithm is designed for nonsmooth optimization problems with upper-C^2 objective functions. Upper-C^2 functions are locally equivalent to difference-of-convex (DC) functions with smooth convex parts. They arise naturally in many applications, such as certain classes of solutions to parametric optimization problems, e.g., the recourse of stochastic programming, and projection onto closed sets. The proposed algorithm conducts a line search and adopts an exact penalty merit function. The potential inconsistency due to the linearization of constraints is addressed through relaxation, similar to that of Sℓ1QP. We show that the algorithm is globally convergent under reasonable assumptions. Moreover, we study the local convergence behavior of the algorithm under the additional assumption of Kurdyka–Łojasiewicz (KL) properties, which have been applied to many nonsmooth optimization problems. Due to the nonconvex nature of the problems, a special potential function is used to analyze local convergence. We show that, under acceptable assumptions, upper bounds on the local convergence rate can be proven. Additionally, we show that for a large class of optimization problems with upper-C^2 objectives, the corresponding potential functions are indeed KL functions. Numerical experiments are performed on a power grid optimization problem that is consistent with the assumptions and analysis in this paper.
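
    Schematically, for constraints written as c_i(x) \ge 0 and a smooth objective (the paper handles the nonsmooth upper-C^2 objective through its DC structure), the exact penalty merit function and the Sℓ1QP-style relaxed subproblem take the familiar form

        \phi_\mu(x) = f(x) + \mu \sum_i \max\{0,\, -c_i(x)\},

        \min_{d,\ v \ge 0} \ \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top H_k d + \mu \sum_i v_i
        \quad \text{s.t.} \quad c_i(x_k) + \nabla c_i(x_k)^\top d \ \ge\ -v_i ,

    where the slacks v_i keep the linearized constraints consistent even when their plain linearization would be infeasible, and a line search on \phi_\mu accepts or shrinks the step. This is offered as a generic recap of the Sℓ1QP mechanism the abstract alludes to, not as the paper's exact subproblem.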

    An adaptive sampling sequential quadratic programming method for nonsmooth stochastic optimization with upper-C^2 objective

    We propose an optimization algorithm that incorporates adaptive sampling for stochastic nonsmooth nonconvex optimization problems with upper-C^2 objective functions. Upper-C^2 is a weakly concave property that arises naturally in many applications, particularly certain classes of solutions to parametric optimization problems, e.g., the recourse of stochastic programming and projection onto closed sets. Our algorithm is a stochastic sequential quadratic programming (SQP) method extended to nonsmooth problems with upper-C^2 objectives, and it is globally convergent in expectation with bounded algorithmic parameters. The capabilities of our algorithm are demonstrated by solving a joint production, pricing and shipment problem, as well as a realistic optimal power flow problem as used in current power grid industry practice.
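
    As a generic illustration of the adaptive-sampling idea (a common variance-based "norm test" heuristic with assumed names and thresholds, not necessarily the specific rule of the paper), the sample size used to estimate the stochastic (sub)gradient is increased whenever the estimator's standard error is large relative to the estimate itself:

        import numpy as np

        def next_sample_size(grad_samples, theta=0.5, n_min=8, n_max=4096):
            # grad_samples: (n, dim) array of per-scenario (sub)gradient estimates
            n = grad_samples.shape[0]
            g_bar = grad_samples.mean(axis=0)
            var = grad_samples.var(axis=0, ddof=1).sum()   # trace of the sample covariance
            # grow the sample while the standard error of the averaged gradient
            # dominates theta times its norm
            if var / n > (theta * np.linalg.norm(g_bar)) ** 2:
                n = min(n_max, 2 * n)
            return max(n_min, n)

        rng = np.random.default_rng(0)
        print(next_sample_size(rng.normal(size=(32, 5))))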