44 research outputs found

    Geometrical inverse preconditioning for symmetric positive definite matrices

    We focus on inverse preconditioners based on minimizing F(X) = 1 - cos(XA, I), where XA is the preconditioned matrix and A is symmetric positive definite. We present and analyze gradient-type methods to minimize F(X) on a suitable compact set. For that we use the geometrical properties of the non-polyhedral cone of symmetric positive definite matrices, and also the special properties of F(X) on the feasible set. Preliminary and encouraging numerical results are also presented, in which dense and sparse approximations are included.
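The abstract does not spell out the cosine; assuming it denotes the cosine of the angle between XA and the identity in the Frobenius inner product, a minimal sketch of evaluating F(X) = 1 - cos(XA, I) looks like:

```python
import math

def matmul(B, C):
    n = len(B)
    return [[sum(B[i][k] * C[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frob(B, C):
    """Frobenius inner product <B, C> = sum_ij B_ij * C_ij."""
    return sum(B[i][j] * C[i][j] for i in range(len(B)) for j in range(len(B)))

def F(X, A):
    """F(X) = 1 - cos(XA, I), cosine taken in the Frobenius inner product."""
    XA = matmul(X, A)
    I = [[1.0 if i == j else 0.0 for j in range(len(A))] for i in range(len(A))]
    return 1.0 - frob(XA, I) / math.sqrt(frob(XA, XA) * frob(I, I))

A = [[2.0, 1.0], [1.0, 3.0]]
Ainv = [[0.6, -0.2], [-0.2, 0.4]]        # exact inverse of A
print(F(Ainv, A))                        # 0.0: a perfect preconditioner aligns XA with I
print(F([[1.0, 0.0], [0.0, 1.0]], A))    # > 0 for the unpreconditioned choice X = I
```

A perfect inverse preconditioner makes XA = I, so the cosine is 1 and F vanishes; any other X leaves a positive value to be driven down by the gradient-type methods of the paper.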

    Properties of the delayed weighted gradient method

    Funding Information: Roberto Andreani was financially supported by FAPESP (Projects 2013/05475-7 and 2017/18308-2) and CNPq (Project 301888/2017-5). Marcos Raydan was financially supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through the Project UIDB/MAT/00297/2020 (Centro de Matemática e Aplicações). Roberto Andreani would like to thank the Operations Research Group at CMA (Centro de Matemática e Aplicações), FCT, NOVA University of Lisbon, Portugal, for the hospitality during a two-week visit in December 2019. We would like to thank two anonymous referees for their comments and suggestions that helped us to improve the final version of this paper. Publisher Copyright: © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
    The delayed weighted gradient method, recently introduced in Oviedo-Leon (Comput Optim Appl 74:729–746, 2019), is a low-cost gradient-type method that exhibits a surprisingly fast, and perhaps unexpected, convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the quadratic function has only p < n distinct eigenvalues, then the method terminates in p iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the future use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
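The abstract gives no formulas for the delayed weighted gradient method itself, but the same p-step finite-termination bound is shared by the conjugate gradient method on a convex quadratic, which this toy run illustrates with a diagonal Hessian:

```python
# CG on diag(d) x = b: in exact arithmetic it terminates in as many
# iterations as there are distinct eigenvalues represented in the residual.

def cg_iterations(diag, b, tol=1e-10):
    """Return the number of CG iterations until the residual vanishes."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual b - A x for x = 0
    d = list(r)                       # search direction
    rs = sum(ri * ri for ri in r)
    for k in range(n + 1):
        if rs ** 0.5 <= tol:
            return k
        Ad = [diag[i] * d[i] for i in range(n)]
        alpha = rs / sum(d[i] * Ad[i] for i in range(n))
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        d = [r[i] + (rs_new / rs) * d[i] for i in range(n)]
        rs = rs_new
    return n

# n = 6 but only p = 3 distinct eigenvalues {1, 2, 5}:
eigs = [1.0, 1.0, 2.0, 2.0, 5.0, 5.0]
print(cg_iterations(eigs, [1.0] * 6))  # terminates in p = 3 iterations
```

The paper's contribution is that the delayed weighted gradient method, despite being a gradient-type scheme, obeys the same bound.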

    A metaheuristic penalty approach for the starting point in nonlinear programming

    Solving nonlinear programming problems usually involves the difficulty of obtaining a starting point that produces convergence to a local feasible solution for which the objective function value is sufficiently good. A novel approach is proposed, combining metaheuristic techniques with modern deterministic optimization schemes, with the aim of solving a sequence of penalized related problems to generate convenient starting points. The metaheuristic ideas are used to choose the penalty parameters associated with the constraints, and for each set of penalty parameters a deterministic scheme is used to evaluate a properly chosen metaheuristic merit function. Based on this starting-point approach, we describe two different strategies for solving the nonlinear programming problem. We illustrate the properties of the combined schemes on three nonlinear programming benchmark test problems, and also on the well-known and hard-to-solve disk-packing problem, which possesses a huge number of local non-global solutions, obtaining encouraging results both in terms of optimality and feasibility.
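A hedged toy sketch of the division of labor described above, not the paper's algorithm: random penalty parameters (standing in for the metaheuristic choice) select among deterministically solved penalized problems, scored by a merit function mixing objective value and constraint violation. The test problem is hypothetical.

```python
import random

# Hypothetical problem:
#   minimize f(x) = (x0-2)^2 + (x1-2)^2  subject to  g(x) = x0 + x1 - 1 <= 0.

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):
    return x[0] + x[1] - 1.0

def solve_penalized(mu, iters=40000):
    """Deterministic inner solver: gradient descent on the quadratic penalty
    f(x) + mu * max(0, g(x))^2, with a 1/L stepsize for this problem."""
    x = [0.0, 0.0]
    step = 1.0 / (2.0 + 4.0 * mu)
    for _ in range(iters):
        v = max(0.0, g(x))
        gr = [2.0 * (x[0] - 2.0) + 2.0 * mu * v,
              2.0 * (x[1] - 2.0) + 2.0 * mu * v]
        x = [x[0] - step * gr[0], x[1] - step * gr[1]]
    return x

def merit(x):
    # merit function mixing objective value and constraint violation
    return f(x) + 100.0 * max(0.0, g(x))

random.seed(0)
candidates = [solve_penalized(10.0 ** random.uniform(0.0, 3.0)) for _ in range(8)]
best = min(candidates, key=merit)   # starting point handed to the NLP solver
print([round(t, 3) for t in best])
```

The selected point is nearly feasible and close to the constrained minimizer (0.5, 0.5), which is the kind of starting point the paper's two strategies then refine.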

    A tool for data analysis

    Funding Information: The first and third authors were financially supported by the Fundação para a Ciência e a Tecnologia, Portugal (Portuguese Foundation for Science and Technology) through the projects UIDB/MAT/00297/2020, UIDP/MAT/00297/2020 (Centro de Matemática e Aplicações), and PTDC/CCI-BIO/4180/2020. The second author was financially supported by the Forest Research Center, a research unit funded by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through the project UIDB/00239/2020. Publisher Copyright: © 2023 The Author(s)
    Consider a graph with vertex set V and non-negative weights on the edges. For every subset of vertices S, define ϕ(S) to be the sum of the weights of edges with one vertex in S and the other in V∖S, minus the sum of the weights of the edges with both vertices in S. We consider the problem of finding S ⊆ V for which ϕ(S) is maximized. We call this combinatorial optimization problem the max-out min-in problem (MOMIP). In this paper we (i) present a linear 0/1 formulation and a quadratic unconstrained binary optimization formulation for MOMIP; (ii) prove that the problem is NP-hard; (iii) report results of computational experiments on simulated data to compare the performances of the two models; (iv) illustrate the applicability of MOMIP for two different topics in the context of data analysis, namely the selection of variables in exploratory data analysis and the identification of clusters in cluster analysis; and (v) introduce a generalization of MOMIP that includes, as particular cases, the well-known weighted maximum cut problem and a novel problem related to independent dominant sets in graphs.
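The definition of ϕ(S) can be checked directly by brute force on a tiny hypothetical graph (exponential in |V|; the paper's 0/1 and QUBO formulations are the scalable route):

```python
from itertools import combinations

def phi(S, edges):
    """phi(S): weight of edges crossing (S, V \\ S) minus weight of edges inside S."""
    total = 0.0
    for (u, v), w in edges.items():
        ends_in_S = (u in S) + (v in S)
        if ends_in_S == 1:
            total += w      # "out" edge: exactly one endpoint in S
        elif ends_in_S == 2:
            total -= w      # "in" edge: both endpoints in S
    return total

def momip_bruteforce(vertices, edges):
    best_S, best_val = set(), 0.0     # phi(empty set) = 0
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            val = phi(set(S), edges)
            if val > best_val:
                best_S, best_val = set(S), val
    return best_S, best_val

# Path 1 - 2 - 3 with weights 3 and 1: taking S = {2} cuts both edges.
S, val = momip_bruteforce([1, 2, 3], {(1, 2): 3.0, (2, 3): 1.0})
print(S, val)  # {2} 4.0
```

Note how the "minus internal edges" term distinguishes MOMIP from max-cut: S = {1, 2} would cut the weight-1 edge but pay for the weight-3 edge inside S, giving ϕ = -2.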

    Preconditioned residual methods for solving steady fluid flows

    We develop derivative-free preconditioned residual methods for solving nonlinear steady fluid flows. The new scheme is based on a variable implicit preconditioning technique associated with the globalized spectral residual method. It is adapted for numerically computing the steady state of the two-dimensional incompressible Navier-Stokes equations (NSE). We use finite differences for the discretization and consider both the primary-variables and the stream function-vorticity formulations of the problem. Our numerical results agree with those in the literature and show the robustness of our method for Reynolds numbers up to Re = 5000.
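A hedged two-variable sketch of the underlying unpreconditioned, unglobalized spectral residual iteration x+ = x - σF(x); the paper couples this with variable implicit preconditioning and globalization for the discretized NSE. The system below is hypothetical and merely stands in for a flow residual.

```python
import math

def spectral_residual(F, x, tol=1e-10, max_iter=200):
    """Basic spectral residual iteration for the nonlinear system F(x) = 0."""
    sigma = 1.0                       # initial residual steplength
    Fx = F(x)
    for k in range(max_iter):
        if math.sqrt(sum(fi * fi for fi in Fx)) <= tol:
            return x, k
        x_new = [x[i] - sigma * Fx[i] for i in range(len(x))]
        F_new = F(x_new)
        s = [x_new[i] - x[i] for i in range(len(x))]
        y = [F_new[i] - Fx[i] for i in range(len(x))]
        sty = sum(s[i] * y[i] for i in range(len(x)))
        if abs(sty) > 1e-14:
            sigma = sum(si * si for si in s) / sty   # spectral coefficient
        x, Fx = x_new, F_new
    return x, max_iter

# Hypothetical smooth monotone system standing in for a discretized residual:
F = lambda x: [2.0 * x[0] + 0.1 * x[0] ** 3 - 1.0,
               2.0 * x[1] + 0.1 * x[1] ** 3 - 1.0]
sol, iters = spectral_residual(F, [0.0, 0.0])
print(sol, iters)
```

The appeal for steady flows is that only residual evaluations are needed: no Jacobian, hence "derivative-free".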

    SLiSeS: Subsampled Line Search Spectral Gradient Method for Finite Sums

    The spectral gradient method is known to be a powerful low-cost tool for solving large-scale optimization problems. In this paper, our goal is to exploit its advantages in the stochastic optimization framework, especially in the case of the mini-batch subsampling that is often used in big-data settings. To allow the spectral coefficient to properly explore the underlying approximate Hessian spectrum, we keep the same subsample for several iterations before subsampling again. We analyze the required algorithmic features and the conditions for almost sure convergence, and present initial numerical results that show the advantages of the proposed method.
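The key idea of keeping the subsample fixed for several spectral steps can be sketched as follows. This is a hedged toy reconstruction on a hypothetical noise-free least-squares finite sum, not the paper's algorithm or experiments.

```python
import random

# Toy finite sum: f(x) = (1/N) * sum_i (a_i . x - b_i)^2 with exact data.
random.seed(1)
N, d = 200, 2
x_true = [1.5, -0.5]
data = []
for _ in range(N):
    a = [random.gauss(0.0, 1.0) for _ in range(d)]
    data.append((a, sum(a[j] * x_true[j] for j in range(d))))

def batch_grad(x, batch):
    """Gradient of the subsampled objective over the given mini-batch."""
    g = [0.0] * d
    for a, b in batch:
        r = sum(a[j] * x[j] for j in range(d)) - b
        for j in range(d):
            g[j] += 2.0 * r * a[j] / len(batch)
    return g

def slises_sketch(x, data, batch_size=20, inner=5, outer=15, seed=7):
    """Keep the same subsample for `inner` spectral (Barzilai-Borwein)
    steps before drawing a fresh one, so consecutive gradient pairs are
    consistent and the spectral coefficient can probe the batch Hessian."""
    rng = random.Random(seed)
    sigma = 0.05                               # conservative initial steplength
    for _ in range(outer):
        batch = rng.sample(data, batch_size)   # subsample held fixed below
        g = batch_grad(x, batch)
        for _ in range(inner):
            x_new = [x[j] - sigma * g[j] for j in range(d)]
            g_new = batch_grad(x_new, batch)   # same batch: consistent pair
            s = [x_new[j] - x[j] for j in range(d)]
            y = [g_new[j] - g[j] for j in range(d)]
            sy = sum(s[j] * y[j] for j in range(d))
            if sy > 1e-12:
                sigma = sum(sj * sj for sj in s) / sy   # spectral coefficient
            x, g = x_new, g_new
    return x

x = slises_sketch([0.0, 0.0], data)
print(x)
```

Resampling every iteration instead would pair gradients from different batches, polluting the curvature estimate s·s / s·y; holding the batch fixed is what lets the coefficient explore the approximate Hessian spectrum.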

    On the Barzilai and Borwein Choice of Steplength for the Gradient Method

    In a recent paper, Barzilai and Borwein presented a new choice of steplength for the gradient method. We derive an interesting relationship between the Barzilai and Borwein gradient method and the shifted power method. This relationship allows us to establish the convergence of the Barzilai and Borwein method when applied to the problem of minimizing a strictly convex quadratic function (Barzilai and Borwein considered only 2-dimensional problems). Our point of view also allows us to explain the remarkable improvement obtained by using this new choice of steplength. Finally, for the 2-dimensional case we present some very interesting convergence-rate results. We show that our Q- and R-rate of convergence analysis is sharp, and we compare it with the Barzilai and Borwein analysis.
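The Barzilai and Borwein steplength discussed above is α_k = (sᵀs)/(sᵀy) with s = x_k − x_{k−1} and y = g_k − g_{k−1}. A minimal sketch on a strictly convex quadratic with a toy diagonal Hessian (α0 is an arbitrary first step, since no previous iterate exists):

```python
def bb_quadratic(diag, b, x, alpha0=0.1, tol=1e-10, max_iter=100):
    """Minimize 0.5 x^T diag(d) x - b^T x with Barzilai-Borwein steplengths."""
    n = len(x)
    g = [diag[i] * x[i] - b[i] for i in range(n)]
    alpha = alpha0
    for k in range(max_iter):
        if max(abs(gi) for gi in g) <= tol:
            return x, k
        x_new = [x[i] - alpha * g[i] for i in range(n)]
        g_new = [diag[i] * x_new[i] - b[i] for i in range(n)]
        s = [x_new[i] - x[i] for i in range(n)]
        y = [g_new[i] - g[i] for i in range(n)]
        alpha = sum(si * si for si in s) / sum(s[i] * y[i] for i in range(n))
        x, g = x_new, g_new
    return x, max_iter

# 2-dimensional example (the case of the sharp rate analysis); minimizer is [1, 1].
sol, iters = bb_quadratic([2.0, 10.0], [2.0, 10.0], [0.0, 0.0])
print(sol, iters)
```

Note that α_k is the inverse Rayleigh quotient of the Hessian along s, which is what links the iteration to the shifted power method exploited in the convergence analysis.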