
    Combining filter method and dynamically dimensioned search for constrained global optimization

    In this work we present an algorithm that combines the filter technique and the dynamically dimensioned search (DDS) for solving nonlinear and nonconvex constrained global optimization problems. The DDS is a stochastic global algorithm for solving bound constrained problems that, in each iteration, generates a trial point by randomly perturbing some coordinates of the current best point. The filter technique controls the progress towards optimality and feasibility by defining a forbidden region of points that the algorithm rejects. This region can be given by the flat or the slanting filter rule. The proposed algorithm does not compute or approximate any derivatives of the objective and constraint functions. Preliminary experiments show that the proposed algorithm gives competitive results when compared with other methods. The first author thanks the International Cooperation Program CAPES/COFECUB at the University of Minho for a scholarship. The second and third authors thank the support given by FCT (Fundação para a Ciência e a Tecnologia, Portugal) within the projects UID/MAT/00013/2013 and UID/CEC/00319/2013. The fourth author was partially supported by CNPq-Brazil grants 308957/2014-8 and 401288/2014-5.
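    A minimal sketch of a DDS-style perturbation step as described in the abstract: a trial point is generated by perturbing a subset of coordinates of the current best point, with the perturbation probability decaying over the iterations. The function name, decay rule, step scaling and reflection handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dds_trial_point(best, lb, ub, k, max_iters, sigma=0.2, rng=None):
    """Generate one DDS-style trial point (illustrative sketch).

    Each coordinate of the current best point is perturbed with a probability
    that decreases with the iteration counter k (k >= 1), using a Gaussian step
    scaled by the bound range; violated bounds are reflected back into [lb, ub].
    """
    rng = np.random.default_rng() if rng is None else rng
    n = best.size
    # Probability of perturbing each coordinate decays as the search progresses.
    p = max(1.0 - np.log(k) / np.log(max_iters), 1.0 / n)
    mask = rng.random(n) < p
    if not mask.any():                      # always perturb at least one coordinate
        mask[rng.integers(n)] = True
    trial = best.copy()
    step = sigma * (ub - lb) * rng.standard_normal(n)
    trial[mask] += step[mask]
    # Reflect bound violations back inside the box, then clip as a safeguard.
    trial = np.where(trial < lb, 2 * lb - trial, trial)
    trial = np.where(trial > ub, 2 * ub - trial, trial)
    return np.clip(trial, lb, ub)
```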

    A Schur complement approach to preconditioning sparse linear least-squares problems with some dense rows

    The effectiveness of sparse matrix techniques for directly solving large-scale linear least-squares problems is severely limited if the system matrix A has one or more nearly dense rows. In this paper, we partition the rows of A into sparse rows and dense rows (A_s and A_d) and apply the Schur complement approach. A potential difficulty is that the reduced normal matrix A_s^T A_s is often rank-deficient, even if A is of full rank. To overcome this, we propose explicitly removing null columns of A_s, employing a regularization parameter, and using the resulting Cholesky factors as a preconditioner for an iterative solver applied to the symmetric indefinite reduced augmented system. We consider complete factorizations as well as incomplete Cholesky factorizations of the shifted reduced normal matrix. Numerical experiments are performed on a range of large least-squares problems arising from practical applications. These demonstrate the effectiveness of the proposed approach when combined with either a sparse parallel direct solver or a robust incomplete Cholesky factorization algorithm.
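    A rough dense illustration of the splitting and the shifted reduced normal matrix mentioned above, assuming the dense rows are flagged in advance; the function name, the parameter alpha and the use of a complete Cholesky factorization are assumptions for the sketch, not the paper's sparse implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def shifted_normal_preconditioner(A, dense_rows, alpha=1e-8):
    """Illustrative sketch: split A into sparse rows A_s and dense rows A_d,
    drop null columns of A_s, and factorize the shifted reduced normal matrix
    A_s^T A_s + alpha*I for use as a preconditioner in an iterative solver."""
    mask = np.zeros(A.shape[0], dtype=bool)
    mask[dense_rows] = True
    A_d, A_s = A[mask], A[~mask]
    keep = np.flatnonzero(np.any(A_s != 0.0, axis=0))   # drop null columns of A_s
    A_s = A_s[:, keep]
    C = A_s.T @ A_s + alpha * np.eye(A_s.shape[1])      # shifted reduced normal matrix
    chol = cho_factor(C, lower=True)

    def apply_M(r):
        """Solve (A_s^T A_s + alpha*I) z = r using the Cholesky factors."""
        return cho_solve(chol, r)

    return A_s, A_d, keep, apply_M
```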

    Filter-based stochastic algorithm for global optimization

    We propose the general Filter-based Stochastic Algorithm (FbSA) for the global optimization of nonconvex and nonsmooth constrained problems. Under certain conditions on the probability distributions that generate the sample points, almost sure convergence is proved. In order to optimize problems with computationally expensive black-box objective functions, we develop the FbSA-RBF algorithm, based on the general FbSA and assisted by Radial Basis Function (RBF) surrogate models that approximate the objective function. At each iteration, the resulting algorithm constructs/updates a surrogate model of the objective function and generates trial points using a dynamic coordinate search strategy similar to the one used in the Dynamically Dimensioned Search method. To identify the most promising trial point, a non-dominance concept based on the values of the surrogate model and the constraint violation at the trial points is used. Theoretical results concerning the sufficient conditions for the almost sure convergence of the algorithm are presented. Preliminary numerical experiments show that the FbSA-RBF is competitive when compared with other known methods in the literature. The authors are grateful to the anonymous referees for their fruitful comments and suggestions. The first and second authors were partially supported by Brazilian funds through CAPES and CNPq under grants PDSE 99999.009400/2014-01 and 309303/2017-6. The research of the third and fourth authors was partially financed by Portuguese funds through FCT (Fundação para a Ciência e a Tecnologia) within the projects UIDB/00013/2020 and UIDP/00013/2020 of CMAT-UM and UIDB/00319/2020.
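    A small sketch of a non-dominance test of the kind mentioned in the abstract, comparing trial points by surrogate objective value and constraint violation; the function names and the flat (componentwise) comparison rule are assumptions for illustration, not the paper's exact criterion.

```python
def dominates(p, q):
    """Point p dominates point q when it is no worse in both the surrogate value
    and the constraint violation, and strictly better in at least one.
    Each point is a pair (surrogate_value, constraint_violation)."""
    fp, hp = p
    fq, hq = q
    return fp <= fq and hp <= hq and (fp < fq or hp < hq)

def non_dominated_indices(trials):
    """Return indices of trial points not dominated by any other trial point."""
    return [i for i, p in enumerate(trials)
            if not any(dominates(q, p) for j, q in enumerate(trials) if j != i)]

# Example: the second point is dominated by the first.
print(non_dominated_indices([(1.0, 0.0), (2.0, 0.5), (0.5, 0.8)]))   # [0, 2]
```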

    Filter-based DIRECT method for constrained global optimization

    This paper presents a DIRECT-type method that uses a filter methodology to assure convergence to a feasible and optimal solution of nonsmooth and nonconvex constrained global optimization problems. The filter methodology aims to give priority to the selection of hyperrectangles with feasible center points, followed by those with infeasible and non-dominated center points and, finally, by those with infeasible and dominated center points. The convergence properties of the algorithm are analyzed. Preliminary numerical experiments show that the proposed filter-based DIRECT algorithm gives competitive results when compared with other DIRECT-type methods. The authors would like to thank two anonymous referees and the Associate Editor for their valuable comments and suggestions to improve the paper. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT (Fundação para a Ciência e a Tecnologia) within the projects UID/CEC/00319/2013 and UID/MAT/00013/2013.
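    A minimal sketch of the three-way selection priority described above, assuming each hyperrectangle center carries an aggregate constraint violation and a dominated/non-dominated flag; the function name and the integer class encoding are illustrative assumptions.

```python
def selection_class(violation, dominated):
    """Priority class of a hyperrectangle center: feasible centers first (0),
    infeasible non-dominated centers next (1), infeasible dominated centers last (2)."""
    if violation == 0.0:
        return 0
    return 1 if not dominated else 2

# Example: sort candidates given as (violation, dominated flag, index).
candidates = [(0.0, False, 3), (0.4, True, 1), (0.2, False, 2)]
order = sorted(candidates, key=lambda c: selection_class(c[0], c[1]))
print([idx for *_, idx in order])   # [3, 2, 1]: feasible, then non-dominated, then dominated
```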

    Conjugate Direction Methods and Polarity for Quadratic Hypersurfaces

    We use some results from polarity theory to recast several geometric properties of Conjugate Gradient-based methods, for the solution of nonsingular symmetric linear systems. This approach allows us to pursue three main theoretical objectives. First, we can provide a novel geometric perspective on the generation of conjugate directions, in the context of positive definite systems. Second, we can extend the above geometric perspective to treat the generation of conjugate directions for handling indefinite linear systems. Third, by exploiting the geometric insight suggested by polarity theory, we can easily study the possible degeneracy (pivot breakdown) of Conjugate Gradient-based methods on indefinite linear systems. In particular, we prove that the degeneracy of the standard Conjugate Gradient on nonsingular indefinite linear systems can occur only once in the execution of the method.
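    For reference, a standard Conjugate Gradient sketch with an explicit check for the pivot breakdown mentioned above: on an indefinite matrix the curvature p^T A p may (nearly) vanish, and the recursion is stopped. This is a generic textbook formulation with an assumed breakdown tolerance, not the paper's polarity-based analysis.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Standard CG for symmetric nonsingular A, with a pivot-breakdown check."""
    n = b.size
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ p
        curvature = p @ Ap                      # pivot of the implicit factorization
        if abs(curvature) < tol * (p @ p):      # (near) breakdown on indefinite systems
            break
        alpha = (r @ r) / curvature
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, True
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, False
```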

    Quasi-Newton-Based Preconditioning and Damped Quasi-Newton Schemes for Nonlinear Conjugate Gradient Methods

    In this paper, we deal with matrix-free preconditioners for Nonlinear Conjugate Gradient (NCG) methods. In particular, we review proposals based on quasi-Newton updates that satisfy either the secant equation or a secant-like equation at some of the previous iterates. Conditions are given proving that, in some sense, the proposed preconditioners also approximate the inverse of the Hessian matrix. In particular, the structure of the preconditioners depends both on low-rank updates and on some specific parameters. The low-rank updates are obtained as a by-product of the NCG iterations. Moreover, we consider the possibility of embedding damped techniques within a class of preconditioners based on quasi-Newton updates. Damped methods have proved to be effective in enhancing the performance of quasi-Newton updates in those cases where the Wolfe linesearch conditions are hardly fulfilled. The purpose is to extend the idea behind damped methods to improve NCG schemes as well, following a novel line of research in the literature. The results, which summarize an extended numerical experience using large-scale CUTEst problems, are reported, showing that these approaches can considerably improve the performance of NCG methods.
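    A generic sketch of how a quasi-Newton preconditioner built from low-rank by-products of NCG iterations can be applied to the gradient, here via the limited-memory two-loop recursion; the function name, memory handling and initial scaling are assumptions, and this is not the specific family of preconditioners analyzed in the paper.

```python
import numpy as np

def lbfgs_precondition(g, pairs):
    """Apply a limited-memory inverse-Hessian approximation to the gradient g,
    using curvature pairs (s, y) collected from previous NCG iterations
    (pairs ordered oldest to newest); returns the preconditioned gradient."""
    q = g.copy()
    stored = []
    for s, y in reversed(pairs):                 # first loop: newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        stored.append((a, rho, s, y))
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)                   # initial scaling of the inverse Hessian
    for a, rho, s, y in reversed(stored):        # second loop: oldest to newest
        b = rho * (y @ q)
        q += (a - b) * s
    return q
```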

    Exploiting damped techniques for nonlinear conjugate gradient methods

    In this paper we propose the use of damped techniques within Nonlinear Conjugate Gradient (NCG) methods. Damped techniques were introduced by Powell and recently reproposed by Al-Baali and, until now, have been applied only in the framework of quasi-Newton methods. We extend their use to NCG methods in large-scale unconstrained optimization, aiming at possibly improving the efficiency and robustness of the latter methods, especially when solving difficult problems. We consider both unpreconditioned and Preconditioned NCG (PNCG). In the latter case, we embed damped techniques within a class of preconditioners based on quasi-Newton updates. Our purpose is to provide efficient preconditioners which approximate, in some sense, the inverse of the Hessian matrix, while still preserving information provided by the secant equation or some of its modifications. The results of an extensive numerical experience highlight that the proposed approach is quite promising.
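    For illustration, a sketch of the classical Powell damping of a curvature pair, which is the basic ingredient behind the damped techniques referred to above: when the curvature s^T y is too small relative to s^T B s, the vector y is replaced by a convex combination of y and B s. The function name, the default threshold sigma = 0.2 and the example values are assumptions for the sketch.

```python
import numpy as np

def powell_damping(s, y, Bs, sigma=0.2):
    """Return the damped vector y_bar = theta*y + (1-theta)*B*s, where Bs = B @ s
    and theta is chosen so that s^T y_bar >= sigma * s^T B s (Powell's rule)."""
    sBs = s @ Bs
    sy = s @ y
    if sy >= sigma * sBs:
        theta = 1.0
    else:
        theta = (1.0 - sigma) * sBs / (sBs - sy)
    return theta * y + (1.0 - theta) * Bs

# Example: a pair with negative curvature is damped so that s^T y_bar = sigma * s^T B s.
s = np.array([1.0, 0.0]); y = np.array([-0.5, 0.3]); Bs = np.array([2.0, 0.0])
print(s @ powell_damping(s, y, Bs))   # 0.4 = 0.2 * (s^T B s)
```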