
    Filter-based DIRECT method for constrained global optimization

    This paper presents a DIRECT-type method that uses a filter methodology to ensure convergence to a feasible and optimal solution of nonsmooth and nonconvex constrained global optimization problems. The filter methodology gives priority to the selection of hyperrectangles with feasible center points, followed by those with infeasible and non-dominated center points, and finally by those with infeasible and dominated center points. The convergence properties of the algorithm are analyzed. Preliminary numerical experiments show that the proposed filter-based DIRECT algorithm gives competitive results when compared with other DIRECT-type methods. The authors would like to thank two anonymous referees and the Associate Editor for their valuable comments and suggestions to improve the paper. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e a Tecnologia within the projects UID/CEC/00319/2013 and UID/MAT/00013/2013.
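    The three-tier selection priority described in the abstract can be made concrete with a short sketch. The Python fragment below is our illustration, not the authors' code: `centers` is a hypothetical list of (violation, objective) pairs evaluated at hyperrectangle center points, and non-dominance is checked among the infeasible centers as an approximation of the filter's ordering.

    def dominated(point, others):
        # (theta, f) is dominated if some other pair is no worse in both
        # components and differs from it (i.e., strictly better somewhere)
        theta, f = point
        return any(t <= theta and g <= f and (t, g) != (theta, f)
                   for t, g in others)

    def selection_tiers(centers, tol=1e-8):
        # tier 1: feasible centers; tier 2: infeasible, non-dominated;
        # tier 3: infeasible, dominated -- selected in that order of priority
        infeasible = [c for c in centers if c[0] > tol]
        tier1 = [c for c in centers if c[0] <= tol]
        tier2 = [c for c in infeasible if not dominated(c, infeasible)]
        tier3 = [c for c in infeasible if dominated(c, infeasible)]
        return tier1, tier2, tier3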

    Derivative-free optimization and filter methods to solve nonlinear constrained problems

    In real optimization problems, the analytical expression of the objective function, or of its derivatives, is often unknown or too complex to work with. In these cases it becomes essential to use optimization methods that do not require computing derivatives, or even verifying that they exist: Direct Search Methods, also called Derivative-free Methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, choosing the penalty parameters is frequently difficult, because most strategies for doing so are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem, in which a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method for general constrained optimization that combines the features of simplex methods and filter methods. The method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
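    The acceptance test at the heart of this approach is simple enough to sketch. Below is a minimal Python illustration of the standard filter rule as we read it, not the paper's Java implementation: a trial point, summarized by its aggregated constraint violation theta and objective value f, is accepted only if no stored filter entry is at least as good in both components.

    def acceptable(theta, f, filter_entries):
        # accept the trial pair unless some stored entry dominates it,
        # i.e., unless an entry is no worse in both components
        return all(theta < th or f < fv for th, fv in filter_entries)

    def update_filter(theta, f, filter_entries):
        # keep only entries not dominated by the new pair, then add it
        kept = [(th, fv) for th, fv in filter_entries if th < theta or fv < f]
        kept.append((theta, f))
        return kept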

    Combining filter method and dynamically dimensioned search for constrained global optimization

    In this work we present an algorithm that combines the filter technique and the dynamically dimensioned search (DDS) for solving nonlinear and nonconvex constrained global optimization problems. DDS is a stochastic global algorithm for solving bound constrained problems that, in each iteration, generates a random trial point by perturbing some coordinates of the current best point. The filter technique controls the progress toward optimality and feasibility by defining a forbidden region of points that the algorithm rejects; this region can be given by the flat or the slanting filter rule. The proposed algorithm does not compute or approximate any derivatives of the objective and constraint functions. Preliminary experiments show that the proposed algorithm gives competitive results when compared with other methods. The first author acknowledges a scholarship supported by the International Cooperation Program CAPES/COFECUB at the University of Minho. The second and third authors thank the support given by FCT (Fundação para a Ciência e Tecnologia, Portugal) in the scope of the projects UID/MAT/00013/2013 and UID/CEC/00319/2013. The fourth author was partially supported by CNPq-Brazil grants 308957/2014-8 and 401288/2014-5.
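    The DDS step is easy to illustrate. The sketch below is our own, following the published DDS scheme of Tolson and Shoemaker rather than this paper's code; the parameter names and the value r = 0.2 are conventional assumptions. It generates one trial point from the current best point, perturbing each coordinate with a probability that decays as the iteration counter grows.

    import math
    import random

    def dds_trial(best, lb, ub, it, max_it, r=0.2):
        # probability of perturbing a coordinate decays with the iterations
        # (it >= 1, max_it > 1), so the search shifts from global to local
        p = 1.0 - math.log(it) / math.log(max_it)
        idx = [i for i in range(len(best)) if random.random() < p]
        if not idx:
            idx = [random.randrange(len(best))]  # perturb at least one coordinate
        trial = list(best)
        for i in idx:
            x = trial[i] + random.gauss(0.0, 1.0) * r * (ub[i] - lb[i])
            # reflect a step that leaves the box back inside the bounds
            if x < lb[i]:
                x = min(2.0 * lb[i] - x, ub[i])
            elif x > ub[i]:
                x = max(2.0 * ub[i] - x, lb[i])
            trial[i] = x
        return trial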

    An adaptive sampling sequential quadratic programming method for nonsmooth stochastic optimization with upper-$\mathcal{C}^2$ objective

    We propose an optimization algorithm that incorporates adaptive sampling for stochastic nonsmooth nonconvex optimization problems with upper-$\mathcal{C}^2$ objective functions. Upper-$\mathcal{C}^2$ is a weakly concave property that arises naturally in many applications, particularly in certain classes of solutions to parametric optimization problems, e.g., the recourse of stochastic programming and projections onto closed sets. Our algorithm is a stochastic sequential quadratic programming (SQP) method extended to nonsmooth problems with upper-$\mathcal{C}^2$ objectives, and it is globally convergent in expectation with bounded algorithmic parameters. The capabilities of our algorithm are demonstrated by solving a joint production, pricing and shipment problem, as well as a realistic optimal power flow problem as used in current power grid industry practice.
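    For readers unfamiliar with the terminology, the LaTeX fragment below records the standard definition of the upper-$\mathcal{C}^2$ property, following Rockafellar and Wets; this is background we supply, not text from the paper.

    % f is upper-C^2 on an open set V if, near each point of V, it admits
    % a representation as a pointwise minimum of C^2 functions:
    \[
      f(x) \;=\; \min_{t \in T} f_t(x),
    \]
    % with T compact and each f_t twice continuously differentiable.
    % Equivalently, for some rho >= 0 the shifted function
    \[
      x \;\mapsto\; f(x) - \tfrac{\rho}{2}\,\|x\|^2
    \]
    % is concave near each point, which is the "weakly concave" property
    % the abstract refers to.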