
    A globally convergent primal-dual interior-point filter method for nonlinear programming

    In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primal-dual interior-point algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primal-dual step obtained from the perturbed first-order necessary conditions into a normal and a tangential step, whose sizes are controlled by a trust-region-type parameter. Each entry in the filter is a pair of coordinates: one resulting from feasibility and centrality, and associated with the normal step; the other resulting from optimality (complementarity and duality), and related to the tangential step. Global convergence to first-order critical points is proved for the new primal-dual interior-point filter algorithm.
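
The core mechanism shared by the filter methods in this list can be sketched as follows: each filter entry is a pair of measures, and a trial point is accepted only if no stored entry dominates it. This is a minimal illustration in the Fletcher-Leyffer style; the function name, the margin `gamma`, and the example values are assumptions, not taken from the paper above (whose two coordinates combine feasibility/centrality and optimality measures).

```python
# Sketch of a filter acceptance test (Fletcher & Leyffer style).
# An entry (theta, f) pairs an infeasibility measure theta with an
# optimality measure f; it dominates points that are worse in both.

def acceptable_to_filter(theta_new, f_new, filter_entries, gamma=1e-5):
    """A trial point is acceptable if, for every filter entry, it
    improves either measure by at least a small margin gamma."""
    return all(
        theta_new <= (1 - gamma) * theta or f_new <= f - gamma * theta
        for theta, f in filter_entries
    )

# Example filter with entries (infeasibility, objective)
filt = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable_to_filter(0.4, 7.0, filt))  # improves feasibility vs. both
print(acceptable_to_filter(0.9, 6.5, filt))  # dominated by (0.5, 6.0)
```

Accepted points are then added to the filter, which steadily rules out regions of the (infeasibility, optimality) plane and replaces the penalty-parameter bookkeeping of merit-function approaches.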

    Constrained Optimization Involving Expensive Function Evaluations: A Sequential Approach

    This paper presents a new sequential method for constrained non-linear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods, based on derivatives, are not applicable because derivative information is often unavailable and too expensive to approximate through finite differencing. The algorithm first creates an experimental design, and the underlying functions are evaluated in the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e., the points are located in such a way that they result in a bad approximation of the actual model, then we evaluate a geometry-improving instead of an objective-improving point. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on getting good solutions with a limited number of function evaluations (not necessarily on reaching high accuracy).
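
One iteration of such a derivative-free sequential scheme can be sketched as: fit a local linear model to the evaluated design points, then step within the trust region. This is a simplified unconstrained sketch under assumed names; the paper's method additionally uses weighted regression, a filter criterion, and geometry-improving points, all omitted here for brevity.

```python
# One iteration of a sequential linear-surrogate / trust-region step.
# The quadratic f below stands in for an expensive FEM/CFD simulation.
import numpy as np

def fit_linear(X, y):
    """Least-squares linear model y ~ a + g.x (weights omitted)."""
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1:]            # intercept, gradient estimate

def trust_region_step(x_center, grad, radius):
    """Minimizer of the linear model on the trust-region ball: a step
    of length `radius` along the negative estimated gradient."""
    norm = np.linalg.norm(grad)
    return x_center if norm == 0 else x_center - radius * grad / norm

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # black-box stand-in

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([f(x) for x in X])          # evaluate the experimental design
a, g = fit_linear(X, y)
x_new = trust_region_step(X[y.argmin()], g, radius=0.5)
print(x_new, f(x_new) < y.min())         # candidate improves the best point
```

In the full method the new point (and its constraint violations) would be screened through the filter criterion, and the trust region would shrink whenever no acceptable point is found.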

    Global Convergence of a New Nonmonotone Filter Method for Equality Constrained Optimization

    A new nonmonotone filter trust-region method is introduced for solving optimization problems with equality constraints. This method directly uses the dominated area of the filter as an acceptability criterion for trial points and allows the dominated area to decrease nonmonotonically. Compared with standard filter-type methods, our method has more flexible criteria and can avoid the Maratos effect to a certain degree. Under reasonable assumptions, we prove that the given algorithm is globally convergent to a first-order stationary point for all possible choices of the starting point. Numerical tests are presented to show the effectiveness of the proposed algorithm.
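
The "dominated area" this abstract refers to can be pictured as the area of the region, inside a bounding box, dominated by at least one filter entry; an area-based criterion then measures how much a trial point enlarges (or is allowed to shrink) that region. The following is an illustrative computation under an assumed setup, not the paper's exact definition.

```python
# Area inside [0, theta_max] x [0, f_max] dominated by a filter, where an
# entry (theta, f) dominates all points with larger-or-equal infeasibility
# AND larger-or-equal objective. Illustrative definition only.

def dominated_area(entries, theta_max, f_max):
    # Keep the non-dominated "staircase": increasing theta, decreasing f
    frontier, best_f = [], float("inf")
    for theta, f in sorted(entries):
        if f < best_f:
            frontier.append((theta, f))
            best_f = f
    # Sum the rectangles between consecutive staircase corners
    area = 0.0
    for i, (theta, f) in enumerate(frontier):
        next_theta = frontier[i + 1][0] if i + 1 < len(frontier) else theta_max
        area += (next_theta - theta) * (f_max - f)
    return area

# Two entries: (1,3) covers [1,4]x[3,5]; (2,1) covers [2,4]x[1,5]
print(dominated_area([(1.0, 3.0), (2.0, 1.0)], theta_max=4.0, f_max=5.0))
```

A trial point that adds a large new rectangle to this union is a strong candidate for acceptance; the nonmonotone variant tolerates occasional decreases of the total area, which is what gives the method its extra flexibility against the Maratos effect.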

    Accelerated optimization of mixed EM/circuit structures

    We review recent developments in Space Mapping (SM) optimization. The Aggressive Space Mapping (ASM) technique is illustrated through a step-by-step numerical example based on the Rosenbrock function. The Trust Region Aggressive Space Mapping (TRASM) algorithm is described. TRASM integrates a trust-region methodology with the ASM technique. It improves the uniqueness of the extraction phase by utilizing a recursive multi-point parameter extraction process. The algorithm is illustrated by the design of an HTS filter using Sonnet's em. The new Hybrid Aggressive Space Mapping (HASM) algorithm is briefly reviewed. It is based on a novel lemma that enables smooth switching from SM optimization to direct optimization if SM is not converging. It is illustrated by the design of a six-section H-plane waveguide filter.
    Consejo Nacional de Ciencia y Tecnología
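
The ASM idea behind these algorithms can be sketched on a Rosenbrock-style toy problem, echoing the step-by-step example the abstract mentions: parameter extraction maps a fine-model point to the coarse point with matching response, and a quasi-Newton iteration drives that mapped point to the coarse-model optimum. Everything below (the shift, the closed-form extraction, the fixed Broyden matrix) is an illustrative assumption, not the authors' setup.

```python
# Conceptual Aggressive Space Mapping (ASM) sketch: the "fine" model is
# Rosenbrock, the "coarse" model a shifted copy, so parameter extraction
# has a closed form. In real EM design both would be simulations.
import numpy as np

rosenbrock = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
shift = np.array([0.1, -0.1])
coarse = lambda x: rosenbrock(x + shift)       # cheap, misaligned model

x_c_star = np.array([1.0, 1.0]) - shift        # coarse-model optimum

def extract(x_f):
    """Parameter extraction: coarse point whose response matches the
    fine model at x_f (coarse(x_c) = fine(x_f) gives x_c = x_f - shift)."""
    return x_f - shift

# ASM drives the residual extract(x_f) - x_c_star to zero.
B = np.eye(2)                  # Jacobian estimate (Broyden update omitted;
x_f = np.array([0.0, 0.0])     # the identity suffices for this linear map)
for _ in range(5):
    residual = extract(x_f) - x_c_star
    x_f = x_f - np.linalg.solve(B, residual)   # quasi-Newton step
print(x_f)                     # converges to the fine-model optimum (1, 1)
```

TRASM adds a trust region around each such step and a multi-point extraction to make `extract` well defined when a single response match is ambiguous; HASM falls back to direct optimization of the fine model when this iteration stalls.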

    Variable-fidelity optimization of microwave filters using co-kriging and trust regions

    In this paper, a variable-fidelity optimization methodology for simulation-driven design optimization of filters is presented. We exploit electromagnetic (EM) simulations of different accuracy. Densely sampled but cheap low-fidelity EM data is utilized to create a fast kriging interpolation model (the surrogate), subsequently used to find an optimum design of the high-fidelity EM model of the filter under consideration. The high-fidelity data accumulated during the optimization process is combined with the existing surrogate using the co-kriging technique. This allows us to improve the surrogate model accuracy while approaching the optimum. The convergence of the algorithm is ensured by embedding it into the trust-region framework that adaptively adjusts the search radius based on the quality of the predictions made by the co-kriging model. Three filter design cases are given for demonstration and verification purposes.
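
The adaptive radius adjustment described above follows the classical trust-region pattern: compare the actual high-fidelity improvement with the improvement the surrogate predicted, then grow or shrink the search radius accordingly. A minimal sketch, with conventional threshold values that are assumptions rather than the paper's settings:

```python
# Trust-region radius update driven by surrogate prediction quality.
# rho = (actual improvement) / (improvement predicted by the surrogate).

def update_radius(rho, radius, r_min=1e-3, r_max=2.0):
    if rho < 0.25:                 # poor prediction: shrink the region
        radius = max(r_min, 0.25 * radius)
    elif rho > 0.75:               # good prediction: expand the search
        radius = min(r_max, 2.0 * radius)
    return radius                  # otherwise keep the radius unchanged

print(update_radius(0.9, 0.5))   # good agreement: radius doubled
print(update_radius(0.1, 0.5))   # poor agreement: radius quartered
```

In the co-kriging setting, each accepted high-fidelity evaluation also feeds back into the surrogate, so the predictions (and hence `rho`) tend to improve as the iterate approaches the optimum.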