    Constrained Optimization Involving Expensive Function Evaluations: A Sequential Approach

    This paper presents a new sequential method for constrained nonlinear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods based on derivatives are not applicable, because derivative information is often unavailable and too expensive to approximate through finite differencing. The algorithm first creates an experimental design, and the underlying functions are evaluated at the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e., the points are located in such a way that they yield a poor approximation of the actual model, then a geometry-improving point is evaluated instead of an objective-improving one. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is governed by the size of this trust region. The focus of the approach is on obtaining good solutions with a limited number of function evaluations, not necessarily on reaching high accuracy.
    Keywords: optimization; nonlinear programming
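    The Python sketch below illustrates the core loop of such a derivative-free method on a toy problem: fit a local linear model by weighted regression, minimize it over the trust region, and either accept the new point or shrink the region. The toy objective, the weighting scheme, and the shrink factor are illustrative assumptions, and the constraints, filter criterion, and geometry-improving steps of the paper are omitted.

```python
# Minimal sketch of a sequential trust-region loop with weighted linear
# regression models. All names and the shrink rule are illustrative, not
# the authors' implementation.
import numpy as np

def expensive_objective(x):
    # Stand-in for a costly simulation (e.g., FEM/CFD) response.
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def fit_local_linear(X, y, center, radius):
    # Weighted regression: points near the trust-region center weigh more.
    w = np.exp(-np.linalg.norm(X - center, axis=1) / radius)
    A = np.hstack([np.ones((len(X), 1)), X])           # model y ~ b0 + g.x
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    return coef[0], coef[1:]                           # intercept, gradient

rng = np.random.default_rng(0)
center, radius = np.zeros(2), 1.0
X = center + radius * rng.uniform(-1, 1, size=(6, 2))  # initial design
y = np.array([expensive_objective(x) for x in X])

for _ in range(30):
    b0, g = fit_local_linear(X, y, center, radius)
    # A linear model is minimized on the trust-region boundary, opposite g.
    step = -radius * g / (np.linalg.norm(g) + 1e-12)
    candidate = center + step
    f_new = expensive_objective(candidate)             # one new evaluation
    X, y = np.vstack([X, candidate]), np.append(y, f_new)
    if f_new < y[:-1].min():
        center = candidate                             # move the trust region
    else:
        radius *= 0.5                                  # no progress: shrink
print(center, radius)
```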

    Sinc-Galerkin estimation of diffusivity in parabolic problems

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains, yielding an approximate solution that displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust-region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data and for those whose data contain white noise.
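    As a rough illustration of the Tikhonov/L-curve machinery the abstract describes, the sketch below regularizes a generic ill-posed linear problem rather than the paper's sinc-discretized parabolic forward problem; the operator, noise level, and lambda grid are all illustrative assumptions.

```python
# Tikhonov regularization of a noisy, ill-conditioned problem A x = b,
# tabulating the two L-curve quantities (residual norm vs. solution norm).
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Severely ill-conditioned forward operator (discrete smoothing kernel).
s, t = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
A = np.exp(-(s - t) ** 2 / 0.01) / n
x_true = np.sin(np.pi * np.linspace(0, 1, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)         # data with white noise

def tikhonov(lam):
    # Minimize ||A x - b||^2 + lam^2 ||x||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

# The L-curve plots log residual norm against log solution norm over a
# lambda grid; its "corner" suggests the regularization parameter.
for lam in [1e-6, 1e-4, 1e-2, 1e0]:
    x = tikhonov(lam)
    print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - b):.2e}  "
          f"size={np.linalg.norm(x):.2e}")
```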

    Successive Convexification of Non-Convex Optimal Control Problems and Its Convergence Properties

    This paper presents an algorithm to solve non-convex optimal control problems, where non-convexity can arise from nonlinear dynamics and from non-convex state and control constraints. The paper assumes that the state and control constraints are already convex or convexified; the proposed algorithm then convexifies the nonlinear dynamics, via linearization, in a successive manner. Thus, at each succession a convex optimal control subproblem is solved. Since the dynamics are linearized and the other constraints are convex, after discretization the subproblem can be expressed as a finite-dimensional convex programming subproblem. Because convex optimization problems can be solved very efficiently, especially with custom solvers, this subproblem can be solved in time-critical applications such as real-time path planning for autonomous vehicles. Several safeguarding techniques are incorporated into the algorithm, namely virtual control and trust regions, which add another layer of algorithmic robustness. A convergence analysis is presented in the continuous-time setting; by doing so, the convergence results are independent of any numerical scheme used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
    Comment: updated to correct the wording for LICQ. This is the full version; a brief version of this paper is published in the 2016 IEEE 55th Conference on Decision and Control (CDC). http://ieeexplore.ieee.org/document/7798816
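    A minimal sketch of one such successive convexification loop is given below, on an assumed toy 1-D system rather than the paper's general formulation, using cvxpy for the convex subproblems. The dynamics, penalty weight, and trust-region radius are illustrative choices, but the structure (linearize about the previous trajectory, add virtual control and a trust region, re-solve) follows the idea the abstract describes.

```python
# Successive convexification on x_{k+1} = x_k + dt * (-x_k^3 + u_k):
# linearize the dynamics about a reference trajectory, solve a convex
# subproblem with virtual control and a trust region, then re-linearize.
import numpy as np
import cvxpy as cp

N, dt = 30, 0.1
x_ref = np.linspace(1.0, 0.0, N + 1)                   # initial state guess

for it in range(10):
    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    nu = cp.Variable(N)                                # virtual control slack
    cons = [x[0] == 1.0, x[N] == 0.0]
    for k in range(N):
        # Linearize f(x) = -x^3 about the reference trajectory:
        # f(x) ~ -xr^3 - 3 xr^2 (x - xr)
        xr = x_ref[k]
        f_lin = -xr ** 3 - 3 * xr ** 2 * (x[k] - xr)
        cons.append(x[k + 1] == x[k] + dt * (f_lin + u[k]) + nu[k])
    cons.append(cp.abs(x - x_ref) <= 0.5)              # trust region
    # Penalize virtual control heavily so it vanishes at convergence.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + 1e4 * cp.norm1(nu)),
                      cons)
    prob.solve()
    if np.max(np.abs(x.value - x_ref)) < 1e-6:
        break                                          # trajectory converged
    x_ref = x.value                                    # re-linearize, repeat
print("iterations:", it + 1, "max |nu|:", np.max(np.abs(nu.value)))
```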

    Decomposition Algorithms for Stochastic Programming on a Computational Grid

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type, with the workers being used to solve second-stage problems, and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.
    Comment: 44 pages
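    The sketch below runs a serial L-shaped (Benders) iteration on an assumed toy two-stage problem with a scalar first-stage decision, solving the master with scipy; it does not reproduce the paper's asynchronous, MW/Condor-based implementation, and the data and tolerance are illustrative.

```python
# Serial L-shaped sketch for  min_x c*x + E[Q(x, xi)]  with recourse
#   Q(x, xi) = min { q*y : y >= xi - x, y >= 0 },
# adding one optimality cut per iteration until theta matches E[Q].
import numpy as np
from scipy.optimize import linprog

c, q = 1.0, 3.0
scenarios = np.array([1.0, 2.0, 4.0])                  # equally likely demands
probs = np.full(3, 1.0 / 3.0)

cuts_A, cuts_b = [], []                                # cuts: g*x - theta <= rhs
for it in range(20):
    # Master over z = (x, theta): min c*x + theta s.t. accumulated cuts.
    A = np.array(cuts_A) if cuts_A else None
    b = np.array(cuts_b) if cuts_b else None
    res = linprog(c=[c, 1.0], A_ub=A, b_ub=b,
                  bounds=[(0, None), (0, None)])
    x, theta = res.x
    # The recourse LP has a closed form here: Q = q*max(xi - x, 0), with
    # subgradient -q on scenarios where xi > x (from the LP dual).
    Q = probs @ (q * np.maximum(scenarios - x, 0.0))
    g = probs @ np.where(scenarios > x, -q, 0.0)
    if Q <= theta + 1e-8:
        break                                          # expected recourse matched
    cuts_A.append([g, -1.0])                           # theta >= Q + g*(x' - x)
    cuts_b.append(g * x - Q)
print(f"x*={x:.3f}, cost={c * x + Q:.3f}, iterations={it + 1}")
```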