
    Gradient Estimation using Lagrange Interpolation Polynomials

    In this paper we use Lagrange interpolation polynomials to obtain good gradient estimates. This is important, for example, for nonlinear programming solvers. As an error criterion we take the mean squared error, which can be split into a deterministic and a stochastic error. We analyze these errors using (N times replicated) Lagrange interpolation polynomials. We show that the mean squared error is of order N^(-1 + 1/(2d)) if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N^(-1) as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. Finally, we consider the case of a fixed budget of evaluations and provide an optimal division between the number of replicates and the number of evaluations per replicate.

    Keywords: estimation; interpolation; polynomials; nonlinear programming
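The replication idea in this abstract can be sketched as follows. This is a minimal illustration only, using central differences (the simplest interpolation-based estimator, with 2 evaluations per coordinate, i.e. 2d per replicate) rather than the paper's general Lagrange polynomials; the noisy test function and all parameter values are hypothetical.

```python
import numpy as np

def replicated_gradient(f, x, h, N):
    """Estimate the gradient of a noisy function f at x by averaging
    N replicates of a central-difference scheme (2d evaluations each)."""
    d = len(x)
    estimates = np.zeros((N, d))
    for r in range(N):
        for i in range(d):
            e = np.zeros(d)
            e[i] = h
            # 2 evaluations per coordinate -> 2d evaluations per replicate
            estimates[r, i] = (f(x + e) - f(x - e)) / (2 * h)
    # Averaging the replicates drives down the stochastic part of the error.
    return estimates.mean(axis=0)

# Toy noisy function: f(x) = ||x||^2 + noise, so the true gradient is 2x.
rng = np.random.default_rng(0)
f = lambda x: float(x @ x) + rng.normal(scale=1e-3)
g = replicated_gradient(f, np.array([1.0, -0.5]), h=1e-2, N=50)
```

Averaging over N = 50 replicates reduces the standard deviation of the stochastic error by a factor of roughly sqrt(50), so g should lie close to the true gradient (2, -1).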

    Gradient Estimation Schemes for Noisy Functions

    In this paper we analyze different schemes for obtaining gradient estimates when the underlying function is noisy. Good gradient estimation is important, for example, for nonlinear programming solvers. As an error criterion we take the norm of the difference between the real and the estimated gradient. This error can be split into a deterministic and a stochastic error. For three finite-difference schemes and two Design of Experiments (DoE) schemes we analyze both the deterministic and the stochastic errors. We also derive optimal step sizes for each scheme, such that the total error is minimized. Some of the schemes have the nice property that this step size also minimizes the variance of the error. Based on these results we show that, to obtain good gradient estimates for noisy functions, it is worthwhile to use DoE schemes, and we recommend implementing such schemes in NLP solvers.

    Keywords: nonlinear programming; finite elements; gradient estimation
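The trade-off behind the optimal step size can be seen in a small experiment. This sketch uses a central-difference scheme on a hypothetical noisy function (not one of the paper's examples): the deterministic (truncation) error shrinks like h^2 while the stochastic error grows like sigma/h, so a very small step is dominated by noise.

```python
import numpy as np

def central_diff(f, x, h):
    """Central-difference derivative estimate with step size h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Noisy version of sin(x); the exact derivative at x = 1 is cos(1).
rng = np.random.default_rng(1)
sigma = 1e-4
noisy = lambda x: np.sin(x) + rng.normal(scale=sigma)

true_grad = np.cos(1.0)
errors = {h: abs(central_diff(noisy, 1.0, h) - true_grad)
          for h in (1e-1, 1e-2, 1e-3, 1e-6)}
# Tiny steps amplify the noise term sigma/h; a moderate step keeps both
# the truncation error (~h^2) and the stochastic error small.
```

With sigma = 1e-4, the step h = 1e-2 balances the two error sources, whereas h = 1e-6 blows the noise up by a factor of about 1/h.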

    Constrained Optimization Involving Expensive Function Evaluations: A Sequential Approach

    This paper presents a new sequential method for constrained non-linear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods, based on derivatives, are not applicable because derivative information is often not available and is too expensive to approximate through finite differencing. The algorithm first creates an experimental design and evaluates the underlying functions in the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations, combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e. the points are located in such a way that they yield a poor approximation of the actual model, then we evaluate a geometry-improving instead of an objective-improving point. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on obtaining good solutions with a limited number of function evaluations, not necessarily on reaching high accuracy.

    Keywords: optimization; nonlinear programming
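The "local linear approximation via weighted regression" step can be sketched in a few lines. The distance-based weighting scheme below is an illustrative choice, not the paper's actual weighting; it shows the general idea that points near the trust-region centre influence the local model most.

```python
import numpy as np

def local_linear_model(X, y, centre, radius):
    """Fit an intercept-plus-linear model by weighted least squares,
    down-weighting sample points far from the trust-region centre."""
    d = np.linalg.norm(X - centre, axis=1)
    w = np.exp(-(d / radius) ** 2)            # illustrative weight choice
    A = np.hstack([np.ones((len(X), 1)), X])  # columns: intercept, x1, x2, ...
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    return beta  # beta[0] = intercept, beta[1:] = slope of the local model

# Four design points with exactly linear responses y = 2 + 3*x1 - x2,
# so the weighted fit should recover the coefficients exactly.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = 2.0 + 3.0 * X[:, 0] - 1.0 * X[:, 1]
beta = local_linear_model(X, y, centre=np.array([0.5, 0.5]), radius=1.0)
```

In the sequential method this fitted linear model would then be minimized over the trust region to propose the next evaluation point.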

    A Game Theoretic Approach to Analyse Cooperation between Rural Households in Northern Nigeria

    To improve the livelihood of the poor in Sub-Saharan Africa (SSA), much attention has been paid to the development of new agricultural technologies. We hypothesize that farmers can also improve their livelihood through cooperation. Partial cooperation, in which knowledge is shared or bargaining power improved, is relatively common in SSA, while cooperation in which all resources are fully shared, which we address, has rarely been investigated. An important prerequisite for establishing such cooperation is a fair division rule for the gains of the cooperation. This paper combines linear programming and cooperative game theory to model the effects of cooperation of (individual) households on income and farm plans. Linear programming provides insight into the optimal farm plans under cooperation, and cooperative game theory is used to generate fair division rules. The model is applied to a village in Northern Nigeria. Households are clustered based on socio-economic parameters, and we explore cooperation between clusters. Cooperation leads to increased income and to changes in farm plans, because more efficient use of resources leads to more intensified agriculture (labour-intensive, high-value crops).

    Keywords: Cooperation; Linear Programming; Nigeria; Livelihood; Agricultural and Food Policy; Community/Rural/Urban Development; Consumer/Household Economics; Environmental Economics and Policy; Food Consumption/Nutrition/Food Safety; Food Security and Poverty; International Relations/Trade; Marketing; Productivity Analysis; Research and Development/Tech Change/Emerging Technologies
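One standard fair division rule from cooperative game theory is the Shapley value, which pays each player its average marginal contribution over all orders in which the coalition could form. The sketch below computes it by enumerating permutations; the two-cluster coalition incomes are hypothetical numbers for illustration, not results from the paper.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value: average marginal contribution of each player
    over all orders in which players join the grand coalition."""
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: value[p] / n_orders for p in players}

# Hypothetical incomes for two household clusters A and B: each earns
# 10 resp. 20 alone, and 40 together (a cooperation gain of 10).
income = {frozenset(): 0, frozenset('A'): 10,
          frozenset('B'): 20, frozenset('AB'): 40}
fair = shapley(['A', 'B'], lambda S: income[S])
```

Here the gain of 10 is split equally, so A receives 15 and B receives 25; by construction the shares always sum to the grand-coalition income, which is what makes the rule usable as a division scheme.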

    Why Methods for Optimization Problems with Time-Consuming Function Evaluations and Integer Variables Should Use Global Approximation Models

    This paper advocates the use of methods based on global approximation models for optimization problems with time-consuming function evaluations and integer variables. We show that methods based on local approximations may lead to the integer rounding of the optimal solution of the continuous problem, and even to worse solutions. We then discuss a method based on global approximations. Test results show that such a method performs well, both on theoretical and on practical examples, without suffering the disadvantages of methods based on local approximations.

    Keywords: approximation models; black-box optimization; integer optimization
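The warning about rounding can be seen in a toy one-dimensional example (constructed for illustration; it is not from the paper). The function below is deliberately asymmetric around its continuous minimiser, so the nearest integer to the continuous optimum is not the integer optimum.

```python
def f(x):
    """Asymmetric objective: steep to the left of x* = 1.4, flat to the right."""
    return (x - 1.4) ** 2 if x <= 1.4 else 0.1 * (x - 1.4) ** 2

x_cont = 1.4                       # continuous minimiser
x_round = round(x_cont)            # nearest integer: 1
x_int = min(range(-5, 6), key=f)   # true integer optimum on a small grid
# f(2) = 0.036 < f(1) = 0.16, so rounding the continuous optimum
# misses the integer optimum.
```

A global approximation model fitted over the whole integer range would see the flat right-hand side and prefer x = 2, whereas a purely local view around x = 1.4 offers no reason to look beyond the rounded point.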

    Weakly cyclic graphs and delivery games
