    An interior point method for nonlinear constrained derivative-free optimization

    In this paper we consider constrained optimization problems where both the objective and constraint functions are of the black-box type. Furthermore, we assume that the nonlinear inequality constraints are non-relaxable, i.e., their values and that of the objective function cannot be computed outside of the feasible region. This situation happens frequently in practice, especially in the black-box setting, where function values are typically computed by means of complex simulation programs which may fail to execute if the considered point is outside of the feasible region. For such problems, we propose a new derivative-free optimization method based on a merit function that handles inequality constraints by means of a log-barrier approach and equality constraints by means of a quadratic penalty approach. We prove convergence of the proposed method to KKT stationary points of the problem under quite mild assumptions. Furthermore, we carry out preliminary numerical experiments on standard test problems and a comparison with a state-of-the-art solver, which shows the efficiency of the proposed method.
    Comment: We dropped the convexity assumption to reflect that convexity is no longer required, we changed the theoretical analysis, and the exposition of the main algorithm has changed: we first present a simpler method and then the main algorithm. Numerical results have been considerably extended by adding some comparisons
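As a rough illustration of the merit function described in this abstract (log barrier for the non-relaxable inequalities, quadratic penalty for the equalities), the Python sketch below shows one possible form. The function names, the `0.5 * rho` penalty scaling, and the convention g_i(x) <= 0 are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def merit(x, f, g_ineq, h_eq, mu, rho):
    """Illustrative log-barrier / quadratic-penalty merit function.

    f      -- objective, callable x -> float
    g_ineq -- list of inequality constraints, each g_i(x) <= 0 (non-relaxable)
    h_eq   -- list of equality constraints, each h_j(x) = 0
    mu     -- barrier parameter (> 0)
    rho    -- penalty parameter (> 0)
    """
    g_vals = np.array([g(x) for g in g_ineq])
    if np.any(g_vals >= 0):
        # Non-relaxable inequalities: the merit function is undefined
        # (treated as +inf) outside the strictly feasible region.
        return np.inf
    h_vals = np.array([h(x) for h in h_eq])
    return (f(x)
            - mu * np.sum(np.log(-g_vals))       # log-barrier term
            + 0.5 * rho * np.sum(h_vals ** 2))   # quadratic-penalty term

# Toy example: minimize x0 + x1 subject to x0**2 + x1**2 - 1 <= 0 and x0 - x1 = 0.
x0 = np.array([0.1, 0.2])
val = merit(x0,
            f=lambda x: x[0] + x[1],
            g_ineq=[lambda x: x[0]**2 + x[1]**2 - 1.0],
            h_eq=[lambda x: x[0] - x[1]],
            mu=0.1, rho=10.0)
```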

    A primal-dual interior-point relaxation method with adaptively updating barrier for nonlinear programs

    Based on solving an equivalent parametric equality-constrained mini-max problem of the classic logarithmic-barrier subproblem, we present a novel primal-dual interior-point relaxation method for nonlinear programs. In the proposed method, the barrier parameter is updated in every step, as is done in interior-point methods for linear programs, which is markedly different from the existing interior-point methods and relaxation methods for nonlinear programs. Since our update for the barrier parameter is autonomous and adaptive, the method has the potential to avoid the difficulties caused by an inappropriate initial selection of the barrier parameter and to speed up convergence to the solution. Moreover, it can circumvent the jamming difficulty of global convergence caused by the interior-point restriction for nonlinear programs and improve the ill-conditioning of existing primal-dual interior-point methods when the barrier parameter is small. Under suitable assumptions, our method is proved to be globally convergent and locally quadratically convergent. Preliminary numerical results on a well-posed problem for which many line-search interior-point methods fail to find the minimizer, and on a set of test problems from the CUTE collection, show that our method is efficient.
    Comment: submitted to SIOPT on April 14, 202
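The abstract's key point is that the barrier parameter is updated autonomously at every iteration, in the spirit of interior-point methods for linear programs. A common LP-style rule ties the new barrier parameter to the current average complementarity; the sketch below illustrates that idea only. The rule, the centering factor `sigma`, and the variable names are assumptions, not the update actually proposed in the paper.

```python
import numpy as np

def lp_style_barrier_update(x, z, sigma=0.2):
    """Sketch of an LP-style adaptive barrier update: set the new mu to a
    fraction of the current average complementarity x_i * z_i.
    (Illustrative only; not the paper's specific autonomous rule.)
    """
    n = len(x)
    avg_complementarity = float(np.dot(x, z)) / n   # duality-gap measure
    return sigma * avg_complementarity              # new barrier parameter mu

# Example: primal slacks x and dual multipliers z at the current iterate.
x = np.array([0.5, 1.0, 0.2])
z = np.array([0.1, 0.05, 0.3])
mu_next = lp_style_barrier_update(x, z)   # 0.2 * (0.05 + 0.05 + 0.06) / 3
print(mu_next)
```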

    A one-phase interior point method for nonconvex optimization

    The work of Wachter and Biegler suggests that infeasible-start interior point methods (IPMs) developed for linear programming cannot be adapted to nonlinear optimization without significant modification, i.e., using a two-phase or penalty method. We propose an IPM that, by careful initialization and updates of the slack variables, is guaranteed to find a first-order certificate of local infeasibility, local optimality, or unboundedness of the (shifted) feasible region. Our proposed algorithm differs from other IPMs for nonconvex programming because we reduce primal feasibility at the same rate as the barrier parameter. This gives an algorithm with more robust convergence properties that closely resembles successful algorithms from linear programming. We implement the algorithm and compare it with IPOPT on a subset of CUTEst problems. Our algorithm requires a similar median number of iterations, but fails on only 9% of the problems compared with 16% for IPOPT. Experiments on infeasible variants of the CUTEst problems indicate superior performance for detecting infeasibility. The code for our implementation can be found at https://github.com/ohinder/OnePhase .
    Comment: fixed typo in sign of dual multiplier in KKT system
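The distinguishing idea in this abstract is that primal infeasibility is driven to zero at the same rate as the barrier parameter. The toy loop below only illustrates that coupling: both quantities shrink by a common factor each iteration. The factor `beta` and the function name are hypothetical; the actual algorithm chooses its steps quite differently.

```python
def coupled_reduction(mu, primal_infeasibility, beta=0.5):
    """Illustrative coupling: the barrier parameter and the target primal
    infeasibility are decreased at the same rate (beta is a hypothetical factor)."""
    return beta * mu, beta * primal_infeasibility

mu, infeas = 1.0, 1.0
for k in range(5):
    mu, infeas = coupled_reduction(mu, infeas)
    print(f"iter {k}: mu = {mu:.4f}, primal infeasibility target = {infeas:.4f}")
```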