10 research outputs found

    Improving the Robustness of Difference of Convex Algorithm in the Research of a Global Optimum of a Nonconvex Differentiable Function Defined on a Bounded Closed Interval

    Get PDF
    In this paper we present an algorithm for solving a nonconvex DC problem on an interval [a, b] of R. We use the DCA (Difference of Convex Algorithm) together with the minimum of the average of two approximations of the function built from a and b. This strategy generally yields a starting minimum located in the attraction zone of the global minimum sought, so that applying the DCA from this minimum reaches the sought global minimum.
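    As an illustration of the basic DCA step this abstract builds on (not the paper's full averaging strategy), here is a minimal sketch for one assumed DC decomposition, f(x) = x^4 - x^2 with g(x) = x^4 and h(x) = x^2; the function and decomposition are our own choice, picked so the convex subproblem has a closed-form solution.

```python
import math

def dca(x0, iters=60):
    """One-dimensional DCA for f(x) = g(x) - h(x) with the assumed
    decomposition g(x) = x**4 (convex) and h(x) = x**2 (convex):
    linearize h at x_k and minimize the convex model g(x) - h'(x_k)*x.

    For g(x) = x**4 the subproblem min_x x**4 - c*x has the closed-form
    solution x = sign(c) * (|c|/4)**(1/3), with c = h'(x_k) = 2*x_k.
    """
    x = x0
    for _ in range(iters):
        c = 2.0 * x                                        # gradient of h at x_k
        x = math.copysign(abs(c / 4.0) ** (1.0 / 3.0), c)  # argmin of convex model
    return x

x_star = dca(0.9)   # tends to 1/sqrt(2), a global minimizer of x**4 - x**2
```

    Started inside the attraction zone of a global minimizer, the iteration converges to it; started near 0.9 it reaches 1/sqrt(2) rather than the local stationary point at 0.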

    New technique for solving univariate global optimization

    Get PDF
    In this paper, a new global optimization method is proposed for optimization problems with a twice differentiable objective function of a single variable under a box constraint. The method employs the difference between a linear interpolant of the objective and a concave function, which yields a continuous piecewise convex quadratic underestimator. The main objective of this research is to obtain a lower bound that does not require an iterative local optimizer. The proposed method is proven to converge finitely to the global optimum point. Numerical experiments indicate that the proposed method is competitive with other covering methods
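    A hedged sketch of a lower bound of this kind (the paper's exact construction may differ): on [a, b], with K an upper bound on f'', the convex quadratic q(x) = l(x) - (K/2)(x - a)(b - x), where l is the linear interpolant of f at a and b, underestimates f, and its minimum has a closed form, so no iterative local optimizer is needed.

```python
def quadratic_lower_bound(f, a, b, K):
    """Lower bound from q(x) = l(x) - (K/2)(x - a)(b - x), K >= max f''.

    Returns (min of q on [a, b], its minimizer).
    """
    fa, fb = f(a), f(b)
    slope = (fb - fa) / (b - a)       # linear interpolant l(x) = fa + slope*(x - a)
    x = (a + b) / 2.0 - slope / K     # stationary point of q
    x = min(max(x, a), b)             # clip to [a, b]
    q = fa + slope * (x - a) - 0.5 * K * (x - a) * (b - x)
    return q, x

# f(x) = x**4 - x**2 on [-1, 1]; f'' <= 10 there, so K = 10 is valid
lb, xmin = quadratic_lower_bound(lambda x: x**4 - x**2, -1.0, 1.0, K=10.0)
```

    Here the bound is lb = -5 at x = 0, a valid (if loose on this wide interval) lower bound on the true minimum -0.25; the bound tightens quadratically as the interval shrinks.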

    Tighter bound functions for nonconvex functions over simplexes

    No full text
    In this paper, we propose new lower and upper bound functions which can be used to compute the range of nonconvex functions over simplexes of R^n, or to solve global optimization problems over simplexes. We show that the new bounding functions are tighter than the classical bounding functions developed in the αBB method and the QBB method
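    For reference, the classical αBB lower bounding function mentioned here has a simple closed form; this sketch states it over a box (the paper's simplex bounds are tighter refinements of this idea):

```python
# Classical αBB underestimator over a box [l, u] in R^n:
#   L(x) = f(x) - alpha * sum_i (x_i - l_i) * (u_i - x_i)
# L is convex for alpha large enough and satisfies L <= f on the box.

def alpha_bb_underestimator(f, x, l, u, alpha):
    shift = sum((xi - li) * (ui - xi) for xi, li, ui in zip(x, l, u))
    return f(x) - alpha * shift

f = lambda x: x[0] ** 2 * x[1]
# at a vertex of the box the shift term vanishes, so L and f coincide
at_vertex = alpha_bb_underestimator(f, (1.0, 1.0), (0.0, 0.0), (1.0, 1.0), alpha=2.0)
# in the interior L drops strictly below f
inside = alpha_bb_underestimator(f, (0.5, 0.5), (0.0, 0.0), (1.0, 1.0), alpha=2.0)
```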

    A NEW APPROACH FOR NONCONVEX SIP

    No full text
    Abstract: We propose a new method for solving nonconvex semi-infinite programming problems by using a concave overestimation of the semi-infinite constraints. At each iteration we locally solve a nonlinear programming problem, which yields a feasible point; for certain problems (e.g. in control systems design) feasibility is more important than optimality. If we decide to stop the algorithm after a finite number of iterations, we obtain either an optimal solution or a feasible approximate solution
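    The key idea can be sketched as a feasibility certificate: if a concave function c overestimates t ↦ g(x, t) on the parameter interval and max c ≤ 0, then x satisfies the semi-infinite constraint g(x, t) ≤ 0 for all t. A minimal sketch with an assumed concave quadratic overestimator; the coefficients g0, s, M are hypothetical inputs, not the paper's construction.

```python
def concave_quadratic_max(g0, s, M, t0, t1):
    """Max over [t0, t1] of the concave quadratic c(t) = g0 + s*t + 0.5*M*t**2 (M < 0)."""
    t = -s / M                  # unconstrained maximizer of c
    t = min(max(t, t0), t1)     # clip to the parameter interval
    return g0 + s * t + 0.5 * M * t * t

def certified_feasible(g0, s, M, t0, t1):
    """True when the overestimator certifies g(x, t) <= 0 for all t in [t0, t1]."""
    return concave_quadratic_max(g0, s, M, t0, t1) <= 0.0
```

    Because c overestimates g, the test can only err on the safe side: a True answer guarantees feasibility, which is why each iterate of such a method is feasible.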

    Convex quadratic underestimation and Branch and Bound for univariate global optimization with one nonconvex constraint

    Get PDF
    The purpose of this paper is to demonstrate that, to globally minimize one-dimensional nonconvex problems with a twice differentiable objective function and constraint, we can propose an efficient algorithm based on branch-and-bound techniques. The method is first presented in the simple case with an interval constraint, and then extended to the general case with an additional nonconvex twice differentiable constraint. A quadratic bounding function, tighter than the well-known linear underestimator, is proposed, and w-subdivision is added to support the branching procedure. Computational results on several and various types of functions show the efficiency of our algorithms and their superiority over existing methods
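    A minimal branch-and-bound sketch of the unconstrained interval case (our own simplification, not the paper's exact algorithm): lower bounds come from the convex quadratic underestimator q(x) = l(x) - (K/2)(x - a)(b - x) with K ≥ max f'', the best evaluated point is the incumbent, and each interval is subdivided at the minimizer of its underestimator (the w-subdivision idea).

```python
import heapq

def minimize_bb(f, a, b, K, tol=1e-6):
    """Branch and bound with quadratic lower bounds; K >= max f'' on [a, b]."""

    def lower(lo, hi):
        flo, fhi = f(lo), f(hi)
        s = (fhi - flo) / (hi - lo)
        x = min(max((lo + hi) / 2.0 - s / K, lo), hi)  # minimizer of q, clipped
        return flo + s * (x - lo) - 0.5 * K * (x - lo) * (hi - x), x

    best_x, best_f = min(((a, f(a)), (b, f(b))), key=lambda p: p[1])
    lb0, x0 = lower(a, b)
    heap = [(lb0, a, b, x0)]                           # ordered by lower bound
    while heap:
        lb, lo, hi, x = heapq.heappop(heap)
        if lb >= best_f - tol:
            break                                      # no region can improve the incumbent
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
        split = x if lo < x < hi else (lo + hi) / 2.0  # avoid degenerate children
        for c_lo, c_hi in ((lo, split), (split, hi)):
            if c_hi - c_lo > 1e-12:
                c_lb, c_x = lower(c_lo, c_hi)
                heapq.heappush(heap, (c_lb, c_lo, c_hi, c_x))
    return best_x, best_f

x_best, f_best = minimize_bb(lambda x: x**4 - x**2, -1.0, 1.0, K=10.0)
```

    Since the intervals in the heap partition the unexplored region, stopping once the smallest lower bound is within tol of the incumbent certifies the incumbent is tol-optimal.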

    Combination of two underestimators for univariate global optimization

    Get PDF
    In this work, we propose a new underestimator for branch-and-bound algorithms for solving univariate global optimization problems. The new underestimator is a combination of two underestimators: the classical one used in the αBB method (see Androulakis et al. [J. Glob. Optim. 7 (1995) 337–363]) and the quadratic underestimator developed in Hoai An and Ouanes [RAIRO: OR 40 (2006) 285–302]. We show that the new underestimator is tighter than both. A convex/concave test is used to accelerate the convergence of the proposed algorithm. The convergence of our algorithm is shown, and the set of test problems given in Casado et al. [J. Glob. Optim. 25 (2003) 345–362] is solved efficiently
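    One way to sketch the combination idea (our own formulation; the paper's construction and parameter choices may differ): the pointwise maximum of two underestimators is again an underestimator, and is by construction at least as tight as either one alone.

```python
def alpha_bb(f, a, b, alpha):
    """Classical αBB underestimator: f(x) - alpha*(x - a)*(b - x)."""
    return lambda x: f(x) - alpha * (x - a) * (b - x)

def quad_interp(f, a, b, K):
    """Interpolation-based quadratic underestimator, K >= max f'' on [a, b]."""
    fa, fb = f(a), f(b)
    s = (fb - fa) / (b - a)
    return lambda x: fa + s * (x - a) - 0.5 * K * (x - a) * (b - x)

def combined(f, a, b, alpha, K):
    """Pointwise max of the two underestimators: tighter than either alone."""
    u1, u2 = alpha_bb(f, a, b, alpha), quad_interp(f, a, b, K)
    return lambda x: max(u1(x), u2(x))

f = lambda x: x**4 - x**2               # f'' ranges over [-2, 10] on [-1, 1]
u = combined(f, -1.0, 1.0, alpha=1.0, K=10.0)
```

    Near the endpoints the αBB term dominates (it touches f at a and b through f itself), while in the interior the interpolation-based term can be the tighter of the two, which is what makes the combination useful.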

    WITHDRAWN: Nonconvex optimization based on DC programming and DCA in the search of a global optimum of a nonconvex function

    Get PDF
    This article has been withdrawn at the request of the author(s) and/or editor. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at http://www.elsevier.com/locate/withdrawalpolicy

    Computing real zeros of a polynomial by branch and bound and branch and reduce algorithms

    No full text
    In this paper we propose two algorithms, based on the branch-and-bound method and interval reduction techniques, to compute all real zeros of a polynomial. Quadratic bounding functions are proposed which are better than the well-known linear underestimator. Experimental results show the efficiency of the two algorithms when facing ill-conditioned polynomials
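    A toy branch-and-prune sketch in the spirit of this approach (the pruning rule below is a simple derivative bound of our own, not the paper's quadratic bounding functions): an interval is discarded when |p| provably stays positive on it, otherwise it is bisected until its width falls below a tolerance.

```python
def horner(coeffs, x):
    """Evaluate a polynomial; coeffs are listed from highest degree down."""
    r = 0.0
    for c in coeffs:
        r = r * x + c
    return r

def real_zeros(coeffs, a, b, tol=1e-10):
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]   # coefficients of p'
    out = []
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        m, w = (lo + hi) / 2.0, hi - lo
        r = max(abs(lo), abs(hi))
        m1 = sum(abs(c) * r ** (len(deriv) - 1 - i)            # crude bound on |p'|
                 for i, c in enumerate(deriv))
        if abs(horner(coeffs, m)) > m1 * w / 2.0:
            continue                                           # p cannot vanish: prune
        if w < tol:
            out.append(m)                                      # candidate zero
        else:
            stack.append((lo, m))
            stack.append((m, hi))
    out.sort()
    merged = []
    for z in out:                                              # merge near-duplicate candidates
        if not merged or z - merged[-1] > 2.0 * tol:
            merged.append(z)
    return merged

# real zeros of x**2 - 1 on [-2, 2]
zs = real_zeros([1.0, 0.0, -1.0], -2.0, 2.0)
```

    The prune test is valid because |p(x) - p(m)| ≤ M1·w/2 on the interval when M1 bounds |p'|; the tighter quadratic bounds of the paper would discard intervals earlier, which matters for ill-conditioned polynomials.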