
Convergence Speed of an Integral Method for Computing the Essential Supremum

We give an equivalence between the tasks of computing the essential supremum of a summable function and of finding a certain zero of a one-dimensional convex function. Interpreting the integral method as a Newton-type method, we show that in the case of objective functions with an essential supremum that is not spread, the algorithm can work very slowly. For this reason we propose a method of accelerating the algorithm which is in some respects similar to the method of Aitken/Steffensen.

Key words: essential supremum, convergence speed, integral global optimization, Newton algorithm

1. Introduction

The problem of determining the essential supremum of a summable function f over its domain D ⊆ ℝ^n can be regarded as a generalization of the task of global optimization. If the maximum of f over D does not exist, because D is not closed or f is not upper semicontinuous, the supremum can be determined instead of the maximum. If f is not completely defined at every point, as in Lebesgue spaces, …
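
The equivalence can be made concrete with a small numerical sketch. Assuming D = [0, 1] with Lebesgue measure, a Monte Carlo approximation of the integrals, and illustrative names not taken from the paper (f, phi, newton_type_step), the convex function phi(y) = ∫_D max(f(x) − y, 0) dx has the essential supremum as its smallest zero, and the Newton-type step y_{k+1} = y_k − phi(y_k)/phi'(y_k), with phi'(y) = −measure({f > y}), reduces to averaging f over the current super-level set:

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # example summable objective on D = [0, 1]; its essential supremum is 1
        return np.sin(np.pi * x)

    # Monte Carlo sample of D, used to approximate integrals and measures
    samples = rng.uniform(0.0, 1.0, size=200_000)
    values = f(samples)

    def phi(y):
        # phi(y) = integral over D of max(f(x) - y, 0) dx (sample-mean estimate)
        return np.mean(np.maximum(values - y, 0.0))

    def newton_type_step(y):
        # Newton step y - phi(y)/phi'(y): equals the mean of f over {f > y}
        level_set = values > y
        if not level_set.any():   # y is already above the sampled supremum
            return y
        return values[level_set].mean()

    y = values.mean()             # any starting value below the essential supremum
    for k in range(100):
        y_next = newton_type_step(y)
        if abs(y_next - y) < 1e-9:
            break
        y = y_next

    print(f"estimate of ess sup f: {y:.6f}   phi(y) ~ {phi(y):.2e}")

When the essential supremum is attained only on a set of small measure (the "not spread" case mentioned above), the super-level set {f > y_k} becomes small and successive iterates change very little, which is consistent with the slow convergence the abstract describes; extrapolating the sequence y_k in an Aitken/Steffensen manner is the kind of acceleration the paper proposes.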