17 research outputs found

    Pure adaptive search in global optimization

    Pure adaptive search iteratively constructs a sequence of interior points uniformly distributed within the corresponding sequence of nested improving regions of the feasible space. That is, at any iteration, the next point in the sequence is uniformly distributed over the region of feasible space containing all points that are strictly superior in value to the previous points in the sequence. The complexity of this algorithm is measured by the expected number of iterations required to achieve a given accuracy of solution. We show that for global mathematical programs satisfying the Lipschitz condition, its complexity increases at most linearly in the dimension of the problem.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/47923/1/10107_2005_Article_BF01585710.pd
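    The Pure Adaptive Search step itself is conceptually simple but idealized: each iterate is drawn uniformly from the current improving region. The sketch below assumes a box-constrained feasible region and realizes the uniform draw over the improving region by (inefficient) rejection sampling over the box; the objective and bounds in the example are illustrative choices, not from the paper.

        import numpy as np

        def pure_adaptive_search(f, lower, upper, n_iter=100, max_tries=10_000, seed=0):
            # Idealized Pure Adaptive Search on the box [lower, upper].
            # Each iteration draws a point uniformly from the improving region
            # {x : f(x) < best}; here that draw is realized by rejection
            # sampling over the box, which is simple but not efficient.
            rng = np.random.default_rng(seed)
            lower = np.asarray(lower, dtype=float)
            upper = np.asarray(upper, dtype=float)
            x = rng.uniform(lower, upper)            # uniform start over the feasible box
            best = f(x)
            for _ in range(n_iter):
                for _ in range(max_tries):
                    y = rng.uniform(lower, upper)
                    fy = f(y)
                    if fy < best:                    # accept only strictly improving points
                        x, best = y, fy
                        break
                else:
                    break                            # improving region too small to hit; stop
            return x, best

        # Illustrative use: a multimodal function on [-5, 5]^2
        f = lambda x: float(np.sum(x**2) + np.sum(np.sin(3 * x)))
        print(pure_adaptive_search(f, [-5, -5], [5, 5]))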

    Global optimization: techniques and applications

    Optimization problems arise in a wide variety of scientific disciplines. In many practical problems, a global optimum is desired, yet the objective function has multiple local optima. A number of techniques aimed at solving the global optimization problem have emerged in the last 30 years of research. This thesis first reviews techniques for local optimization and then discusses many of the stochastic and deterministic methods for global optimization that are in use today. Finally, this thesis shows how to apply global optimization techniques to two practical problems: the image segmentation problem (from imaging science) and the 3-D registration problem (from computer vision).

    Improving Hit-and-Run for global optimization

    Improving Hit-and-Run is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement that is uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if it offers an improvement over the current iterate. We show that for positive definite quadratic programs, the expected number of function evaluations needed to arbitrarily well approximate the optimal solution is at most O(n^{5/2}), where n is the dimension of the problem. Improving Hit-and-Run, when applied to global optimization problems, can therefore be expected to converge polynomially fast as it approaches the global optimum.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/44932/1/10898_2005_Article_BF01096737.pd
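    The algorithm itself is short: sample a random direction, sample a candidate uniformly along the feasible chord through the current point in that direction, and keep it only if it improves, which is what distinguishes it from the plain Hit-and-Run sampler. Below is a minimal sketch, assuming a box-shaped feasible region so the chord can be computed in closed form; the function names and arguments are placeholders, not from the paper.

        import numpy as np

        def improving_hit_and_run(f, lower, upper, x0, n_iter=1000, seed=0):
            # Sketch of Improving Hit-and-Run on the box [lower, upper].
            rng = np.random.default_rng(seed)
            lower = np.asarray(lower, dtype=float)
            upper = np.asarray(upper, dtype=float)
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            evals = 1
            for _ in range(n_iter):
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)               # direction uniform on the unit sphere
                # The line x + t*d stays inside the box for t in [t_lo, t_hi].
                with np.errstate(divide="ignore", invalid="ignore"):
                    t1 = (lower - x) / d
                    t2 = (upper - x) / d
                t_lo = np.nanmax(np.minimum(t1, t2))
                t_hi = np.nanmin(np.maximum(t1, t2))
                y = x + rng.uniform(t_lo, t_hi) * d  # candidate uniform along the chord
                fy = f(y)
                evals += 1
                if fy < fx:                          # keep the candidate only if it improves
                    x, fx = y, fy
            return x, fx, evals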

    Optimization by adaptive stochastic descent

    Published: March 16, 2018. When standard optimization methods fail to find a satisfactory solution for a parameter fitting problem, a tempting recourse is to adjust parameters manually. While tedious, this approach can be surprisingly powerful in terms of achieving optimal or near-optimal solutions. This paper outlines an optimization algorithm, Adaptive Stochastic Descent (ASD), that has been designed to replicate the essential aspects of manual parameter fitting in an automated way. Specifically, ASD uses simple principles to form probabilistic assumptions about (a) which parameters have the greatest effect on the objective function, and (b) optimal step sizes for each parameter. We show that for a certain class of optimization problems (namely, those with a moderate to large number of scalar parameter dimensions, especially if some dimensions are more important than others), ASD is capable of minimizing the objective function with far fewer function evaluations than classic optimization methods, such as the Nelder-Mead nonlinear simplex, Levenberg-Marquardt gradient descent, simulated annealing, and genetic algorithms. As a case study, we show that ASD outperforms standard algorithms when used to determine how resources should be allocated in order to minimize new HIV infections in Swaziland.
    Cliff C. Kerr, Salvador Dura-Bernal, Tomasz G. Smolinski, George L. Chadderdon, David P. Wilson
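    A rough sketch of the idea follows, using the two adaptive ingredients described in the abstract: per-parameter selection probabilities and per-parameter step sizes that grow on success and shrink on failure. The update factors, initial values, and function names here are illustrative assumptions, not the defaults published with ASD.

        import numpy as np

        def adaptive_stochastic_descent(f, x0, n_iter=500, step=0.1,
                                        grow=2.0, shrink=0.5, seed=0):
            # Minimal ASD-style sketch: keep a selection probability and a step
            # size for every (parameter, direction) pair; successful moves make
            # that pair more likely and its step larger, failed moves the opposite.
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float).copy()
            fx = f(x)
            n = x.size
            steps = np.full(2 * n, step)             # one step size per (parameter, +/-) pair
            probs = np.full(2 * n, 1.0 / (2 * n))    # selection probabilities for each pair
            for _ in range(n_iter):
                k = rng.choice(2 * n, p=probs)
                i, sign = k % n, (1.0 if k < n else -1.0)
                y = x.copy()
                y[i] += sign * steps[k]
                fy = f(y)
                if fy < fx:                          # success: keep, enlarge step, boost probability
                    x, fx = y, fy
                    steps[k] *= grow
                    probs[k] *= grow
                else:                                # failure: shrink step, reduce probability
                    steps[k] *= shrink
                    probs[k] *= shrink
                probs /= probs.sum()                 # renormalize selection probabilities
            return x, fx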

    Optimal computing budget allocation for stochastic simulation optimization

    Ph.D. thesis (Doctor of Philosophy)