
    1D to nD: A Meta Algorithm for Multivariate Global Optimization via Univariate Optimizers

    In this work, we propose a meta algorithm that can solve a multivariate global optimization problem using univariate global optimizers. Although univariate global optimization receives less attention than the multivariate case, which is more emphasized in academia and industry, we show that it is still relevant and can be used directly to solve multivariate optimization problems. We also provide the corresponding regret bounds in terms of the time horizon T and the average regret of the univariate optimizer, when the latter is robust against nonnegative noises with robust regret guarantees.
    Comment: this article extends arXiv:2108.10859, arXiv:2201.0716
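    The abstract does not spell out how the reduction works, so the following is a minimal sketch of one natural reading: apply a univariate global optimizer cyclically along coordinate directions. All names here (minimize_1d, coordinate_meta_opt) are hypothetical, and the grid search is only a placeholder for a real univariate global optimizer; this is not the paper's construction.

```python
# Sketch (assumption, not the paper's algorithm): reduce an n-dimensional
# problem to repeated univariate global searches along coordinates.

import numpy as np

def minimize_1d(g, lo, hi, n_evals=64):
    """Placeholder univariate global optimizer: dense grid search."""
    xs = np.linspace(lo, hi, n_evals)
    vals = [g(x) for x in xs]
    return xs[int(np.argmin(vals))]

def coordinate_meta_opt(f, bounds, n_rounds=10):
    """Cyclically optimize each coordinate with the univariate solver."""
    x = np.array([(lo + hi) / 2.0 for lo, hi in bounds])
    for _ in range(n_rounds):
        for i, (lo, hi) in enumerate(bounds):
            def g(t, i=i):
                y = x.copy()
                y[i] = t
                return f(y)
            x[i] = minimize_1d(g, lo, hi)
    return x

# Example: minimize a separable quadratic over [-5, 5]^3.
f = lambda v: float(np.sum((v - 1.0) ** 2))
print(coordinate_meta_opt(f, [(-5.0, 5.0)] * 3))
```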

    Lipschitz gradients for global optimization in a one-point-based partitioning scheme

    A global optimization problem is studied where the objective function f(x) is a multidimensional black-box function and its gradient f'(x) satisfies the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant K. Different methods for solving this problem by using an a priori given estimate of K, its adaptive estimates, and adaptive estimates of local Lipschitz constants are known in the literature. Recently, the authors proposed a one-dimensional algorithm working with multiple estimates of the Lipschitz constant for f'(x) (the existence of such an algorithm had been a challenge for 15 years). In this paper, a new multidimensional geometric method evolving the ideas of this one-dimensional scheme and using an efficient one-point-based partitioning strategy is proposed. Numerical experiments executed on 800 multidimensional test functions demonstrate promising performance in comparison with popular DIRECT-based methods.
    Comment: 25 pages, 4 figures, 5 tables. arXiv admin note: text overlap with arXiv:1103.205
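    The geometric ingredient these methods build on is standard: if f' is Lipschitz with constant K on an interval, then every evaluation point yields a quadratic minorant of f. The sketch below illustrates this bound in one dimension; the function names and the grid-based bound computation are illustrative only, not the paper's multidimensional scheme.

```python
# If f' is K-Lipschitz on [a, b], then for any evaluated point x0,
#   f(x) >= f(x0) + f'(x0)*(x - x0) - (K/2)*(x - x0)**2.
# Taking the max of the minorants from both endpoints gives a lower
# bound on min f over the interval (here evaluated on a grid).

import numpy as np

def quad_minorant(x, x0, f0, df0, K):
    """Quadratic lower bound on f(x) built from one evaluation at x0."""
    return f0 + df0 * (x - x0) - 0.5 * K * (x - x0) ** 2

def interval_lower_bound(a, b, fa, dfa, fb, dfb, K, grid=256):
    """Lower-bound min f over [a, b] from endpoint values/derivatives."""
    xs = np.linspace(a, b, grid)
    bound = np.maximum(quad_minorant(xs, a, fa, dfa, K),
                       quad_minorant(xs, b, fb, dfb, K))
    return float(bound.min())

# Example: f(x) = sin(x), f'(x) = cos(x), |f''| <= 1, so K = 1.
a, b = 0.0, 3.0
print(interval_lower_bound(a, b, np.sin(a), np.cos(a),
                           np.sin(b), np.cos(b), K=1.0))
```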

    Application of reduced-set Pareto-Lipschitzian optimization to truss optimization

    In this paper, the recently proposed global Lipschitz optimization algorithm Pareto-Lipschitzian Optimization with Reduced-set (PLOR) is further developed, investigated, and applied to truss optimization problems. Partition patterns of the PLOR algorithm are similar to those of DIviding RECTangles (DIRECT), which has been widely applied to different real-life problems. Here, however, the set of all Lipschitz constants is reduced to just two: the maximal and the minimal one. In this way, the PLOR approach is independent of any user-defined parameters and balances local and global search equally during the optimization process. An expanded list of other well-known DIRECT-type algorithms is used in the experimental comparison on standard test problems and truss optimization problems. The experimental investigation shows that PLOR is very competitive with other DIRECT-type algorithms on standard test problems and performs well on real truss optimization problems.
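    A minimal sketch of my reading of the two-constant selection rule (an interpretation of the abstract, not the authors' code): each box has a center value fc and a size d, DIRECT considers lower bounds fc - K*d over all K, and PLOR keeps only Kmin and Kmax, selecting boxes that are Pareto non-dominated with respect to the two resulting bounds.

```python
# Select boxes not dominated in both lower bounds fc - Kmin*d and
# fc - Kmax*d (both minimized). Names and the rule are an assumption.

def select_boxes(boxes, Kmin, Kmax):
    """boxes: list of (fc, d). Return indices of non-dominated boxes."""
    bounds = [(fc - Kmin * d, fc - Kmax * d) for fc, d in boxes]
    selected = []
    for i, (lo1, lo2) in enumerate(bounds):
        dominated = any(
            (m1 <= lo1 and m2 <= lo2) and (m1 < lo1 or m2 < lo2)
            for j, (m1, m2) in enumerate(bounds) if j != i
        )
        if not dominated:
            selected.append(i)
    return selected

# Example: three boxes given as (center value, size).
print(select_boxes([(1.0, 0.5), (0.8, 0.2), (1.5, 1.0)], Kmin=0.1, Kmax=2.0))
```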

    Lipschitz optimization methods for fitting a sum of damped sinusoids to a series of observations

    A general nonlinear regression model is considered in the form of fitting a sum of damped sinusoids to a series of non-uniform observations. The problem of parameter estimation in this model is important in many applications, such as signal processing. The corresponding continuous optimization problem is typically difficult due to the highly multiextremal character of the objective function. It is shown how Lipschitz-based deterministic methods can be well suited for studying these challenging global optimization problems when the computational budget is limited and some guarantee on the quality of the found solution is required.
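    A minimal sketch of the regression model and the least-squares objective such a global solver would minimize, under an assumed parameterization (amplitude, decay, frequency, phase per term); the paper's exact formulation may differ.

```python
# Model: y(t) = sum_j A_j * exp(-alpha_j * t) * sin(w_j * t + phi_j).
# The 4-parameters-per-term layout is an assumption for illustration.

import numpy as np

def damped_sinusoids(t, params):
    """Sum of damped sinusoids; params is a flat array of
    (amplitude A, decay alpha, frequency w, phase phi) per term."""
    y = np.zeros_like(t, dtype=float)
    for A, alpha, w, phi in params.reshape(-1, 4):
        y += A * np.exp(-alpha * t) * np.sin(w * t + phi)
    return y

def sse(params, t, y_obs):
    """Multiextremal least-squares objective handed to the global solver."""
    r = y_obs - damped_sinusoids(t, params)
    return float(r @ r)

# Example: non-uniform sample times, one-term ground truth plus noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 50))
truth = np.array([1.0, 0.3, 2.0, 0.5])
y_obs = damped_sinusoids(t, truth) + 0.01 * rng.standard_normal(t.size)
print(sse(truth, t, y_obs))
```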

    Regularized Nonlinear Acceleration

    We describe a convergence acceleration technique for unconstrained optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the base algorithm, providing improved estimates of the solution on the fly while the original optimization method is running. Numerical experiments on classical classification problems are detailed.
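    A minimal sketch of the extrapolation step the abstract describes, following the standard regularized nonlinear acceleration recipe: residual differences of the iterates feed a small regularized linear system whose solution gives the averaging weights. The regularization constant and the gradient-descent example are assumptions.

```python
# Weights for a nonlinear average of iterates, from a regularized
# linear system on residuals r_i = x_{i+1} - x_i (standard RNA recipe;
# the value of `lam` here is an assumption).

import numpy as np

def rna_extrapolate(iterates, lam=1e-8):
    """Combine iterates x_0..x_k into a weighted average whose weights
    solve a small regularized linear system built from the residuals."""
    X = np.asarray(iterates)           # shape (k+1, n)
    R = np.diff(X, axis=0)             # residuals r_i = x_{i+1} - x_i
    RtR = R @ R.T
    RtR /= np.linalg.norm(RtR)         # normalize before regularizing
    k = RtR.shape[0]
    z = np.linalg.solve(RtR + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                    # weights summing to one
    return c @ X[:-1]

# Example: accelerate fixed-step gradient descent on a quadratic.
A = np.diag([1.0, 10.0, 100.0])
x = np.ones(3)
iterates = [x.copy()]
for _ in range(10):
    x -= 0.005 * (A @ x)               # plain gradient iteration
    iterates.append(x.copy())
print(np.linalg.norm(rna_extrapolate(iterates)))  # extrapolated estimate
print(np.linalg.norm(iterates[-1]))               # vs last plain iterate
```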