
    Lipschitz gradients for global optimization in a one-point-based partitioning scheme

    A global optimization problem is studied where the objective function $f(x)$ is a multidimensional black-box function and its gradient $f'(x)$ satisfies the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant $K$. Different methods for solving this problem by using an a priori given estimate of $K$, its adaptive estimates, and adaptive estimates of local Lipschitz constants are known in the literature. Recently, the authors have proposed a one-dimensional algorithm working with multiple estimates of the Lipschitz constant for $f'(x)$ (the existence of such an algorithm was a challenge for 15 years). In this paper, a new multidimensional geometric method evolving the ideas of this one-dimensional scheme and using an efficient one-point-based partitioning strategy is proposed. Numerical experiments executed on 800 multidimensional test functions demonstrate quite a promising performance in comparison with popular DIRECT-based methods. Comment: 25 pages, 4 figures, 5 tables. arXiv admin note: text overlap with arXiv:1103.205
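
    A rough Python/NumPy sketch of the general idea described above: quadratic minorants built from values and gradients of $f$ under several candidate Lipschitz constants for $f'$, with a subinterval kept if it looks best for at least one candidate. The grid-based bound and the selection rule below are illustrative simplifications, not the paper's one-point-based partitioning scheme or its actual interval characteristics.

```python
import numpy as np

def quad_minorant_bound(xl, xr, fl, fr, dl, dr, K):
    """Crude lower bound for f on [xl, xr] from the two quadratic minorants
    implied by a K-Lipschitz gradient, evaluated on a fine grid (the paper
    works with closed-form characteristics instead)."""
    xs = np.linspace(xl, xr, 201)
    q_left = fl + dl * (xs - xl) - 0.5 * K * (xs - xl) ** 2
    q_right = fr + dr * (xs - xr) - 0.5 * K * (xs - xr) ** 2
    return float(np.min(np.maximum(q_left, q_right)))

def select_intervals(intervals, K_grid):
    """Keep every interval attaining the smallest lower bound for at least
    one candidate K, so no single Lipschitz constant has to be guessed."""
    keep = set()
    for K in K_grid:
        bounds = [quad_minorant_bound(*iv, K=K) for iv in intervals]
        keep.add(int(np.argmin(bounds)))
    return sorted(keep)

# toy usage: f(x) = sin(3x) + 0.1 x^2 split into two subintervals
f  = lambda x: np.sin(3 * x) + 0.1 * x ** 2
df = lambda x: 3 * np.cos(3 * x) + 0.2 * x
pts = [0.0, 1.5, 3.0]
ivs = [(pts[i], pts[i + 1], f(pts[i]), f(pts[i + 1]), df(pts[i]), df(pts[i + 1]))
       for i in range(2)]
print(select_intervals(ivs, K_grid=[0.1, 1.0, 10.0, 100.0]))
```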

    Deterministic global optimization using space-filling curves and multiple estimates of Lipschitz and Holder constants

    In this paper, the global optimization problem $\min_{y\in S} F(y)$ is considered, where $S$ is a hyperinterval in $\Re^N$ and $F(y)$ satisfies the Lipschitz condition with an unknown Lipschitz constant. It is supposed that the function $F(y)$ can be multiextremal, non-differentiable, and given as a 'black box'. To attack the problem, a new global optimization algorithm based on the following two ideas is proposed and studied both theoretically and numerically. First, the new algorithm uses numerical approximations to space-filling curves to reduce the original Lipschitz multi-dimensional problem to a univariate one satisfying the Hölder condition. Second, at each iteration the algorithm applies a new geometric technique that works with a number of possible Hölder constants chosen from a set of values varying from zero to infinity, so that ideas introduced in the popular DIRECT method can be used in Hölder global optimization. Convergence conditions of the resulting deterministic global optimization method are established. Numerical experiments carried out on several hundred test functions show quite promising performance of the new algorithm in comparison with its direct competitors. Comment: 26 pages, 10 figures, 4 tables
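
    As a rough illustration of the second ingredient only, the Python/NumPy sketch below ranks subintervals of the reduced one-dimensional problem with a whole grid of candidate Hölder constants instead of a single guessed value; the space-filling-curve reduction itself and the paper's actual selection rule are not reproduced, and the function names and interval encoding are assumptions of this sketch.

```python
import numpy as np

def holder_lower_bound(center_value, half_length, H, N):
    # if f(t) = F(curve(t)) is Hoelder with exponent 1/N, then on a
    # subinterval of half-length h:  f(t) >= f(c) - H * h**(1/N)
    return center_value - H * half_length ** (1.0 / N)

def potentially_optimal(intervals, H_grid, N):
    """intervals: list of (value at center, half-length).
    An interval is kept if it attains the smallest lower bound for at least
    one candidate H -- the DIRECT-like 'no single constant' idea."""
    keep = set()
    for H in H_grid:
        bounds = [holder_lower_bound(v, h, H, N) for v, h in intervals]
        keep.add(int(np.argmin(bounds)))
    return sorted(keep)

# toy usage: three subintervals of [0, 1] for a problem reduced from N = 2
ivs = [(0.8, 0.25), (0.3, 0.125), (0.5, 0.0625)]
print(potentially_optimal(ivs, H_grid=[0.0, 0.5, 2.0, 8.0], N=2))
```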

    A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption

    We study the problem of optimizing a function under a budgeted number of evaluations. We only assume that the function is locally smooth around one of its global optima. The difficulty of optimization is measured in terms of 1) the amount of noise $b$ in the function evaluations and 2) the local smoothness, $d$, of the function. A smaller $d$ results in a smaller optimization error. We come up with a new, simple, and parameter-free approach. First, for all values of $b$ and $d$, this approach recovers at least the state-of-the-art regret guarantees. Second, our approach additionally obtains these results while being agnostic to the values of both $b$ and $d$. This yields the first algorithm that naturally adapts to an unknown range of noise $b$ and leads to significant improvements in the moderate- and low-noise regimes. Third, our approach also obtains a remarkable improvement over the state-of-the-art SOO algorithm when the noise is very low, which includes the case of optimization under deterministic feedback ($b=0$). There, under our minimal local smoothness assumption, this improvement is of exponential magnitude and holds for a class of functions that covers the vast majority of functions that practitioners optimize ($d=0$). We show that this algorithmic improvement is borne out in experiments, where we empirically observe faster convergence on common benchmarks.
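
    Since the abstract builds on cell-based (SOO-style) hierarchical optimization, here is a deliberately stripped-down, SOO-flavoured Python sketch of that mechanic for the deterministic case ($b=0$); it is not the paper's parameter-free algorithm, and the trisection of [0, 1] and the budget handling are assumptions made only for illustration.

```python
import math

def soo_like_maximize(f, budget):
    """Trisection of [0, 1]; in each sweep, open the best cell of each depth
    if it is at least as good as the best cell already opened at a shallower
    depth. Returns (best value, argmax estimate)."""
    v0 = f(0.5)
    cells = [(0.0, 1.0, v0, 0)]          # leaves: (left, right, f(center), depth)
    best, evals = (v0, 0.5), 1
    while evals < budget:
        vmax = -math.inf
        for h in range(max(d for *_, d in cells) + 1):
            layer = [c for c in cells if c[3] == h]
            if not layer:
                continue
            cell = max(layer, key=lambda c: c[2])
            if cell[2] < vmax:           # opening rule: must beat shallower depths
                continue
            vmax = cell[2]
            cells.remove(cell)
            left, right, _, d = cell
            third = (right - left) / 3.0
            for i in range(3):           # trisect and evaluate the new centers
                a, b = left + i * third, left + (i + 1) * third
                v = f((a + b) / 2.0)
                evals += 1
                best = max(best, (v, (a + b) / 2.0))
                cells.append((a, b, v, d + 1))
                if evals >= budget:
                    return best
    return best

# toy usage under deterministic feedback (b = 0)
print(soo_like_maximize(lambda x: -(x - 0.3) ** 2, budget=60))
```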

    An Efficient Global Optimization Algorithm with Adaptive Estimates of the Local Lipschitz Constants

    In this work, we present a new deterministic partition-based Global Optimization (GO) algorithm that uses estimates of the local Lipschitz constants associated with different sub-regions of the domain of the objective function. The estimate of the local Lipschitz constant associated with each partition is the result of adaptively balancing the global and local information obtained so far by the algorithm, given in terms of absolute slopes. We motivate a coupling strategy with local optimization algorithms to accelerate the convergence of the proposed approach. Finally, we compare our approach, HALO (Hybrid Adaptive Lipschitzian Optimization), with popular GO algorithms on hundreds of test functions. The numerical results show that the performance of HALO is very promising and that it can extend our arsenal of efficient procedures for attacking challenging real-world GO problems. The Python code of HALO is publicly available on GitHub: https://github.com/dannyzx/HAL
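
    A hypothetical Python/NumPy illustration (not HALO itself) of the kind of quantity the abstract mentions: a per-region Lipschitz estimate obtained from absolute slopes, blending local and global information. The fixed weight `w` is an assumption of this sketch; HALO balances the two adaptively.

```python
import numpy as np

def absolute_slopes(X, y):
    """Pairwise |f(xi) - f(xj)| / ||xi - xj|| over evaluated points X (n x d)."""
    diff_f = np.abs(y[:, None] - y[None, :])
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    mask = dist > 0
    return diff_f[mask] / dist[mask]

def local_lipschitz_estimate(X, y, in_region, w=0.5):
    """in_region: boolean mask of points lying in (or adjacent to) a sub-region."""
    global_slope = absolute_slopes(X, y).max()
    Xr, yr = X[in_region], y[in_region]
    local_slope = absolute_slopes(Xr, yr).max() if in_region.sum() > 1 else 0.0
    return w * local_slope + (1.0 - w) * global_slope  # fixed blend, for illustration

# toy usage on 5 evaluated points in 2-D
rng = np.random.default_rng(0)
X = rng.random((5, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
print(local_lipschitz_estimate(X, y, in_region=X[:, 0] < 0.5))
```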

    Application of reduced-set Pareto-Lipschitzian optimization to truss optimization

    In this paper, a recently proposed global Lipschitz optimization algorithm, Pareto-Lipschitzian Optimization with Reduced set (PLOR), is further developed, investigated, and applied to truss optimization problems. Partition patterns of the PLOR algorithm are similar to those of DIviding RECTangles (DIRECT), which has been widely applied to different real-life problems. Here, however, the set of all Lipschitz constants is reduced to just two: the maximal and the minimal one. In this way, the PLOR approach is independent of any user-defined parameters and balances local and global search equally during the optimization process. An expanded list of other well-known DIRECT-type algorithms is used in the experimental investigation and comparison on standard test problems and truss optimization problems. The experiments show that the PLOR algorithm is very competitive with other DIRECT-type algorithms on standard test problems and performs well on real truss optimization problems.
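
    A small Python sketch of the selection idea as stated in the abstract: keep only two Lipschitz estimates, the minimal and the maximal one, and select the hyper-rectangles whose two resulting lower bounds are Pareto non-dominated. The data layout and the constants below are illustrative assumptions; the actual PLOR selection and partitioning are defined in the paper.

```python
def plor_select(rects, K_min, K_max):
    """rects: list of (value at center, half-diagonal).
    Returns indices of rectangles whose lower bounds under K_min and K_max
    are not both improved upon by another rectangle (Pareto non-dominated)."""
    bounds = [(v - K_min * d, v - K_max * d) for v, d in rects]
    selected = []
    for i, (b1, b2) in enumerate(bounds):
        dominated = any(
            (c1 <= b1 and c2 <= b2) and (c1 < b1 or c2 < b2)
            for j, (c1, c2) in enumerate(bounds) if j != i
        )
        if not dominated:
            selected.append(i)
    return selected

# toy usage: four rectangles (center value, half-diagonal)
print(plor_select([(0.9, 0.5), (0.4, 0.25), (0.7, 0.25), (0.35, 0.125)],
                  K_min=0.1, K_max=4.0))
```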

    Learning to Approximate a Bregman Divergence

    Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning. In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations. We develop a formulation and algorithm for learning arbitrary Bregman divergences based on approximating their underlying convex generating function via a piecewise linear function. We provide theoretical approximation bounds using our parameterization and show that the generalization error $O_p(m^{-1/2})$ for metric learning using our framework matches the known generalization error in the strictly less general Mahalanobis metric learning setting. We further demonstrate empirically that our method performs well in comparison to existing metric learning methods, particularly for clustering and ranking problems. Comment: 19 pages, 4 figures
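
    A minimal Python/NumPy sketch of the parameterization the abstract describes: a max-of-affine (piecewise-linear) approximation of the convex generating function and the Bregman divergence it induces. The random affine pieces stand in for parameters the paper learns from supervision; the learning step itself is omitted.

```python
import numpy as np

def phi_and_grad(x, A, b):
    """Max-of-affine convex function phi(x) = max_k (a_k . x + b_k) and a
    subgradient at x.  A: (K, d) slopes, b: (K,) offsets."""
    vals = A @ x + b
    k = int(np.argmax(vals))
    return vals[k], A[k]

def bregman_divergence(x, y, A, b):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>  (>= 0 by convexity)."""
    phi_x, _ = phi_and_grad(x, A, b)
    phi_y, g_y = phi_and_grad(y, A, b)
    return phi_x - phi_y - g_y @ (x - y)

# toy usage with random affine pieces as stand-ins for learned parameters
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))
b = rng.normal(size=8)
x, y = rng.normal(size=3), rng.normal(size=3)
print(bregman_divergence(x, y, A, b))
```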