Convergence of the restricted Nelder-Mead algorithm in two dimensions
The Nelder-Mead algorithm, a longstanding direct search method for
unconstrained optimization published in 1965, is designed to minimize a
scalar-valued function f of n real variables using only function values,
without any derivative information. Each Nelder-Mead iteration is associated
with a nondegenerate simplex defined by n+1 vertices and their function values;
a typical iteration produces a new simplex by replacing the worst vertex by a
new point. Despite the method's widespread use, theoretical results have been
limited: for strictly convex objective functions of one variable with bounded
level sets, the algorithm always converges to the minimizer; for such functions
of two variables, the diameter of the simplex converges to zero, but examples
constructed by McKinnon show that the algorithm may converge to a nonminimizing
point.
This paper considers the restricted Nelder-Mead algorithm, a variant that
does not allow expansion steps. In two dimensions we show that, for any
nondegenerate starting simplex and any twice-continuously differentiable
function with positive definite Hessian and bounded level sets, the algorithm
always converges to the minimizer. The proof is based on treating the method as
a discrete dynamical system, and relies on several techniques that are
non-standard in convergence proofs for unconstrained optimization.
Comment: 27 pages
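The iteration described above, restricted to reflection, contraction, and shrink steps with no expansion, can be sketched as follows. This is an illustrative implementation, not the paper's exact formulation; the standard coefficients (reflection 1, contraction 1/2, shrink 1/2) and the quadratic test function are assumptions for demonstration.

```python
import numpy as np

def restricted_nelder_mead(f, simplex, max_iter=500, tol=1e-10):
    """Nelder-Mead without the expansion step (the 'restricted' variant):
    only reflection, outside/inside contraction, and shrink moves."""
    simplex = np.asarray(simplex, dtype=float)
    for _ in range(max_iter):
        # Order vertices from best (lowest f) to worst.
        vals = np.array([f(x) for x in simplex])
        order = np.argsort(vals)
        simplex, vals = simplex[order], vals[order]
        if np.max(np.abs(simplex - simplex[0])) < tol:
            break
        centroid = simplex[:-1].mean(axis=0)   # centroid of all but the worst
        worst = simplex[-1]
        xr = centroid + (centroid - worst)     # reflection of the worst vertex
        fr = f(xr)
        if fr < vals[-2]:                      # reflection accepted; no expansion step
            simplex[-1] = xr
        else:
            if fr < vals[-1]:                  # outside contraction
                xc = centroid + 0.5 * (xr - centroid)
                fc, f_cmp = f(xc), fr
            else:                              # inside contraction
                xc = centroid - 0.5 * (centroid - worst)
                fc, f_cmp = f(xc), vals[-1]
            if fc <= f_cmp:
                simplex[-1] = xc
            else:                              # shrink all vertices toward the best
                simplex[1:] = simplex[0] + 0.5 * (simplex[1:] - simplex[0])
    return simplex[0]

# A strictly convex quadratic with positive definite Hessian; the theory
# predicts convergence to the minimizer (1, 2) from any nondegenerate simplex.
xmin = restricted_nelder_mead(lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2,
                              [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```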
Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
In this paper, we consider comparison-based adaptive stochastic algorithms
for solving numerical optimisation problems. We consider a specific subclass of
algorithms that we call comparison-based step-size adaptive randomized search
(CB-SARS), where the state variables at a given iteration are a vector of the
search space and a positive parameter, the step-size, typically controlling the
overall standard deviation of the underlying search distribution. We
investigate the linear convergence of CB-SARS on scaling-invariant objective
functions. Scaling-invariant functions preserve the ordering of points with
respect to their function value when the points are scaled with the same
positive parameter (the scaling is done w.r.t. a fixed reference point). This
class of functions includes norms composed with strictly increasing functions,
as well as many non-quasi-convex and non-continuous functions. On
scaling-invariant functions, we show the existence of a homogeneous Markov
chain, as a consequence of natural invariance properties of CB-SARS
(essentially scale-invariance and invariance to strictly increasing
transformations of the objective function). We then derive sufficient
conditions for global linear convergence of CB-SARS, expressed in terms of
different stability conditions of the normalised homogeneous Markov chain
(irreducibility, positivity, Harris recurrence, geometric ergodicity), and thus
define a general methodology for proving global linear convergence of CB-SARS
algorithms on scaling-invariant functions. As a by-product we provide a
connection between comparison-based adaptive stochastic algorithms and Markov
chain Monte Carlo algorithms.
Comment: SIAM Journal on Optimization, Society for Industrial and Applied
Mathematics, 201
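A concrete member of the CB-SARS class described above is the (1+1)-ES with the classical 1/5th success rule: the state is the pair (search point, step-size), and the update uses only the comparison of function values, never the values themselves, which gives the invariance to strictly increasing transformations of the objective. The change factor below and the test function are illustrative assumptions, not taken from the paper.

```python
import math
import random

def one_plus_one_es(f, x0, sigma0=1.0, iters=2000, seed=1):
    """Minimal CB-SARS instance: a (1+1)-ES with the 1/5th success rule.
    Only the comparison f(y) <= f(x) is used, so the algorithm is invariant
    to strictly increasing transformations of f."""
    rng = random.Random(seed)
    x, sigma = list(x0), sigma0
    alpha = 1.5                        # step-size change factor (illustrative)
    for _ in range(iters):
        # Sample an offspring from an isotropic Gaussian scaled by sigma.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        if f(y) <= f(x):
            x = y
            sigma *= alpha             # success: grow the step-size
        else:
            sigma *= alpha ** -0.25    # failure: shrink it (balances at 1/5 successes)
    return x, sigma

# A scaling-invariant objective: a strictly increasing transform of the norm.
f = lambda x: math.sqrt(sum(xi * xi for xi in x)) ** 0.5
x, sigma = one_plus_one_es(f, [3.0, -2.0])
```

The exponents are chosen so that the step-size is stationary exactly at a success rate of 1/5, the balance point the rule is named after.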
A new evolutionary search strategy for global optimization of high-dimensional problems
Global optimization of high-dimensional problems in practical applications remains a major challenge to the research community of evolutionary computation. The weakness of randomization-based evolutionary algorithms in searching high-dimensional spaces is demonstrated in this paper. A new strategy, SP-UCI, is developed to treat complexity caused by high dimensionalities. This strategy features a slope-based searching kernel and a scheme of maintaining the particle population's capability of searching over the full search space. Examinations of this strategy on a suite of sophisticated composition benchmark functions demonstrate that SP-UCI surpasses two popular algorithms, particle swarm optimizer (PSO) and differential evolution (DE), on high-dimensional problems. Experimental results also corroborate the argument that, in high-dimensional optimization, only problems with well-formative fitness landscapes are solvable, and slope-based schemes are preferable to randomization-based ones. © 2011 Elsevier Inc. All rights reserved.
Variable Metric Random Pursuit
We consider unconstrained randomized optimization of smooth convex objective
functions in the gradient-free setting. We analyze Random Pursuit (RP)
algorithms with fixed (F-RP) and variable metric (V-RP). The algorithms only
use zeroth-order information about the objective function and compute an
approximate solution by repeated optimization over randomly chosen
one-dimensional subspaces. The distribution of search directions is dictated by
the chosen metric.
Variable Metric RP uses novel variants of a randomized zeroth-order Hessian
approximation scheme recently introduced by Leventhal and Lewis (D. Leventhal
and A. S. Lewis, Optimization 60(3), 329--345, 2011). We here present (i) a
refined analysis of the expected single step progress of RP algorithms and
their global convergence on (strictly) convex functions and (ii) novel
convergence bounds for V-RP on strongly convex functions. We also quantify how
well the employed metric needs to match the local geometry of the function in
order for the RP algorithms to converge with the best possible rate.
Our theoretical results are accompanied by numerical experiments, comparing
V-RP with the derivative-free schemes CMA-ES, Implicit Filtering, Nelder-Mead,
NEWUOA, Pattern-Search and Nesterov's gradient-free algorithms.
Comment: 42 pages, 6 figures, 15 tables, submitted to journal, Version 3:
majorly revised second part, i.e. Section 5 and Appendix
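The core Random Pursuit step, repeated one-dimensional optimization along randomly chosen directions, can be sketched as follows. This is a fixed-metric (identity) illustration with a golden-section line search standing in for the line-search oracle; the search interval and tolerances are assumptions for demonstration, not the paper's setup.

```python
import math
import random

def golden_section(phi, a=-10.0, b=10.0, tol=1e-8):
    """1-D minimiser over [a, b], used here as the line-search oracle."""
    invphi = (math.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def random_pursuit(f, x0, iters=500, seed=0):
    """Fixed-metric Random Pursuit: at each step, draw a Gaussian direction
    and move to the approximate minimiser of f along that line. Only
    zeroth-order (function value) information is used."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(iters):
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]   # direction ~ N(0, I)
        t = golden_section(lambda s: f([xi + s * ui for xi, ui in zip(x, u)]))
        x = [xi + t * ui for xi, ui in zip(x, u)]
    return x

# Smooth strongly convex test function; RP converges linearly in expectation.
xs = random_pursuit(lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2, [0.0, 0.0])
```

The variable-metric version analysed in the paper would replace the isotropic Gaussian by one shaped by a zeroth-order Hessian approximation, so that search directions match the local geometry of the function.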
A solution to the crucial problem of population degeneration in high-dimensional evolutionary optimization
Three popular evolutionary optimization algorithms are tested on high-dimensional benchmark functions. An important phenomenon responsible for many failures - population degeneration - is discovered. That is, through evolution, the population of searching particles degenerates into a subspace of the search space, and the global optimum lies outside this subspace. Subsequently, the search will tend to be confined to this subspace and eventually miss the global optimum. Principal components analysis (PCA) is introduced to discover population degeneration and to remedy its adverse effects. The experimental results reveal that an algorithm's efficacy and efficiency are closely related to the population degeneration phenomenon. Guidelines for improving evolutionary algorithms for high-dimensional global optimization are addressed. An application to highly nonlinear hydrological models demonstrates the efficacy of improved evolutionary algorithms in solving complex practical problems. © 2011 IEEE
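The PCA-based diagnosis described above can be sketched in a few lines: if a small number of principal components captures essentially all of the population's variance, the particles have collapsed into a low-dimensional subspace. The cumulative-explained-variance score and the synthetic populations below are illustrative, not the paper's exact procedure.

```python
import numpy as np

def explained_variance(population):
    """Cumulative fraction of total variance captured by the leading
    principal components of the population (rows = particles).
    Values near 1 for only a few components signal degeneration."""
    X = np.asarray(population, dtype=float)
    Xc = X - X.mean(axis=0)                     # centre the population
    s = np.linalg.svd(Xc, compute_uv=False)     # PCA via SVD
    var = s ** 2
    return np.cumsum(var) / var.sum()

rng = np.random.default_rng(0)
healthy = rng.standard_normal((50, 10))                      # spans all 10 dims
degenerate = healthy[:, :2] @ rng.standard_normal((2, 10))   # rank-2 population

cv = explained_variance(degenerate)
# cv[1] is ~1.0: two components explain everything, i.e. the population
# has degenerated into a 2-D subspace of the 10-D search space.
```

A remedy along the lines the abstract suggests would then re-inject variance along the discarded principal directions.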
Feedback control optimisation of ESR experiments
Numerically optimised microwave pulses are used to increase excitation
efficiency and modulation depth in electron spin resonance experiments
performed on a spectrometer equipped with an arbitrary waveform generator. The
optimisation procedure is sample-specific and reminiscent of the magnet
shimming process used in the early days of nuclear magnetic resonance -- an
objective function (for example, echo integral in a spin echo experiment) is
defined and optimised numerically as a function of the pulse waveform vector
using noise-resilient gradient-free methods. We found that the resulting shaped
microwave pulses achieve higher excitation bandwidth and better echo modulation
depth than the pulse shapes used as the initial guess. Although the method is
theoretically less sophisticated than simulation based quantum optimal control
techniques, it has the advantage of being free of the linear response
approximation; rapid electron spin relaxation also means that the optimisation
takes only a few seconds. This makes the procedure fast, convenient, and easy
to use. An important application of this method is at the final stage of the
implementation of theoretically designed pulse shapes: compensation of pulse
distortions introduced by the instrument. The performance is illustrated using
spin echo and out-of-phase electron spin echo envelope modulation experiments.
Interface code between Bruker SpinJet arbitrary waveform generator and Matlab
is included in versions 2.2 and later of the Spinach library.
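The feedback loop described above, a measured objective optimised by a noise-resilient gradient-free method, can be sketched in miniature. The simulated echo-integral objective and the simple coordinate pattern search below are illustrative stand-ins for the spectrometer acquisition and the optimiser actually used; the "ideal" all-ones waveform is an assumption for demonstration.

```python
import random

_rng = random.Random(0)

def measured_objective(waveform, noise=0.01):
    """Stand-in for the spectrometer measurement (e.g. the echo integral):
    a smooth function of the waveform vector plus measurement noise. On a
    real instrument this would be replaced by pulsing and acquiring."""
    quality = -sum((w - 1.0) ** 2 for w in waveform)
    return quality + noise * _rng.gauss(0.0, 1.0)

def pattern_search(objective, x0, step=0.5, shrink=0.5, iters=60):
    """Noise-tolerant gradient-free maximisation: probe +/- step along each
    waveform coordinate, keep improvements, shrink the step when a full
    sweep yields none."""
    x = list(x0)
    best = objective(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                val = objective(trial)
                if val > best:
                    x, best, improved = trial, val, True
                    break
        if not improved:
            step *= shrink
    return x, best

# Starting from a flat (zero) waveform, the loop steers every sample of the
# pulse toward the shape that maximises the measured objective.
pulse, echo = pattern_search(measured_objective, [0.0] * 8)
```

Because each evaluation is a fresh noisy measurement, the loop never relies on gradients, which is what makes this kind of sample-specific feedback optimisation robust on real hardware.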