A new evolutionary search strategy for global optimization of high-dimensional problems
Global optimization of high-dimensional problems in practical applications remains a major challenge to the evolutionary computation research community. This paper demonstrates the weakness of randomization-based evolutionary algorithms in searching high-dimensional spaces. A new strategy, SP-UCI, is developed to treat the complexity caused by high dimensionality. This strategy features a slope-based searching kernel and a scheme for maintaining the particle population's capability of searching over the full search space. Examinations of this strategy on a suite of sophisticated composition benchmark functions demonstrate that SP-UCI surpasses two popular algorithms, the particle swarm optimizer (PSO) and differential evolution (DE), on high-dimensional problems. Experimental results also corroborate the argument that, in high-dimensional optimization, only problems with well-formative fitness landscapes are solvable, and that slope-based schemes are preferable to randomization-based ones. © 2011 Elsevier Inc. All rights reserved.
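The abstract's closing claim (that slope-based schemes are preferable to randomization-based ones in high dimensions) can be illustrated with a minimal sketch. This is not the SP-UCI algorithm itself; `random_search`, `slope_search`, the sphere test function, and all step sizes are invented for illustration:

```python
import random

def sphere(x):
    """Toy high-dimensional test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(f, x, iters=2000, step=0.1):
    """Randomization-based baseline: keep random perturbations that improve f."""
    fx = f(x)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def slope_search(f, x, iters=2000, h=1e-6, lr=0.1):
    """Slope-based scheme: step along a finite-difference estimate of the slope."""
    for _ in range(iters):
        fx = f(x)
        grad = [(f(x[:i] + [v + h] + x[i + 1:]) - fx) / h
                for i, v in enumerate(x)]
        x = [v - lr * g for v, g in zip(x, grad)]
    return f(x)

random.seed(0)
x0 = [1.0] * 30                      # 30-dimensional starting point
rnd = random_search(sphere, list(x0))
slp = slope_search(sphere, list(x0))
print(rnd, slp)                      # the slope-based result is far smaller
```

On this smooth, unimodal landscape the slope-based scheme drives the objective to numerical noise with the same evaluation budget on which pure random perturbation stalls, which is the kind of gap the paper's composition benchmarks quantify.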
Accelerated Parameter Estimation with DALE
We consider methods for improving the estimation of constraints on a
high-dimensional parameter space with a computationally expensive likelihood
function. In such cases Markov chain Monte Carlo (MCMC) can take a long time to
converge and concentrates on finding the maxima rather than the often-desired
confidence contours for accurate error estimation. We employ DALE (Direct
Analysis of Limits via the Exterior of χ²) for determining confidence
contours by minimizing a cost function parametrized to incentivize points in
parameter space which are both on the confidence limit and far from previously
sampled points. We compare DALE to the nested sampling algorithm
implemented in MultiNest on a toy likelihood function that is highly
non-Gaussian and non-linear in the mapping between parameter values and
χ². We find that in high-dimensional cases DALE finds the same
confidence limit as MultiNest using roughly an order of magnitude fewer
evaluations of the likelihood function. DALE is open-source and available
at https://github.com/danielsf/Dalex.git
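The cost function described above can be sketched in a toy 2-D setting. This is a hypothetical reconstruction of the idea, not the actual DALE implementation: `chisq`, the weight, and the candidate loop are all invented, and a real version would minimize the cost with a proper optimizer rather than best-of-N random proposals.

```python
import math, random

def chisq(theta):
    """Toy non-Gaussian chi-squared surface (stand-in for an expensive model)."""
    x, y = theta
    return x * x + y * y + 0.5 * math.sin(3.0 * x) ** 2

def cost(theta, target, samples, weight=0.5):
    """Invented DALE-style cost: small when chisq(theta) sits on the target
    confidence level AND theta is far from every point sampled so far."""
    on_limit = (chisq(theta) - target) ** 2
    if not samples:
        return on_limit
    nearest = min(math.dist(theta, s) for s in samples)
    return on_limit + weight / (nearest + 1e-9)

random.seed(1)
target = 2.30                        # Delta chi^2 of a ~68% contour, 2 parameters
samples = []
for _ in range(40):
    # crude stand-in for a real minimizer: best of 500 uniform proposals
    cands = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(500)]
    samples.append(min(cands, key=lambda t: cost(t, target, samples)))

errs = [abs(chisq(s) - target) for s in samples]
spread = max(math.dist(a, b) for a in samples for b in samples)
print(min(errs), spread)             # points near the contour, yet spread out
```

The two terms pull in the directions the abstract names: the first keeps accepted points on the confidence limit, the second repels them from previously sampled points so the contour gets traced rather than a single point refined.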
Convergence of the restricted Nelder-Mead algorithm in two dimensions
The Nelder-Mead algorithm, a longstanding direct search method for
unconstrained optimization published in 1965, is designed to minimize a
scalar-valued function f of n real variables using only function values,
without any derivative information. Each Nelder-Mead iteration is associated
with a nondegenerate simplex defined by n+1 vertices and their function values;
a typical iteration produces a new simplex by replacing the worst vertex by a
new point. Despite the method's widespread use, theoretical results have been
limited: for strictly convex objective functions of one variable with bounded
level sets, the algorithm always converges to the minimizer; for such functions
of two variables, the diameter of the simplex converges to zero, but examples
constructed by McKinnon show that the algorithm may converge to a nonminimizing
point.
This paper considers the restricted Nelder-Mead algorithm, a variant that
does not allow expansion steps. In two dimensions we show that, for any
nondegenerate starting simplex and any twice-continuously differentiable
function with positive definite Hessian and bounded level sets, the algorithm
always converges to the minimizer. The proof is based on treating the method as
a discrete dynamical system, and relies on several techniques that are
non-standard in convergence proofs for unconstrained optimization.
Comment: 27 pages
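A minimal sketch of the restricted variant (standard Nelder-Mead with the expansion step removed) on a strictly convex quadratic, i.e. the setting in which the paper proves convergence; the coefficients, starting simplex, and iteration count here are arbitrary:

```python
def restricted_nelder_mead(f, simplex, iters=200):
    """Restricted Nelder-Mead in 2-D: standard Nelder-Mead with the expansion
    step removed, so only reflection, contraction, and shrink are used."""
    simplex = [list(p) for p in simplex]
    for _ in range(iters):
        simplex.sort(key=f)                               # best, middle, worst
        best, mid, worst = simplex
        c = [(b + m) / 2.0 for b, m in zip(best, mid)]    # centroid of best two
        xr = [2.0 * ci - wi for ci, wi in zip(c, worst)]  # reflected point
        if f(xr) < f(mid):                   # accept the reflection outright
            simplex[2] = xr                  # (no expansion step is attempted)
            continue
        if f(xr) < f(worst):                              # outside contraction
            xc = [(ci + ri) / 2.0 for ci, ri in zip(c, xr)]
        else:                                             # inside contraction
            xc = [(ci + wi) / 2.0 for ci, wi in zip(c, worst)]
        if f(xc) < min(f(xr), f(worst)):
            simplex[2] = xc
        else:                                # shrink the simplex toward best
            simplex = [best] + [[(b + v) / 2.0 for b, v in zip(best, p)]
                                for p in (mid, worst)]
    simplex.sort(key=f)
    return simplex[0]

# strictly convex quadratic: positive definite Hessian, bounded level sets
quad = lambda p: (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2
x = restricted_nelder_mead(quad, [[0.0, 0.0], [1.0, 2.0], [2.0, -1.0]])
print(x)                                     # close to the minimizer (1, -0.5)
```

Each iteration replaces only the worst vertex (or shrinks the whole simplex), and dropping the expansion step is exactly the restriction that makes the two-dimensional convergence proof tractable.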
How Useful is Learning in Mitigating Mismatch Between Digital Twins and Physical Systems?
In the control of complex systems, we observe two diametrically opposed trends: model-based control derived from digital twins, and model-free control through AI. There are also attempts to bridge the gap between the two by incorporating learning-based AI algorithms into digital twins to mitigate mismatches between the digital-twin model and the physical system. One of the most straightforward approaches is direct input adaptation. In this paper, we ask whether it is useful to employ a generic learning algorithm in such a setting, and our conclusion is "not very". We deem one algorithm more useful than another based on three criteria: 1) it requires fewer data samples to reach a desired minimal performance, 2) it achieves better performance for a reasonable number of data samples, and 3) it accumulates less regret. In our evaluation, we randomly sample problems from an industrially relevant geometry assurance context and measure the aforementioned performance indicators for 16 different algorithms. Our conclusion is that black-box optimization algorithms, designed to leverage specific properties of the problem, generally perform better than generic learning algorithms, once again confirming that "there is no free lunch".
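Direct input adaptation can be sketched as follows. Everything here is hypothetical (a scalar system, an invented gain/offset mismatch, and naive random search standing in for the paper's 16 algorithms); the point is only the structure: the twin supplies a nominal input, and a black-box learner adapts a correction using measurements of the physical system alone.

```python
import random

def physical_system(u):
    """Hypothetical plant: the twin's model plus an unknown gain/offset error."""
    return 1.1 * u + 0.3

def digital_twin(u):
    """Nominal model the controller believes in: output equals input."""
    return u

target = 2.0
u_nominal = target                   # since digital_twin(u) == u, twin picks u = 2.0

def loss(delta):
    """Squared tracking error after applying the corrected input."""
    return (physical_system(u_nominal + delta) - target) ** 2

# direct input adaptation: black-box 1-D search over the correction delta,
# driven only by measurements of the physical system (no model of the mismatch)
random.seed(0)
delta, best = 0.0, loss(0.0)
for _ in range(100):
    cand = delta + random.uniform(-0.5, 0.5)
    if loss(cand) < best:
        delta, best = cand, loss(cand)
print(delta, best)                   # correction near -0.45, error near zero
```

Each accepted candidate costs one interaction with the physical system, which is why the paper scores algorithms by samples-to-performance and accumulated regret rather than by final accuracy alone.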