1,472 research outputs found

    Supervised learning with hybrid global optimisation methods


    Accelerated Parameter Estimation with DALEχ

    We consider methods for improving the estimation of constraints on a high-dimensional parameter space with a computationally expensive likelihood function. In such cases Markov chain Monte Carlo (MCMC) can take a long time to converge and concentrates on finding the maxima rather than the often-desired confidence contours for accurate error estimation. We employ DALEχ (Direct Analysis of Limits via the Exterior of χ²) for determining confidence contours by minimizing a cost function parametrized to incentivize points in parameter space which are both on the confidence limit and far from previously sampled points. We compare DALEχ to the nested sampling algorithm implemented in MultiNest on a toy likelihood function that is highly non-Gaussian and non-linear in the mapping between parameter values and χ². We find that in high-dimensional cases DALEχ finds the same confidence limit as MultiNest using roughly an order of magnitude fewer evaluations of the likelihood function. DALEχ is open-source and available at https://github.com/danielsf/Dalex.git
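    A minimal sketch of that cost-function idea in Python may help: score a candidate point by how close its χ² is to the target contour, with a bonus for lying far from previously sampled points. This illustrates the trade-off only and is not the actual DALEχ implementation; the toy quadratic chisq(), the function names, and the scale weight are all assumptions.

        import numpy as np

        # Illustrative sketch, not the actual DALEX code: low cost means the
        # point sits near the target chi^2 contour AND lies far from points
        # that have already been sampled.

        def chisq(theta):
            # Stand-in for the expensive likelihood: a toy quadratic surface.
            return float(np.sum(theta ** 2))

        def dalex_style_cost(theta, sampled, chisq_target, scale=1.0):
            contour_term = (chisq(theta) - chisq_target) ** 2
            if len(sampled) > 0:
                dists = np.linalg.norm(np.asarray(sampled) - theta, axis=1)
                exploration_bonus = scale * dists.min()   # reward novelty
            else:
                exploration_bonus = 0.0
            return contour_term - exploration_bonus

    Repeatedly minimizing this cost and appending each minimizer to the sampled set traces out the confidence contour while discouraging revisits to regions that are already mapped.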

    Convergence of the restricted Nelder-Mead algorithm in two dimensions

    The Nelder-Mead algorithm, a longstanding direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function f of n real variables using only function values, without any derivative information. Each Nelder-Mead iteration is associated with a nondegenerate simplex defined by n+1 vertices and their function values; a typical iteration produces a new simplex by replacing the worst vertex by a new point. Despite the method's widespread use, theoretical results have been limited: for strictly convex objective functions of one variable with bounded level sets, the algorithm always converges to the minimizer; for such functions of two variables, the diameter of the simplex converges to zero, but examples constructed by McKinnon show that the algorithm may converge to a nonminimizing point. This paper considers the restricted Nelder-Mead algorithm, a variant that does not allow expansion steps. In two dimensions we show that, for any nondegenerate starting simplex and any twice-continuously differentiable function with positive definite Hessian and bounded level sets, the algorithm always converges to the minimizer. The proof is based on treating the method as a discrete dynamical system, and relies on several techniques that are non-standard in convergence proofs for unconstrained optimization.
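    To make the restricted variant concrete, here is one iteration in Python: reflection, contraction, and shrink are retained, and the expansion step is simply never attempted. The coefficients are the usual textbook defaults, and the code is a sketch of the method's structure rather than the paper's exact formulation.

        import numpy as np

        # One restricted Nelder-Mead iteration (no expansion step). For
        # clarity the objective is re-evaluated at each vertex every call;
        # a practical implementation would cache function values.

        def restricted_nelder_mead_step(simplex, f):
            order = np.argsort([f(v) for v in simplex])
            simplex = simplex[order]                  # best first, worst last
            worst = simplex[-1]
            centroid = simplex[:-1].mean(axis=0)      # centroid of the n best

            reflected = centroid + (centroid - worst)
            if f(reflected) < f(simplex[-2]):
                simplex[-1] = reflected               # accept reflection;
                return simplex                        # expansion never tried

            if f(reflected) < f(worst):               # outside contraction
                contracted = centroid + 0.5 * (reflected - centroid)
                if f(contracted) <= f(reflected):
                    simplex[-1] = contracted
                    return simplex
            else:                                     # inside contraction
                contracted = centroid - 0.5 * (centroid - worst)
                if f(contracted) < f(worst):
                    simplex[-1] = contracted
                    return simplex

            simplex[1:] = simplex[0] + 0.5 * (simplex[1:] - simplex[0])
            return simplex                            # shrink toward best

        # Usage on a strictly convex quadratic in two variables:
        f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
        simplex = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        for _ in range(200):
            simplex = restricted_nelder_mead_step(simplex, f)
        print(simplex.mean(axis=0))                   # approaches (2, -1)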

    How Useful is Learning in Mitigating Mismatch Between Digital Twins and Physical Systems?

    In the control of complex systems, we observe two diametrically opposed trends: model-based control derived from digital twins, and model-free control through AI. There are also attempts to bridge the gap between the two by incorporating learning-based AI algorithms into digital twins to mitigate mismatches between the digital twin model and the physical system. One of the most straightforward such approaches is direct input adaptation. In this paper, we ask whether it is useful to employ a generic learning algorithm in such a setting, and our conclusion is "not very". We deem one algorithm more useful than another based on three criteria: 1) it requires fewer data samples to reach a desired minimum performance, 2) it achieves better performance for a reasonable number of data samples, and 3) it accumulates less regret. In our evaluation, we randomly sample problems from an industrially relevant geometry assurance context and measure the aforementioned performance indicators for 16 different algorithms. Our conclusion is that blackbox optimization algorithms, designed to leverage specific properties of the problem, generally perform better than generic learning algorithms, once again confirming that "there is no free lunch".
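    As one concrete reading of the three criteria, the sketch below computes them from a recorded trace of per-sample losses for a single algorithm on a single problem instance. The threshold, budget, and known optimum are illustrative assumptions and do not reproduce the paper's experimental settings.

        import numpy as np

        # Three "usefulness" indicators from a per-sample loss trace:
        # 1) samples needed to first reach a desired performance threshold,
        # 2) best performance within a fixed sample budget,
        # 3) cumulative regret relative to a known optimum.

        def usefulness_indicators(losses, threshold, budget, optimum=0.0):
            losses = np.asarray(losses, dtype=float)
            best_so_far = np.minimum.accumulate(losses)

            hits = np.nonzero(best_so_far <= threshold)[0]
            samples_to_threshold = int(hits[0]) + 1 if hits.size else None

            performance_at_budget = float(best_so_far[:budget].min())
            regret = float(np.sum(losses - optimum))
            return samples_to_threshold, performance_at_budget, regret

        # Example trace of six evaluations:
        trace = [5.0, 3.2, 3.5, 1.1, 0.9, 1.4]
        print(usefulness_indicators(trace, threshold=1.0, budget=5))
        # -> (5, 0.9, 15.1), up to floating-point rounding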
