20 research outputs found

    Design-Optimization with a Limited Data-Budget

    Design optimization with realistic computer codes is a ubiquitous and challenging task. Typically, we have to execute thousands of simulations in order to reach a globally optimal design. However, since realistic models may take hours or even days to complete a single simulation, global optimization is infeasible for all but the simplest models. We are necessarily limited to just a handful of simulations. Bayesian global optimization (BGO) is a computational framework built upon Gaussian process regression that allows us to actively select which simulations to run in order to reach our objective. It assumes only that the objective, not its gradients, is measurable at any given design point, either experimentally or via a computer simulation. We have implemented BGO in Python and created a nanoHUB tool that applies the concept to the problem of determining the structure of an arbitrary cluster of atoms. The tool works as follows. First, it generates an initial data pool consisting of random structures and their associated energies, as well as a test design pool consisting of structures that will be tested for optimality. Then, it constructs a Gaussian process model of the energy surface and employs BGO to find the minimum-energy cluster among the test pool. The process runs until either the maximum expected improvement of future simulations falls below a threshold or the maximum number of iterations is reached.
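    The loop the abstract describes can be sketched in a few lines of Python. This is a minimal illustration on a one-dimensional toy "energy surface" with a simple squared-exponential Gaussian process; none of the function names, kernel choices, or parameter values below come from the actual nanoHUB tool.

    ```python
    import numpy as np
    from math import erf

    def gp_posterior(X_train, y_train, X_test, length=0.2, noise=1e-6):
        """GP posterior mean/std with a squared-exponential kernel (1-D inputs)."""
        def k(A, B):
            return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length ** 2)
        K_inv = np.linalg.inv(k(X_train, X_train) + noise * np.eye(len(X_train)))
        K_s = k(X_test, X_train)
        mu = K_s @ K_inv @ y_train
        var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def expected_improvement(mu, sigma, y_best):
        """Expected improvement of each candidate over the best observed value."""
        z = (y_best - mu) / sigma
        Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
        phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
        return (y_best - mu) * Phi + sigma * phi

    def bgo_minimize(f, pool, n_init=3, max_iter=20, ei_tol=1e-4, seed=0):
        """BGO over a finite test pool: stop on small EI or iteration budget."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(pool), n_init, replace=False)  # initial data pool
        X, y = pool[idx], np.array([f(x) for x in pool[idx]])
        for _ in range(max_iter):
            mu, sigma = gp_posterior(X, y, pool)
            ei = expected_improvement(mu, sigma, y.min())
            if ei.max() < ei_tol:  # no future simulation is expected to help
                break
            x_next = pool[int(np.argmax(ei))]
            X, y = np.append(X, x_next), np.append(y, f(x_next))
        return X[int(np.argmin(y))], float(y.min())

    # A cheap quadratic with its minimum at x = 0.6 stands in for a simulation
    pool = np.linspace(0.0, 1.0, 101)
    x_star, e_star = bgo_minimize(lambda x: (x - 0.6) ** 2, pool)
    ```

    In the real setting each call to `f` would be an hours-long simulation, which is exactly why the acquisition function, rather than a grid, decides where to evaluate next.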

    Computationally Efficient Solution of Inverse Problem Using Bayesian Global Optimization Approach

    Models have parameters that need to be determined from experimental observations. The problem of determining these parameters is known as the inverse problem or the model calibration problem. Solving inverse problems is difficult when the models involved are computationally expensive, because one can then make only a limited number of simulations. This work addresses the issue of solving an inverse problem with a limited data budget. Towards this end, we pose the inverse problem as the problem of minimizing a loss function that measures the discrepancy between model predictions and experimental measurements. Then, we employ Bayesian global optimization (BGO) to actively select the most informative simulations until either the expected improvement falls below a user-defined threshold or our computational budget has been exhausted. We apply our approach to the problem of estimating the kinetic rate coefficients modeling the catalytic conversion of nitrate to nitrogen, using real experimental data.
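    The "loss function" framing can be made concrete with a toy example. Below, a hypothetical first-order decay model stands in for the paper's actual kinetic model, and synthetic data replace the real measurements; the rate coefficient, data, and grid search are all illustrative (with an expensive simulator, BGO would choose which candidate rates to evaluate instead of a grid).

    ```python
    import numpy as np

    def model(k, t, c0=1.0):
        """Hypothetical first-order kinetics: concentration decaying at rate k."""
        return c0 * np.exp(-k * t)

    def loss(k, t_obs, c_obs):
        """Squared discrepancy between model predictions and measurements."""
        return float(np.sum((model(k, t_obs) - c_obs) ** 2))

    # Synthetic "experimental" data generated with a true rate of k = 0.7
    t_obs = np.linspace(0.0, 5.0, 10)
    rng = np.random.default_rng(1)
    c_obs = model(0.7, t_obs) + 0.01 * rng.standard_normal(t_obs.size)

    # Each loss evaluation would be a full simulation in the expensive setting;
    # here the model is cheap, so a simple grid search recovers the rate.
    ks = np.linspace(0.1, 2.0, 200)
    k_hat = ks[int(np.argmin([loss(k, t_obs, c_obs) for k in ks]))]
    ```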

    Multi-objective Optimization under Uncertainty using the Hyper-volume Expected Improvement

    The design of real engineering systems requires the optimization of multiple quantities of interest. In electric motor design, for example, one wants to maximize the average torque and minimize the torque variation. A study has shown that these attributes vary for different geometries of the rotor teeth. However, simulations of a large number of designs cannot be performed due to their high cost. In many problems, design optimization of multi-objective functions is a very challenging task due to the difficulty of evaluating the expectation of the objectives. Current multi-objective optimization (MOO) techniques, e.g., evolutionary algorithms, cannot solve such problems because they require hundreds of thousands of function evaluations. Therefore, an alternative methodology must be used to identify a Pareto front, the set of optimal designs of an MOO problem. Recent extensions of Bayesian global optimization are able to do exactly that. The idea is to replace the expensive objective functions with cheap-to-evaluate probabilistic surrogates trained using few input-output pairs, and to sequentially query designs that maximize the improvement of the Pareto front. For these purposes, we developed SMOOT, a Rappture tool built on the nanoHUB platform. It enables experimentalists to optimize their expensive processes without needing to understand the optimization methodology and guides them toward better decisions in order to find optimal designs.
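    The Pareto front mentioned above is simply the subset of evaluated designs that no other design beats in every objective. A minimal sketch, assuming both objectives are minimized (in the motor example one could negate the average torque so that both are); the data are made up for illustration:

    ```python
    import numpy as np

    def pareto_front(Y):
        """Boolean mask of non-dominated rows of Y (both objectives minimized)."""
        n = Y.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            # Row i is dominated if some other row is <= in every objective
            # and strictly < in at least one.
            dominated = np.all(Y <= Y[i], axis=1) & np.any(Y < Y[i], axis=1)
            if dominated.any():
                mask[i] = False
        return mask

    # Five hypothetical designs evaluated on two objectives
    Y = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
    front = Y[pareto_front(Y)]  # only the three trade-off points survive
    ```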

    Design Optimization of a Stochastic Multi-Objective Problem: Gaussian Process Regressions for Objective Surrogates

    Multi-objective optimization (MOO) problems arise frequently in science and engineering. In such a problem, we want to find the set of input parameters that generate the set of optimal outputs, known mathematically as the Pareto frontier (PF). Solving an MOO problem is a challenge when expensive experiments can be performed only a constrained number of times, leaving a limited set of data to work with, e.g., a roll-to-roll microwave plasma chemical vapor deposition (MPCVD) reactor for manufacturing high-quality graphene. State-of-the-art techniques, e.g., evolutionary algorithms and particle swarm optimization, require a large number of observations and do not completely reveal the true PF. Recent extensions of Bayesian global optimization (BGO) are able to address problems where the objective functions are expensive to evaluate, by replacing the expensive objective functions with cheap-to-evaluate surrogates trained with few input-output pairs. These surrogates provide prediction error bars that correspond to the epistemic uncertainty induced by the limited data. BGO uses an information acquisition function (IAF) to quantify the improvement that a hypothetical experiment would make to the state of knowledge of the Pareto front. This allows us to sequentially select the designs that maximize this enhancement. In this work we developed a nanoHUB tool that enables experimentalists to use BGO, with an extension of the expected hypervolume improvement (EHVI) IAF, to provide solutions to MOO problems under uncertainty. We verified the tool on synthetic examples and used it in the challenging task of optimizing the manufacturing of high-quality graphene in an MPCVD reactor.
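    The quantity behind the EHVI acquisition is the hypervolume: the region dominated by the current Pareto front, bounded by a reference point. A minimal two-objective sketch, assuming both objectives are minimized and using a made-up front and reference point; a full EHVI would average this improvement over the surrogate's predictive distribution rather than plug in known objective values.

    ```python
    def hypervolume_2d(front, ref):
        """Area dominated by a 2-D Pareto front, bounded above by ref
        (both objectives minimized)."""
        pts = sorted(front)          # ascending in the first objective
        hv, prev_y = 0.0, ref[1]
        for x, y in pts:
            if y >= prev_y:          # dominated point contributes nothing
                continue
            hv += (ref[0] - x) * (prev_y - y)  # slab between successive points
            prev_y = y
        return hv

    front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
    ref = (5.0, 5.0)
    base = hypervolume_2d(front, ref)

    # Improvement a candidate design would contribute if its objectives were known
    candidate = (1.5, 1.5)
    gain = hypervolume_2d(front + [candidate], ref) - base
    ```

    The acquisition then ranks candidate experiments by how much dominated area they are expected to add, which is what lets the tool pick the next design to run.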