Optimal exact designs of experiments via Mixed Integer Nonlinear Programming
Optimal exact designs are problematic to find and study because there is no unified theory for determining them and studying their properties. Each has its own challenges, and when a method exists to confirm the design optimality, it is invariably applicable to the particular problem only. We propose a systematic approach to construct optimal exact designs by incorporating the Cholesky decomposition of the Fisher information matrix in a Mixed Integer Nonlinear Programming formulation. As examples, we apply the methodology to find D- and A-optimal exact designs for linear and nonlinear models using global or local optimizers. Our examples include design problems with constraints on the locations or the number of replicates at the optimal design points.
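To make the objective concrete, the following sketch finds a D-optimal exact design for a small problem by brute-force enumeration. This is not the paper's MINLP formulation; the quadratic model, candidate grid, and run size are assumptions chosen so that exhaustive search is feasible.

```python
# Brute-force D-optimal exact design: maximize det of the Fisher information
# over all n-run designs (with replication) on a candidate grid.
import itertools
import numpy as np

def fisher_information(points):
    """Fisher information X'X for the quadratic model f(x) = (1, x, x^2)."""
    X = np.array([[1.0, x, x * x] for x in points])
    return X.T @ X

def d_optimal_exact(candidates, n_runs):
    """Enumerate all n-run designs on the grid; keep the det-maximizing one."""
    best_det, best_design = -np.inf, None
    for design in itertools.combinations_with_replacement(candidates, n_runs):
        d = np.linalg.det(fisher_information(design))
        if d > best_det:
            best_det, best_design = d, design
    return best_design, best_det

grid = np.linspace(-1, 1, 11)
design, det_val = d_optimal_exact(grid, n_runs=6)
print(design)  # for this model, two replicates at each of -1, 0, 1
```

For the quadratic model on [-1, 1] with six runs, the search recovers the classical answer: the approximate D-optimal design puts weight 1/3 at -1, 0, and 1, and since 6 is divisible by 3 the exact optimum replicates each support point twice.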
A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
Several common general-purpose optimization algorithms are compared for finding A- and D-optimal designs for different types of statistical models of varying complexity, including high-dimensional models with five and more factors. The algorithms of interest include exact methods, such as the interior point method, the Nelder–Mead method, the active set method, and sequential quadratic programming, and metaheuristic algorithms, such as particle swarm optimization, simulated annealing, and genetic algorithms. Several simulations are performed, which provide general recommendations on the utility and performance of each method, including hybridized versions of metaheuristic algorithms for finding optimal experimental designs. A key result is that general-purpose optimization algorithms, both exact methods and metaheuristic algorithms, perform well for finding optimal approximate experimental designs.
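As a minimal illustration of applying one such general-purpose optimizer, the sketch below uses Nelder–Mead (via scipy) to find D-optimal approximate design weights. The model and the fixed support points are assumptions for the example, not taken from the paper; a softmax parametrization keeps the weights nonnegative and summing to one.

```python
# D-optimal approximate design weights via a general-purpose optimizer.
import numpy as np
from scipy.optimize import minimize

support = np.array([-1.0, 0.0, 1.0])               # fixed candidate support
F = np.vstack([[1.0, x, x * x] for x in support])  # rows f(x) = (1, x, x^2)

def neg_log_det(z):
    """Negative log-det of the information matrix M(w) = sum_i w_i f_i f_i';
    weights via softmax so the unconstrained optimizer respects the simplex."""
    w = np.exp(z - z.max())
    w /= w.sum()
    M = F.T @ (w[:, None] * F)
    sign, logdet = np.linalg.slogdet(M)
    return -logdet if sign > 0 else np.inf

res = minimize(neg_log_det, x0=np.array([1.0, 0.0, -1.0]),
               method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 2000})
w = np.exp(res.x - res.x.max()); w /= w.sum()
print(w)  # for quadratic regression on [-1, 1]: weight 1/3 at each point
```

Starting from deliberately unequal weights, the optimizer recovers the known D-optimal weights of 1/3 at each of -1, 0, 1 for the quadratic model.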
Optimal designs for rational function regression
We consider optimal non-sequential designs for a large class of (linear and nonlinear) regression models involving polynomials and rational functions with heteroscedastic noise also given by a polynomial or rational weight function. The proposed method treats D-, E-, A-, and Φp-optimal designs in a unified manner, and generates a polynomial whose zeros are the support points of the optimal approximate design, generalizing a number of previously known results of the same flavor. The method is based on a mathematical optimization model that can incorporate various criteria of optimality and can be solved efficiently by well-established numerical optimization methods. In contrast to previous optimization-based methods proposed for similar design problems, it also has a theoretical guarantee of its algorithmic efficiency; in fact, the running times of all numerical examples considered in the paper are negligible. The stability of the method is demonstrated in an example involving high-degree polynomials. After discussing linear models, applications for finding locally optimal designs for nonlinear regression models involving rational functions are presented; then extensions to robust regression designs and trigonometric regression are shown. As a corollary, an upper bound on the size of the support set of the minimally-supported optimal designs is also found. The method is of considerable practical importance, with the potential, for instance, to impact design software development. Further study of the optimality conditions of the main optimization model might also yield new theoretical insights.
Comment: 25 pages. Previous version updated with more details in the theory and an additional example.
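The "polynomial whose zeros are the support points" idea has a well-known classical instance that the sketch below computes: for degree-d polynomial regression on [-1, 1], the D-optimal approximate design is supported at ±1 and the roots of the derivative of the degree-d Legendre polynomial. This is a textbook result used here only for illustration, not the paper's more general algorithm.

```python
# Support points of the D-optimal design for polynomial regression on [-1, 1]
# as zeros of a polynomial: (1 - x^2) * P_d'(x), with P_d the Legendre polynomial.
import numpy as np
from numpy.polynomial import legendre

def d_optimal_support(d):
    """Return the d+1 support points: +/-1 plus the roots of P_d'."""
    pd = legendre.Legendre.basis(d)          # the degree-d Legendre polynomial
    interior = pd.deriv().roots()            # zeros of its derivative
    return np.sort(np.concatenate(([-1.0], interior, [1.0])))

pts = d_optimal_support(3)
print(pts)  # +/-1 and +/-1/sqrt(5)
```

For cubic regression (d = 3) the interior points are ±1/√5 ≈ ±0.447, and in general the design has exactly d + 1 support points, matching the minimal-support bound mentioned in the abstract.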
Identification of quasi-optimal regions in the design space using surrogate modeling
The use of Surrogate Based Optimization (SBO) is widespread in engineering design to find optimal performance characteristics of expensive simulations (forward analysis: from input to optimal output). However, the practitioner often knows a priori the desired performance and is interested in finding the associated input parameters (reverse analysis: from desired output to input). A popular method to solve such reverse (inverse) problems is to minimize the error between the simulated performance and the desired goal. However, there might be multiple quasi-optimal solutions to the problem. In this paper, the authors propose a novel method to efficiently solve inverse problems and to sample Quasi-Optimal Regions (QORs) in the input (design) space more densely. The development of this technique, based on the probability of improvement criterion and kriging models, is driven by a real-life problem from bio-mechanics, i.e., determining the elasticity of the (rabbit) tympanic membrane, a membrane that converts acoustic sound waves into vibrations of the middle ear ossicular bones.
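The probability-of-improvement (PI) criterion that drives the sampling can be sketched in a few lines: given a kriging surrogate's predictive mean mu(x) and standard deviation s(x), PI is the probability that the response at x beats the best error seen so far. The toy surrogate below is a plain zero-mean Gaussian process with a squared-exponential kernel; all data and hyperparameters are assumptions for illustration, not the paper's models.

```python
# Probability of improvement on top of a minimal hand-rolled kriging surrogate.
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xs, length=0.3, noise=1e-10):
    """Posterior mean/std of a zero-mean GP with squared-exponential kernel."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def probability_of_improvement(mu, s, y_best):
    """PI(x) = Phi((y_best - mu(x)) / s(x)) for a minimization problem."""
    return norm.cdf((y_best - mu) / s)

X = np.array([0.0, 0.3, 0.6, 1.0])   # sampled inputs
y = np.array([0.8, 0.1, 0.4, 0.9])   # observed errors to be minimized
grid = np.linspace(0, 1, 101)
mu, s = gp_posterior(X, y, grid)
pi = probability_of_improvement(mu, s, y.min())
```

Sampling where PI is high concentrates new evaluations in regions whose predicted error is plausibly as low as the current best, which is how quasi-optimal regions get sampled more densely.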
Reliability-based design optimization using kriging surrogates and subset simulation
The aim of the present paper is to develop a strategy for solving reliability-based design optimization (RBDO) problems that remains applicable when the performance models are expensive to evaluate. Starting with the premise that simulation-based approaches are not affordable for such problems, and that most-probable-failure-point-based approaches do not make it possible to quantify the error on the estimate of the failure probability, an approach based on both metamodels and advanced simulation techniques is explored. The kriging metamodeling technique is chosen to surrogate the performance functions because it allows one to genuinely quantify the surrogate error. The surrogate error on the limit-state surfaces is propagated to the failure probability estimates in order to provide an empirical error measure. This error is then sequentially reduced by means of a population-based adaptive refinement technique until the kriging surrogates are accurate enough for reliability analysis. This original refinement strategy makes it possible to add several observations to the design of experiments at the same time. Reliability and reliability sensitivity analyses are performed by means of the subset simulation technique for the sake of numerical efficiency. The adaptive surrogate-based strategy for reliability estimation is finally embedded in a classical gradient-based optimization algorithm in order to solve the RBDO problem. The kriging surrogates are built in a so-called augmented reliability space, making them reusable from one nested RBDO iteration to the next. The strategy is compared to other approaches available in the literature on three academic examples in the field of structural mechanics.
Comment: 20 pages, 6 figures, 5 tables. Preprint submitted to Springer-Verlag.
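Subset simulation, the rare-event estimator used here for the reliability analyses, can be sketched compactly: the failure event {g(x) ≤ 0} is written as a chain of nested, more probable intermediate events; each level keeps the p0-fraction of samples closest to failure and regrows the population with Metropolis steps in standard normal space. The limit state and all tuning constants below are assumptions for the example, not taken from the paper.

```python
# Minimal subset simulation for a small failure probability P(g(X) <= 0).
import numpy as np

def subset_simulation(g, dim, n=3000, p0=0.1, max_levels=20, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.apply_along_axis(g, 1, x)
    p, n_seed = 1.0, int(p0 * n)
    for _ in range(max_levels):
        order = np.argsort(y)
        thresh = y[order[n_seed - 1]]        # intermediate threshold
        if thresh <= 0:                      # failure region reached
            return p * np.mean(y <= 0)
        p *= p0
        seeds, seed_y = x[order[:n_seed]], y[order[:n_seed]]
        xs, ys = [], []
        for s, sy in zip(seeds, seed_y):     # one Markov chain per seed
            cx, cy = s.copy(), sy
            for _ in range(n // n_seed):
                cand = cx + 0.8 * rng.standard_normal(dim)
                # Metropolis accept w.r.t. the standard normal density,
                # restricted to the current intermediate failure region
                if rng.random() < np.exp(0.5 * (cx @ cx - cand @ cand)):
                    gy = g(cand)
                    if gy <= thresh:
                        cx, cy = cand, gy
                xs.append(cx.copy()); ys.append(cy)
        x, y = np.array(xs), np.array(ys)
    return p * np.mean(y <= 0)

# Toy limit state g(x) = 3 - x1: exact failure probability Phi(-3) ~ 1.35e-3
p_hat = subset_simulation(lambda v: 3.0 - v[0], dim=2, seed=1)
```

Because each level targets a conditional probability around p0 rather than the tiny unconditional one, the total number of g-evaluations stays in the thousands where crude Monte Carlo would need millions, which is exactly why the paper pairs it with kriging surrogates.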
MATSuMoTo: The MATLAB Surrogate Model Toolbox For Computationally Expensive Black-Box Global Optimization Problems
MATSuMoTo is the MATLAB Surrogate Model Toolbox for computationally expensive, black-box, global optimization problems that may have continuous, mixed-integer, or pure integer variables. Due to the black-box nature of the objective function, derivatives are not available. Hence, surrogate models are used as computationally cheap approximations of the expensive objective function in order to guide the search for improved solutions. Because a single function evaluation is computationally expensive, the goal is to find optimal solutions within very few expensive evaluations. The multimodality of the expensive black-box function requires an algorithm that is able to search locally as well as globally. MATSuMoTo addresses these challenges: it offers various choices for surrogate models and surrogate model mixtures, initial experimental design strategies, and sampling strategies, and it can perform several function evaluations in parallel by exploiting MATLAB's Parallel Computing Toolbox.
Comment: 13 pages, 7 figures.
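The surrogate-guided loop the abstract describes (cheap model proposes, expensive function disposes) can be sketched in a few lines. This uses an RBF surrogate and a random-candidate sampling strategy; the objective, budget, and candidate scheme are assumptions for illustration, not MATSuMoTo's actual algorithms.

```python
# Minimal surrogate-assisted optimization loop with an RBF surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_objective(x):
    """Stand-in for an expensive black-box simulation (toy smooth function)."""
    return (x[0] - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))                # initial experimental design
y = np.array([expensive_objective(x) for x in X])

for _ in range(15):                               # tight evaluation budget
    surrogate = RBFInterpolator(X, y)             # cheap approximation
    cand = rng.uniform(0, 1, size=(200, 1))       # candidate sampling strategy
    # avoid re-evaluating points essentially identical to existing samples
    dists = np.min(np.abs(cand - X.T), axis=1)
    cand = cand[dists > 1e-3]
    x_new = cand[np.argmin(surrogate(cand))]      # most promising candidate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_objective(x_new))

x_best = X[np.argmin(y)]
```

Only 20 expensive evaluations are spent in total; the surrogate absorbs the hundreds of candidate evaluations per iteration. A production toolbox additionally balances exploration against exploitation (e.g., by scoring candidates on distance to existing samples as well as predicted value).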
Polynomial Response Surface Approximations for the Multidisciplinary Design Optimization of a High Speed Civil Transport
Surrogate functions have become an important tool in multidisciplinary design optimization to deal with noisy functions, high computational cost, and the practical difficulty of integrating legacy disciplinary computer codes. A combination of mathematical, statistical, and engineering techniques, well known in other contexts, has made polynomial surrogate functions viable for MDO. Despite the obvious limitations imposed by sparse high-fidelity data in high dimensions and the locality of low-order polynomial approximations, the success of the panoply of techniques based on polynomial response surface approximations for MDO shows that the implementation details are more important than the underlying approximation method (polynomial, spline, DACE, kernel regression, etc.). This paper surveys some of the ancillary techniques (statistics, global search, parallel computing, variable complexity modeling) that augment the construction and use of polynomial surrogates.
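The core of a polynomial response surface approximation is small enough to sketch directly: fit a quadratic surrogate to noisy samples by least squares, then optimize the cheap surrogate instead of the expensive analysis. The test function and noise level below are assumptions for illustration.

```python
# Quadratic response surface: least-squares fit, then analytic optimization.
import numpy as np

rng = np.random.default_rng(42)

def expensive_analysis(x1, x2):
    """Stand-in for a costly disciplinary code, observed with noise."""
    return (x1 - 1.0) ** 2 + (x2 + 0.5) ** 2 + 0.05 * rng.standard_normal()

# sample the design space
X = rng.uniform(-2, 2, size=(40, 2))
y = np.array([expensive_analysis(a, b) for a, b in X])

# full quadratic basis: 1, x1, x2, x1^2, x1*x2, x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# stationary point of the fitted quadratic: solve grad = 0, i.e. H x = -(c1, c2)
H = np.array([[2 * coef[3], coef[4]], [coef[4], 2 * coef[5]]])
x_star = np.linalg.solve(H, -coef[1:3])
```

The fitted surrogate smooths the noise and its optimum is found in closed form; the ancillary techniques the paper surveys (design of the sampling plan, lack-of-fit statistics, variable-complexity modeling) are what make this basic recipe work on real MDO problems.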
T-optimal designs for multi-factor polynomial regression models via a semidefinite relaxation method
We consider T-optimal experiment design problems for discriminating multi-factor polynomial regression models where the design space is defined by polynomial inequalities and the regression parameters are constrained to given convex sets. Our proposed optimality criterion is formulated as a convex optimization problem with a moment cone constraint. When the regression models have one factor, an exact semidefinite representation of the moment cone constraint can be applied to obtain an equivalent semidefinite program. When there are two or more factors in the models, we apply a moment relaxation technique and approximate the moment cone constraint by a hierarchy of semidefinite-representable outer approximations. When the relaxation hierarchy converges, an optimal discrimination design can be recovered from the optimal moment matrix, and its optimality can be additionally confirmed by an equivalence theorem. The methodology is illustrated with several examples.
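For reference, the classical T-optimality criterion of Atkinson and Fedorov, which formulations like the one above generalize, fixes a "true" response η₁ and maximizes, over designs ξ on the design space 𝒳, the worst-case lack of fit of the rival model class {η₂(·, θ₂) : θ₂ ∈ Θ₂}:

```latex
% Classical T-optimality criterion for model discrimination
\[
  \xi^{*} \in \arg\max_{\xi}\; T(\xi),
  \qquad
  T(\xi) \;=\; \min_{\theta_2 \in \Theta_2}
  \int_{\mathcal{X}} \bigl( \eta_1(x) - \eta_2(x,\theta_2) \bigr)^{2}\, \xi(\mathrm{d}x) .
\]
```

The abstract's moment cone constraint arises because the integral depends on ξ only through its moments when both models are polynomial, which is what makes the semidefinite machinery applicable.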