
    Development of the D-Optimality-Based Coordinate-Exchange Algorithm for an Irregular Design Space and the Mixed-Integer Nonlinear Robust Parameter Design Optimization

    Robust parameter design (RPD), originally conceptualized by Taguchi, is an effective statistical design method for continuous quality improvement by incorporating product quality into the design of processes. The primary goal of RPD is to identify optimal input variable level settings with minimum process bias and variation. Because of its practicality in reducing the inherent uncertainties associated with system performance across key product and process dimensions, the widespread application of RPD techniques to many engineering and science fields has resulted in significant improvements in product quality and process enhancement. There is little disagreement among researchers about Taguchi's basic philosophy. In response to apparent mathematical flaws in his original version of RPD, researchers have closely examined alternative approaches that incorporate well-established statistical methods, particularly the response surface methodology (RSM), while accepting the main philosophy of his RPD concepts. This RSM-based RPD method predominantly employs the central composite design technique under the assumption that input variables are quantitative on a continuous scale. There are, however, many practical situations in which the input variables are a mix of real-valued quantitative variables on a continuous scale and qualitative variables such as integer- and binary-valued variables. Despite the practicality of such cases in real-world engineering problems, there have been few, if any, research attempts, perhaps due to mathematical hurdles arising from inconsistencies between the design space in the experimental phase and the solution space in the optimization phase. For instance, the design space associated with the central composite design, perhaps the best-known and most effective response surface design for a second-order prediction model, is typically a bounded convex feasible set of real numbers due to its inherent real-valued axial design points; its solution space, however, may consist of both integer and real values. Along these lines, this dissertation proposes RPD optimization models under three different scenarios. Given integer-valued constraints, the dissertation discusses why the Box-Behnken design is preferred over the central composite design and other three-level designs, while maintaining constant or nearly constant prediction variance (the design rotatability) associated with a second-order model. Mixed-integer nonlinear programming models embedding the Box-Behnken design are then proposed. As solution methods, the Karush-Kuhn-Tucker conditions are developed and the sequential quadratic integer programming technique is used. Further, given binary-valued constraints, the dissertation investigates why neither the central composite design nor the Box-Behnken design is effective. To remedy this problem, several 0-1 mixed-integer nonlinear programming models are proposed, built on the foundation of a three-level factorial design with pseudo center points. For these models, standard optimization methods such as the branch-and-bound technique, the outer approximation method, and the hybrid nonlinear branch-and-cut algorithm are used. Finally, there exist special situations during the experimental phase that call for reducing the number of experimental runs or using a reduced regression model in fitting the data. Furthermore, there are special situations where the experimental design space is constrained, and therefore optimal design points should be generated. In these situations, traditional experimental designs may not be appropriate. D-optimal experimental designs are therefore investigated and incorporated into nonlinear programming models, as the design region is typically irregular, which may result in a convex problem. It is believed that the research work contained in this dissertation is the first such examination in the related literature and makes a considerable contribution to the existing body of knowledge by filling these research gaps.
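    As a rough illustration of the kind of search the title refers to, the sketch below implements a generic D-optimality-based coordinate-exchange loop over a constrained (irregular) design region. The second-order model terms, the example feasibility constraint, and all parameter choices are assumptions for illustration only, not the dissertation's actual formulation.

```python
# Hypothetical sketch: coordinate exchange for a D-optimal design on an irregular region.
import numpy as np

def model_matrix(design):
    """Second-order model terms for two factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = design[:, 0], design[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def feasible(point):
    """Assumed irregular region: the corner where x1 + x2 > 1.5 is cut off."""
    return point[0] + point[1] <= 1.5

def d_criterion(design):
    """D-optimality criterion: determinant of the information matrix X'X."""
    X = model_matrix(design)
    return np.linalg.det(X.T @ X)

def coordinate_exchange(n_runs=10, levels=np.linspace(-1, 1, 21), seed=0):
    rng = np.random.default_rng(seed)
    # random feasible starting design
    candidates = [p for p in rng.uniform(-1, 1, (20 * n_runs, 2)) if feasible(p)]
    design = np.array(candidates[:n_runs])
    best = d_criterion(design)
    improved = True
    while improved:
        improved = False
        for i in range(n_runs):           # loop over design points
            for j in range(2):            # loop over coordinates
                keep = design[i, j]
                for level in levels:      # try each candidate level for this coordinate
                    design[i, j] = level
                    if feasible(design[i]):
                        value = d_criterion(design)
                        if value > best + 1e-10:
                            best, keep, improved = value, level, True
                design[i, j] = keep       # retain the best feasible level found
    return design, best

design, det_value = coordinate_exchange()
print(det_value)
```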

    Application of Permutation Genetic Algorithm for Sequential Model Building–Model Validation Design of Experiments

    The work presented in this paper is motivated by a complex multivariate engineering problem associated with engine mapping experiments, which require efficient Design of Experiment (DoE) strategies to minimise expensive testing. The paper describes the development and evaluation of a Permutation Genetic Algorithm (PermGA) to support an exploration-based sequential DoE strategy for complex real-life engineering problems. A known PermGA was implemented to generate uniform OLH DoEs, and substantially extended to support generation of Model Building–Model Validation (MB-MV) sequences by generating optimal infill sets of test points as OLH DoEs that preserve good space-filling and projection properties for the merged MB + MV test plan. The algorithm was further extended to address issues with non-orthogonal design spaces, a common problem in engineering applications. The effectiveness of the PermGA algorithm for the MB-MV OLH DoE sequence was evaluated on a theoretical benchmark problem based on the Six-Hump-Camel-Back (SHCB) function, as well as on the Gasoline Direct Injection (GDI) steady-state engine mapping problem that motivated this research. The case studies show that the algorithm is effective at delivering quasi-orthogonal, space-filling DoEs with good properties even after several MB-MV iterations, while the improvement in model adequacy and accuracy can be monitored by the engineering analyst. The practical importance of this work, demonstrated through the engine case study, is also that a significant reduction in the effort and cost of testing can be achieved. The research work presented in this paper was funded by the UK Technology Strategy Board (TSB) through the Carbon Reduction through Engine Optimization (CREO) project.
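    The sketch below illustrates the general idea of a permutation-coded genetic algorithm for a space-filling Latin hypercube DoE, using a maximin-distance fitness and a within-column swap mutation that preserves the Latin hypercube property. It is a simplified illustration under those assumptions and does not reproduce the paper's PermGA, its MB-MV infill generation, or its handling of non-orthogonal design spaces.

```python
# Hypothetical sketch: permutation-coded GA for a space-filling Latin hypercube.
import numpy as np
from scipy.spatial.distance import pdist

def fitness(lhs):
    """Space-filling quality: the minimum pairwise distance (larger is better)."""
    points = lhs / (lhs.shape[0] - 1)     # scale integer levels to [0, 1]
    return pdist(points).min()

def random_lhs(n_points, n_dims, rng):
    """Each column is a random permutation of 0..n_points-1, i.e. a Latin hypercube."""
    return np.column_stack([rng.permutation(n_points) for _ in range(n_dims)])

def mutate(lhs, rng):
    """Swap two levels inside one randomly chosen column (preserves the LH property)."""
    child = lhs.copy()
    col = rng.integers(child.shape[1])
    i, j = rng.choice(child.shape[0], size=2, replace=False)
    child[i, col], child[j, col] = child[j, col], child[i, col]
    return child

def perm_ga(n_points=20, n_dims=3, pop_size=30, generations=200, seed=1):
    rng = np.random.default_rng(seed)
    population = [random_lhs(n_points, n_dims, rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)          # elitist ranking
        survivors = population[: pop_size // 2]
        children = [mutate(survivors[rng.integers(len(survivors))], rng)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best_lhs = perm_ga()
print(fitness(best_lhs))
```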

    The Kalai-Smorodinsky solution for many-objective Bayesian optimization

    An ongoing aim of research in multiobjective Bayesian optimization is to extend its applicability to a large number of objectives. While coping with a limited budget of evaluations, recovering the set of optimal compromise solutions generally requires numerous observations and is less interpretable, since this set tends to grow larger with the number of objectives. We thus propose to focus on a specific solution originating from game theory, the Kalai-Smorodinsky solution, which possesses attractive properties. In particular, it ensures equal marginal gains over all objectives. We further make it insensitive to monotonic transformations of the objectives by considering the objectives in the copula space. A novel tailored algorithm is proposed to search for the solution in the form of a Bayesian optimization algorithm: sequential sampling decisions are made based on acquisition functions derived from an instrumental Gaussian process prior. Our approach is tested on four problems with, respectively, four, six, eight, and nine objectives. The method is available in the R package GPGame on CRAN at https://cran.r-project.org/package=GPGame.
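    As a minimal illustration of the Kalai-Smorodinsky idea (equal relative gains over all objectives), the sketch below picks a compromise point from a finite set of already evaluated objective vectors by maximizing the smallest relative gain over the estimated nadir. The ideal/nadir estimation and the selection rule are simplifying assumptions; the paper's method instead works with Gaussian-process surrogates, acquisition functions, and a copula-space transformation.

```python
# Hypothetical sketch: approximate Kalai-Smorodinsky pick from evaluated points.
import numpy as np

def kalai_smorodinsky_pick(F):
    """F: (n_points, n_objectives) array of observed objective values (to minimize)."""
    ideal = F.min(axis=0)          # best observed value per objective
    nadir = F.max(axis=0)          # worst observed value per objective (disagreement point)
    span = np.where(nadir > ideal, nadir - ideal, 1.0)
    gains = (nadir - F) / span     # relative improvement over the nadir, in [0, 1]
    # KS idea: equalize gains across objectives -> take the point whose smallest gain is largest
    return F[np.argmax(gains.min(axis=1))]

# toy usage with nine objectives
rng = np.random.default_rng(0)
F = rng.uniform(size=(200, 9))
print(kalai_smorodinsky_pick(F))
```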

    Solving, Estimating and Selecting Nonlinear Dynamic Economic Models without the Curse of Dimensionality

    A welfare analysis of a risky policy is impossible within a linear or linearized model and its certainty equivalence property. The presented algorithms are designed as a toolbox for a general model class. The computational challenges are considerable, and I concentrate on the numerics and statistics for a simple model of dynamic consumption and labor choice. I calculate the optimal policy and estimate the posterior density of structural parameters and the marginal likelihood within a nonlinear state space model. Even in an interpreted language, my approach is twenty times faster than the only alternative compiled approach. The model is estimated on simulated data in order to test the routines against known true parameters. The policy function is approximated by Smolyak Chebyshev polynomials and the rational expectation integral by Smolyak Gaussian quadrature. The Smolyak operator is used to extend univariate approximation and integration operators to many dimensions. It reduces the curse of dimensionality from exponential to polynomial growth. The likelihood integrals are evaluated by a Gaussian quadrature and a Gaussian quadrature particle filter. The bootstrap, or sequential importance resampling, particle filter is used as an accuracy benchmark. The posterior is estimated by the Gaussian filter and a Metropolis-Hastings algorithm. I propose a genetic extension of the standard Metropolis-Hastings algorithm using parallel random walk sequences. This improves the robustness to start values and the global maximization properties. Moreover, it simplifies a cluster implementation, and the choice of random walk variances is reduced to only two parameters, so that almost no trial sequences are needed. Finally, the marginal likelihood is calculated as a criterion for non-nested and quasi-true models in order to select between the nonlinear estimates and a first-order perturbation solution combined with the Kalman filter.
    Keywords: stochastic dynamic general equilibrium model, Chebyshev polynomials, Smolyak operator, nonlinear state space filter, curse of dimensionality, posterior of structural parameters, marginal likelihood
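    The bootstrap (sequential importance resampling) particle filter that serves as the accuracy benchmark can be sketched generically as below. The placeholder transition and observation functions and all constants are assumptions for illustration, not the consumption-labor model estimated in the paper.

```python
# Hypothetical sketch: bootstrap particle filter log-likelihood for a nonlinear state-space model.
import numpy as np

def bootstrap_particle_filter(y, transition, obs_loglik, n_particles=1000, seed=0):
    """Return an estimate of the log marginal likelihood log p(y_1:T)."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)      # draw from an assumed initial prior
    log_lik = 0.0
    for y_t in y:
        particles = transition(particles, rng)        # propagate through the state equation
        log_w = obs_loglik(y_t, particles)            # weight by the observation density
        max_w = log_w.max()
        w = np.exp(log_w - max_w)
        log_lik += max_w + np.log(w.mean())           # running log-likelihood estimate
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]                    # multinomial resampling
    return log_lik

# toy nonlinear model: x_t = 0.9 x_{t-1} + 0.5 eps_t,  y_t = x_t^2 / 2 + nu_t
transition = lambda x, rng: 0.9 * x + 0.5 * rng.standard_normal(x.size)
obs_loglik = lambda y_t, x: -0.5 * (y_t - x ** 2 / 2) ** 2 - 0.5 * np.log(2 * np.pi)
y = np.random.default_rng(1).standard_normal(25)
print(bootstrap_particle_filter(y, transition, obs_loglik))
```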

    An efficient algorithm for nonlinear integer programming

    M.Sc., Faculty of Sciences, University of the Witwatersrand, 2011. This dissertation is concerned with discrete global optimization of nonlinear problems. These problems are constrained or unconstrained and are not easily solvable, since there exists a multiplicity of local and global minima. In this dissertation, we study the current methods for solving such problems and highlight their inefficiencies. We introduce a new local search procedure. We study the rapidly-exploring random tree (RRT) method, found mostly in the research area of robotics. We then design two global optimization algorithms based on RRT. RRT has never been used in the field of global optimization. We exploit its attractive properties to develop two new algorithms for solving discrete nonlinear optimization problems. The first method is called RRT-Optimizer and is denoted RRTOpt. RRTOpt is then modified to include probabilistic elements within the RRT. We have denoted this method RRTOptv1. Results are generated for both methods and numerical comparisons are made with a number of recent methods.
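    A loosely RRT-inspired sketch for discrete optimization is given below: the tree grows one lattice step from its nearest node toward a random integer sample while the best objective value is tracked. This only illustrates the general flavor of growing a random tree over the search space; it is not the dissertation's RRTOpt or RRTOptv1 algorithm, whose selection rules and probabilistic elements are not reproduced here.

```python
# Hypothetical sketch: RRT-style tree growth used for discrete (integer) minimization.
import numpy as np

def rrt_optimize(f, lower, upper, n_iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower), np.asarray(upper)
    start = rng.integers(lower, upper + 1)
    tree = [start]
    best_x, best_f = start, f(start)
    for _ in range(n_iters):
        target = rng.integers(lower, upper + 1)                       # random integer sample
        nearest = min(tree, key=lambda node: np.sum(np.abs(node - target)))
        step = np.sign(target - nearest)                              # one lattice step toward it
        new = np.clip(nearest + step, lower, upper)
        tree.append(new)
        value = f(new)
        if value < best_f:
            best_x, best_f = new, value
    return best_x, best_f

# toy usage: minimize a separable quadratic over integers in [-10, 10]^3
f = lambda x: float(np.sum((x - np.array([3, -2, 7])) ** 2))
print(rrt_optimize(f, [-10] * 3, [10] * 3))
```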

    Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. Fundação para a Ciência e Tecnologia (FCT).
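    A minimal sketch of the repulsion idea is shown below: previously located roots contribute an 'erf'-based penalty to the merit function, so subsequent Nelder-Mead searches are pushed away from them. The example system, the penalty form, and the constants are illustrative assumptions rather than the paper's exact merit function or strategies.

```python
# Hypothetical sketch: Nelder-Mead repulsion search for multiple roots of a nonlinear system.
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def system(x):
    """Example system with two roots: f1 = x1^2 + x2^2 - 1, f2 = x1 - x2."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])

def merit(x, found_roots, beta=10.0):
    base = np.sum(system(x) ** 2)                        # zero exactly at a root
    # repulsion: near a known root the penalty approaches 1, far away it vanishes
    repulsion = sum(1.0 - erf(beta * np.linalg.norm(x - r)) for r in found_roots)
    return base + repulsion

def repulsion_search(n_starts=30, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    roots = []
    for _ in range(n_starts):
        x0 = rng.uniform(-2, 2, size=2)
        res = minimize(merit, x0, args=(roots,), method="Nelder-Mead",
                       options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 2000})
        if np.sum(system(res.x) ** 2) < tol and all(np.linalg.norm(res.x - r) > 1e-3 for r in roots):
            roots.append(res.x)
    return roots

print(repulsion_search())   # expect the two roots near (0.707, 0.707) and (-0.707, -0.707)
```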