18 research outputs found

    Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control

    Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. In many such applications, function evaluations are severely limited and no analytical information about the objective function and constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA was proposed, which uses RBF surrogate modeling for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in less than 500 function evaluations. Unfortunately, COBRA requires careful parameter adjustment in order to do so. In this work we present a new self-adjusting algorithm, SACOBRA, which is based on COBRA and capable of achieving high-quality results with very few function evaluations and no parameter tuning. With the help of performance profiles on a set of benchmark problems (G-problems, MOPTA08), we show that SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter setting. We analyze the importance of the new elements in SACOBRA and find that each of them contributes to the overall optimization performance. We discuss the reasons behind this and thereby gain a better understanding of high-quality RBF surrogate modeling.
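The core idea of the abstract above — interpolate both the objective and the constraints with RBF surrogates, then propose the candidate the surrogates rate best — can be sketched as follows. This is a minimal illustration only, not the published COBRA/SACOBRA algorithm; the toy functions `f` and `g` and all parameter values are invented for the example.

```python
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant to samples (X, y)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), y)
    return lambda x: float(np.exp(-(eps * np.linalg.norm(X - x, axis=-1)) ** 2) @ w)

# Toy expensive problem: minimize f subject to g(x) <= 0.
f = lambda x: (x[0] - 0.3) ** 2 + x[1] ** 2
g = lambda x: 0.2 - x[0]                      # feasible when x[0] >= 0.2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(25, 2))          # initial design
sf = fit_rbf(X, np.array([f(x) for x in X]))  # objective surrogate
sg = fit_rbf(X, np.array([g(x) for x in X]))  # constraint surrogate

# Propose the pool point with the best surrogate objective among points
# the constraint surrogate predicts to be feasible.
pool = rng.uniform(-1, 1, size=(2000, 2))
feasible = [p for p in pool if sg(p) <= 0]
best = min(feasible, key=sf)
```

In the real algorithm this proposal step is a constrained local optimization over the surrogates, and the new point is evaluated on the true functions and added to the design before refitting.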

    A Random Forest Assisted Evolutionary Algorithm for Data-Driven Constrained Multi-Objective Combinatorial Optimization of Trauma Systems

    Many real-world optimization problems can be solved only by a data-driven approach, simply because no analytic objective functions are available for evaluating candidate solutions. In this work, we address a class of expensive data-driven constrained multi-objective combinatorial optimization problems, where the objectives and constraints can be calculated only on the basis of a large amount of data. To solve this class of problems, we propose to use random forests and radial basis function networks as surrogates to approximate both objective and constraint functions. In addition, logistic regression models are introduced to rectify the surrogate-assisted fitness evaluations, and stochastic ranking selection is adopted to further reduce the influence of the approximated constraint functions. Three variants of the proposed algorithm are empirically evaluated on multi-objective knapsack benchmark problems and two real-world trauma system design problems. Experimental results demonstrate that the variant using random forest models as the surrogates is effective and efficient in solving data-driven constrained multi-objective combinatorial optimization problems.
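The stochastic ranking selection mentioned above is a well-known constraint-handling scheme (bubble-sort-like sweeps in which, for pairs involving an infeasible solution, the objective decides with probability pf and the constraint violation decides otherwise). A minimal sketch, assuming minimization and a violation function `g` that returns 0 for feasible solutions:

```python
import random

def stochastic_ranking(pop, f, g, pf=0.45):
    """Rank pop by repeated adjacent-swap sweeps. If both solutions are
    feasible, or with probability pf otherwise, compare objectives f;
    otherwise compare constraint violations g (0 means feasible)."""
    idx = list(range(len(pop)))
    for _ in range(len(pop)):
        swapped = False
        for i in range(len(pop) - 1):
            a, b = pop[idx[i]], pop[idx[i + 1]]
            use_obj = (g(a) == 0 and g(b) == 0) or random.random() < pf
            if (use_obj and f(a) > f(b)) or (not use_obj and g(a) > g(b)):
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                swapped = True
        if not swapped:
            break
    return [pop[i] for i in idx]
```

With pf = 0 this reduces to strict feasibility-first ranking; intermediate pf values let good infeasible solutions occasionally outrank feasible ones, which is the mechanism the abstract relies on to soften errors in the approximated constraints.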

    A Bayesian Approach to Computer Model Calibration and Model-Assisted Design

    Computer models of phenomena that are difficult or impossible to study directly are critical for enabling research and assisting design in many areas. In order to be effective, computer models must be calibrated so that they accurately represent the modeled phenomena. A rich variety of methods for computer model calibration has been developed in recent decades. Among the desiderata of such methods is a means of quantifying the uncertainty that remains after calibration, regarding both the values of the calibrated model inputs and the model outputs. Bayesian approaches to calibration have met this need. However, limitations remain. Whereas in model calibration one finds point estimates or distributions of calibration inputs in order to induce the model to reflect reality accurately, interest in a computer model often centers primarily on its use for model-assisted design, in which the goal is to find values for design inputs that induce the modeled system to approximate some target outcome. Existing Bayesian approaches are limited to the first of these two tasks. The present work develops an approach adapting Bayesian model calibration methods for application in model-assisted design. The approach retains the benefits of Bayesian calibration in accounting for and quantifying all sources of uncertainty, and it is capable of generating a comprehensive assessment of the Pareto optimal inputs for a multi-objective optimization problem. The present work shows that this approach can be applied as a method for model-assisted design using a previously calibrated system, and can also serve as a method for model-assisted design using a model that still requires calibration, accomplishing both ends simultaneously.
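The two tasks the abstract distinguishes — calibrating an uncertain model input against field data, then using the calibrated model to choose a design input — can be illustrated with a deliberately tiny grid-based sketch. Everything here (the stand-in `model`, the synthetic observations, the Gaussian likelihood, the target value) is invented for the example; the thesis itself uses full Bayesian machinery, not a grid.

```python
import numpy as np

def model(x, theta):                  # cheap stand-in for the computer model
    return theta * x + 0.1 * x ** 2

# Synthetic field observations from a "true" process with theta* = 1.5.
xs_obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
ys_obs = model(xs_obs, 1.5) + np.array([0.01, -0.02, 0.0, 0.02, -0.01])

# Calibration: grid posterior over theta under a Gaussian likelihood.
thetas = np.linspace(0.0, 3.0, 301)
sigma = 0.05
sse = np.array([np.sum((model(xs_obs, t) - ys_obs) ** 2) for t in thetas])
post = np.exp(-sse / (2 * sigma ** 2))
post /= post.sum()

# Model-assisted design: choose the design input x whose posterior-expected
# output is closest to a target, propagating calibration uncertainty.
target = 2.0
xs_design = np.linspace(0.0, 2.0, 201)
expected = np.array([np.sum(post * model(x, thetas)) for x in xs_design])
x_star = xs_design[np.argmin(np.abs(expected - target))]
```

The point of the sketch is the second stage: the design choice averages the model output over the calibration posterior rather than plugging in a single point estimate, which is the uncertainty-propagation property the abstract emphasizes.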

    A Generalized Method for Efficient Global Optimization of Antenna Design

    Efficiency improvement is of great significance for simulation-driven antenna design optimization methods based on evolutionary algorithms (EAs). The two main efficiency enhancement methods exploit data-driven surrogate models and/or multi-fidelity simulation models to assist EAs. However, optimization methods based on the latter either need ad hoc low-fidelity model setup or have difficulties in handling problems with more than a few design variables, which is a main barrier for industrial applications. To address this issue, a generalized three-stage multi-fidelity-simulation-model-assisted antenna design optimization framework is proposed in this paper. The main ideas include the introduction of a novel data mining stage that handles the discrepancy between simulation models of different fidelities, and a surrogate-model-assisted combined global and local search stage for efficient optimization based on the high-fidelity simulation model. This framework is then applied to SADEA, a state-of-the-art surrogate-model-assisted antenna design optimization method, yielding SADEA-II. Experimental results indicate that SADEA-II successfully handles various discrepancies between simulation models and considerably outperforms SADEA in terms of computational efficiency while ensuring improved design quality.
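A common way to handle the discrepancy between simulation fidelities that the abstract refers to is to learn a cheap correction from a handful of paired low-/high-fidelity evaluations. The sketch below uses an additive linear discrepancy model; the functions `f_low` and `f_high` are invented stand-ins, and the paper's actual data mining stage is more sophisticated than this.

```python
import numpy as np

def f_low(x):   # coarse simulation: cheap but biased
    return np.sin(x) + 0.3

def f_high(x):  # fine simulation: expensive, treated as ground truth
    return np.sin(x) + 0.05 * x

# Learn an additive discrepancy from a few paired evaluations, then
# correct the low-fidelity model everywhere.
x_hf = np.array([0.0, 1.0, 2.0, 3.0])
delta = f_high(x_hf) - f_low(x_hf)
a, b = np.polyfit(x_hf, delta, 1)          # linear fit of the discrepancy

f_corrected = lambda x: f_low(x) + a * x + b
```

In this toy case the true discrepancy (0.05x - 0.3) happens to be exactly linear, so the corrected model matches the high-fidelity one; in practice the corrected low-fidelity model only guides the search, and promising designs are still verified with high-fidelity simulations.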

    Towards an evolvable cancer treatment simulator

    © 2019 Elsevier B.V. The use of high-fidelity computational simulations promises to enable high-throughput hypothesis testing and optimisation of cancer therapies. However, increasing realism comes at the cost of increasing computational requirements. This article explores the use of surrogate-assisted evolutionary algorithms to optimise the targeted delivery of a therapeutic compound to cancerous tumour cells with the multicellular simulator PhysiCell. The use of both Gaussian process models and multi-layer perceptron neural network surrogate models is investigated. We find that evolutionary algorithms are able to effectively explore the parameter space of biophysical properties within the agent-based simulations, minimising the resulting number of cancerous cells after a period of simulated treatment. Both model-assisted algorithms are found to outperform a standard evolutionary algorithm, demonstrating their ability to perform a more effective search within the very small evaluation budget. This represents the first use of efficient evolutionary algorithms within a high-throughput multicellular computing approach to find therapeutic design optima that maximise tumour regression.
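The surrogate-assisted evolutionary loop described above can be sketched with a pre-screening pattern: generate many offspring, rank them with a cheap surrogate, and spend the expensive simulation only on the most promising one. Everything here is a toy stand-in — `expensive_fitness` replaces a PhysiCell run, and a crude nearest-neighbour predictor replaces the Gaussian process and MLP surrogates the paper actually uses.

```python
import random

def expensive_fitness(x):                     # stand-in for one simulation run
    return sum((xi - 0.5) ** 2 for xi in x)

def nn_surrogate(archive):
    """Nearest-neighbour surrogate: predict the fitness of the closest
    already-evaluated point (a crude stand-in for a GP or MLP)."""
    def predict(x):
        rec = min(archive,
                  key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], x)))
        return rec[1]
    return predict

random.seed(1)
dim, mu, lam, budget = 3, 5, 20, 60
archive = [(p, expensive_fitness(p))
           for p in [[random.random() for _ in range(dim)] for _ in range(mu)]]

while len(archive) < budget:
    parents = sorted(archive, key=lambda r: r[1])[:mu]
    offspring = [[min(1.0, max(0.0, xi + random.gauss(0, 0.1)))
                  for xi in random.choice(parents)[0]] for _ in range(lam)]
    # Pre-screen with the surrogate; spend a real evaluation only on the
    # most promising offspring.
    promising = min(offspring, key=nn_surrogate(archive))
    archive.append((promising, expensive_fitness(promising)))

best = min(archive, key=lambda r: r[1])
```

The key accounting is the evaluation budget: per generation, lam candidates are screened for the cost of a single true evaluation, which is what makes the approach viable when each simulation is expensive.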

    Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust Deep Neural Architectures

    Deep neural networks have been found vulnerable to adversarial attacks, raising potential concerns in security-sensitive contexts. To address this problem, recent research has investigated the adversarial robustness of deep neural networks from the architectural point of view. However, searching for architectures of deep neural networks is computationally expensive, particularly when coupled with the adversarial training process. To meet this challenge, this paper proposes a bi-fidelity multiobjective neural architecture search approach. First, we formulate the NAS problem of enhancing the adversarial robustness of deep neural networks as a multiobjective optimization problem. Specifically, in addition to a low-fidelity performance predictor as the first objective, we leverage an auxiliary objective whose value is the output of a surrogate model trained with high-fidelity evaluations. Second, we reduce the computational cost by combining three performance estimation methods, i.e., parameter sharing, low-fidelity evaluation, and the surrogate-based predictor. The effectiveness of the proposed approach is confirmed by extensive experiments conducted on the CIFAR-10, CIFAR-100 and SVHN datasets.
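Treating the low-fidelity score and the surrogate prediction as two objectives, as the abstract describes, means the search maintains a nondominated set of candidate architectures rather than a single best one. A minimal sketch of the nondominated filter, with invented candidate scores (lower is better in both objectives):

```python
def pareto_front(points):
    """Return the nondominated points for minimisation in both objectives:
    p survives unless some other point is at least as good in both."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# Invented (low-fidelity error, surrogate-predicted robust error) pairs.
cands = [(0.10, 0.40), (0.12, 0.30), (0.20, 0.25), (0.15, 0.35), (0.25, 0.50)]
front = pareto_front(cands)
```

Architectures that are good under one estimator but poor under the other stay in play, which hedges against the bias of any single cheap fidelity.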

    Boosting data-driven evolutionary algorithm with localized data generation

    By efficiently building and exploiting surrogates, data-driven evolutionary algorithms (DDEAs) can be very helpful in solving expensive and computationally intensive problems. However, they still often suffer from two difficulties. First, many existing methods for building a single ad hoc surrogate are suitable for some special problems but may not work well on others. Second, the optimization accuracy of DDEAs deteriorates if the available data are not sufficient for building accurate surrogates, which is common in expensive optimization problems. To this end, this article proposes a novel DDEA with two efficient components. First, a boosting strategy (BS) is proposed for self-aware model management, which iteratively builds and combines surrogates to obtain models suited to different problems. Second, a localized data generation (LDG) method is proposed to generate synthetic data that alleviate data shortage and increase data quantity, which is achieved by approximating fitness through data positions. By integrating BS and LDG, the resulting BDDEA-LDG algorithm improves model accuracy and data quantity at the same time, automatically adapting to the problem at hand. Besides, a tradeoff is empirically considered to strike a better balance between the effectiveness of surrogates and the time cost of building them. The experimental results show that the proposed BDDEA-LDG algorithm generally outperforms both traditional methods without surrogates and other state-of-the-art DDEAs on widely used benchmarks and a real-world arterial traffic signal timing optimization problem. Furthermore, BDDEA-LDG can produce competitive results using only about 2% of the computational budget of traditional methods.
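The localized data generation idea — create synthetic samples near evaluated points and approximate their fitness from position — can be sketched as follows. Assigning each synthetic point the fitness of the point it was perturbed from is a deliberate simplification of the paper's LDG, used here only to show the mechanism; the radius and count parameters are invented.

```python
import random

def localized_data_generation(archive, radius=0.05, k=3):
    """For each evaluated (x, y) pair, create k synthetic samples inside a
    small box around x and approximate their fitness by y. Valid only while
    radius is small relative to how fast the fitness changes."""
    synthetic = []
    for x, y in archive:
        for _ in range(k):
            x_new = [xi + random.uniform(-radius, radius) for xi in x]
            synthetic.append((x_new, y))
    return synthetic
```

The enlarged dataset (real plus synthetic samples) is then what the boosting strategy trains its ensemble of surrogates on, which is how the two components of BDDEA-LDG interact.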