
    Parallel surrogate-assisted global optimization with expensive functions – a survey

    Surrogate-assisted global optimization is gaining popularity. Similarly, modern advances in computing power increasingly rely on parallelization rather than faster processors. This paper examines some of the methods used to take advantage of parallelization in surrogate-based global optimization. A key issue in this review is how different algorithms balance exploration and exploitation. Most of the papers surveyed describe adaptive samplers that employ Gaussian process or Kriging surrogates. These allow sophisticated approaches to balancing exploration and exploitation, and even permit algorithms whose rate of convergence can be calculated as a function of the number of parallel processors. In addition to optimization based on adaptive sampling, surrogate-assisted parallel evolutionary algorithms are also surveyed. Beyond reviewing the present state of the art, the paper argues that methods that parallelize easily, such as multiple parallel runs, or that rely on populations of designs for diversity, deserve more attention.
    Funding: United States Dept. of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, Cooperative Agreement under the Predictive Academic Alliance Program, DE-NA0002378.
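The "multiple parallel runs" strategy the survey advocates can be illustrated with a minimal sketch; the objective and the stochastic hill-climber below are toy stand-ins assumed for illustration, not methods from the paper:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def expensive(x):
    # stand-in for an expensive simulation (assumed toy objective)
    return (x - 0.3) ** 2

def one_run(seed):
    # one independent stochastic hill-climb; runs share no state,
    # so they parallelize trivially
    rng = random.Random(seed)
    x = rng.uniform(-1.0, 1.0)
    step = 0.5
    for _ in range(200):
        cand = x + rng.uniform(-step, step)
        if expensive(cand) < expensive(x):
            x = cand
        step *= 0.99                  # slowly shrink the search radius
    return expensive(x), x

with ThreadPoolExecutor(max_workers=4) as pool:
    best_f, best_x = min(pool.map(one_run, range(4)))
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` gives true CPU parallelism when the objective is compute-bound.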

    Design mining interacting wind turbines

    © 2016 by the Massachusetts Institute of Technology. An initial study has recently been presented of surrogate-assisted evolutionary algorithms used to design vertical-axis wind turbines, wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer. Unlike other approaches, such as computational fluid dynamics simulations, no mathematical formulations were used and no model assumptions were made. This paper extends that work by exploring alternative surrogate modelling and evolutionary techniques. The accuracy of various modelling algorithms used to estimate the fitness of evaluated individuals from the initial experiments is compared. The effect of temporally windowing surrogate model training samples is explored. A surrogate-assisted approach based on an enhanced local search is introduced, and alternative coevolution collaboration schemes are examined.

    Sequential exploration-exploitation with dynamic trade-off for efficient reliability analysis of complex engineered systems

    A new sequential sampling method, named sequential exploration-exploitation with dynamic trade-off (SEEDT), is proposed for reliability analysis of complex engineered systems involving high dimensionality and a wide range of reliability levels. The proposed SEEDT method builds on the ideas of two previously developed sequential Kriging reliability methods, namely the efficient global reliability analysis (EGRA) and maximum confidence enhancement (MCE) methods. It employs Kriging-based sequential sampling to build a surrogate model (i.e., a Kriging model) that approximates the performance function of an engineered system, and performs Monte Carlo simulation on the surrogate model for reliability analysis. A new acquisition function, referred to as expected utility (EU), is developed to sequentially locate a computationally efficient set of sample points for constructing the Kriging model. The SEEDT method makes three technical contributions: (i) defining a new utility function with several desirable properties that facilitates the joint consideration of exploration and exploitation over the course of sequential sampling; (ii) introducing a new exploration-exploitation trade-off coefficient that dynamically weighs exploration and exploitation to achieve a fine balance between these two activities; and (iii) developing a new convergence criterion based on the uncertainty in the prediction of the limit-state function (LSF). The effectiveness of the proposed method is evaluated with several mathematical and practical examples. Results from these examples suggest that, given a certain number of sample points, the SEEDT method achieves better accuracy in predicting the LSF than existing sequential sampling methods.
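A hedged sketch of this style of sequential sampling: a simple Gaussian-process ("Kriging") posterior plus an acquisition that dynamically reweights exploration against exploitation near the limit state. The kernel, toy performance function, and linear trade-off schedule are illustrative assumptions, not the paper's actual EU acquisition:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel, the usual Kriging correlation choice
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(Xt, yt, Xq, nugget=1e-6):
    # standard GP regression equations with unit prior variance
    K = rbf(Xt, Xt) + nugget * np.eye(len(Xt))
    Ks = rbf(Xq, Xt)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ yt
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def acquisition(mu, sigma, w):
    # w weighs exploration (sigma) against exploitation (proximity of
    # the predicted performance function to the limit state mu = 0)
    return w * sigma - (1.0 - w) * np.abs(mu)

g = lambda x: np.sin(3.0 * x) + 0.5          # toy performance function
Xt = np.array([-1.0, 0.0, 1.0]); yt = g(Xt)  # initial samples
Xq = np.linspace(-1.0, 1.0, 201)             # candidate pool
for it in range(5):
    mu, sigma = gp_posterior(Xt, yt, Xq)
    w = max(0.2, 1.0 - it / 4)               # dynamic trade-off: explore first
    xn = Xq[np.argmax(acquisition(mu, sigma, w))]
    Xt = np.append(Xt, xn); yt = np.append(yt, g(xn))
```

Each iteration spends one expensive evaluation where the weighted utility is highest; as w decays, sampling concentrates near the predicted limit state g(x) = 0.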

    Integrated system to perform surrogate based aerodynamic optimisation for high-lift airfoil

    This work deals with the aerodynamic optimisation of a generic two-dimensional three-element high-lift configuration. Although the high-lift system is deployed only during take-off and landing, in the low-speed phases of flight, the cost efficiency of the airplane is strongly influenced by it [1]. The ultimate goal of an aircraft high-lift system design team is to define the simplest configuration which, for prescribed constraints, will meet the take-off, climb, and landing requirements, usually expressed in terms of maximum L/D and/or maximum CL. The ability of the calculation method to accurately predict changes in objective function value when gaps, overlaps and element deflections are varied is therefore critical. Despite advances in computer capacity, the enormous computational cost of running complex engineering simulations makes it impractical to rely exclusively on simulation for the purpose of design optimisation. To cut down the cost, surrogate models, also known as metamodels, are constructed from and then used in place of the actual simulation models. This work outlines the development of integrated systems to perform aerodynamic multi-objective optimisation for a three-element airfoil test case in high-lift configuration, making use of the surrogate models available in MACROS Generic Tools, which has been integrated in our design tool. Different metamodelling techniques have been compared based on multiple performance criteria. With MACROS it is possible to perform either optimisation of a model built from a predefined training sample (GSO) or iterative surrogate-based optimisation (SBO). In the first case the model is built independently of the optimisation and then used as a black box in the optimisation process. In the second case the optimisation process must be able to call the CFD code directly; no model needs to be built in advance, as it is constructed internally during the optimisation. Both approaches have been applied. 
A detailed analysis of the integrated design system, the methods as well as th
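The two usage modes described above, optimising a surrogate built once from a predefined training sample versus iterative SBO that keeps calling the solver, can be contrasted in a small sketch; the analytic "cfd" objective and the polynomial fit are cheap stand-ins assumed for illustration, not MACROS components:

```python
import numpy as np

cfd = lambda x: (x - 0.7) ** 2 + 0.1 * np.sin(8 * x)   # stand-in "solver"
grid = np.linspace(0.0, 1.0, 501)                      # design space

# (a) one-shot: build a surrogate from a fixed training sample, then
# optimise the surrogate as a black box (no further solver calls)
X = np.linspace(0.0, 1.0, 7)
y = cfd(X)
coeffs = np.polyfit(X, y, 2)                 # quadratic stand-in metamodel
x_oneshot = grid[np.argmin(np.polyval(coeffs, grid))]

# (b) iterative SBO: refit the surrogate and call the solver again at
# each surrogate optimum
Xs, ys = list(X), list(y)
for _ in range(5):
    c = np.polyfit(Xs, ys, 2)
    xn = grid[np.argmin(np.polyval(c, grid))]
    Xs.append(xn)
    ys.append(cfd(xn))                       # one new "CFD" call per iteration
best_y, x_sbo = min(zip(ys, Xs))
```

The one-shot surrogate is only as good as its initial sample plan, while SBO refines the model exactly where the optimiser is looking, at the price of in-loop solver calls.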

    Surrogate-based optimisation using adaptively scaled radial basis functions

    Aerodynamic shape optimisation is widely used in several applications, such as road vehicles, aircraft and trains. This paper investigates the performance of two surrogate-based optimisation methods: a Proper Orthogonal Decomposition-based method and a force-based surrogate model. The generic passenger vehicle DrivAer is used as a test case, for which the predictive capability of the surrogate in terms of aerodynamic drag is presented. The Proper Orthogonal Decomposition-based method uses simulation results from topologically different meshes by interpolating all solutions onto a common mesh for which the decomposition is calculated. Both the Proper Orthogonal Decomposition- and force-based approaches make use of Radial Basis Function interpolation. The Radial Basis Function hyperparameters are optimised using differential evolution. Additionally, the axis scaling is treated as a hyperparameter, which reduces the interpolation error by more than 50% for the investigated test case. It is shown that the force-based approach performs better than the Proper Orthogonal Decomposition method, especially at low sample counts, both with and without adaptive scaling. The sample points, from which the surrogate model is built, are determined using an optimised Latin Hypercube sampling plan. The Latin Hypercube sampling plan is extended to include both continuous and categorical values, which further improves the surrogate's predictive capability when categorical design parameters, such as on/off parameters, are included in the design space. The performance of the force-based surrogate model is compared with four other gradient-free optimisation techniques: Random Sample, Differential Evolution, Nelder–Mead and Bayesian Optimisation. The surrogate model performed as well as, or better than, these algorithms for 17 of the 18 investigated benchmark problems.
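The axis-scaling idea can be sketched with a plain Gaussian RBF interpolant in which a per-axis scale vector is an explicit hyperparameter. In the paper the scaling is optimised with differential evolution; here it is simply fixed by hand, and the test function is an assumed toy:

```python
import numpy as np

def rbf_fit_predict(Xt, yt, Xq, scale):
    # Gaussian RBF interpolation; `scale` stretches each axis before
    # distances are computed -- the extra hyperparameter that the paper
    # tunes together with the usual RBF shape parameters
    Z, Q = Xt * scale, Xq * scale
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-d2) + 1e-10 * np.eye(len(Z)), yt)
    dq = ((Q[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-dq) @ w

# toy test function: fast variation along axis 0, slow along axis 1
f = lambda X: np.sin(6.0 * X[:, 0]) + 0.1 * X[:, 1]
g0, g1 = np.linspace(-1, 1, 5), np.linspace(-1, 1, 3)
Xt = np.array([(a, b) for a in g0 for b in g1])   # 15-point sample plan
yt = f(Xt)
scale = np.array([3.0, 0.3])   # anisotropic axis scaling, fixed by hand
pred = rbf_fit_predict(Xt, yt, Xt, scale)
```

Stretching the fast axis and compressing the slow one lets a single isotropic kernel act anisotropically; optimising `scale` against a cross-validation error is what "adaptive scaling" refers to.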

    Surrogate Modeling of Ultrasonic Nondestructive Evaluation Simulations

    Ultrasonic testing (UT) is used to detect internal flaws in materials or to characterize material properties. Computational simulations are an important part of the UT process, and fast models are essential for UT applications such as inverse design or model-assisted probability of detection. This paper investigates the use of surrogate modeling techniques to create fast approximate models of UT simulator responses. In particular, we propose to use data-driven surrogate modeling techniques (kriging interpolation) and physics-based surrogate modeling techniques (space mapping), as well as a mixture of the two approaches. These techniques are investigated for two cases involving UT simulations of metal components immersed in a water bath during the inspection process.
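A minimal illustration of the physics-based (space mapping) idea: a cheap coarse model is corrected by an input mapping fitted so that it reproduces the fine model. Both models and the brute-force mapping fit below are assumed toys, far simpler than an actual UT simulator:

```python
import numpy as np

fine = lambda x: (x - 0.1) ** 2      # stand-in for the accurate UT simulator
coarse = lambda x: x ** 2            # cheap physics-based model

# input space mapping: find p(x) = a*x + b so that coarse(p(x))
# reproduces the fine model on the training data
Xt = np.linspace(-1.0, 1.0, 9)
yf = fine(Xt)
best = min(
    (float(np.sum((coarse(a * Xt + b) - yf) ** 2)), a, b)
    for a in np.linspace(0.5, 1.5, 21)
    for b in np.linspace(-0.5, 0.5, 21)
)
_, a, b = best

def surrogate(x):
    # corrected coarse model used in place of the fine simulator
    return coarse(a * x + b)
```

Because the correction reuses the coarse model's physics, space mapping typically needs far fewer fine-model evaluations than a purely data-driven fit of the same accuracy.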

    Adaptive swarm optimisation assisted surrogate model for pipeline leak detection and characterisation.

    Pipelines are often subject to leakage due to ageing, corrosion and weld defects, and leakage is difficult to avoid because the sources of leaks are diverse. Various pipeline leakage detection methods, including fibre optics, pressure point analysis and numerical modelling, have been proposed over the last few decades. One major issue with these methods is distinguishing the leak signal without raising false alarms. Because the data obtained by these traditional methods are digital in nature, machine learning models have been adopted to improve the accuracy of pipeline leakage detection. However, most of these methods rely on a large training dataset to build accurate models, and such experimental data are difficult to obtain: an experimental setup covering all possible scenarios is hugely expensive, remote pipelines are poorly accessible, and the experiments are labour-intensive. Moreover, datasets constructed from data acquired in laboratory or field tests are usually imbalanced, as leakage data samples are generated from artificial leaks. Computational fluid dynamics (CFD) offers detailed and accurate pipeline leakage modelling that may be difficult to obtain experimentally or with an analytical approach. However, CFD simulation is typically time-consuming and computationally expensive, limiting its use in real-time applications. To alleviate the high computational cost of CFD modelling, this study proposed a novel data sampling optimisation algorithm, called the Adaptive Particle Swarm Optimisation Assisted Surrogate Model (PSOASM), to select simulation scenarios in an adaptive and optimised manner. The algorithm was designed to place new samples in poorly sampled regions of the parameter space of parametrised leakage scenarios, which uniform sampling methods may easily miss. 
This was achieved using two criteria: the population density of the training dataset and the model prediction fitness value. The model prediction fitness value was used to enhance the global exploration capability of the surrogate model, while the population density of the training samples improves its local accuracy. The proposed PSOASM was compared with four conventional sequential sampling approaches and tested on six benchmark functions commonly used in the literature. Different machine learning algorithms were explored with the developed model, and the effect of the initial sample size on surrogate model performance was evaluated. Next, pipeline leakage detection analysis - with much emphasis on a multiphase flow system - was investigated in order to find the flow field parameters that provide pertinent indicators for pipeline leakage detection and characterisation. Plausible leak scenarios which may occur in the field were simulated for the gas-liquid pipeline using a three-dimensional RANS CFD model. The perturbation of the pertinent flow field indicators for different leak scenarios is reported, which is expected to improve the understanding of multiphase flow behaviour induced by leaks. The simulation results were validated against the latest experimental and numerical data reported in the literature. The proposed surrogate model was later applied to pipeline leak detection and characterisation. The CFD modelling results showed that fluid flow parameters are pertinent indicators of pipeline leaks. It was observed that upstream pipeline pressure can serve as a critical indicator for detecting leakage, even if the leak size is small, whereas the downstream flow rate is the dominant leakage indicator if flow rate monitoring is chosen for leak detection. 
The results also reveal that when two leaks of different sizes co-occur in a single pipe, detecting the small leak becomes difficult if its size is below 25% of the large leak size. However, in the event of a double leak with equal dimensions, the leak closer to the upstream end of the pipe is easier to detect. The results from all the analyses demonstrate the PSOASM algorithm's superiority over the well-known sequential sampling schemes employed for evaluation. The test results show that the PSOASM algorithm can be applied to pipeline leak detection with limited training datasets and provides a general framework for improving computational efficiency through adaptive surrogate modelling in various real-life applications.
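The two selection criteria, population density and model-prediction fitness, can be mimicked in a very small sketch. The nearest-neighbour "surrogate", the local-disagreement fitness proxy, and the fixed weight are illustrative assumptions; the actual PSOASM uses particle swarm optimisation over parametrised CFD scenarios:

```python
import numpy as np

f = lambda x: np.sin(5.0 * x)        # stand-in for one CFD output
X = list(np.linspace(0.0, 1.0, 4))   # initial sample plan
y = [float(f(x)) for x in X]

def local_disagreement(c):
    # fitness proxy: spread of responses among the two nearest samples
    d = sorted(range(len(X)), key=lambda i: abs(c - X[i]))[:2]
    return abs(y[d[0]] - y[d[1]])

rng = np.random.default_rng(1)
cands = list(rng.uniform(0.0, 1.0, 200))   # candidate scenarios
w = 0.7                                    # weight on the density criterion
for _ in range(6):
    scores = [w * min(abs(c - x) for x in X)        # density term
              + (1 - w) * local_disagreement(c)     # fitness term
              for c in cands]
    c = cands.pop(int(np.argmax(scores)))
    X.append(c)
    y.append(float(f(c)))                  # "run the simulation" there
```

The density term pushes samples into empty regions, while the disagreement term pulls them toward places where the surrogate's local picture is still changing.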

    Evidence on women trafficked for sexual exploitation: A rights based analysis

    The aim of this paper is to investigate which factors influence the pattern of enforcement (violation) of basic rights among women trafficked for sexual exploitation. A conceptual framework is adopted where the degree of agency and the possibility to influence the terms of sex-based transactions are seen as conditional on the enforcement of some basic rights. Using IOM data on women assisted in exiting from trafficking for sexual exploitation, we investigate the enforcement (violation) of five uncompromisable rights, namely the right to physical integrity, to move freely, to have access to medical care, to use condoms, and to exercise choice over sexual services. By combining classification tree analysis and ordered probit estimation we find that working location and country of work are the main determinants of rights enforcement, while individual and family characteristics play a marginal role. Specifically, we find that (i) in lower market segments working on the street is comparatively less ‘at risk’ of rights violation; (ii) there is no consistently ‘good’ or ‘bad’ country of work, but public awareness of trafficking within the country is important; (iii) the strength of organized crime in the country of work matters only in conjunction with other local factors; and (iv) being trafficked within one’s country, as opposed to being trafficked internationally, is associated with a higher risk of rights violation.
    Keywords: human trafficking, sexual exploitation, basic rights, classification and regression trees, ordered probit

    Bayesian Optimization Approach for Analog Circuit Synthesis Using Neural Network

    Bayesian optimization with a Gaussian process as the surrogate model has been successfully applied to analog circuit synthesis. In the traditional Gaussian process regression model, the kernel functions are defined explicitly. The computational complexity of training is O(N^3), and the computational complexity of prediction is O(N^2), where N is the number of training data. The Gaussian process model can also be derived from a weight-space view, where the original data are mapped to a feature space and the kernel function is defined as the inner product of the nonlinear features. In this paper, we propose a Bayesian optimization approach for analog circuit synthesis using neural networks. We use a deep neural network to extract good feature representations, and then define a Gaussian process using the extracted features. A model averaging method is applied to improve the quality of the uncertainty prediction. Compared to a Gaussian process model with explicitly defined kernel functions, the neural-network-based Gaussian process model can automatically learn a kernel function from data, which makes it possible to provide more accurate predictions and thus accelerate the follow-up optimization procedure. Also, the neural-network-based model has O(N) training time and constant prediction time. The efficiency of the proposed method has been verified on two real-world analog circuits.
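The weight-space view can be sketched as Bayesian linear regression over fixed nonlinear features, random tanh units here where the paper trains a deep network, which yields the O(N) training and constant prediction costs mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=3.0, size=16)   # fixed random "features"; the paper
b = rng.normal(size=16)              # learns these with a deep network

def features(x):
    return np.tanh(np.outer(np.atleast_1d(x), W) + b)   # shape (n, 16)

X = np.linspace(-1.0, 1.0, 20)
y = np.sin(3.0 * X)                  # toy circuit response (assumed)
Phi = features(X)

# Bayesian linear regression on the features = GP whose kernel is the
# inner product of feature vectors; once the 16x16 matrix A is formed
# (one O(N) pass over the data), cost no longer grows with N
alpha, noise = 1.0, 1e-2
A = Phi.T @ Phi / noise + alpha * np.eye(16)
w_mean = np.linalg.solve(A, Phi.T @ y / noise)
A_inv = np.linalg.inv(A)

def posterior(xq):
    phi = features(xq)
    mu = phi @ w_mean
    sd = np.sqrt(noise + np.einsum('ij,jk,ik->i', phi, A_inv, phi))
    return mu, sd
```

The predictive variance comes from the 16x16 posterior over feature weights rather than an NxN kernel matrix, which is what breaks the O(N^3) training bottleneck.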