1,427 research outputs found

    A constrained multi-objective surrogate-based optimization algorithm

    Surrogate models, or metamodels, are widely used in engineering design optimization to minimize the number of computationally expensive simulations. Most practical problems have conflicting objectives, which lead to a set of competing solutions forming a Pareto front. Multi-objective surrogate-based constrained optimization algorithms have been proposed in the literature, but handling constraints directly is a relatively new research area. Most algorithms proposed to deal directly with multi-objective optimization have been evolutionary algorithms (Multi-Objective Evolutionary Algorithms, MOEAs). MOEAs can handle large design spaces but require a large number of simulations, which might be infeasible in practice, especially if the constraints are expensive to evaluate. This paper presents a multi-objective constrained optimization algorithm that uses Kriging models in conjunction with multi-objective probability of improvement (PoI) and probability of feasibility (PoF) criteria to drive the sample selection process economically. The efficacy of the proposed algorithm is demonstrated on an analytical benchmark function, and the algorithm is then used to solve a microwave filter design optimization problem.
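
    As a rough illustration of how such an infill criterion can be assembled (a minimal single-objective sketch, not the paper's multi-objective PoI formulation), the snippet below combines a probability-of-feasibility term with a probability-of-improvement term computed from Gaussian-process predictions; the model interface, the multiplicative aggregation, and all names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def probability_of_feasibility(gp_constraint, X):
    """P[g(x) <= 0] under the Gaussian predictive distribution of the
    constraint surrogate (assumed to expose predict(X, return_std=True),
    as a scikit-learn GaussianProcessRegressor does)."""
    mu, sigma = gp_constraint.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    return norm.cdf((0.0 - mu) / sigma)

def probability_of_improvement(gp_objective, X, y_best):
    """P[f(x) < y_best] for a minimization objective modelled by a GP surrogate."""
    mu, sigma = gp_objective.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    return norm.cdf((y_best - mu) / sigma)

def constrained_infill(gp_objective, gp_constraint, X_candidates, y_best):
    """Rank candidate samples by PoI * PoF and return the most promising one."""
    score = (probability_of_improvement(gp_objective, X_candidates, y_best)
             * probability_of_feasibility(gp_constraint, X_candidates))
    return X_candidates[np.argmax(score)]
```

    In a typical surrogate-based loop, the selected candidate would then be evaluated with the expensive simulator, appended to the training data, and the surrogates refit before the next infill step.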

    Efficient Covariance Matrix Update for Variable Metric Evolution Strategies

    Randomized direct search algorithms for continuous domains, such as Evolution Strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to search efficiently in non-separable and ill-conditioned landscapes, the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix Adaptation (CMA) is considered state-of-the-art in Evolution Strategies. In order to sample from the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n^3) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n^2) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler than in the original versions. The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated quickly.
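
    The key ingredient for avoiding the cubic-cost decomposition is to maintain a Cholesky-type factor of the covariance matrix and refresh it with a rank-one update in quadratic time. The snippet below is a generic rank-one Cholesky update, not the paper's specific CMA-ES update rule; the function name and the beta scaling parameter are assumptions.

```python
import numpy as np

def chol_rank1_update(L, v, beta=1.0):
    """In-place update of a lower-triangular factor L so that afterwards
    L @ L.T equals (old) L @ L.T + beta * outer(v, v), with beta > 0.
    Runs in Theta(n^2) instead of the Theta(n^3) of a full re-decomposition."""
    x = np.sqrt(beta) * np.asarray(v, dtype=float).copy()
    n = L.shape[0]
    for k in range(n):
        r = np.hypot(L[k, k], x[k])            # new diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

    With such a factor at hand, sampling a new candidate reduces to a matrix-vector product, e.g. `m + sigma * L @ rng.standard_normal(n)`, so no eigen- or Cholesky decomposition of the full covariance matrix is needed in each iteration.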

    DOAM for Evolutionary Portfolio Optimization: a computational study.

    In this work, the ability of Dynamic Objectives Aggregation Methods to solve the portfolio rebalancing problem is investigated through a computational study on a set of instances based on real data. The portfolio model considers a set of realistic constraints and entails the simultaneous optimization of the portfolio risk, the expected return, and the transaction costs.
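
    One common way to aggregate such conflicting criteria into a single fitness value inside an evolutionary portfolio optimizer is a weighted sum whose weights are changed dynamically between generations; the weights, sign conventions, and function names below are illustrative assumptions, not the specific DOAM formulation studied in the paper.

```python
import numpy as np

def aggregated_objective(weights, risk, expected_return, transaction_cost):
    """Scalarize three conflicting portfolio criteria: risk and cost are
    minimized, return is maximized (hence the negative sign)."""
    w_risk, w_ret, w_cost = weights
    return w_risk * risk - w_ret * expected_return + w_cost * transaction_cost

# Dynamic aggregation: e.g., re-draw the weight vector each generation so the
# evolutionary search explores different trade-offs over time.
rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(3))    # random weights summing to 1
fitness = aggregated_objective(weights, risk=0.12,
                               expected_return=0.08, transaction_cost=0.01)
```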

    Optimizing the DFCN Broadcast Protocol with a Parallel Cooperative Strategy of Multi-Objective Evolutionary Algorithms

    Proceedings of the 5th International Conference, EMO 2009, Nantes, France, April 7-10, 2009. This work presents the application of a parallel cooperative optimization approach to the broadcast operation in mobile ad-hoc networks (MANETs). The optimization of the broadcast operation implies satisfying several objectives simultaneously, so a multi-objective approach has been designed. The optimization consists of searching for the best configurations of the DFCN broadcast protocol for a given MANET scenario. The cooperation of a team of multi-objective evolutionary algorithms has been performed with a novel optimization model. This model is a hybrid parallel algorithm that combines a parallel island-based scheme with a hyperheuristic approach. Results achieved by the algorithms in different stages of the search process are analyzed in order to grant more computational resources to the most suitable algorithms. The results obtained for a MANET scenario representing a mall demonstrate the validity of the newly proposed approach. This work has been supported by the EC (FEDER) and the Spanish Ministry of Education and Science within the 'Plan Nacional de I+D+i' (TIN2005-08818-C04 and TIN2008-06491-C04-02). The work of Gara Miranda has been developed under grant FPU-AP2004-2290.
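
    The hyperheuristic layer in such an island model can be pictured as periodically re-allocating islands (computational resources) to the algorithms that have contributed most to the search so far; the scoring rule, the algorithm names, and the rounding scheme below are illustrative assumptions, not the exact model in the paper.

```python
import random

def reallocate_islands(algorithms, scores, n_islands):
    """Give each multi-objective algorithm a number of islands proportional to
    its recent contribution score (e.g., hypervolume gain or non-dominated
    solutions added to the global archive). Assumes n_islands >= len(algorithms)."""
    total = sum(scores.values()) or 1.0
    allocation = {a: max(1, round(n_islands * scores[a] / total)) for a in algorithms}
    # Trim or pad so the allocation adds up to exactly n_islands.
    while sum(allocation.values()) > n_islands:
        allocation[max(allocation, key=allocation.get)] -= 1
    while sum(allocation.values()) < n_islands:
        allocation[random.choice(algorithms)] += 1
    return allocation

# Example: three MOEAs competing for 8 islands after a scoring phase.
print(reallocate_islands(["nsga2", "spea2", "ibea"],
                         {"nsga2": 0.6, "spea2": 0.3, "ibea": 0.1}, 8))
```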

    Automatic surrogate model type selection during the optimization of expensive black-box problems

    The use of Surrogate-Based Optimization (SBO) has become commonplace for optimizing expensive black-box simulation codes. A popular SBO method is the Efficient Global Optimization (EGO) approach. However, the performance of SBO methods critically depends on the quality of the guiding surrogate. In EGO the surrogate type is usually fixed to Kriging, even though this may not be optimal for all problems. In this paper the authors propose to extend the well-known EGO method with an automatic surrogate model type selection framework that is able to dynamically select the best model type (including hybrid ensembles) depending on the data available so far. Hence, the expected improvement criterion will always be based on the best approximation available at each step of the optimization process. The approach is demonstrated on a structural optimization problem, i.e., reducing the stress on a truss-like structure. Results show that the proposed algorithm consistently finds better optima than traditional Kriging-based infill optimization.
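
    A simple way to realize such dynamic model-type selection (a sketch under the assumption that cross-validation is the selection metric, which is not necessarily the criterion used in the paper) is to score several candidate surrogates on the samples gathered so far and compute expected improvement with the winner; the candidate model list is likewise an assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

def select_surrogate(X, y):
    """Refit several surrogate types on the samples gathered so far and
    return the one with the best 5-fold cross-validation score."""
    candidates = {
        "kriging": GaussianProcessRegressor(normalize_y=True),
        "random_forest": RandomForestRegressor(n_estimators=100),
        "svr": SVR(),
    }
    scores = {name: cross_val_score(m, X, y, cv=5).mean()
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    return candidates[best].fit(X, y), best

def expected_improvement(mu, sigma, y_best):
    """Standard EI for minimization, computed from the selected surrogate's
    predictive mean and standard deviation (however that model provides them)."""
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))
```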

    Stopping criteria for genetic improvement software for beef-cattle mating selection

    The objective of this work was to propose a new stopping criterion to shorten the computing time of the PampaPlus genetic improvement software, while maximizing the genetic qualification index (GQI) of the progeny, controlling inbreeding, and avoiding unintended culling. Data from two beef-cattle herds participating in PampaPlus were used. Five mating scenarios were built using different numbers of sires (9 to 37) and dams (142 to 568). The analyzed algorithm inputs were: expected progeny differences, pedigree information, maximum inbreeding, maximum and minimum number of matings per sire, and penalty weights for poor performance. The analyzed response variables were computing time and the GQI of the progenies. Three stopping criteria were used: the original stopping criterion, fixed at 1,000 iterations; a saturation stopping criterion (SSC), based on GQI variance; and Bhandari's stopping criterion (BSC), which includes the generation interval parameter. SSC and BSC reduced processing time by 24.43–53.64% and 14.32–50.87%, respectively. BSC reaches a solution in less time, without losses in GQI quality. BSC is generalizable and effective in reducing the processing time of mating recommendations.
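
    As a rough sketch of what a saturation-style stopping rule can look like inside a genetic algorithm loop (the window length and tolerance below are illustrative assumptions, not the values used for PampaPlus), the criterion stops once the variance of the best index over recent generations falls below a threshold.

```python
import numpy as np

def saturation_stop(best_history, window=50, tol=1e-6):
    """Stop when the variance of the best objective value (e.g., the GQI of the
    recommended matings) over the last `window` generations drops below `tol`."""
    if len(best_history) < window:
        return False
    return np.var(best_history[-window:]) < tol

# Inside the GA loop (sketch):
# best_history.append(best_gqi_this_generation)
# if saturation_stop(best_history):
#     break
```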

    Multi-criteria Evolution of Neural Network Topologies: Balancing Experience and Performance in Autonomous Systems

    The majority of Artificial Neural Network (ANN) implementations in autonomous systems use a fixed, user-prescribed network topology, leading to sub-optimal performance and low portability. The existing NeuroEvolution of Augmenting Topologies (NEAT) paradigm offers a powerful alternative by allowing the network topology and the connection weights to be optimized simultaneously through an evolutionary process. However, most NEAT implementations allow the consideration of only a single objective. There also persists the question of how to tractably introduce topological diversification that mitigates overfitting to training scenarios. To address these gaps, this paper develops a multi-objective neuro-evolution algorithm. While adopting the basic elements of NEAT, important modifications are made to the selection, speciation, and mutation processes. With the backdrop of small-robot path-planning applications, an experience-gain criterion is derived to encapsulate the amount of diverse local environment encountered by the system. This criterion facilitates the evolution of genes that support exploration, thereby seeking to generalize from a smaller set of mission scenarios than is possible with performance maximization alone. The effectiveness of the single-objective (optimizing performance) and the multi-objective (optimizing performance and experience-gain) neuro-evolution approaches is evaluated on two different small-robot cases, with the ANNs obtained by the multi-objective optimization observed to provide superior performance in unseen scenarios.
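
    One plausible way to quantify such an experience-gain objective (purely an illustrative assumption, not the criterion derived in the paper) is to count the distinct local sensor/environment states a candidate controller encounters during its training missions:

```python
def experience_gain(observations, resolution=0.5):
    """Count distinct local-environment states visited during simulation,
    discretizing continuous sensor readings onto a coarse grid so that
    nearly identical situations are not counted twice."""
    seen = set()
    for obs in observations:                   # obs: tuple/list of sensor readings
        seen.add(tuple(round(x / resolution) for x in obs))
    return len(seen)

# In a multi-objective setting, each genome would then carry the pair
# (task performance, experience gain) and be selected by non-dominated sorting.
```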

    Optimization method for the determination of material parameters in damaged composite structures

    An optimization method to identify the material parameters of composite structures using an inverse method is proposed. This methodology compares experimental results with their numerical reproduction using the finite element method in order to obtain an estimate of the error between the results. This error estimate is then used by an evolutionary optimizer to determine, in an iterative process, the values of the material parameters which result in the best numerical fit. The novelty of the method lies in the coupling between the simple genetic algorithm and the mixing theory used to numerically reproduce the composite behavior. The proposed methodology has been validated through a simple example which illustrates the applicability of the method to the modeling of damaged composite structures.
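
    The inverse identification loop can be summarized as wrapping the finite element simulation in an error function that an evolutionary optimizer minimizes; the error norm and the placeholder run_fem_simulation callable below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fitting_error(params, experimental_curve, run_fem_simulation):
    """Relative L2 error between the measured response and the FEM prediction
    obtained with the candidate material parameters."""
    simulated_curve = run_fem_simulation(params)      # expensive black-box call (placeholder)
    return (np.linalg.norm(simulated_curve - experimental_curve)
            / np.linalg.norm(experimental_curve))

# A genetic algorithm (or any evolutionary optimizer) then searches the
# material-parameter space for the candidate minimizing this error, e.g.:
#   best_params = ga_minimize(lambda p: fitting_error(p, exp_data, run_fem_simulation),
#                             bounds=material_parameter_bounds)
```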

    The Novel Approach of Adaptive Twin Probability for Genetic Algorithm

    The performance of a GA is measured and analyzed in terms of its performance parameters against variations in its genetic operators and associated parameters. Over the last four decades, a large number of researchers have worked on the performance of GAs and their enhancement. This earlier research on analyzing the performance of GAs reinforces the need to further investigate the exploration and exploitation characteristics and observe their impact on the behavior and overall performance of the GA. This paper introduces the novel approach of an adaptive twin probability associated with the advanced twin operator that enhances the performance of the GA. The design of the advanced twin operator is extrapolated from twin offspring birth due to single ovulation in natural genetic systems, as mentioned in earlier works. The twin probability of this operator is adaptively varied based on the fitness of the best individual, thereby relieving the GA user from statically defining its value. This novel approach of adaptive twin probability is tested on standard benchmark optimization test functions. The experimental results show increased accuracy in terms of the best individual and reduced convergence time.
    Comment: 7 pages, International Journal of Advanced Studies in Computer Science and Engineering (IJASCSE), Volume 2, Special Issue 2, 201
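
    A minimal sketch of how such a probability parameter can be adapted from the fitness of the best individual between generations (the update rule, step size, and bounds here are assumptions, not the paper's specific formula):

```python
def adapt_twin_probability(p_twin, best_fitness, prev_best_fitness,
                           step=0.05, p_min=0.0, p_max=0.5):
    """Raise the twin-operator probability while the best individual keeps
    improving, and lower it when the search stagnates (maximization assumed)."""
    if best_fitness > prev_best_fitness:
        return min(p_max, p_twin + step)
    return max(p_min, p_twin - step)

# Called once per generation before applying the twin operator:
# p_twin = adapt_twin_probability(p_twin, best.fitness, previous_best.fitness)
```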