57 research outputs found

    Efficient use of partially converged simulations in evolutionary optimization

    Get PDF
    For many real-world optimization problems, evaluating a solution involves running a computationally expensive simulation model. This makes it challenging to use evolutionary algorithms, which usually have to evaluate thousands of solutions before converging. On the other hand, in many cases, even a prematurely stopped run of the simulation may serve as a cheaper, albeit less accurate (low-fidelity), estimate of the true fitness value. For evolutionary optimization, this opens up the opportunity to decide on the simulation run length for each individual. In this paper, we propose a mechanism that is capable of learning the appropriate simulation run length for each solution. To test our approach, we propose two new benchmark problems, one simple artificial benchmark function and one benchmark based on a computational fluid dynamics simulation scenario to design a toy submarine. As we demonstrate, our proposed algorithm finds good solutions much faster than always using the full computational fluid dynamics simulation and provides much better solution quality than a strategy of progressively increasing the fidelity level over the course of optimization
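
    The abstract does not spell out the learning mechanism itself; as a minimal illustrative sketch (all names and the promising-candidate rule below are assumptions, not the paper's method), per-individual fidelity control could look like:

        import random

        def simulate(x, iterations):
            # Stand-in for an expensive solver: the estimate of f(x) = x^2 becomes
            # more accurate as the iteration budget grows (noise shrinks with it).
            return x ** 2 + random.gauss(0.0, 10.0 / iterations)

        def evaluate_adaptive(x, best_so_far, short=10, full=1000, margin=1.0):
            # Cheap, partially converged estimate first.
            rough = simulate(x, short)
            # Spend the full simulation budget only on candidates that look competitive.
            if rough <= best_so_far + margin:
                return simulate(x, full)
            return rough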

    Stochastic Fractal Based Multiobjective Fruit Fly Optimization

    Get PDF
    The fruit fly optimization algorithm (FOA) is a global optimization algorithm inspired by the foraging behavior of a fruit fly swarm. In this study, a novel stochastic fractal model based fruit fly optimization algorithm is proposed for multiobjective optimization. A food source generating method based on a stochastic fractal with an adaptive parameter updating strategy is introduced to improve the convergence performance of the fruit fly optimization algorithm. To deal with multiobjective optimization problems, the Pareto domination concept is integrated into the selection process of fruit fly optimization, and a novel multiobjective fruit fly optimization algorithm is then developed. As in most other multiobjective evolutionary algorithms (MOEAs), an external elitist archive is utilized to preserve the nondominated solutions found so far during the evolution, and a normalized nearest neighbor distance based density estimation strategy is adopted to maintain the diversity of the external elitist archive. Eighteen benchmarks are used to test the performance of the stochastic fractal based multiobjective fruit fly optimization algorithm (SFMOFOA). Numerical results show that the SFMOFOA is able to converge well to the Pareto fronts of the test benchmarks with well-distributed solutions. Compared with four state-of-the-art methods, namely, the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2), multi-objective particle swarm optimization (MOPSO), and multiobjective self-adaptive differential evolution (MOSADE), the proposed SFMOFOA has better or competitive multiobjective optimization performance
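
    As a rough Python sketch of two generic building blocks mentioned in the abstract, Pareto-domination-based selection and an elitist archive pruned by nearest-neighbour distance, one might write the following (normalization and the fractal-based food source generation are omitted; this is not the SFMOFOA implementation):

        def dominates(a, b):
            # a Pareto-dominates b (minimization): no worse in every objective,
            # strictly better in at least one.
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def update_archive(archive, candidate, max_size):
            # Discard the candidate if it is dominated; otherwise add it and drop
            # any archive members it dominates.
            if any(dominates(a, candidate) for a in archive):
                return archive
            archive = [a for a in archive if not dominates(candidate, a)]
            archive.append(candidate)
            # Prune the most crowded member (smallest nearest-neighbour distance)
            # when the archive overflows.
            if len(archive) > max_size:
                def nn_dist(p):
                    return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                               for q in archive if q is not p)
                archive.remove(min(archive, key=nn_dist))
            return archive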

    Multiobjective evolutionary algorithm based on vector angle neighborhood

    Get PDF
    Selection is a major driving force behind evolution and is a key feature of multiobjective evolutionary algorithms. Selection aims at promoting the survival and reproduction of individuals that are most fitted to a given environment. In the presence of multiple objectives, the major challenges faced by this operator come from the need to address both population convergence and diversity, which are conflicting to a certain extent. This paper proposes a new selection scheme for evolutionary multiobjective optimization. Its distinctive feature is a similarity measure for estimating the population diversity, which is based on the angle between the objective vectors. The smaller the angle, the more similar the individuals. The concept of similarity is exploited during mating, by defining the neighborhood, and during replacement, by determining the most crowded region where the worst individual is identified. The latter is performed on the basis of a convergence measure that plays a major role in guiding the population towards the Pareto optimal front. The proposed algorithm is intended to exploit the strengths of decomposition-based approaches in promoting diversity among the population while reducing the user's burden of specifying weight vectors before the search. The proposed approach is validated by computational experiments with state-of-the-art algorithms on problems with different characteristics. The obtained results indicate a highly competitive performance of the proposed approach. Significant advantages are revealed when dealing with problems posing substantial difficulties in keeping diversity, including many-objective problems. The relevance of the suggested similarity and convergence measures is shown. The validity of the approach is also demonstrated on engineering problems. This work was supported by the Portuguese Fundacao para a Ciencia e Tecnologia under grant PEst-C/CTM/LA0025/2013 (Projecto Estrategico - LA 25 - 2013-2014 - Strategic Project - LA 25 - 2013-2014)
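
    The similarity measure described in the abstract is the angle between objective vectors; a minimal sketch (ignoring details such as translation by the ideal point, which the abstract does not specify) is:

        import math

        def vector_angle(f1, f2):
            # Angle (in radians) between two objective vectors: the smaller the
            # angle, the more similar the individuals' search directions.
            dot = sum(a * b for a, b in zip(f1, f2))
            norms = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
            return math.acos(max(-1.0, min(1.0, dot / norms)))

        # Example: nearly parallel objective vectors yield a small angle (about 0.02 rad).
        print(vector_angle((1.0, 2.0), (1.1, 2.1)))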

    Evidence-based robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    Get PDF
    An evidence-based robust optimization method for pulsed laser orbital debris removal (LODR) is presented. Epistemic uncertainties due to limited knowledge are considered. The objective of the design optimization is to minimize the debris lifetime while at the same time maximizing the corresponding belief value. The Dempster–Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used to model and compute the uncertainty impacts. A Kriging-based surrogate is used to reduce the cost of the expensive numerical life prediction model. The effectiveness of the proposed method is illustrated on a set of benchmark problems. Based on the method, a numerical simulation of the removal of Iridium 33 with pulsed lasers is presented, and the most robust solutions with minimum lifetime under uncertainty are identified using the proposed method
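
    As a hedged one-dimensional illustration of the belief measure used in DST (the paper works with a full evidence structure and a Kriging surrogate, neither of which is shown here), Bel of a target interval sums the masses of the focal elements contained in it:

        def belief(masses, target):
            # Bel(target) = sum of the masses of all focal elements (here, 1-D
            # intervals) that lie entirely inside the target interval.
            lo, hi = target
            return sum(m for (a, b), m in masses.items() if a >= lo and b <= hi)

        # Hypothetical basic probability assignment over an uncertain parameter.
        masses = {(0.0, 0.5): 0.3, (0.4, 0.8): 0.5, (0.7, 1.0): 0.2}
        print(belief(masses, (0.0, 0.8)))   # 0.3 + 0.5 = 0.8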

    Improving the multiobjective evolutionary algorithm based on decomposition with new penalty schemes

    Get PDF
    It has been increasingly reported that the multiobjective optimization evolutionary algorithm based on decomposition (MOEA/D) is promising for handling multiobjective optimization problems (MOPs). MOEA/D employs scalarizing functions to convert an MOP into a number of single-objective subproblems. Among them, penalty boundary intersection (PBI) is one of the most popular decomposition approaches and has been widely adopted for dealing with MOPs. However, the original PBI uses a constant penalty value for all subproblems and has difficulties in achieving a good distribution and coverage of the Pareto front for some problems. In this paper, we investigate the influence of the penalty factor on PBI, and suggest two new penalty schemes, i.e., adaptive penalty scheme and subproblem-based penalty scheme (SPS), to enhance the spread of Pareto-optimal solutions. The new penalty schemes are examined on several complex MOPs, showing that PBI with the new schemes is able to provide a better approximation of the Pareto front than the original one. The SPS is further integrated into two recently developed MOEA/D variants to help balance the population diversity and convergence. Experimental results show that it can significantly enhance the algorithm's performance. © 2016, Springer-Verlag Berlin Heidelberg
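
    For reference, the standard PBI scalarizing function that the abstract builds on combines the distance along a weight vector (d1) with the distance away from it (d2), weighted by a penalty factor; a short sketch follows (the paper's adaptive and subproblem-based schemes would replace the constant theta, and their details are not given in the abstract):

        import math

        def pbi(f, weight, ideal, theta):
            # Penalty boundary intersection value of objective vector f for weight
            # vector 'weight' and ideal point 'ideal'; theta is the penalty factor.
            diff = [fi - zi for fi, zi in zip(f, ideal)]
            wnorm = math.sqrt(sum(w * w for w in weight))
            d1 = abs(sum(d * w for d, w in zip(diff, weight))) / wnorm   # distance along the weight direction
            d2 = math.sqrt(sum((d - d1 * w / wnorm) ** 2
                               for d, w in zip(diff, weight)))           # distance away from it
            return d1 + theta * d2

        # A larger theta penalizes solutions that stray from the weight direction more heavily.
        print(pbi((0.6, 0.5), (0.5, 0.5), (0.0, 0.0), theta=5.0))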

    An incremental ensemble classifier learning by means of a rule-based accuracy and diversity comparison

    No full text
    In this paper, we propose an incremental ensemble classifier learning method. In the proposed method, a set of accurate and diverse classifiers is generated and added to the ensemble by means of accuracy and diversity comparison. The selection of classifiers in the ensemble starts with a layer (where the data is partitioned into a given number of clusters and fed to a set of base classifiers) and then continues to improve the bias-variance trade-off (i.e., accuracy and diversity). Optimal ensemble classifier selection is done through an accuracy-precedence-diversity comparison, i.e., a model with better accuracy is preferred, but among models with the same accuracy, better diversity is preferred. The comparison is made on the class decomposed accuracies (i.e., all class accuracies are decomposed to a scalar value). A non-identical set of base classifiers is trained on the clusters of data in a layer, and the center of each cluster is recorded as an identifier of the corresponding base classifier set. Decisions from multiple base classifiers are fused into an ensemble class output using majority voting for each pattern, and finally the decisions across multiple layers are combined using majority voting. The proposed method is evaluated on UCI benchmark datasets and compared with recently proposed ensemble classifiers, including Bagging and Boosting. Through this comparison, we demonstrate that the proposed method improves the performance of the base classifiers and performs better than the existing ensemble methods
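
    Two of the ingredients described above, the accuracy-precedence-diversity comparison and majority-vote fusion, can be sketched as follows (an illustrative reading of the abstract, not the authors' code; the field names are assumptions):

        from collections import Counter

        def prefer(candidate, incumbent):
            # Accuracy-precedence-diversity comparison: higher accuracy wins; ties
            # on accuracy are broken in favour of higher diversity.
            if candidate["accuracy"] != incumbent["accuracy"]:
                return candidate["accuracy"] > incumbent["accuracy"]
            return candidate["diversity"] > incumbent["diversity"]

        def majority_vote(predictions):
            # Fuse the decisions of several base classifiers for one pattern.
            return Counter(predictions).most_common(1)[0][0]

        print(prefer({"accuracy": 0.91, "diversity": 0.2},
                     {"accuracy": 0.91, "diversity": 0.1}))   # True: same accuracy, more diverse
        print(majority_vote(["cat", "dog", "cat"]))           # 'cat'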

    Impact of automatic feature extraction in deep learning architecture

    No full text
    © 2016 IEEE. This paper presents the impact of automatic feature extraction used in a deep learning architecture such as the Convolutional Neural Network (CNN). Recently, CNN has become a very popular tool for image classification as it can automatically extract features, learn, and classify them. It is a common belief that CNN can always perform better than other well-known classifiers. However, there is no systematic study showing that automatic feature extraction in CNN is any better than other simple feature extraction techniques, and there is no study showing that other simple neural network architectures cannot achieve the same accuracy as CNN. In this paper, a systematic study to investigate CNN's feature extraction is presented. CNN with automatic feature extraction is first evaluated on a number of benchmark datasets, and then a simple traditional Multi-Layer Perceptron (MLP) with the full image and with manual feature extraction is evaluated on the same benchmark datasets. The purpose is to see whether automatic feature extraction in CNN performs any better than an MLP using simple manual features or the full image. Many experiments were systematically conducted by varying the number of epochs and hidden neurons. The experimental results revealed that a traditional MLP with suitable parameters can perform as well as CNN, or better in certain cases
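
    The shape of such a comparison might look like the following sketch (MNIST, the architectures, and the training settings are assumptions for illustration; the paper's datasets and manual features are not given in the abstract):

        import tensorflow as tf

        (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
        x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0

        cnn = tf.keras.Sequential([
            tf.keras.Input(shape=(28, 28, 1)),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),   # automatic feature extraction
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

        mlp = tf.keras.Sequential([
            tf.keras.Input(shape=(28, 28, 1)),
            tf.keras.layers.Flatten(),                           # raw (full-image) input
            tf.keras.layers.Dense(128, activation="relu"),       # vary hidden neurons
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

        for name, model in [("CNN", cnn), ("MLP on full image", mlp)]:
            model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            model.fit(x_tr, y_tr, epochs=3, verbose=0)           # vary epochs
            print(name, model.evaluate(x_te, y_te, verbose=0)[1])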

    A divide-and-conquer based ensemble classifier learning by means of many-objective optimization

    No full text
    Divide-and-conquer based methods are quite successful across various problems from different disciplines. These methods divide a complex task into multiple simple tasks and solve them collectively. This paper presents a divide-and-conquer based hierarchical optimization framework for ensemble classifier learning. The optimization framework includes a search space creation process, called Data Training Environments (DTEs), that divides the data into multiple clusters and then trains a set of heterogeneous base classifiers on the DTEs. The classifiers are then combined to form an optimal ensemble by finding the fittest ones using many-objective optimization. The many-objective optimization algorithm considers each class accuracy as a separate objective and maximizes the class accuracies. An additional objective is taken into account by maximizing the ensemble size. Since the partitioning of the data creates diversity within the pool of classifiers, a class accuracy trade-off among the classifiers is observed; as a result, increasing the number of classifiers also increases the diversity within the ensemble. To tackle the optimization, a specialized many-objective optimization algorithm based on decomposition is proposed. Since ensemble classifier learning can be regarded as an NP-hard problem, the proposed optimization algorithm instead identifies the optimal ensemble using a divide-and-conquer rule-based chromosome encoding. Moreover, with the individual class accuracies involved in the objectives, the performance does not get biased towards any majority class. The proposed framework is evaluated on 24 benchmark datasets obtained from the UCI machine learning repository and compared with existing approaches. The experimental results show better classification accuracy with the proposed framework in comparison with recent ensemble classifiers
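
    A hedged sketch of how the objective vector of a candidate ensemble could be formed, one per-class accuracy plus the ensemble size, is shown below (majority voting and all names are assumptions; the decomposition-based many-objective optimizer itself is not shown):

        import numpy as np

        def ensemble_objectives(member_preds, y_true, n_classes):
            # One objective per class (per-class accuracy of the majority-voted
            # ensemble) plus the ensemble size as an additional objective.
            votes = np.apply_along_axis(
                lambda col: np.bincount(col, minlength=n_classes).argmax(),
                0, np.asarray(member_preds))
            per_class = [float(np.mean(votes[y_true == c] == c)) if np.any(y_true == c) else 0.0
                         for c in range(n_classes)]
            return per_class + [len(member_preds)]

        # Three hypothetical base classifiers, four patterns, two classes.
        preds = [[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]]
        print(ensemble_objectives(preds, np.array([0, 1, 1, 0]), 2))   # [1.0, 1.0, 3]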

    Development of algorithms to solve different key challenges facing design optimization

    Full text link
    Optimization methods play an indispensable role in today's competitive environment and there are plenty of practical examples where such methods have been used to identify better performing designs (Boeing 787 Dreamliner and NASA ST5 antenna). Increasing complexity of the problems has also led to the development of sophisticated mathematical models that can only be solved using computationally expensive numerical simulations such as finite element methods (FEM), computational fluid dynamics (CFD), computational electromagnetics (CEM), etc. Repeated use of such numerical simulations is necessary in the context of optimization, i.e., to identify optimum products and processes with outstanding performance features. In reality, such problems often involve a large number of constraints and often demand multiple performance considerations.

    Over the decades, population based metaheuristics have proven to be efficient, robust and versatile methods for numerical optimization as they are more amenable to dealing with such black-box problems. The major downside of any of these population based metaheuristics is their extremely long run time. Therefore, it is no surprise that the development of fast and efficient metaheuristics is an actively pursued research area.

    In this thesis, an effort is made to address three key challenges facing the adoption of population based metaheuristics for practical design optimization. The first challenge relates to the development of an efficient and reliable optimization algorithm capable of dealing with constrained optimization problems. In particular, two novel constraint handling mechanisms are introduced, i.e., one with the concept of partial evaluation using constraint sequencing and the other involving adaptive constraint handling. The study is motivated by the fundamental questions: should one evaluate all constraints of a solution even if it has violated one constraint? and what is the difference in the underlying search process if multiple constraint sequences are used? The second contribution reported in this thesis relates to the development of an algorithm to tackle optimization problems involving more than four objectives, i.e., many-objective optimization. In this context, an algorithm based on decomposition is introduced which extends the capability of the well-known MOEA/D to deal with many-objective optimization problems. The algorithm incorporates a systematic sampling scheme, and the balance between convergence and diversity during the course of the search is maintained via a simple preemptive distance comparison scheme. The third contribution made in this thesis is in the area of robust design optimization, where the effects of various formulations are studied in the framework of six-sigma quality. Four different problem formulations of robust design and methods to solve them have been proposed.

    The performance of these algorithms/schemes is rigorously assessed using well established benchmark functions and a suite of engineering design optimization problems. The results, assessed using various measures, clearly indicate that the proposed developments offer competitive advantages over existing schemes.

    Finally, a summary of the findings of the work is presented. In addition, future issues and directions which could be pursued with the aim of making the algorithms more efficient for handling various types of optimization problems are identified
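
    The idea behind partial evaluation using constraint sequencing can be sketched as follows (an assumed feasibility convention and toy constraints; the thesis's actual adaptive mechanisms and sequence choices are not reproduced here):

        def partial_constraint_evaluation(x, constraints):
            # Evaluate constraints in the given sequence and stop at the first
            # violation, so the remaining (possibly expensive) constraints are
            # never computed for clearly infeasible solutions.
            for evaluated, g in enumerate(constraints, start=1):
                if g(x) > 0.0:                 # convention: g(x) <= 0 means satisfied
                    return False, evaluated    # infeasible; number of constraints evaluated
            return True, len(constraints)

        # Hypothetical constraint sequence for a 2-variable design.
        constraints = [lambda x: x[0] + x[1] - 10.0,     # cheap constraint first
                       lambda x: x[0] ** 2 - 25.0]       # pretend this one is expensive
        print(partial_constraint_evaluation((8.0, 5.0), constraints))   # (False, 1)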