
    Stochastic Fractal Based Multiobjective Fruit Fly Optimization

    The fruit fly optimization algorithm (FOA) is a global optimization algorithm inspired by the foraging behavior of a fruit fly swarm. In this study, a novel stochastic fractal model based fruit fly optimization algorithm is proposed for multiobjective optimization. A food source generating method based on a stochastic fractal with an adaptive parameter updating strategy is introduced to improve the convergence performance of the fruit fly optimization algorithm. To deal with multiobjective optimization problems, the Pareto domination concept is integrated into the selection process of fruit fly optimization, and a novel multiobjective fruit fly optimization algorithm is then developed. Similar to most other multiobjective evolutionary algorithms (MOEAs), an external elitist archive is used to preserve the nondominated solutions found so far during the evolution, and a normalized nearest-neighbor-distance based density estimation strategy is adopted to maintain the diversity of the external elitist archive. Eighteen benchmarks are used to test the performance of the stochastic fractal based multiobjective fruit fly optimization algorithm (SFMOFOA). Numerical results show that SFMOFOA converges well to the Pareto fronts of the test benchmarks with good distributions. Compared with four state-of-the-art methods, namely the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2), multi-objective particle swarm optimization (MOPSO), and multiobjective self-adaptive differential evolution (MOSADE), the proposed SFMOFOA shows better or competitive multiobjective optimization performance.
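    The archiving machinery described above (Pareto dominance checks, an external elitist archive, and nearest-neighbor-distance density estimation) is common to many MOEAs. Below is a minimal Python sketch of these generic building blocks, not the authors' SFMOFOA implementation; the function names and the archive capacity are assumptions.

```python
import math

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nearest_neighbor_distance(objs):
    """Normalized nearest-neighbor distance of each archive member (density estimate)."""
    lo = [min(c) for c in zip(*objs)]
    hi = [max(c) for c in zip(*objs)]
    span = [h - l if h > l else 1.0 for l, h in zip(lo, hi)]
    norm = [[(v - l) / s for v, l, s in zip(o, lo, span)] for o in objs]
    return [min(math.dist(a, b) for j, b in enumerate(norm) if j != i)
            for i, a in enumerate(norm)]

def update_archive(archive, candidate, capacity=100):
    """Insert a candidate objective vector, drop dominated members, prune the most crowded."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > capacity:
        d = nearest_neighbor_distance(archive)
        archive.pop(d.index(min(d)))  # remove the member in the densest region
    return archive
```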

    A self-organizing weighted optimization based framework for large-scale multi-objective optimization

    The solving of large-scale multi-objective optimization problems (LSMOPs) has become a hot research topic in evolutionary computation. To better solve this class of problems, this paper proposes a self-organizing weighted optimization based framework, denoted S-WOF, for addressing LSMOPs. Compared to the original framework, there are two main improvements in our work. First, S-WOF simplifies the evolutionary process into a single stage, in which the numbers of evaluations for weighted optimization and for normal optimization are adaptively adjusted based on the current evolutionary state. Specifically, the number of evaluations for weighted optimization (i.e., t1) is larger when the population is in the exploitation state, which aims to accelerate convergence, and diminishes when the population switches to the exploration state, in which more attention is put on diversity maintenance. Conversely, the number of evaluations for normal optimization (i.e., t2) shows the opposite trend: it is small during the exploitation stage and gradually increases later. In this way, a dynamic trade-off between convergence and diversity is achieved in S-WOF. Second, to further improve the search ability in the large-scale decision space, an efficient competitive swarm optimizer (CSO) is implemented in S-WOF, which shows efficiency for solving LSMOPs. Finally, the experimental results validate the superiority of S-WOF over several state-of-the-art large-scale evolutionary algorithms.
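    The adaptive split of the evaluation budget between weighted and normal optimization can be pictured with the toy schedule below. This is a hedged illustration only: the indicator `convergence_rate`, the linear schedule, and the budget figures are assumptions, not the adaptation rule actually used in S-WOF.

```python
def split_budget(convergence_rate, total=10000, floor=0.1):
    """Toy budget split between weighted (reduced-space) and normal optimization.

    `convergence_rate` in [0, 1] is an assumed indicator of the evolutionary state:
    close to 1 while the population is still improving fast (exploitation of the
    reduced space pays off), close to 0 once improvement stalls (shift effort to
    normal, full-dimensional search for diversity). The schedule is a placeholder.
    """
    share = floor + (1.0 - 2 * floor) * convergence_rate
    t1 = int(total * share)   # evaluations for weighted-variable optimization
    t2 = total - t1           # evaluations for normal optimization
    return t1, t2

# Example: a fast-converging state gives most of the budget to the weighted step.
print(split_budget(0.9))   # -> (8200, 1800)
print(split_budget(0.1))   # -> (1800, 8200)
```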

    VSD-MOEA: A Dominance-Based Multiobjective Evolutionary Algorithm with Explicit Variable Space Diversity Management

    Most state-of-the-art multiobjective evolutionary algorithms (MOEAs) promote the preservation of diversity in the objective function space but neglect the diversity of the decision variable space. The aim of this article is to show that explicitly managing the amount of diversity maintained in the decision variable space is useful for increasing the quality of MOEAs when evaluated with metrics of the objective space. Our novel Variable Space Diversity-based MOEA (VSD-MOEA) explicitly considers the diversity of both the decision variable and the objective function space. This information is used to properly adapt the balance between exploration and intensification during the optimization process. In particular, at the initial stages, decisions made by the approach are more biased by the information on the diversity of the variable space, whereas it gradually grants more importance to the diversity of the objective function space as the evolution progresses. The latter is achieved through a novel density estimator. The new method is compared with state-of-the-art MOEAs using several benchmarks with two and three objectives. The novel proposal yields much better results than state-of-the-art schemes when considering metrics applied to the objective function space, exhibiting a more stable and robust behavior.
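    A rough way to picture shifting emphasis from decision-space diversity to objective-space diversity is a blended crowding score with a time-varying weight. The sketch below is illustrative only; VSD-MOEA's actual density estimator and schedule are not reproduced here, and the linear blend and the `progress` parameter are assumptions.

```python
import math

def crowding(points):
    """Nearest-neighbor distance of each point in a (normalized) space: larger = less crowded."""
    return [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]

def survival_scores(decision_vectors, objective_vectors, progress):
    """Blend decision-space and objective-space diversity into one score per individual.

    `progress` in [0, 1]: early in the run the decision-space term dominates,
    later the objective-space term takes over. The linear blend is an assumption
    for illustration, not the scheme used in VSD-MOEA.
    """
    dx = crowding(decision_vectors)
    df = crowding(objective_vectors)
    w = 1.0 - progress
    return [w * a + (1.0 - w) * b for a, b in zip(dx, df)]
```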

    Evolutionary Algorithms for Static and Dynamic Multiobjective Optimization

    Many real-world optimization problems consist of a number of conflicting objectives that have to be optimized simultaneously. Due to the presence of multiple conflicting objectives, there is no single solution that can optimize all the objectives. Therefore, the resulting multiobjective optimization problems (MOPs) resort to a set of trade-off optimal solutions, called the Pareto set in the decision space and the Pareto front in the objective space. Traditional optimization methods can at best find one solution in a single run, thereby making them inefficient for solving MOPs. In contrast, evolutionary algorithms (EAs) are able to approximate multiple optimal solutions in a single run. This strength makes EAs good candidates for solving MOPs. Over the past several decades, there have been increasing research interests in developing EAs or improving their performance, resulting in a large number of contributions towards the applicability of EAs for MOPs. However, the performance of EAs depends largely on the properties of the MOPs in question, e.g., static/dynamic optimization environments, simple/complex Pareto front characteristics, and low/high dimensionality. Different problem properties may pose distinct optimization difficulties to EAs. For example, dynamic (time-varying) MOPs are generally more challenging than static ones for EAs. Therefore, it is not trivial to further study EAs in order to make them widely applicable to MOPs with various optimization scenarios or problem properties. This thesis is devoted to exploring EAs' ability to solve a variety of MOPs with different problem characteristics, attempting to widen EAs' applicability and enhance their general performance. To start with, decomposition-based EAs are enhanced by incorporating two-phase search and niche-guided solution selection strategies so as to make them suitable for solving MOPs with complex Pareto fronts. Second, new scalarizing functions are proposed and their impact on evolutionary multiobjective optimization is extensively studied. On the basis of the new scalarizing functions, an efficient decomposition-based EA is introduced to deal with a class of hard MOPs. Third, a diversity-first-and-convergence-second sorting method is suggested to handle possible drawbacks of convergence-first based sorting methods. The new sorting method is then combined with strength-based fitness assignment, with the aid of reference directions, to optimize MOPs with an increase of objective dimensionality. After that, we study the field of dynamic multiobjective optimization, where objective functions and constraints can change over time. A new set of test problems consisting of a wide range of dynamic characteristics is introduced in an attempt to standardize test environments in dynamic multiobjective optimization, thereby aiding fair algorithm comparison and deep performance analysis. Finally, a dynamic EA is developed to tackle dynamic MOPs by exploiting the advantages of both generational and steady-state algorithms. All the proposed approaches have been extensively examined against existing state-of-the-art methods, showing fairly good performance in a variety of test scenarios. The research work presented in the thesis is the output of innovative and novel attempts to tackle some challenging issues in evolutionary multiobjective optimization. This research has not only extended the applicability of some of the existing approaches, such as decomposition-based or Pareto-based algorithms, for complex or hard MOPs, but has also contributed to moving forward research in the field of dynamic multiobjective optimization with novel ideas, including new test suites and novel algorithm design.
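    For orientation, decomposition-based EAs rely on scalarizing functions that collapse an objective vector into a single value relative to a weight vector and an ideal point. The classic weighted Tchebycheff form below is shown only as background; it is not one of the new scalarizing functions proposed in the thesis.

```python
def tchebycheff(objectives, weights, ideal):
    """Classic weighted Tchebycheff scalarization (minimization):
    g(x | w, z*) = max_i  w_i * |f_i(x) - z*_i|
    """
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Example: two subproblems defined by different weight vectors decompose one MOP.
ideal = [0.0, 0.0]
print(tchebycheff([0.4, 0.8], [0.9, 0.1], ideal))  # 0.36 -> emphasises objective 1
print(tchebycheff([0.4, 0.8], [0.1, 0.9], ideal))  # 0.72 -> emphasises objective 2
```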

    An adaptation reference-point-based multiobjective evolutionary algorithm

    It is well known that maintaining a good balance between convergence and diversity is crucial to the performance of multiobjective evolutionary algorithms (MOEAs). However, the Pareto front (PF) of multiobjective optimization problems (MOPs) affects the performance of MOEAs, especially reference-point-based ones. This paper proposes a reference-point-based adaptive method that studies the PF of MOPs according to the candidate solutions of the population. In addition, a proportion-and-angle function is presented to select elites during environmental selection. Compared with five state-of-the-art MOEAs, the proposed algorithm shows highly competitive effectiveness on MOPs with six complex characteristics.
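    The abstract does not spell out the proportion-and-angle function, so the sketch below only illustrates the general idea of angle-based environmental selection with reference points: each (normalized) objective vector is associated with the reference point at the smallest angle. The association rule and names here are assumptions, not the method proposed in the paper.

```python
import math

def cosine(a, b):
    """Cosine of the angle between two vectors (guarding against zero norms)."""
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def associate(objective_vectors, reference_points):
    """Assign each solution to the reference point with the smallest angle
    (largest cosine), a common building block of reference-point MOEAs."""
    groups = {i: [] for i in range(len(reference_points))}
    for s, f in enumerate(objective_vectors):
        best = max(range(len(reference_points)),
                   key=lambda i: cosine(f, reference_points[i]))
        groups[best].append(s)
    return groups
```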

    A competitive mechanism based multi-objective particle swarm optimizer with fast convergence

    In the past two decades, multi-objective optimization has attracted increasing interest in the evolutionary computation community, and a variety of multi-objective optimization algorithms have been proposed on the basis of different population-based meta-heuristics, among which the family of multi-objective particle swarm optimization is one of the most representative. While the performance of most existing multi-objective particle swarm optimization algorithms largely depends on the global or personal best particles stored in an external archive, in this paper we propose a competitive mechanism based multi-objective particle swarm optimizer, where the particles are updated on the basis of pairwise competitions performed in the current swarm at each generation. The performance of the proposed competitive multi-objective particle swarm optimizer is verified by benchmark comparisons with several state-of-the-art multi-objective optimizers, including three multi-objective particle swarm optimization algorithms and three multi-objective evolutionary algorithms. Experimental results demonstrate the promising performance of the proposed algorithm in terms of both optimization quality and convergence speed.
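    The flavor of a competition-based update can be sketched as follows: particles are paired at random, and in each pair the loser learns from the winner rather than from a global or personal best. This is a generic sketch in the spirit of competitive swarm optimizers; the scalar `fitness` used to pick the winner, the pairing scheme, and the constant `phi` are assumptions rather than the exact CMOPSO operator, which selects leaders with multi-objective criteria.

```python
import random

def competitive_update(swarm, velocities, fitness, phi=0.1):
    """Pairwise-competition update: in each randomly drawn pair, the loser learns
    from the winner instead of from a global/personal best. `fitness` maps a
    position to a scalar used only to decide the pair's winner (an assumption;
    a multi-objective variant would use dominance or an angle-based criterion)."""
    idx = list(range(len(swarm)))
    random.shuffle(idx)
    dims = len(swarm[0])
    mean = [sum(x[d] for x in swarm) / len(swarm) for d in range(dims)]
    for a, b in zip(idx[::2], idx[1::2]):
        winner, loser = (a, b) if fitness(swarm[a]) <= fitness(swarm[b]) else (b, a)
        r1, r2, r3 = (random.random() for _ in range(3))
        velocities[loser] = [r1 * v
                             + r2 * (swarm[winner][d] - swarm[loser][d])
                             + phi * r3 * (mean[d] - swarm[loser][d])
                             for d, v in enumerate(velocities[loser])]
        swarm[loser] = [x + v for x, v in zip(swarm[loser], velocities[loser])]
    return swarm, velocities
```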

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective optimization problem formulation into a single objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or noisy) values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments. SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
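    The abstract mentions new dominance concepts for noisy objectives without defining them. As one hedged stand-in for the general idea, the sketch below compares two solutions by the sample means of repeated noisy evaluations with a tolerance; this is a placeholder, not SPGA's actual stochastic dominance criterion.

```python
from statistics import mean

def noisy_dominates(samples_a, samples_b, tol=0.0):
    """Decide whether solution A dominates solution B from repeated noisy evaluations.

    `samples_a[i]` / `samples_b[i]` are lists of observed values of objective i
    (minimization). A dominates B if every mean objective of A is better or equal
    within `tol`, and at least one is strictly better by more than `tol`.
    A placeholder rule, not SPGA's actual stochastic dominance definition.
    """
    ma = [mean(s) for s in samples_a]
    mb = [mean(s) for s in samples_b]
    return (all(a <= b + tol for a, b in zip(ma, mb))
            and any(a < b - tol for a, b in zip(ma, mb)))
```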