6 research outputs found

    Comparison of a novel dominance-based differential evolution method with the state-of-the-art methods for solving multi-objective real-valued optimization problems

    The Differential Evolution (DE) algorithm is a well-known nature-inspired method in the field of evolutionary computation. This paper adds new features to the DE algorithm and proposes a novel method built around a ranking technique. The proposed method, named Dominance-Based Differential Evolution (DBDE), is an improved version of the standard DE algorithm. DBDE changes the selection operator of DE and modifies the crossover and initialization phases to improve performance. Dominance ranks are used in the selection phase of DBDE so that higher-quality solutions can be selected; the dominance rank of a solution X is the number of solutions dominating X. Moreover, vectors called target vectors are used throughout the selection process. The effectiveness and performance of the proposed DBDE method are experimentally evaluated on six well-known benchmarks provided by CEC2009, plus two additional test problems, namely Kursawe and Fonseca & Fleming. The evaluation focuses on specific bi-objective real-valued optimization problems reported in the literature. Likewise, the Inverted Generational Distance (IGD) metric is computed for the obtained results to measure the performance of the algorithms. To follow the evaluation rules obeyed by all state-of-the-art methods, the fitness evaluation function is called 300,000 times and 30 independent runs of DBDE are carried out. Analysis of the results indicates that, in terms of convergence and robustness, the proposed DBDE outperforms the majority of state-of-the-art methods reported in the literature.
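    The dominance rank described above (for solution X, the number of solutions dominating X) can be sketched as follows, assuming all objectives are minimized; function names here are illustrative, not taken from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_ranks(objectives):
    """For each solution, count how many other solutions dominate it."""
    return [sum(dominates(other, sol) for other in objectives if other is not sol)
            for sol in objectives]

# Example: three bi-objective solutions; only (3.0, 3.0) is dominated,
# namely by (2.0, 2.0), so its rank is 1.
pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]
print(dominance_ranks(pop))  # [0, 0, 1]
```

    Rank-0 solutions form the current non-dominated set, so a selection operator preferring lower ranks pushes the population toward the Pareto front.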

    A convergence and diversity guided leader selection strategy for many-objective particle swarm optimization

    Recently, the particle swarm optimizer (PSO) has been extended to solve many-objective optimization problems (MaOPs) and has become a hot research topic in the field of evolutionary computation. In particular, leader particle selection (LPS) and the search direction used in the velocity update strategy are two crucial factors in PSOs. However, the LPS strategies of most existing PSOs are not efficient in high-dimensional objective space, mainly due to a lack of convergence pressure or a loss of diversity. To address these two issues and improve the performance of PSO in high-dimensional objective space, this paper proposes a convergence and diversity guided leader selection strategy for PSO, denoted CDLS, in which a different leader particle is adaptively selected for each particle based on its own situation of convergence and diversity. In this way, CDLS achieves a good tradeoff between convergence and diversity. To verify the effectiveness of CDLS, it is embedded into the PSO search process of three well-known PSOs. Furthermore, a new variant of PSO combined with the CDLS strategy, namely PSO/CDLS, is also presented. The experimental results validate the superiority of the proposed CDLS strategy and the effectiveness of PSO/CDLS when solving numerous MaOPs with regular and irregular Pareto fronts (PFs)
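    The idea of adaptively switching the leader between a convergence-oriented and a diversity-oriented choice can be illustrated with a much-simplified sketch; the selection rules below (objective sum for convergence, farthest archive member for diversity) are placeholder assumptions, not the paper's exact CDLS criteria:

```python
import math

def select_leader(particle_obj, archive, diversity_needed):
    """Pick a leader from an archive of objective vectors (minimization).

    If the particle needs diversity, choose the archive member farthest
    from it in objective space; otherwise choose the member with the best
    (smallest) objective sum as a convergence-oriented leader.
    Illustrative simplification only, not the paper's CDLS rule.
    """
    if diversity_needed:
        return max(archive, key=lambda a: math.dist(a, particle_obj))
    return min(archive, key=sum)

archive = [(0.2, 0.9), (0.5, 0.4), (0.9, 0.1)]
print(select_leader((0.2, 0.9), archive, diversity_needed=True))   # farthest: (0.9, 0.1)
print(select_leader((0.2, 0.9), archive, diversity_needed=False))  # best sum: (0.5, 0.4)
```

    The point of the per-particle switch is that particles lagging in convergence get pulled toward good regions, while well-converged particles are steered toward under-covered parts of the front.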

    Multi-Objective Feature Selection With Missing Data in Classification

    Feature selection (FS) is an important research topic in machine learning. FS is usually modelled as a bi-objective optimization problem whose objectives are: 1) classification accuracy; 2) number of features. One of the main issues in real-world applications is missing data: databases with missing data are likely to be unreliable, so FS performed on a data set with missing values is also unreliable. To directly address this issue, this study proposes a novel modelling of FS that includes reliability as a third objective of the problem. To solve the modified problem, we apply the non-dominated sorting genetic algorithm-III (NSGA-III). We selected six incomplete data sets from the University of California Irvine (UCI) machine learning repository and used the mean imputation method to deal with the missing data. In the experiments, k-nearest neighbors (K-NN) is used as the classifier to evaluate the feature subsets. Experimental results show that the proposed three-objective model coupled with NSGA-III efficiently addresses the FS problem for the six data sets included in this study
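    Two building blocks of this pipeline, mean imputation and a reliability measure, can be sketched in a few lines. Here reliability is taken as the fraction of entries actually observed (not imputed); the paper's precise reliability definition may differ, so treat this as an illustrative assumption:

```python
def mean_impute(rows):
    """Replace None entries by the mean of the observed values in that column."""
    cols = list(zip(*rows))
    means = []
    for col in cols:
        observed = [v for v in col if v is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

def reliability(rows):
    """Fraction of entries that were actually observed (illustrative
    stand-in for the third objective)."""
    total = sum(len(r) for r in rows)
    missing = sum(v is None for r in rows for v in r)
    return 1 - missing / total

data = [[1.0, None], [3.0, 4.0], [None, 8.0]]
print(mean_impute(data))   # column means 2.0 and 6.0 fill the gaps
print(reliability(data))   # 4 of 6 entries observed
```

    In the three-objective model, accuracy and feature count are computed on the imputed data, while the reliability objective penalizes feature subsets that lean heavily on imputed columns.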

    A localized decomposition evolutionary algorithm for imbalanced multi-objective optimization

    Multi-objective evolutionary algorithms based on decomposition (MOEA/Ds) convert a multi-objective optimization problem (MOP) into a set of scalar subproblems, which are then optimized in a collaborative manner. However, when tackling imbalanced MOPs, the performance of most MOEA/Ds deteriorates noticeably, as a few solutions replace most of the others during the evolutionary process, resulting in a significant loss of diversity. To address this issue, this paper proposes a localized decomposition evolutionary algorithm (LDEA) for imbalanced MOPs. A localized decomposition method assigns a local region to each subproblem; solutions inside a region are associated with its subproblem, and solution updates are restricted to that region (i.e., solutions are only replaced by offspring within the same local region). Once offspring are generated within a previously empty region, the best one is reserved for that subproblem to extend diversity. Meanwhile, the subproblem with the largest number of associated solutions is found and its associated solution with the worst aggregated value is removed. Moreover, to speed up convergence on each subproblem while preserving the population's diversity, LDEA evolves only the best associated solution in each subproblem and correspondingly tailors two decomposition methods in the environmental selection. Compared to nine competitive MOEAs, LDEA shows clear advantages in tackling two benchmark sets of imbalanced MOPs, one benchmark set of balanced yet complicated MOPs, and one real-world MOP
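    The association step, assigning each solution to one subproblem's local region, can be sketched by mapping every solution to its nearest weight vector in objective space; this nearest-weight rule is a simple stand-in for the paper's exact region definition:

```python
import math

def associate(objectives, weights):
    """Map each solution index to the subproblem whose weight vector is
    nearest in objective space (illustrative local-region assignment)."""
    regions = {i: [] for i in range(len(weights))}
    for s, obj in enumerate(objectives):
        nearest = min(range(len(weights)), key=lambda i: math.dist(weights[i], obj))
        regions[nearest].append(s)
    return regions

weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
objs = [(0.9, 0.1), (0.55, 0.45), (0.52, 0.5), (0.1, 0.95)]
print(associate(objs, weights))  # {0: [0], 1: [1, 2], 2: [3]}
```

    Here subproblem 1 holds the most associated solutions, so under LDEA's balancing rule it would be the one to shed its worst-aggregated member, while replacements elsewhere stay confined to their own regions.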

    A fuzzy decision variables framework for large-scale multiobjective optimization

    In large-scale multiobjective optimization, too many decision variables hinder the convergence of evolutionary algorithms. Reducing the search range of the decision space significantly alleviates this problem. With this in mind, this paper proposes a fuzzy decision variables framework for large-scale multiobjective optimization. The framework divides the entire evolutionary process into two main stages: fuzzy evolution and precise evolution. In fuzzy evolution, the decision variables of the original solution are blurred to reduce the search range of the evolutionary algorithm in the decision space, so that the population can converge quickly; the degree of fuzzification gradually decreases over the course of evolution. Once the population has approximately converged, the framework switches to precise evolution, in which the actual decision variables of the solution are directly optimized to increase the diversity of the population and approach the true Pareto-optimal front more closely. Finally, this paper embeds some representative algorithms into the proposed framework and verifies the framework's effectiveness through comparative experiments on various large-scale multiobjective problems with 500 to 5000 decision variables. Experimental results show that, in large-scale multiobjective optimization, the proposed framework can significantly improve both the performance and the computational efficiency of multiobjective optimization algorithms
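    One simple way to realize the blurring idea is to snap each decision variable to a coarse grid whose step shrinks as the fuzzification degree decreases; the grid-based scheme below is an illustrative sketch, not the paper's exact fuzzification operator:

```python
def fuzzify(x, lower, upper, degree):
    """Blur a decision variable by snapping it to a coarse grid.

    degree in (0, 1]: a larger degree gives a coarser grid, i.e. a smaller
    effective search range. As degree shrinks toward 0, values approach
    the precise-evolution stage, where the actual variable is used.
    Illustrative sketch only.
    """
    if degree <= 0:
        return x  # precise evolution: optimize the actual variable
    step = (upper - lower) * degree
    return lower + round((x - lower) / step) * step

print(fuzzify(3.7, 0.0, 10.0, 0.5))  # grid step 5.0 -> snaps to 5.0
print(fuzzify(3.7, 0.0, 10.0, 0.1))  # grid step 1.0 -> snaps to 4.0
print(fuzzify(3.7, 0.0, 10.0, 0.0))  # precise stage: 3.7 unchanged
```

    With 500 to 5000 variables, early coarse grids collapse the search space to a handful of candidate values per dimension, which is what lets the population converge quickly before the precise stage restores full resolution.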