
    A Study of Archiving Strategies in Multi-Objective PSO for Molecular Docking

    Molecular docking is a complex optimization problem aimed at predicting the position of a ligand molecule in the active site of a receptor with the lowest binding energy. The problem can be formulated as a bi-objective optimization problem that minimizes both the binding energy and the Root Mean Square Deviation (RMSD) between the ligand coordinates. In this context, the SMPSO multi-objective swarm-intelligence algorithm has shown remarkable performance. SMPSO is characterized by an external archive that stores the non-dominated solutions and also serves as the basis of the leader selection strategy. In this paper, we analyze several SMPSO variants based on different archiving strategies on a benchmark of molecular docking instances. Our study reveals that SMPSOhv, which uses a hypervolume-contribution-based archive, shows the best overall performance.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
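    The abstract does not give SMPSOhv's archive code, but the underlying idea can be sketched for the bi-objective case: when the archive overflows, drop the point whose exclusive hypervolume contribution is smallest. Function names and the reference point below are illustrative, not taken from the SMPSO source.

    ```python
    # Hypervolume-contribution-based archive pruning for a bi-objective
    # minimization problem (a sketch of the idea behind SMPSOhv's archive).

    def hv_contributions(front, ref):
        """Exclusive hypervolume contribution of each point in a 2-D
        non-dominated front (minimization). `ref` is the reference point."""
        pts = sorted(front)  # ascending f1 implies descending f2 on a ND front
        contrib = []
        for i, (f1, f2) in enumerate(pts):
            right_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
            upper_f2 = pts[i - 1][1] if i > 0 else ref[1]
            contrib.append(((f1, f2), (right_f1 - f1) * (upper_f2 - f2)))
        return contrib

    def prune_archive(front, ref, max_size):
        """Repeatedly drop the point with the smallest exclusive
        hypervolume contribution until the archive fits."""
        front = list(front)
        while len(front) > max_size:
            worst = min(hv_contributions(front, ref), key=lambda c: c[1])[0]
            front.remove(worst)
        return front

    archive = [(1.0, 5.0), (2.0, 3.0), (2.1, 2.9), (4.0, 1.0)]
    pruned = prune_archive(archive, ref=(6.0, 6.0), max_size=3)
    ```

    On this toy archive the crowded point (2.1, 2.9) contributes the least hypervolume and is the one removed, which is exactly the diversity-preserving behavior such an archive aims for.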

    Enhanced Harris's Hawk algorithm for continuous multi-objective optimization problems

    Multi-objective swarm-intelligence-based (MOSI-based) metaheuristics were proposed to solve multi-objective optimization problems (MOPs) with conflicting objectives. The Harris's hawk multi-objective optimizer (HHMO) algorithm is a MOSI-based algorithm developed around the reference-point approach, where the reference point is set by the decision maker to guide the search toward a particular region of the true Pareto front. However, the HHMO algorithm produces a poor approximation to the Pareto front because of a lack of information sharing in its population update strategy, the equal division of its convergence parameter, and its randomly generated initial population. A two-step enhanced non-dominated sorting HHMO (2S-ENDSHHMO) algorithm has been proposed to address these problems. The algorithm includes (i) a population update strategy that improves the movement of hawks in the search space, (ii) a parameter-adjusting strategy to control the transition between exploration and exploitation, and (iii) a population generation method for producing the initial candidate solutions. The population update strategy calculates new positions of hawks based on the flush-and-ambush technique of Harris's hawks and selects the best hawks through non-dominated sorting. The adjusting strategy enables the parameter to change adaptively based on the state of the search space. The initial population is produced by generating quasi-random numbers with an R-sequence, followed by applying the partial opposition-based learning concept to improve the diversity of the worse half of the population of hawks. The performance of 2S-ENDSHHMO has been evaluated on 12 MOPs and three engineering MOPs, and the obtained results were compared with those of eight state-of-the-art multi-objective optimization algorithms.
    The 2S-ENDSHHMO algorithm was able to generate non-dominated solutions with greater convergence and diversity on most MOPs and showed a strong ability to escape local optima, indicating its capability to explore the search space. The 2S-ENDSHHMO algorithm can be used to improve the search process of other MOSI-based algorithms and can be applied to MOPs in applications such as structural design and signal processing.
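    The initialization step described above can be sketched as follows. The R-sequence here is Roberts' low-discrepancy R_d sequence, and the opposition step is a simplified stand-in for the paper's partial opposition-based learning (the full method opposes only a subset of dimensions); the helper names and the ranking fitness are illustrative.

    ```python
    # Quasi-random population initialization with the R-sequence, followed
    # by an opposition-based improvement of the worse half (simplified).

    def r_sequence(n, dim):
        """First n points of Roberts' R_d sequence in [0, 1)^dim."""
        # phi_d is the unique positive root of x**(dim+1) = x + 1,
        # found here by fixed-point iteration.
        phi = 2.0
        for _ in range(50):
            phi = (1 + phi) ** (1.0 / (dim + 1))
        alpha = [(1.0 / phi) ** (k + 1) for k in range(dim)]
        return [[(0.5 + alpha[k] * (i + 1)) % 1.0 for k in range(dim)]
                for i in range(n)]

    def opposition(point, lo=0.0, hi=1.0):
        """Opposite point x' = lo + hi - x in each coordinate."""
        return [lo + hi - x for x in point]

    def init_population(n, dim, fitness):
        """Quasi-random init; each hawk in the worse half is replaced by
        its opposite if the opposite scores better (lower fitness)."""
        pop = sorted(r_sequence(n, dim), key=fitness)
        for i in range(n // 2, n):
            opp = opposition(pop[i])
            if fitness(opp) < fitness(pop[i]):
                pop[i] = opp
        return pop

    pop = init_population(8, 2, fitness=lambda p: sum(v * v for v in p))
    ```

    The low-discrepancy sequence spreads the initial hawks evenly over the box, while the opposition step cheaply probes the "mirror image" of poor candidates before any expensive search begins.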

    Multiobjective particle swarm optimization: Integration of dynamic population and multiple-swarm concepts and constraint handling

    Scope and Method of Study: Over the years, most multiobjective particle swarm optimization (MOPSO) algorithms have been developed to solve unconstrained multiobjective optimization problems (MOPs) effectively and efficiently. However, many real-world optimization problems involve a set of constraints (functions). In this study, the first research goal is to develop state-of-the-art MOPSOs that incorporate dynamic population size and multiple-swarm concepts to exploit possible improvements in the efficiency and performance of existing MOPSOs on unconstrained MOPs. The proposed MOPSOs are designed from two different perspectives: 1) dynamic population size of multiple-swarm MOPSO (DMOPSO) integrates a dynamic swarm population size with a fixed number of swarms, along with other strategies to support these concepts; and 2) dynamic multiple swarms in multiobjective particle swarm optimization (DSMOPSO) incorporates a dynamic swarm strategy in which the number of swarms, each with a fixed swarm size, is adjusted during the search process. The second research goal is to develop a MOPSO whose design elements utilize PSO's key mechanisms to effectively solve constrained multiobjective optimization problems (CMOPs).
    Findings and Conclusions: DMOPSO is competitive with selected MOPSOs in producing a well-approximated Pareto front with improved diversity and convergence, as well as reduced computational cost, while DSMOPSO shows competitive results in producing well-extended, uniformly distributed, and near-optimal Pareto fronts, with reduced computational cost on some selected benchmark functions. A sensitivity analysis is conducted to study the impact of the tuning parameters on the performance of DSMOPSO and to provide recommendations on parameter settings. For the proposed constrained MOPSO, simulation results indicate that it is highly competitive in solving the constrained benchmark problems.

    Multi-objective optimization with a Gaussian PSO algorithm

    Particle Swarm Optimization is a popular heuristic used to solve mono-objective problems suitably and effectively. In this paper, we present a first adaptation of this heuristic to treat unconstrained multi-objective problems. The proposed approach (called G-MOPSO) incorporates a Gaussian update of individuals, Pareto dominance, an elitist policy, an external archive, and a shake mechanism to maintain diversity. In order to validate our algorithm, we use four well-known test functions with different characteristics. Preliminary results are compared with those obtained by a multi-objective evolutionary algorithm representative of the state of the art in the area: NSGA-II. We also compare the results with those obtained by OMOPSO, a multi-objective algorithm based on the PSO heuristic. The performance of our approach is comparable with that of NSGA-II and outperforms that of OMOPSO.
    Workshop de Agentes y Sistemas Inteligentes (WASI). Red de Universidades con Carreras en Informática (RedUNCI).
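    The abstract does not specify G-MOPSO's exact Gaussian update. A common form (in the spirit of "bare bones" PSO) samples each coordinate from a normal distribution centered between the personal best and the leader, with spread proportional to their distance; the sketch below assumes that form.

    ```python
    import random

    # Gaussian position update for PSO: each coordinate is drawn from
    # N((p_k + g_k) / 2, |p_k - g_k|), so particles explore widely where
    # personal best and leader disagree and converge where they agree.

    def gaussian_update(pbest, gbest):
        """New position sampled around the midpoint of the two guides."""
        return [random.gauss((p + g) / 2.0, abs(p - g))
                for p, g in zip(pbest, gbest)]

    random.seed(1)
    new_pos = gaussian_update(pbest=[0.0, 1.0], gbest=[2.0, 1.0])
    ```

    Note that when a coordinate of `pbest` and `gbest` coincide the standard deviation is zero, so that coordinate is simply inherited; this is one reason such variants pair the update with a shake mechanism to preserve diversity.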

    A novel hybrid multi-objective metamodel-based evolutionary optimization algorithm

    Optimization via Simulation (OvS) is a useful optimization tool for finding solutions to optimization problems that are difficult to model analytically. OvS consists of evaluating potential solutions through simulation runs; however, its high computational cost is a factor that can make its implementation infeasible. This issue also arises in multi-objective problems, which tend to be expensive to solve. In this work, we present a new hybrid multi-objective OvS algorithm, which uses Kriging-type metamodels to estimate the simulation results and a multi-objective evolutionary algorithm to manage the optimization process. Our proposal succeeds in reducing the computational cost significantly without affecting the quality of the results obtained. The evolutionary part of the hybrid algorithm is based on the popular NSGA-II. The hybrid method is compared to the canonical NSGA-II and other hybrid approaches, showing good performance not only in the quality of the solutions but also in computational cost savings.
    Fil: Baquela, Enrique Gabriel. Universidad Tecnológica Nacional. Facultad Regional San Nicolás; Argentina.
    Fil: Olivera, Ana Carolina. Universidad Nacional de Cuyo. Instituto para las Tecnologías de la Información y las Comunicaciones; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
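    A Kriging-type metamodel is, at its core, Gaussian-process regression fit to the simulated points; the surrogate's prediction then replaces most expensive simulation calls. The sketch below is a minimal simple-Kriging mean predictor with a fixed RBF kernel; real implementations also fit kernel hyperparameters and return a prediction variance, which drives infill criteria.

    ```python
    import math

    # Minimal Kriging-style surrogate: Gaussian-process regression mean
    # with a fixed RBF kernel, solved via a hand-rolled Cholesky factorization.

    def rbf(a, b, length=1.0):
        return math.exp(-0.5 * sum((x - y) ** 2 for x, y in zip(a, b)) / length ** 2)

    def cholesky(A):
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = sum(L[i][k] * L[j][k] for k in range(j))
                L[i][j] = (math.sqrt(A[i][i] - s) if i == j
                           else (A[i][j] - s) / L[j][j])
        return L

    def solve_chol(L, b):
        """Solve (L L^T) x = b by forward then backward substitution."""
        n = len(L)
        y = [0.0] * n
        for i in range(n):
            y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
        x = [0.0] * n
        for i in reversed(range(n)):
            x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
        return x

    def krige(X, y, x_new, nugget=1e-10):
        """Predicted mean at x_new given training inputs X and outputs y."""
        K = [[rbf(a, b) + (nugget if i == j else 0.0)
              for j, b in enumerate(X)] for i, a in enumerate(X)]
        alpha = solve_chol(cholesky(K), y)
        return sum(rbf(x_new, a) * w for a, w in zip(X, alpha))

    X = [[0.0], [1.0], [2.0]]
    y = [0.0, 1.0, 4.0]        # pretend these came from expensive simulations
    pred = krige(X, y, [1.0])  # interpolates the sampled value closely
    ```

    With a tiny nugget the surrogate interpolates the training data, which is the property that lets the hybrid algorithm trust the metamodel near already-simulated designs while reserving real simulations for unexplored regions.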

    Rotationally invariant techniques for handling parameter interactions in evolutionary multi-objective optimization

    In traditional optimization approaches the interaction of parameters associated with a problem is not a significant issue, but in the domain of Evolutionary Multi-Objective Optimization (EMOO) traditional genetic algorithm approaches have difficulties in optimizing problems with parameter interactions. Parameter interactions can be introduced when the search space is rotated. Genetic algorithms are referred to as being not rotationally invariant because their behavior changes depending on the orientation of the search space. Many empirical studies in single and multi-objective evolutionary optimization are done with respect to test problems which do not have parameter interactions. Such studies provide a favorably biased indication of genetic algorithm performance. This motivates the first aspect of our work; the improvement of the testing of EMOO algorithms with respect to the aforementioned difficulties that genetic algorithms experience in the presence of parameter interactions. To this end, we examine how EMOO algorithms can be assessed when problems are subject to an arbitrarily uniform degree of parameter interactions. We establish a theoretical basis for parameter interactions and how they can be measured. Furthermore, we ask the question of what difficulties a multi-objective genetic algorithm experiences on optimization problems exhibiting parameter interactions. We also ask how these difficulties can be overcome in order to efficiently find the Pareto-optimal front on such problems. Existing multi-objective test problems in the literature typically introduce parameter interactions by altering the fitness landscape, which is undesirable. We propose a new suite of test problems that exhibit parameter interactions through a rotation of the decision space, without altering the fitness landscape. In addition, we compare the performance of a number of recombination operators on these test problems. 
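    The rotation idea described above can be shown concretely: evaluating a separable function f at R·x leaves the fitness landscape intact (rotations preserve distances) but couples the decision variables, so per-coordinate search operators lose their advantage. The angle and test function below are invented for illustration.

    ```python
    import math

    # Introducing parameter interactions by rotating the decision space:
    # the separable function is evaluated as f(R x), which rotates the
    # landscape's axes without otherwise altering it.

    def rotate2d(x, theta):
        c, s = math.cos(theta), math.sin(theta)
        return [c * x[0] - s * x[1], s * x[0] + c * x[1]]

    def separable(x):
        # Elliptic function: no parameter interactions in its raw form.
        return x[0] ** 2 + 100.0 * x[1] ** 2

    def rotated(x, theta=math.pi / 4):
        # Same landscape, but the parameters now interact.
        return separable(rotate2d(x, theta))

    a = separable([1.0, 2.0])
    b = rotated([1.0, 2.0])
    ```

    A coordinate-wise mutation that improves one variable of the rotated problem generally disturbs progress along the other, which is exactly the difficulty the test suite above is designed to expose.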
    The second aspect of this work is concerned with developing an efficient multi-objective optimization algorithm that works well on problems with parameter interactions. We investigate how an evolutionary algorithm can be made more efficient on multi-objective problems with parameter interactions by developing four novel rotationally invariant differential evolution approaches. We also ask whether the proposed approaches are competitive with a state-of-the-art EMOO algorithm. We propose several differential evolution approaches that incorporate directional information from the multi-objective search space in order to accelerate and direct the search. Experimental results indicate that dramatic improvements in efficiency can be achieved by directing the search towards points that are more dominant and more diverse. We also address the important issue of diversity loss in rotationally invariant vector-wise differential evolution: being able to generate diverse solutions is critically important in order to avoid stagnation. To address this issue, one of the directed approaches that we examine incorporates a novel sampling scheme around better individuals in the search space. This variant performs exceptionally well on the test problems at much lower computational cost and scales to very high decision-space dimensions even in the presence of parameter interactions.
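    The rotational invariance of vector-wise differential evolution can be sketched in a few lines: the classic DE/rand/1 mutation combines whole population vectors, so rotating the decision space rotates the mutants identically. (Per-coordinate binomial crossover breaks this property, which is why rotationally invariant variants omit it or set CR = 1.) This is the standard operator, not the thesis's directed variants.

    ```python
    import random

    # Vector-wise DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3).
    # Because it only adds and scales whole vectors, its behaviour is
    # unchanged under any rotation of the decision space.

    def de_rand_1(pop, F=0.5):
        """Build one mutant from three distinct random population members."""
        r1, r2, r3 = random.sample(range(len(pop)), 3)
        return [a + F * (b - c)
                for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

    random.seed(0)
    pop = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    mutant = de_rand_1(pop)
    ```

    The difference vector x_r2 - x_r3 automatically aligns with the population's spread, so when the population settles into a rotated valley, mutation steps follow the valley's orientation without any coordinate-system knowledge.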

    On the automatic design of multi-objective particle swarm optimizers: experimentation and analysis.

    Research in multi-objective particle swarm optimizers (MOPSOs) progresses by proposing one new MOPSO at a time. In spite of the commonalities among different MOPSOs, it is often unclear which algorithmic components are crucial for explaining the performance of a particular MOPSO design. Moreover, different designs may perform best on different problem families, and identifying the best overall MOPSO is a challenging task. We tackle this challenge here by: (1) proposing AutoMOPSO, a flexible algorithmic template for designing MOPSOs, with a design space that can instantiate thousands of potential MOPSOs; and (2) searching for good-performing MOPSO designs on a family of training problems by means of an automatic configuration tool (irace). We apply this automatic design methodology to generate a MOPSO that significantly outperforms two state-of-the-art MOPSOs on four well-known bi-objective problem families. We also identify the key design choices and parameters of the winning MOPSO by means of ablation. AutoMOPSO is publicly available as part of the jMetal framework.
    Funding for open access charge: Universidad de Málaga / CBU
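    The design-space-plus-configurator idea can be illustrated with a toy analogue: enumerate a small design space, sample candidate configurations, and keep the best performer on a training problem. Real AutoMOPSO relies on irace's iterated racing over a far larger space; the design space, scoring function, and budget below are all invented for illustration.

    ```python
    import random

    # Toy automatic design: random search over a tiny MOPSO-like design
    # space, standing in for irace's iterated racing.

    DESIGN_SPACE = {
        "inertia": [0.1, 0.4, 0.7],
        "archive": ["crowding", "hypervolume"],
        "swarm_size": [20, 50, 100],
    }

    def sample_config(rng):
        """Draw one configuration uniformly from the design space."""
        return {k: rng.choice(v) for k, v in DESIGN_SPACE.items()}

    def training_score(cfg):
        # Stand-in for running the configured optimizer on training
        # problems and measuring quality (lower is better); synthetic.
        return ((cfg["inertia"] - 0.4) ** 2
                + (0.0 if cfg["archive"] == "hypervolume" else 0.1)
                + abs(cfg["swarm_size"] - 50) / 1000.0)

    def auto_design(budget=50, seed=0):
        rng = random.Random(seed)
        configs = [sample_config(rng) for _ in range(budget)]
        return min(configs, key=training_score)

    best = auto_design()
    ```

    Iterated racing improves on this naive loop by discarding poor configurations early and focusing the sampling distribution around survivors, which is what makes searching thousands of candidate designs affordable.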