Self-adaptation of Genetic Operators Through Genetic Programming Techniques
Here we propose an evolutionary algorithm that self-modifies its operators at the same time that candidate solutions are evolved. This tackles convergence and lack-of-diversity issues, leading to better solutions. Operators are represented as trees and are evolved using genetic programming (GP) techniques. The proposed approach is tested on real benchmark functions, and an analysis of operator evolution is provided. Comment: Presented in GECCO 201
A Study of Archiving Strategies in Multi-Objective PSO for Molecular Docking
Molecular docking is a complex optimization problem aimed at predicting the position of a ligand molecule in the active site of a receptor with the lowest binding energy. The problem can be formulated as a bi-objective optimization problem by minimizing both the binding energy and the Root Mean Square Deviation (RMSD) difference in the coordinates of ligands. In this context, the SMPSO multi-objective swarm-intelligence algorithm has shown remarkable performance. SMPSO is characterized by an external archive that stores the non-dominated solutions and also serves as the basis of the leader selection strategy. In this paper, we analyze several SMPSO variants based on different archiving strategies over a benchmark of molecular docking instances. Our study reveals that SMPSOhv, which uses a hypervolume-contribution-based archive, shows the best overall performance.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
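A hypervolume-contribution-based archive, as the abstract describes for SMPSOhv, can be illustrated with a small sketch. This is not the SMPSO implementation itself; it only shows the general idea for a bi-objective minimization problem: when the archive exceeds its capacity, repeatedly discard the point whose exclusive hypervolume contribution (the rectangle of objective space dominated by that point alone) is smallest. Function names and the reference-point convention are illustrative assumptions.

```python
def hv_contributions(front, ref):
    """Exclusive hypervolume contribution of each point on a 2-D front.

    front: list of non-dominated (f1, f2) points (both objectives minimized).
    ref:   reference point (r1, r2), worse than every point in both objectives.
    Returns a list of (point, contribution) pairs.
    """
    pts = sorted(front)  # ascending f1 implies descending f2 on a 2-D front
    contrib = []
    for i, (x, y) in enumerate(pts):
        right = pts[i + 1][0] if i + 1 < len(pts) else ref[0]  # next point or ref
        upper = pts[i - 1][1] if i > 0 else ref[1]             # prev point or ref
        contrib.append(((x, y), (right - x) * (upper - y)))
    return contrib

def prune(front, ref, capacity):
    """Drop least-contributing points until the archive fits its capacity."""
    front = list(front)
    while len(front) > capacity:
        worst, _ = min(hv_contributions(front, ref), key=lambda t: t[1])
        front.remove(worst)
    return front
```

In a PSO setting, the surviving archive members with the largest contributions would then be natural candidates for leader selection, since they sit in the least crowded regions of the front.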
Recommended from our members
d-QPSO: A Quantum-Behaved Particle Swarm Technique for Finding D-Optimal Designs With Discrete and Continuous Factors and a Binary Response
Identifying optimal designs for generalized linear models with a binary response can be a challenging task, especially when there are both discrete and continuous independent factors in the model. Theoretical results rarely exist for such models, and for the handful that do, they usually come with restrictive assumptions. In this article, we propose the d-QPSO algorithm, a modified version of quantum-behaved particle swarm optimization, to find a variety of D-optimal approximate and exact designs for experiments with discrete and continuous factors and a binary response. We show that the d-QPSO algorithm can efficiently find locally D-optimal designs even for experiments with a large number of factors, and robust pseudo-Bayesian designs when nominal values for the model parameters are not available. Additionally, we investigate robustness properties of the d-QPSO algorithm-generated designs to various model assumptions and provide real applications to design a bio-plastics odor removal experiment, an electrostatic experiment, and a 10-factor car refueling experiment. Supplementary materials for the article are available online.
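To make the "quantum-behaved" part concrete, here is a minimal sketch of the standard QPSO position update (in the style of Sun et al.) on which d-QPSO builds. It is not the d-QPSO algorithm from the article: the modifications for discrete factors and design-measure encoding are not reproduced, and all names and the contraction coefficient `beta` are illustrative.

```python
import math
import random

def qpso_step(positions, pbests, gbest, beta=0.75):
    """One standard QPSO position update for a swarm of real-valued particles.

    positions: current particle positions (list of lists of floats).
    pbests:    personal best positions, same shape as positions.
    gbest:     global best position.
    beta:      contraction-expansion coefficient controlling the step size.
    """
    dim, n = len(gbest), len(positions)
    # "mean best": coordinate-wise average of all personal bests
    mbest = [sum(p[d] for p in pbests) / n for d in range(dim)]
    new_positions = []
    for x, pb in zip(positions, pbests):
        new_x = []
        for d in range(dim):
            phi = random.random()
            # local attractor: random convex combination of pbest and gbest
            attractor = phi * pb[d] + (1 - phi) * gbest[d]
            u = random.random()
            step = beta * abs(mbest[d] - x[d]) * math.log(1 / u)
            # jump to either side of the attractor with equal probability
            new_x.append(attractor + step if random.random() < 0.5 else attractor - step)
        new_positions.append(new_x)
    return new_positions
```

The heavy-tailed `log(1/u)` step is what distinguishes QPSO from classical PSO: particles occasionally take long jumps away from their attractors, which helps in the multimodal design-optimality landscapes the abstract describes.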
The SOS Platform: Designing, Tuning and Statistically Benchmarking Optimisation Algorithms
We present Stochastic Optimisation Software (SOS), a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. SOS reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, measuring algorithmic overhead, etc. SOS provides numerous off-the-shelf methods including: (1) customised implementations of statistical tests, such as the Wilcoxon rank-sum test and the Holm–Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and other formats; (2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; (3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation on each testbed function. Moreover, we briefly comment on the current state of the literature in stochastic optimisation and highlight similarities shared by modern metaheuristics inspired by nature. We argue that the vast majority of these algorithms are simply reformulations of the same methods and that metaheuristics for optimisation should be treated as stochastic processes, with less emphasis on the inspiring metaphor behind them.
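The Holm–Bonferroni procedure mentioned in point (1) is a standard step-down multiple-comparison correction, and a short sketch shows the idea (this is generic, not SOS's Java implementation): sort the p-values ascending, compare the i-th smallest against alpha/(m - i), and stop rejecting at the first failure.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm-Bonferroni correction for m simultaneous hypothesis tests.

    Returns a list of booleans, True where the corresponding null hypothesis
    is rejected while controlling the family-wise error rate at alpha.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices by p ascending
    reject = [False] * m
    for rank, idx in enumerate(order):
        # threshold tightens from alpha/m for the smallest p up to alpha
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject
```

When benchmarking a new metaheuristic against several rivals, this keeps the family-wise error rate at alpha while being uniformly more powerful than the plain Bonferroni correction.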