
    Multi-objective ant colony optimization for the twin-screw configuration problem

    The Twin-Screw Configuration Problem (TSCP) consists in identifying the best location of a set of available screw elements along a screw shaft. Due to its combinatorial nature, it can be seen as a sequencing problem. In addition, different conflicting objectives may have to be considered when defining a screw configuration and, thus, it is usually tackled as a multi-objective optimization problem. In this research, a multi-objective ant colony optimization (MOACO) algorithm was adapted to deal with the TSCP. The influence of different parameters of the MOACO algorithm was studied, and its performance was compared with that of a previously proposed multi-objective evolutionary algorithm and a two-phase local search algorithm. The experimental results showed that MOACO algorithms have significant potential for solving the TSCP. This work has been supported by the Portuguese Fundação para a Ciência e a Tecnologia under PhD grant SFRH/BD/21921/2005. Thomas Stützle acknowledges support from the Belgian F.R.S.-FNRS, of which he is a research associate, from the E-SWARM project, funded by an ERC Advanced Grant, and from the Meta-X project, funded by the Scientific Research Directorate of the French Community of Belgium.
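
    As a rough illustration of the approach, the sketch below shows how an ant could build one screw sequence from pheromone and heuristic information, together with the Pareto-dominance test used to maintain an archive of non-dominated configurations. The data structures (a pairwise pheromone matrix, a per-element heuristic value) and the minimization convention are assumptions for illustration, not the exact MOACO variant studied in the paper.

        import random

        def construct_sequence(n_elements, pheromone, heuristic, alpha=1.0, beta=2.0):
            """Build one permutation of screw elements guided by pheromone and heuristic."""
            remaining = list(range(n_elements))
            sequence, prev = [], None
            while remaining:
                # Probability of placing element e next is proportional to
                # pheromone(prev, e)^alpha * heuristic(e)^beta.
                weights = [
                    ((pheromone[prev][e] if prev is not None else 1.0) ** alpha)
                    * (heuristic[e] ** beta)
                    for e in remaining
                ]
                chosen = random.choices(remaining, weights=weights, k=1)[0]
                sequence.append(chosen)
                remaining.remove(chosen)
                prev = chosen
            return sequence

        def dominates(a, b):
            """Pareto dominance (minimization) for maintaining the non-dominated archive."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))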

    Discovering Evolutionary Stepping Stones through Behavior Domination

    Behavior domination is proposed as a tool for understanding and harnessing the power of evolutionary systems to discover and exploit useful stepping stones. Novelty search has shown promise in overcoming deception by collecting diverse stepping stones, and several algorithms have been proposed that combine novelty with a more traditional fitness measure to refocus search and help novelty search scale to more complex domains. However, combinations of novelty and fitness do not necessarily preserve the stepping stone discovery that novelty search affords. In several existing methods, competition between solutions can lead to an unintended loss of diversity. Behavior domination defines a class of algorithms that avoid this problem, while inheriting theoretical guarantees from multiobjective optimization. Several existing algorithms are shown to be in this class, and a new algorithm is introduced based on fast non-dominated sorting. Experimental results show that this algorithm outperforms existing approaches in domains that contain useful stepping stones, and its advantage is sustained with scale. The conclusion is that behavior domination can help illuminate the complex dynamics of behavior-driven search, and can thus lead to the design of more scalable and robust algorithms. Comment: To appear in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2017).
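
    The class of behavior-domination algorithms builds on standard multiobjective machinery; the sketch below is the plain NSGA-II-style fast non-dominated sorting over (fitness, novelty) pairs that such an algorithm can adapt. It is not the paper's exact behavior-domination criterion, which additionally reasons about behavior distances.

        def fast_non_dominated_sort(points):
            """Group indices of objective vectors into Pareto fronts (maximization)."""
            n = len(points)
            dominated_by = [[] for _ in range(n)]   # solutions that i dominates
            domination_count = [0] * n              # number of solutions dominating i
            fronts = [[]]
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    if all(a >= b for a, b in zip(points[i], points[j])) and \
                       any(a > b for a, b in zip(points[i], points[j])):
                        dominated_by[i].append(j)
                    elif all(b >= a for a, b in zip(points[i], points[j])) and \
                         any(b > a for a, b in zip(points[i], points[j])):
                        domination_count[i] += 1
                if domination_count[i] == 0:
                    fronts[0].append(i)
            k = 0
            while fronts[k]:
                nxt = []
                for i in fronts[k]:
                    for j in dominated_by[i]:
                        domination_count[j] -= 1
                        if domination_count[j] == 0:
                            nxt.append(j)
                fronts.append(nxt)
                k += 1
            return fronts[:-1]                      # drop the trailing empty front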

    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet increasingly complex user needs. The composition process has long been adequate for dealing with services that offer disparate functionalities; however, over the years the number of web services that exhibit similar functionality but varying Quality of Service (QoS) has increased significantly. The problem then becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, in some cases (e.g., for response time or cost), minimized. This selection constitutes an NP-hard problem. In this paper, we discuss the concepts of web service composition and present a holistic review of the service composition techniques proposed in the literature. Our review spans several publications in the field and can serve as a road map for future research.
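
    To make the selection problem concrete, the toy sketch below picks one candidate service per abstract task so that a sequentially aggregated QoS is optimized under a response-time constraint. The attribute names and the additive/multiplicative aggregation rules are common conventions used for illustration, not the formulation of any single surveyed technique, and the exhaustive search is only meant to show why the problem is hard at scale.

        from itertools import product

        def aggregate(plan):
            """Sequential composition: response times add up, availabilities multiply."""
            total_rt = sum(s["response_time"] for s in plan)
            availability = 1.0
            for s in plan:
                availability *= s["availability"]
            return total_rt, availability

        def best_composition(candidates_per_task, max_response_time):
            """Exhaustively try one candidate per task (only feasible for toy sizes)."""
            best_plan, best_availability = None, -1.0
            for plan in product(*candidates_per_task):
                rt, availability = aggregate(plan)
                if rt <= max_response_time and availability > best_availability:
                    best_plan, best_availability = plan, availability
            return best_plan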

    Quality Measures of Parameter Tuning for Aggregated Multi-Objective Temporal Planning

    Parameter tuning is recognized today as a crucial ingredient when tackling an optimization problem. Several meta-optimization methods have been proposed to find the best parameter set for a given optimization algorithm and (set of) problem instances. When the objective of the optimization is some scalar quality of the solution given by the target algorithm, this quality is also used as the basis for the quality of parameter sets. But in the case of multi-objective optimization by aggregation, the set of solutions is given by several single-objective runs with different weights on the objectives, and it turns out that the hypervolume of the final population of each single-objective run might be a better indicator of the global performance of the aggregation method than the best fitness in its population. This paper discusses this issue in a case study on multi-objective temporal planning using the evolutionary planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show how ParamILS distinguishes between the two approaches and demonstrate that, in this context, using the hypervolume indicator as the ParamILS target is indeed the best choice. Other issues pertaining to parameter tuning in the proposed context are also discussed. Comment: arXiv admin note: substantial text overlap with arXiv:1305.116
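
    For reference, the hypervolume indicator used as the ParamILS target can be computed in the bi-objective case with a simple sweep; the sketch below assumes minimization and a user-chosen reference point, and is only meant to make the indicator concrete.

        def hypervolume_2d(points, ref):
            """Hypervolume dominated by a set of 2-D points w.r.t. ref (minimization)."""
            # Keep only points that strictly dominate the reference point.
            pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
            hv, prev_y = 0.0, ref[1]
            for x, y in pts:                   # sweep from left to right
                if y < prev_y:                 # non-dominated within the sweep
                    hv += (ref[0] - x) * (prev_y - y)
                    prev_y = y
            return hv

        # Example: hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0)) == 6.0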

    A convergence and diversity guided leader selection strategy for many-objective particle swarm optimization

    Recently, the particle swarm optimizer (PSO) has been extended to solve many-objective optimization problems (MaOPs) and has become a hot research topic in the field of evolutionary computation. In particular, the leader particle selection (LPS) and the search direction used in the velocity update strategy are two crucial factors in PSOs. However, the LPS strategies of most existing PSOs are not very efficient in high-dimensional objective spaces, mainly due to a lack of convergence pressure or a loss of diversity. In order to address these two issues and improve the performance of PSO in high-dimensional objective spaces, this paper proposes a convergence and diversity guided leader selection strategy for PSO, denoted CDLS, in which a different leader particle is adaptively selected for each particle according to its own convergence and diversity status. In this way, CDLS achieves a good tradeoff between convergence and diversity. To verify the effectiveness of CDLS, it is embedded into the PSO search process of three well-known PSOs. Furthermore, a new variant of PSO combined with the CDLS strategy, namely PSO/CDLS, is also presented. The experimental results validate the superiority of the proposed CDLS strategy and the effectiveness of PSO/CDLS when solving numerous MaOPs with regular and irregular Pareto fronts (PFs).
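
    A simplified stand-in for such a leader selection rule is sketched below: a particle that appears poorly converged follows the archive member with the best aggregated objective value, while an already well-converged particle follows the most isolated archive member. The concrete indicators (sum of objectives, nearest-neighbour distance) and the threshold are illustrative assumptions, not the exact measures defined by CDLS.

        import math

        def select_leader(particle_objs, archive_objs, threshold):
            """Return the archive index a particle should follow (minimization objectives)."""
            if len(archive_objs) == 1:
                return 0
            if sum(particle_objs) > threshold:
                # Poorly converged particle: follow the best aggregated archive member.
                return min(range(len(archive_objs)), key=lambda i: sum(archive_objs[i]))
            # Otherwise favour diversity: follow the most isolated archive member.
            def nearest_neighbour_distance(i):
                return min(math.dist(archive_objs[i], archive_objs[j])
                           for j in range(len(archive_objs)) if j != i)
            return max(range(len(archive_objs)), key=nearest_neighbour_distance)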

    Surrogate-assisted multiobjective optimization based on decomposition

    A number of surrogate-assisted evolutionary algorithms are being developed for tackling expensive multiobjective optimization problems. On the one hand, a relatively broad range of techniques from both machine learning and multiobjective optimization can be combined for this purpose. Different taxonomies exist in order to better delimit the design choices, advantages and drawbacks of existing approaches. On the other hand, assessing the relative performance of a given approach is a difficult task, since it depends on the characteristics of the problem at hand. In this paper, we focus on surrogate-assisted approaches using objective space decomposition as a core component. We propose a refined and fine-grained classification, ranging from EGO-like approaches to filtering or pre-screening. More importantly, we provide a comprehensive comparative study of a representative selection of state-of-the-art methods, together with simple baseline algorithms. We rely on selected benchmark functions taken from the bbob-biobj benchmarking test suite, which provides a variable range of objective function difficulties. Our empirical analysis highlights the effect of the available budget on the relative performance of each approach, and the impact of the training set and of the machine learning model construction on both solution quality and runtime efficiency.
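
    As an example of the filtering/pre-screening family discussed in the paper, the sketch below fits a Gaussian process on the scalarized values of already-evaluated solutions for one decomposition subproblem and only truly evaluates the candidate the model ranks best. The weighted-sum scalarization, the lower-confidence-bound infill rule and the function names are illustrative assumptions; the surveyed methods differ in model, scalarizing function and infill criterion.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def prescreen(archive_x, archive_f, weights, candidates, evaluate):
            """Evaluate only the most promising candidate for one scalarized subproblem."""
            scalar = np.asarray(archive_f) @ np.asarray(weights)    # weighted-sum scalarization
            model = GaussianProcessRegressor().fit(np.asarray(archive_x), scalar)
            mean, std = model.predict(np.asarray(candidates), return_std=True)
            best = int(np.argmin(mean - std))                       # lower-confidence-bound infill
            return candidates[best], evaluate(candidates[best])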

    Large Language Model for Multi-objective Evolutionary Optimization

    Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, whose search operators need a carefully handcrafted design based on domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required for designing and training such models, and the learned operators might not generalize well to new problems. To tackle these challenges, this work investigates a novel approach that leverages a powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for a decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM's behavior, we further design an explicit white-box operator with randomness and propose a new version of decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that the proposed method achieves competitive performance with widely used MOEAs. It is also promising to see that an operator learned from only a few instances can generalize robustly to unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.
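
    In the same spirit, the sketch below shows how an LLM could act as a zero-shot variation operator for one MOEA/D subproblem: the current neighborhood solutions and their scalarized values are packed into a prompt, and the reply is parsed back into a decision vector. The prompt wording, the parsing, and the query_llm placeholder are assumptions for illustration, not the paper's actual prompt engineering or API.

        import re

        def llm_variation(neighborhood, weight_vector, query_llm):
            """Ask an LLM for a new decision vector for one MOEA/D subproblem."""
            prompt = (
                "You are an optimization operator. Given these decision vectors and "
                f"their scalarized objective values for weights {weight_vector}:\n"
                + "\n".join(f"{x} -> {f:.4f}" for x, f in neighborhood)
                + "\nPropose one new decision vector, as comma-separated numbers, "
                  "that is likely to improve the scalarized objective."
            )
            reply = query_llm(prompt)                      # placeholder for an actual LLM call
            return [float(v) for v in re.findall(r"-?\d+(?:\.\d+)?", reply)]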

    On the design of an ECOC-compliant genetic algorithm

    Genetic Algorithms (GA) have previously been applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes the properties of the ECOC matrix into account, and as a result the considered search space is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This strategy redefines the usual crossover and mutation operators so as to take into account the theoretical properties of the ECOC framework, thereby reducing the search space and letting the algorithm converge faster. In addition, a novel operator that is able to enlarge the code in a smart way is introduced. The methodology is tested on several UCI datasets and four challenging computer vision problems. Furthermore, the analysis of the results in terms of performance, code length and number of Support Vectors shows that the optimization process is able to find very efficient codes in terms of the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification performance shows that the novel proposal obtains similar or even better results while defining a more compact set of dichotomies and SVs compared to state-of-the-art approaches.
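
    The sketch below gives a flavour of such property-aware operators: a mutation on a ternary ECOC matrix (rows = classes, columns = dichotomizers) that only keeps offspring satisfying basic ECOC validity constraints. The particular checks (each column separates at least two classes, no duplicated or complementary columns, no duplicated codewords) are illustrative; the operators proposed in the paper are more elaborate.

        import random

        def is_valid(M):
            """Basic validity checks for a ternary ECOC matrix with entries in {-1, 0, +1}."""
            cols = list(zip(*M))
            for c in cols:
                if 1 not in c or -1 not in c:                  # each column must split the classes
                    return False
            if len(set(cols)) != len(cols):                    # no duplicated columns
                return False
            for i, c in enumerate(cols):
                if tuple(-v for v in c) in set(cols[i + 1:]):  # no complementary columns
                    return False
            return len(set(map(tuple, M))) == len(M)           # no duplicated codewords (rows)

        def mutate(M, tries=50):
            """Flip one random entry, keeping only ECOC-valid offspring."""
            for _ in range(tries):
                child = [row[:] for row in M]
                i, j = random.randrange(len(M)), random.randrange(len(M[0]))
                child[i][j] = random.choice([v for v in (-1, 0, 1) if v != child[i][j]])
                if is_valid(child):
                    return child
            return M                                           # fall back to the parent unchanged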