3,408 research outputs found
Multiple source transfer learning for dynamic multiobjective optimization
Recently, dynamic multiobjective evolutionary algorithms (DMOEAs) with transfer learning have become popular for solving dynamic multiobjective optimization problems (DMOPs), as the transfer learning methods used in DMOEAs can effectively generate a good initial population for a new environment. However, most of them transfer only the non-dominated solutions from the previous one or two environments, which cannot fully exploit all historical information and may easily induce negative transfer, as only limited knowledge is available. To address this problem, this paper presents a multiple source transfer learning method for DMOEAs, called MSTL-DMOEA, which runs two transfer learning procedures to fully exploit the historical information from all previous environments. First, to select representative solutions for knowledge transfer, a clustering-based manifold transfer learning procedure clusters the non-dominated solutions of the last environment to obtain their centroids, which are then fed into the manifold transfer learning model to predict the corresponding centroids for the new environment. After that, multiple source transfer learning is run using multisource TrAdaBoost, which fully exploits both the predicted centroids in the new environment and the old centroids from all previous environments, aiming to construct a more accurate prediction model. In this way, MSTL-DMOEA can predict a higher-quality initial population for the new environment. Experimental results validate the superiority of MSTL-DMOEA over several competitive state-of-the-art DMOEAs in solving various kinds of DMOPs.
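The centroid-based prediction step described in this abstract can be sketched in simplified form. The function names, the basic k-means summarization, and the linear drift model below are illustrative stand-ins, not the paper's exact procedure:

```python
import numpy as np

def kmeans_centroids(points, k, iters=20, seed=0):
    """Summarize a non-dominated solution set by k cluster centroids
    using a basic k-means loop (illustrative clustering step)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each solution to its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def predict_new_centroids(prev_centroids, curr_centroids):
    """Shift current centroids by the drift observed between the last two
    environments -- a simple stand-in for the manifold transfer model."""
    return curr_centroids + (curr_centroids - prev_centroids)
```

The predicted centroids would then seed the initial population for the new environment; the multisource TrAdaBoost stage is outside the scope of this sketch.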
Evolutionary Dynamic Multi-Objective Optimisation: A Survey
Multiobjective genetic algorithm strategies for electricity production from generation IV nuclear technology
The development of a technico-economic optimization strategy for electricity/hydrogen cogeneration systems consists of finding an optimal efficiency of the generating cycle and heat delivery system, maximizing energy production while minimizing production costs. The first part of the paper describes the development of a multiobjective optimization library (MULTIGEN) to tackle all types of problems arising from cogeneration. After a literature review identifying the most efficient methods, the MULTIGEN library is described and its innovative points are listed. A new stopping criterion, based on the stagnation of the Pareto front, can lead to a significant decrease in computational time, particularly for problems involving only integer variables. Two practical examples are presented in the last section. The first is devoted to a bicriteria optimization of both exergy destruction and total plant cost, for a generating cycle coupled with a Very High Temperature Reactor (VHTR). The second consists of designing the heat exchanger of the generating turbomachine, optimizing three criteria: the exchange surface, the exergy destruction, and the number of exchange modules.
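A stagnation-based stopping criterion of the kind mentioned in this abstract can be sketched as follows. This is a minimal sketch assuming a point-wise comparison of the front between generations; the class name and tolerance handling are illustrative, not MULTIGEN's actual implementation:

```python
import numpy as np

class ParetoStagnationStopper:
    """Stop an evolutionary run once the Pareto front has not moved for
    `patience` consecutive generations (illustrative sketch)."""

    def __init__(self, patience=10, tol=1e-6):
        self.patience = patience
        self.tol = tol
        self.prev = None
        self.stale = 0

    def should_stop(self, front):
        # sort per objective so the comparison ignores solution ordering
        front = np.sort(np.asarray(front, dtype=float), axis=0)
        if (self.prev is not None and front.shape == self.prev.shape
                and np.max(np.abs(front - self.prev)) < self.tol):
            self.stale += 1
        else:
            self.stale = 0
        self.prev = front
        return self.stale >= self.patience
```

Called once per generation with the current non-dominated objective vectors, it returns True after `patience` generations without measurable change, which is what allows the reported savings in computational time.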
Bat Algorithm: Literature Review and Applications
The bat algorithm (BA) is a bio-inspired algorithm developed by Yang in 2010
that has been found to be very efficient. As a result, the literature on it has
expanded significantly in the last three years. This paper provides a timely
review of the bat algorithm and its new variants. A wide range of diverse
applications and case studies are also reviewed and briefly summarized, and
further research topics are discussed.
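For reference, the core update rules of the standard bat algorithm introduced by Yang can be written down directly; the sketch below implements only the frequency/velocity/position step, omitting the loudness and pulse-rate mechanics and the local random walk:

```python
import numpy as np

def bat_step(x, v, best, fmin=0.0, fmax=2.0, rng=None):
    """One iteration of the standard bat algorithm's movement update:
        f_i = f_min + (f_max - f_min) * beta
        v_i = v_i + (x_i - x*) * f_i
        x_i = x_i + v_i
    x, v: (n_bats, dim) arrays; best: current global best position x*."""
    rng = rng or np.random.default_rng(0)
    n = x.shape[0]
    f = fmin + (fmax - fmin) * rng.random((n, 1))  # random frequency per bat
    v = v + (x - best) * f
    return x + v, v
```

Bats far from the current best receive large velocity corrections toward it, while a bat sitting at the best position simply coasts on its existing velocity.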
Large Language Model for Multi-objective Evolutionary Optimization
Multiobjective evolutionary algorithms (MOEAs) are major methods for solving
multiobjective optimization problems (MOPs). Many MOEAs have been proposed over
the past decades, whose search operators require a carefully handcrafted design
with domain knowledge. Recently, some attempts have been made to replace
the manually designed operators in MOEAs with learning-based operators (e.g.,
neural network models). However, much effort is still required for designing
and training such models, and the learned operators might not generalize well
on new problems. To tackle the above challenges, this work investigates a novel
approach that leverages the powerful large language model (LLM) to design MOEA
operators. With proper prompt engineering, we successfully let a general LLM
serve as a black-box search operator for decomposition-based MOEA (MOEA/D) in a
zero-shot manner. In addition, by learning from the LLM behavior, we further
design an explicit white-box operator with randomness and propose a new version
of decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on
different test benchmarks show that our proposed method can achieve competitive
performance with widely used MOEAs. It is also promising that an operator
learned from only a few instances can generalize robustly to unseen problems
with quite different patterns and settings. The results reveal the potential
benefits of using pre-trained LLMs in the design of MOEAs.
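For context on where such an operator plugs in: MOEA/D scores any candidate an operator produces, whether hand-crafted, learned, or LLM-generated, against each subproblem through a scalarizing function. The Tchebycheff form shown below is one common choice; this is a generic sketch of that scalarization, not code from the paper:

```python
import numpy as np

def tchebycheff(fx, weights, z_star):
    """Tchebycheff scalarization used by decomposition-based MOEAs:
        g(x | w, z*) = max_i  w_i * |f_i(x) - z*_i|
    fx: objective vector of a candidate; weights: one subproblem's weight
    vector; z_star: the ideal point. MOEA/D minimizes g per subproblem."""
    fx, weights, z_star = map(np.asarray, (fx, weights, z_star))
    return float(np.max(weights * np.abs(fx - z_star)))
```

A candidate replaces a neighbor's solution whenever it achieves a lower Tchebycheff value for that neighbor's weight vector, which is how offspring from any black-box operator are absorbed into the population.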