
    Large Language Model for Multi-objective Evolutionary Optimization

    Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, whose search operators require careful handcrafted design with domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required to design and train such models, and the learned operators might not generalize well to new problems. To tackle these challenges, this work investigates a novel approach that leverages a powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, a general LLM can serve as a black-box search operator for the decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM's behavior, we further design an explicit white-box operator with randomness and propose a new version of the decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that the proposed method achieves competitive performance with widely used MOEAs. It is also promising that an operator learned from only a few instances exhibits robust generalization on unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.
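
    As a rough illustration of the black-box mode described in this abstract, the sketch below assumes a hypothetical query_llm client and an illustrative prompt format (neither taken from the paper) and shows how parent solutions for one MOEA/D subproblem could be serialized into a prompt and the LLM's reply parsed back into a candidate offspring.

```python
import re
import random

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a general-purpose LLM API."""
    raise NotImplementedError("plug in an actual LLM client here")

def llm_offspring(parents, weights, bounds):
    """Ask an LLM to propose a new decision vector for one MOEA/D subproblem.

    parents : list of (decision_vector, objective_vector) pairs
    weights : weight vector of the current subproblem
    bounds  : (lower, upper) box bounds shared by all decision variables
    """
    lines = [
        "You are a search operator in a decomposition-based multiobjective EA.",
        f"Subproblem weight vector: {list(weights)}",
        "Parent solutions (decision vector -> objectives):",
    ]
    for x, f in parents:
        lines.append(f"{list(x)} -> {list(f)}")
    lines.append("Propose one new decision vector, comma separated, nothing else.")
    reply = query_llm("\n".join(lines))

    # Parse the numbers in the reply and clip them to the feasible box.
    values = [float(v) for v in re.findall(r"[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?", reply)]
    lo, hi = bounds
    dim = len(parents[0][0])
    child = [min(max(v, lo), hi) for v in values[:dim]]
    # Fall back to a Gaussian perturbation of a parent if parsing failed.
    if len(child) < dim:
        child = [min(max(b + random.gauss(0, 0.1 * (hi - lo)), lo), hi)
                 for b in parents[0][0]]
    return child
```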

    An Effective Ensemble Framework for Multi-Objective Optimization

    This work was supported by the National Natural Science Foundation of China under Grants 61876110, 61876163, and 61836005, a grant from the ANR/RGC Joint Research Scheme sponsored by the Research Grants Council of the Hong Kong Special Administrative Region, China and the France National Research Agency (Project No. A-CityU101/16), the Joint Funds of the National Natural Science Foundation of China under Key Program Grant U1713212, and CONACyT grant no. 221551. Peer reviewed. Postprint.

    Scalarizing Functions in Decomposition-Based Multiobjective Evolutionary Algorithms

    Decomposition-based multiobjective evolutionary algorithms (MOEAs) have received increasing research interest due to their high performance in solving multiobjective optimization problems. However, scalarizing functions (SFs), which play a crucial role in balancing diversity and convergence in such algorithms, have not been fully investigated. This paper is mainly devoted to presenting two new SFs and analyzing their effect in decomposition-based MOEAs. Additionally, we propose an efficient framework for decomposition-based MOEAs based on the proposed SFs and some new strategies. Extensive experimental studies have demonstrated the effectiveness of the proposed SFs and algorithm.
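
    For context, two scalarizing functions commonly used in decomposition-based MOEAs are the weighted sum and the Tchebycheff function; the minimal sketch below shows these standard forms only, not the two new SFs proposed in the paper.

```python
def weighted_sum(f, weights):
    """Weighted-sum scalarization: g(x | w) = sum_i w_i * f_i(x)."""
    return sum(w * fi for w, fi in zip(weights, f))

def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z_i*|,
    where z* is the (estimated) ideal point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# Example: two objectives, weight vector (0.4, 0.6), ideal point (0, 0)
print(weighted_sum([1.0, 2.0], [0.4, 0.6]))         # 1.6
print(tchebycheff([1.0, 2.0], [0.4, 0.6], [0, 0]))  # 1.2
```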

    Portfolio Optimization Using Evolutionary Algorithms

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Portfolio optimization is a widely studied field in modern finance. It involves finding the optimal balance between two conflicting objectives: risk and return. As the number of assets rises, the complexity of portfolios increases considerably, making optimization a computational challenge. This report explores the application of the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and the Genetic Algorithm (GA) to portfolio optimization. MOEA/D and GA have proven to be effective at finding portfolios; however, it remains unclear how they perform compared to traditional approaches used in finance. To investigate this, a framework for portfolio optimization is proposed, using MOEA/D and GA separately as optimization algorithms, and the Capital Asset Pricing Model (CAPM) and the Mean-Variance Model as methods to evaluate portfolios. The proposed framework successfully produces weighted portfolios. These generated portfolios were evaluated using a simulation with subsequent (unseen) prices of the assets included in the portfolio. The simulation was compared with well-known portfolios in the same market and other market benchmarks (Security Market Line and Market Portfolio). The results obtained in this investigation exceeded expectations by creating portfolios that perform better than the market. CAPM and the Mean-Variance Model, although they fail to model all the variables that affect the stock market, provide a simple valuation for assets and portfolios. MOEA/D using Differential Evolution operators and the CAPM model produced the best portfolios in this research. Work can still be done to accommodate more variables that affect markets and portfolios, such as taxes, investment horizon, and transaction costs.
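
    As a reference point for the Mean-Variance evaluation mentioned above, the sketch below computes the expected return and variance of a weighted portfolio from asset return statistics; it is the generic textbook formulation with made-up toy numbers, not the dissertation's exact implementation or data.

```python
import numpy as np

def portfolio_stats(weights, mean_returns, cov_matrix):
    """Mean-Variance evaluation of a weighted portfolio.

    weights      : portfolio weights summing to 1
    mean_returns : expected return of each asset
    cov_matrix   : covariance matrix of asset returns
    Returns (expected return, variance) of the portfolio.
    """
    w = np.asarray(weights, dtype=float)
    expected_return = float(w @ np.asarray(mean_returns, dtype=float))
    variance = float(w @ np.asarray(cov_matrix, dtype=float) @ w)
    return expected_return, variance

# Toy example with three assets (illustrative numbers only)
mu = [0.08, 0.12, 0.05]
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.03]]
ret, var = portfolio_stats([0.5, 0.3, 0.2], mu, cov)
print(ret, var ** 0.5)  # expected return and volatility (std. deviation)
```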

    Adaptive multimodal continuous ant colony optimization

    Seeking multiple optima simultaneously, which multimodal optimization aims at, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization algorithms to preserve high diversity, this paper extends ant colony optimization to multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ant colony optimization algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed that takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Then, to enhance exploitation, a local search scheme based on a Gaussian distribution is self-adaptively performed around the seeds of niches. Together, these components afford a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with high numbers of local optima.
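
    To make the base-vector construction and local-search steps above concrete, here is a minimal sketch of a DE/rand/1-style mutation and a Gaussian sample around a niche seed; the fixed F and sigma values are simplifying assumptions, not the paper's adaptive scheme.

```python
import random

def de_rand1_base(population, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), using three distinct
    randomly chosen individuals, as in standard differential evolution."""
    r1, r2, r3 = random.sample(population, 3)
    return [a + F * (b - c) for a, b, c in zip(r1, r2, r3)]

def gaussian_local_search(seed, sigma, bounds):
    """Sample a candidate from a Gaussian centred on a niche seed and clip it
    to the search bounds (fixed sigma here; the paper adapts it per niche)."""
    lo, hi = bounds
    return [min(max(random.gauss(x, sigma), lo), hi) for x in seed]

# Toy usage on a 2-D problem in [-5, 5]
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
print(de_rand1_base(pop))
print(gaussian_local_search(pop[0], sigma=0.1, bounds=(-5, 5)))
```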

    Towards Better Integration of Surrogate Models and Optimizers

    Surrogate-Assisted Evolutionary Algorithms (SAEAs) have proven to be very effective in solving (synthetic and real-world) computationally expensive optimization problems with a limited number of function evaluations. The two main components of SAEAs are the surrogate model and the evolutionary optimizer, both of which use parameters to control their respective behavior. These parameters are likely to interact closely, and hence the exploitation of any such relationships may lead to the design of an enhanced SAEA. In this chapter, as a first step, we focus on Kriging and the Efficient Global Optimization (EGO) framework. We discuss potentially profitable ways to better integrate the model and the optimizer. Furthermore, we investigate in depth how different parameters of the model and the optimizer impact optimization results. In particular, we determine whether there are any interactions between these parameters, and how the problem characteristics impact optimization results. In the experimental study, we use the popular Black-Box Optimization Benchmarking (BBOB) testbed. Interestingly, the analysis finds no evidence for significant interactions between model and optimizer parameters, but independently their performance has a significant interaction with the objective function. Based on our results, we make recommendations on how best to configure EGO.
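
    For readers unfamiliar with EGO, the sketch below shows the standard expected-improvement acquisition function that EGO maximizes over a Kriging model's predictive mean and standard deviation; it reproduces only the textbook formula, not the chapter's specific parameterizations.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """Expected improvement for minimization:
    EI(x) = (f_best - mu - xi) * Phi(z) + sigma * phi(z),
    with z = (f_best - mu - xi) / sigma, where mu and sigma are the Kriging
    predictive mean and standard deviation at a candidate point."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improvement = f_best - mu - xi
    safe_sigma = np.where(sigma > 0, sigma, 1.0)  # avoid division by zero
    z = improvement / safe_sigma
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)  # EI is zero where the model is certain

# Example: predictions at three candidate points, current best value 1.0
print(expected_improvement(mu=[0.8, 1.2, 1.0], sigma=[0.1, 0.3, 0.0], f_best=1.0))
```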