39 research outputs found

    Multi-objective Optimization by Uncrowded Hypervolume Gradient Ascent

    Evolutionary algorithms (EAs) are the preferred method for solving black-box multi-objective optimization problems, but when gradients of the objective functions are available, it is not straightforward to exploit these efficiently. By contrast, gradient-based optimization is well-established for single-objective optimization. A single-objective reformulation of the multi-objective problem could therefore offer a solution. Of particular interest to this end is the recently introduced uncrowded hypervolume (UHV) indicator, which takes dominated solutions into account. In this work, we show that the gradient of the UHV can often be computed, which allows for a direct application of gradient ascent algorithms. We compare this new approach with two EAs for UHV optimization as well as with one gradient-based algorithm for optimizing the well-established hypervolume. On several bi-objective benchmarks, we find that gradient-based algorithms outperform the tested EAs, obtaining a better hypervolume with fewer evaluations, whenever exact gradients of the multiple objective functions are available and evaluation budgets are small. For larger budgets, however, EAs perform similarly or better. We further find that, when finite differences are used to approximate the gradients of the multiple objectives, our new gradient-based algorithm is still competitive with EAs in most considered benchmarks. Implementations are available at https://github.com/scmaree/uncrowded-hypervolume.
    Comment: T.M.D. and S.C.M. contributed equally. The final authenticated version is available in the conference proceedings of Parallel Problem Solving from Nature - PPSN XVI. Changes in new version: removed statement about Pareto compliance in abstract; added related work; corrected minor mistake.
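    The idea underlying the abstract above can be illustrated with a minimal sketch (not the authors' UHV implementation): treat the hypervolume of a whole solution set as a single scalar objective and ascend it with finite-difference gradients, as in the finite-difference variant the paper evaluates. The toy problem, the reference point, and all function names below are illustrative assumptions; note the sketch uses the plain hypervolume, so dominated solutions receive a zero gradient, which is exactly the pathology the UHV indicator is designed to remove.

    ```python
    import numpy as np

    def objectives(x):
        # Toy bi-objective problem (both minimized): f1 = x^2, f2 = (x - 1)^2.
        # Its Pareto set is x in [0, 1]. One scalar decision variable per solution.
        return np.stack([x**2, (x - 1.0)**2], axis=1)

    def hypervolume_2d(F, ref):
        # Area dominated by the points in F (minimization) up to reference point `ref`,
        # swept left-to-right over the sorted front.
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in F[np.argsort(F[:, 0])]:
            if f1 < ref[0] and f2 < prev_f2:  # dominated points contribute nothing
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv

    def hv_gradient_fd(x, ref, eps=1e-6):
        # Forward-difference approximation of d HV / d x_j for every solution j.
        # A dominated solution gets a zero gradient here and would stall -- the
        # UHV indicator in the paper avoids exactly this issue.
        base = hypervolume_2d(objectives(x), ref)
        grad = np.zeros_like(x)
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += eps
            grad[j] = (hypervolume_2d(objectives(xp), ref) - base) / eps
        return grad

    ref = np.array([2.0, 2.0])
    x = np.array([0.1, 0.45, 0.8])  # three mutually non-dominated solutions
    hv_start = hypervolume_2d(objectives(x), ref)
    for _ in range(200):
        x = x + 0.05 * hv_gradient_fd(x, ref)  # plain gradient ascent on the set HV
    hv_end = hypervolume_2d(objectives(x), ref)
    ```

    After the ascent loop the set hypervolume has strictly improved; the paper's actual algorithm replaces both the indicator (UHV instead of HV) and the plain ascent step (e.g. more robust gradient-based optimizers) in this sketch.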

    An ensemble indicator-based density estimator for evolutionary multi-objective optimization

    Ensemble learning is one of the most widely employed methods in machine learning. Its main idea is the construction of stronger mechanisms from the combination of elementary ones. In this paper, we employ AdaBoost, one of the best-known ensemble methods, to generate an ensemble indicator-based density estimator for multi-objective optimization. It combines the search properties of five density estimators, based on the hypervolume, R2, IGD+, ε+, and ∆p quality indicators. Throughout the multi-objective evolutionary search process, the proposed ensemble mechanism adapts itself using a learning process that takes the preferences of the underlying quality indicators into account. The proposed method gives rise to the ensemble indicator-based multi-objective evolutionary algorithm (EIB-MOEA), which shows robust performance on different multi-objective optimization problems when compared with several existing indicator-based multi-objective evolutionary algorithms.
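    The core mechanism described above, combining the removal preferences of several quality indicators into one density estimator, can be sketched as follows. This is a heavily simplified illustration, not EIB-MOEA itself: it uses only two of the five indicators (exclusive hypervolume contribution and an ε+-style score), fixed weights in place of the AdaBoost-driven adaptation, and function names of my own invention.

    ```python
    import numpy as np

    def hypervolume_2d(F, ref):
        # Area dominated by the points in F (minimization) up to reference point `ref`.
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in F[np.argsort(F[:, 0])]:
            if f1 < ref[0] and f2 < prev_f2:
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv

    def hv_contributions(F, ref):
        # Exclusive hypervolume contribution of each point: HV(all) - HV(all minus i).
        total = hypervolume_2d(F, ref)
        return np.array([total - hypervolume_2d(np.delete(F, i, axis=0), ref)
                         for i in range(len(F))])

    def eps_contributions(F):
        # Additive eps+-style score: how far point i is from being dominated by
        # some other point in the set (larger = more valuable).
        n = len(F)
        return np.array([min(np.max(F[i] - F[j]) for j in range(n) if j != i)
                         for i in range(n)])

    def worst_by_ensemble(F, ref, weights):
        # Weighted vote over the (normalized) per-indicator scores; the individual
        # with the lowest combined score is the density estimator's removal choice.
        # In EIB-MOEA the weights would be adapted online; here they are fixed.
        scores = np.stack([hv_contributions(F, ref), eps_contributions(F)])
        scores = scores / np.max(np.abs(scores), axis=1, keepdims=True)
        return int(np.argmin(weights @ scores))

    # Four non-dominated points; the second one crowds the first.
    F = np.array([[0.0, 1.0], [0.05, 0.95], [0.5, 0.5], [1.0, 0.0]])
    worst = worst_by_ensemble(F, np.array([2.0, 2.0]), np.array([0.5, 0.5]))
    # Both indicators penalize the crowded point, so the ensemble removes index 1.
    ```

    The design point the abstract makes is that no single indicator's preference is trusted in isolation; each indicator scores the population in its own terms, and the combination decides which individual the environmental selection discards.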