Optimizing multiple competing objectives is a common problem across science
and industry. The inherent trade-offs between these objectives lead to the
task of exploring their Pareto front. A key quantity for this purpose is the
hypervolume indicator, which is widely used in
Bayesian Optimization (BO) and Evolutionary Algorithms (EAs). However, the
computational cost of the hypervolume calculation scales unfavorably with the
number of objectives and data points, which restricts its use in these common
multi-objective optimization frameworks. To
overcome these restrictions we propose to approximate the hypervolume function
with a deep neural network, which we call DeepHV. For better sample efficiency
and generalization, we exploit the fact that the hypervolume is
scale-equivariant in each of the objectives as well as permutation invariant
w.r.t. both the objectives and the samples, by using a deep neural network that
is equivariant w.r.t. the combined group of scalings and permutations. We
evaluate our method against exact and approximate hypervolume methods in terms
of accuracy, computation time, and generalization. We also apply and compare
our methods to state-of-the-art multi-objective BO methods and EAs on a range
of synthetic benchmark test cases. The results show that our methods are
promising for such multi-objective optimization tasks.

Comment: Updated with camera-ready version. Accepted at ICLR 202
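To make the symmetries mentioned above concrete, here is a minimal sketch (not the paper's DeepHV model): an exact 2D hypervolume computation for a maximization problem, on which the scale-equivariance and sample-permutation invariance properties can be checked directly. The function name, test points, and reference point are illustrative assumptions.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact 2D hypervolume (maximization) w.r.t. a reference point `ref`.

    Sweeps the points by decreasing first objective and accumulates the
    area of the dominated region; dominated points contribute nothing.
    """
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(-pts[:, 0])]  # sort by first objective, descending
    hv, prev_y = 0.0, float(ref[1])
    for x, y in pts:
        if y > prev_y:  # non-dominated in this sweep
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Illustrative Pareto front and reference point (assumed for this sketch).
Y = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
ref = np.array([0.0, 0.0])
hv = hypervolume_2d(Y, ref)  # -> 6.0

# Scale-equivariance: scaling each objective by s_i scales the
# hypervolume by prod(s_i); here scaling both by 2 multiplies it by 4.
assert np.isclose(hypervolume_2d(2.0 * Y, 2.0 * ref), 4.0 * hv)

# Permutation invariance w.r.t. samples: reordering the points
# leaves the hypervolume unchanged.
assert np.isclose(hypervolume_2d(Y[[2, 0, 1]], ref), hv)
```

Exact sweeps like this are cheap in 2D, but the cost of exact hypervolume computation grows quickly with the number of objectives, which is the bottleneck the abstract's neural approximation targets.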