Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Due to the curse of dimensionality of the search space, it is extremely difficult for evolutionary algorithms to approximate the optimal solutions of large-scale multiobjective optimization problems (LMOPs) within a limited budget of evaluations. If the Pareto optimal subspace is approximated during the evolutionary process, the search space can be reduced and the difficulty encountered by evolutionary algorithms can be greatly alleviated. Following this idea, this paper proposes an evolutionary algorithm for solving sparse LMOPs by learning the Pareto optimal subspace. The proposed algorithm uses two unsupervised neural networks, a restricted Boltzmann machine and a denoising autoencoder, to learn a sparse distribution and a compact representation of the decision variables; the combination of the learnt sparse distribution and compact representation is regarded as an approximation of the Pareto optimal subspace. The genetic operators are conducted in the learnt subspace, and the resulting offspring solutions are then mapped back to the original search space by the two neural networks. According to the experimental results on eight benchmark problems and eight real-world problems, the proposed algorithm can effectively solve sparse LMOPs with 10,000 decision variables using only 100,000 evaluations.
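The mechanism described in the abstract, with genetic operators applied in a learnt low-dimensional subspace and offspring decoded back to the original space, can be sketched as below. This is a minimal illustration only: the weights, dimensions, linear encoder/decoder, Bernoulli mask probabilities, and blend crossover are hypothetical stand-ins, not the paper's trained RBM and denoising autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 100, 8  # original and latent dimensionalities (illustrative)

# Hypothetical learnt parameters: in the proposed algorithm these would
# come from training an RBM (sparse distribution over which variables are
# nonzero) and a denoising autoencoder (compact real-valued code) on the
# current population of solutions.
W_enc = rng.normal(scale=0.1, size=(D, K))  # encoder weights (stand-in)
W_dec = W_enc.T                             # tied decoder weights (stand-in)
rbm_p = rng.uniform(0.0, 0.2, size=D)       # prob. each variable is nonzero

def encode(x):
    # Map a D-dimensional solution to a compact K-dimensional code.
    return np.tanh(x @ W_enc)

def decode(h, mask):
    # Map a code back to the original space, then enforce sparsity.
    return (h @ W_dec) * mask

# Genetic operation in the learnt subspace: crossover acts on K latent
# variables instead of all D decision variables.
parents = rng.normal(size=(2, D))
h1, h2 = encode(parents[0]), encode(parents[1])
alpha = rng.uniform(size=K)
child_code = alpha * h1 + (1 - alpha) * h2  # simple blend crossover

# Sample a sparse mask from the learnt distribution, then decode.
mask = (rng.uniform(size=D) < rbm_p).astype(float)
child = decode(child_code, mask)

print(child.shape, int(np.count_nonzero(child)))
```

The point of the sketch is the dimensionality reduction: variation operators touch only K latent values, while the mask keeps the decoded offspring sparse, mirroring the sparse Pareto optimal solutions the paper targets.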