During the past decades, evolutionary computation (EC) has demonstrated
promising potential in solving various complex optimization problems of
relatively small scales. Nowadays, however, ongoing developments in modern
science and engineering are bringing increasingly grave challenges to the
conventional EC paradigm in terms of scalability. As problem scales increase,
on the one hand, the encoding spaces (i.e., dimensions of the decision vectors)
are intrinsically larger; on the other hand, EC algorithms often require
growing numbers of function evaluations (and probably larger population sizes
as well) to work properly. Meeting such emerging challenges requires not only
delicate algorithm design but, more importantly, a high-performance computing
framework. Hence, we develop a distributed
GPU-accelerated algorithm library -- EvoX. First, we propose a generalized
workflow for implementing EC algorithms. Second, we design a scalable
computing framework for running EC algorithms on distributed GPU devices.
Third, we provide user-friendly interfaces for both researchers and
practitioners, supporting benchmark studies as well as extended real-world
applications. To comprehensively assess the performance of EvoX, we conduct a
series of experiments, including: (i) scalability test via numerical
optimization benchmarks with problem dimensions/population sizes up to
millions; (ii) acceleration test via a neuroevolution task with multiple GPU
nodes; (iii) extensibility demonstration via applications to reinforcement
learning tasks in OpenAI Gym. The code of EvoX is available at
https://github.com/EMI-Group/EvoX.
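To make the "generalized workflow" idea concrete, the following is a minimal sketch of the ask-tell pattern commonly used to structure EC algorithms: the algorithm proposes a population, the problem evaluates it, and the algorithm updates its state from the fitness values. All names here (`SimpleES`, `ask`, `tell`, `run`) are illustrative assumptions, not the EvoX API; a toy (mu, lambda) evolution strategy on the sphere benchmark stands in for a real algorithm/problem pair.

```python
import numpy as np


def sphere(x):
    """Separable benchmark problem: f(x) = sum(x_i^2), minimized at x = 0."""
    return np.sum(x * x, axis=-1)


class SimpleES:
    """Toy (mu, lambda) evolution strategy; names are illustrative only."""

    def __init__(self, dim, pop_size=100, elite_frac=0.2, sigma=0.3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.mean = self.rng.normal(size=dim)  # current search distribution mean
        self.sigma = sigma                     # mutation step size
        self.pop_size = pop_size
        self.n_elite = max(1, int(pop_size * elite_frac))

    def ask(self):
        # Propose a population around the current mean (fully vectorized,
        # which is what makes such loops easy to offload to GPUs).
        return self.mean + self.sigma * self.rng.normal(
            size=(self.pop_size, self.mean.size))

    def tell(self, pop, fitness):
        # Move the mean toward the best (lowest-fitness) individuals,
        # then decay the step size for convergence.
        elite = pop[np.argsort(fitness)[: self.n_elite]]
        self.mean = elite.mean(axis=0)
        self.sigma *= 0.97


def run(dim=10, generations=100):
    es = SimpleES(dim)
    for _ in range(generations):
        pop = es.ask()          # algorithm -> candidate solutions
        fit = sphere(pop)       # problem   -> fitness values
        es.tell(pop, fit)       # algorithm <- feedback
    return sphere(es.mean)
```

Because the per-generation work is expressed as array operations over the whole population, the same loop structure scales naturally as population size and dimension grow, which is the property a GPU-accelerated framework like EvoX exploits.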