Re-purposing Heterogeneous Generative Ensembles with Evolutionary Computation
Generative Adversarial Networks (GANs) are popular tools for generative
modeling. The dynamics of their adversarial learning give rise to convergence
pathologies during training such as mode and discriminator collapse. In machine
learning, ensembles of predictors demonstrate better results than a single
predictor for many tasks. In this study, we apply two evolutionary algorithms
(EAs) to create ensembles to re-purpose generative models, i.e., given a set of
heterogeneous generators that were optimized for one objective (e.g., minimize
Fréchet Inception Distance), create ensembles of them for optimizing a
different objective (e.g., maximize the diversity of the generated samples).
The first method constrains the ensemble to an exact size, while the second
restricts only the upper bound of the ensemble size. Experimental analysis on
the MNIST image benchmark demonstrates that both EA ensemble-creation methods
can re-purpose the models without reducing their original functionality. The
EA-based methods demonstrate significantly better performance than other
heuristic-based methods. Of the two evolutionary approaches, the one with only
an upper bound on the ensemble size performs best.
Comment: Accepted as a full paper for the Genetic and Evolutionary Computation
Conference - GECCO'2
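
The abstract does not specify the two EAs, so the following is only a toy
sketch of the general idea: evolving a subset of pre-trained generators under
an upper size bound to maximize a diversity objective. The mode-coverage
fitness, all parameters, and every name below are illustrative assumptions,
not the authors' method.

```python
# Toy sketch (not the paper's algorithm): evolutionary selection of a
# generator ensemble under an upper size bound, maximizing a diversity proxy.
import random

random.seed(0)

N_GENERATORS = 8
MAX_SIZE = 4       # upper bound on ensemble size (assumed, mirrors the 2nd EA)
POP_SIZE = 20
GENERATIONS = 30

# Each "generator" is a stand-in represented by the set of modes it covers
# (a toy proxy for which MNIST digit classes it can produce).
generator_modes = [set(random.sample(range(10), k=3)) for _ in range(N_GENERATORS)]

def diversity(ensemble):
    """Toy fitness: number of distinct modes covered by the ensemble."""
    covered = set()
    for g in ensemble:
        covered |= generator_modes[g]
    return len(covered)

def random_individual():
    """An individual is a duplicate-free list of generator indices."""
    size = random.randint(1, MAX_SIZE)
    return random.sample(range(N_GENERATORS), k=size)

def mutate(ind):
    """Add a new generator (if under the bound) or drop one at random."""
    ind = list(ind)
    if random.random() < 0.5 and len(ind) < MAX_SIZE:
        choices = [g for g in range(N_GENERATORS) if g not in ind]
        if choices:
            ind.append(random.choice(choices))
    elif len(ind) > 1:
        ind.remove(random.choice(ind))
    return ind

# Simple (mu + lambda)-style loop: mutate, pool, keep the fittest.
population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = [mutate(random.choice(population)) for _ in range(POP_SIZE)]
    population = sorted(population + offspring, key=diversity, reverse=True)[:POP_SIZE]

best = max(population, key=diversity)
print(sorted(best), diversity(best))
```

The fixed-size variant described in the abstract would differ only in that
`mutate` swaps one generator for another instead of growing or shrinking the
list, keeping `len(ind)` constant.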