Recent advances in Neural Architecture Search (NAS) such as one-shot NAS
offer the ability to extract specialized hardware-aware sub-network
configurations from a task-specific super-network. While considerable effort
has been devoted to improving the first stage, namely the training of the
super-network, the search for derivative high-performing sub-networks remains
under-explored. Popular methods decouple the super-network training from
the sub-network search and use performance predictors to reduce the
computational burden of searching on different hardware platforms. We propose a
flexible search framework that automatically and efficiently finds
sub-networks optimized for different performance metrics and hardware
configurations. Specifically, we show how evolutionary algorithms can be paired
with lightly trained objective predictors in an iterative cycle to accelerate
architecture search in a multi-objective setting across various modalities,
including machine translation and image classification.
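
To make the iterative cycle concrete, the following is a minimal, self-contained sketch of predictor-assisted multi-objective evolutionary search. It is not the paper's actual framework or API: the encoding sizes, the toy objective formulas in measure_objectives, the ridge-style predictor, the mutation scheme, and the scalarization weight are all illustrative assumptions. In practice, the measured objectives would come from evaluating sub-networks on the target hardware.

```python
# Minimal sketch: evolutionary search paired with lightly trained
# objective predictors in an iterative cycle (all names hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding: each sub-network is a vector of categorical
# choices (e.g., per-layer width/depth options) from the super-network.
NUM_CHOICES = 8          # design decisions per sub-network (assumption)
OPTIONS_PER_CHOICE = 4   # options per decision (assumption)

def sample_subnetwork():
    return rng.integers(0, OPTIONS_PER_CHOICE, size=NUM_CHOICES)

def measure_objectives(config):
    # Stand-in for real measurements (accuracy and latency proxies).
    # The formulas below are purely illustrative toys.
    accuracy = config.sum() / (NUM_CHOICES * (OPTIONS_PER_CHOICE - 1))
    latency = float((config ** 2).sum())
    return accuracy, latency

def fit_predictor(X, y):
    # "Lightly trained" predictor: least-squares regression on the
    # integer-encoded configurations plus a bias term.
    X1 = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.lstsq(X1, y, rcond=None)[0]
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w

def pareto_front(points):
    # Indices of points not dominated by any other
    # (maximize accuracy, minimize latency).
    keep = []
    for i, (a_i, l_i) in enumerate(points):
        dominated = any(a_j >= a_i and l_j <= l_i and (a_j, l_j) != (a_i, l_i)
                        for j, (a_j, l_j) in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Iterative cycle: measure a small warm-up set, fit cheap predictors,
# mutate Pareto-front parents to propose candidates, then spend the
# expensive measurement budget only on the most promising offspring.
measured_X, measured_acc, measured_lat = [], [], []
for _ in range(16):  # small warm-up set
    c = sample_subnetwork()
    a, l = measure_objectives(c)
    measured_X.append(c); measured_acc.append(a); measured_lat.append(l)

for generation in range(5):
    X = np.array(measured_X)
    acc_pred = fit_predictor(X, np.array(measured_acc))
    lat_pred = fit_predictor(X, np.array(measured_lat))

    # Evolutionary step: mutate current Pareto-front parents.
    front = pareto_front(list(zip(measured_acc, measured_lat)))
    offspring = []
    for idx in front:
        for _ in range(8):
            child = measured_X[idx].copy()
            pos = rng.integers(NUM_CHOICES)
            child[pos] = rng.integers(OPTIONS_PER_CHOICE)
            offspring.append(child)
    O = np.array(offspring)

    # Rank offspring with the cheap predictors; measure only the best few.
    score = acc_pred(O) - 0.01 * lat_pred(O)  # toy scalarization (assumption)
    for idx in np.argsort(score)[-4:]:
        a, l = measure_objectives(offspring[idx])
        measured_X.append(offspring[idx])
        measured_acc.append(a); measured_lat.append(l)

print("Pareto-front size:",
      len(pareto_front(list(zip(measured_acc, measured_lat)))))
```

The design point the sketch illustrates is the division of labor: the evolutionary step proposes many candidates cheaply, the predictors (retrained each generation on all measurements so far) filter them, and only a handful of configurations per generation incur the cost of evaluation on real hardware.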