Parallelizing Training of Deep Generative Models on Massive Scientific Datasets
Training deep neural networks on large scientific data is a challenging task
that requires enormous compute power, especially if no pre-trained models exist
to initialize the process. We present a novel tournament method to train
traditional neural networks as well as generative adversarial networks (GANs), built on LBANN, a
scalable deep learning framework optimized for HPC systems. LBANN combines
multiple levels of parallelism and exploits some of the world's largest
supercomputers. We demonstrate our framework by creating a complex predictive
model based on multi-variate data from high-energy-density physics containing
hundreds of millions of images and hundreds of millions of scalar values
derived from tens of millions of simulations of inertial confinement fusion.
Our approach combines an HPC workflow with extensions to LBANN, namely optimized
data ingestion and the new tournament-style training algorithm, to produce a scalable
neural network architecture using a CORAL-class supercomputer. Experimental
results show that 64 trainers (1024 GPUs) achieve a 70.2x speedup over a
single-trainer (16 GPU) baseline and an effective parallel efficiency of 109%.
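
A quick consistency check of the quoted efficiency, assuming the standard definition of parallel efficiency as measured speedup divided by the factor of additional resources: 64 trainers represent 64x the baseline resources, so E = S/N = 70.2/64 ≈ 1.10, in line with the slightly superlinear (~109%) efficiency reported.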
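
The tournament-style training mentioned above can be pictured roughly as follows. This is only an illustrative sketch under assumed semantics (independent trainers that periodically pair up, exchange models, and keep whichever model scores better on their own validation data); the Trainer class, tournament_round function, random pairing, and placeholder train/validate logic are hypothetical and do not reflect LBANN's actual interface.

```python
# Illustrative sketch of tournament-style training across independent trainers.
# NOT LBANN's API: Trainer, tournament_round, and the placeholder train/validate
# logic are hypothetical stand-ins for the real distributed implementation.
import random
from dataclasses import dataclass, field

@dataclass
class Trainer:
    rank: int
    model: dict = field(default_factory=dict)   # stand-in for model weights

    def train_round(self, steps: int) -> None:
        # Placeholder for local SGD on this trainer's shard of the dataset.
        self.model["steps"] = self.model.get("steps", 0) + steps

    def validate(self, model: dict) -> float:
        # Placeholder validation loss on this trainer's local data; lower is better.
        return random.random() / (1 + model.get("steps", 0))

def tournament_round(trainers: list, steps: int) -> None:
    # 1) Every trainer trains its own model replica independently.
    for t in trainers:
        t.train_round(steps)
    # 2) Trainers are paired at random; each member of a pair evaluates both
    #    models on its own validation data and keeps the better one.
    order = random.sample(trainers, len(trainers))
    for a, b in zip(order[::2], order[1::2]):
        for t, rival_model in ((a, b.model), (b, a.model)):
            if t.validate(rival_model) < t.validate(t.model):
                t.model = dict(rival_model)      # adopt the winning model

trainers = [Trainer(rank=i) for i in range(8)]
for _ in range(4):                               # a few tournament rounds
    tournament_round(trainers, steps=100)
```

The key property of such a scheme is that trainers never synchronize gradients; they only exchange full models at round boundaries, which keeps communication infrequent and lets each trainer scale its own data-parallel training independently.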