HetSeq: Distributed GPU Training on Heterogeneous Infrastructure
Modern deep learning systems like PyTorch and TensorFlow are able to train
enormous models with billions (or trillions) of parameters on a distributed
infrastructure. These systems require that the internal nodes have the same
memory capacity and compute performance. Unfortunately, most organizations,
especially universities, take a piecemeal approach to purchasing computer
systems, resulting in a heterogeneous infrastructure that cannot be used to
train large models. The present work describes HetSeq, a software package
adapted from the popular PyTorch package that provides the capability to train
large neural network models on heterogeneous infrastructure. Experiments with
transformer translation and the BERT language model show that HetSeq scales over
heterogeneous systems. HetSeq can be easily extended to other models such as image
classification. The package, with supporting documentation, is publicly available at
https://github.com/yifding/hetseq.

Comment: 7 pages, 3 tables, 2 figures
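To make the distributed setup concrete, below is a minimal sketch of the standard PyTorch DistributedDataParallel training loop that a package in this space builds on. This is not HetSeq's actual API (see the linked repository for that); the model, tensor sizes, and launch command are illustrative assumptions. Launch one process per GPU on each node, e.g. with torchrun --nproc_per_node=<num_gpus> train.py.

# Hedged sketch of a plain PyTorch DDP loop, NOT HetSeq's API.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")

    # Toy model; a real run would build a transformer or BERT here.
    model = torch.nn.Linear(1024, 1024).to(device)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):
        # Per-node batch sizes could differ on heterogeneous hardware.
        x = torch.randn(32, 1024, device=device)
        loss = ddp_model(x).sum()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across processes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Note that vanilla DDP implicitly assumes comparable nodes, since every step waits on the slowest worker's gradient all-reduce; handling nodes with differing memory and compute is precisely the gap the paper says HetSeq addresses.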