MTrainS: Improving DLRM training efficiency using heterogeneous memories
Recommendation models are very large, requiring terabytes (TB) of memory
during training. In pursuit of better quality, model size and complexity
grow over time, which in turn requires additional training data to avoid
overfitting. This growth demands substantial data center resources, so
training efficiency is becoming considerably more important for keeping data
center power demand manageable. In Deep Learning Recommendation Models
(DLRM), the sparse features that capture categorical inputs through embedding
tables are the major contributors to model size and require high memory
bandwidth.
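For intuition, here is a minimal PyTorch sketch (not from the paper; the
table cardinalities are hypothetical and scaled far below the production
TB-scale models described above) showing how embedding tables dwarf the
dense parameters, and why each step's row gathers stress memory bandwidth:

```python
# Sketch: embedding tables dominate DLRM size; per-step gathers are the
# bandwidth-heavy part. Cardinalities/dims below are illustrative only.
import torch
import torch.nn as nn

# Hypothetical categorical features: (cardinality, embedding_dim)
table_specs = [(1_000_000, 64), (250_000, 64), (50_000, 64)]

tables = nn.ModuleList(
    nn.EmbeddingBag(num_rows, dim, mode="sum") for num_rows, dim in table_specs
)
dense_mlp = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))

embed_params = sum(p.numel() for t in tables for p in t.parameters())
dense_params = sum(p.numel() for p in dense_mlp.parameters())
print(f"embedding params: {embed_params:,}")  # ~83M
print(f"dense params:     {dense_params:,}")  # ~17K

# Each training step gathers rows by categorical ID; these random-access
# reads of embedding rows are what demands high memory bandwidth.
ids = torch.randint(0, table_specs[0][0], (1024,))
offsets = torch.arange(0, 1024)       # one ID per sample
pooled = tables[0](ids, offsets)      # gather + pool 64-float rows
```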
paper, we study the bandwidth requirement and locality of embedding tables in
real-world deployed models. We observe that the bandwidth requirement is not
uniform across different tables and that embedding tables show high temporal
locality. We then design MTrainS, which leverages heterogeneous memory,
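Both observations can be illustrated with a small self-contained simulation;
the Zipf-like trace, table count, and LRU cache size below are assumptions
for demonstration, not the paper's measured data:

```python
# Synthetic demo of (1) per-table bandwidth skew and (2) temporal locality,
# approximated as the hit rate of a small LRU cache over touched rows.
import random
from collections import Counter, OrderedDict

random.seed(0)
NUM_TABLES, ROWS_PER_TABLE, ACCESSES = 8, 100_000, 200_000

# Assumed skewed table popularity and Zipf-like row popularity.
table_weights = [2 ** -i for i in range(NUM_TABLES)]
trace = []
for _ in range(ACCESSES):
    t = random.choices(range(NUM_TABLES), weights=table_weights)[0]
    row = min(int(random.paretovariate(1.2)), ROWS_PER_TABLE - 1)
    trace.append((t, row))

# Observation 1: access (bandwidth) demand is not uniform across tables.
per_table = Counter(t for t, _ in trace)
for t in range(NUM_TABLES):
    print(f"table {t}: {per_table[t] / ACCESSES:6.1%} of accesses")

# Observation 2: temporal locality -- recently touched rows hit again soon.
lru, capacity, hits = OrderedDict(), 4096, 0
for key in trace:
    if key in lru:
        hits += 1
        lru.move_to_end(key)
    else:
        lru[key] = None
        if len(lru) > capacity:
            lru.popitem(last=False)
print(f"LRU({capacity}) hit rate: {hits / ACCESSES:.1%}")
```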
We then design MTrainS, which leverages heterogeneous memory, including
byte-addressable and block-addressable Storage Class Memory (SCM),
hierarchically for DLRM. MTrainS allows for higher memory capacity per node
and increases training efficiency by reducing the need to scale out to
multiple hosts in memory-capacity-bound use cases. By optimizing the platform
memory hierarchy, we reduce the number of training nodes by 4-8X, saving
power and training cost while meeting our target training performance
- …
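The abstract does not spell out MTrainS's placement algorithm. As one
plausible reading of hierarchical placement across heterogeneous memory,
here is a minimal sketch of a frequency-based policy; the tier capacities,
table profiles, and greedy strategy are all assumptions for illustration:

```python
# Sketch: place the most bandwidth-hungry tables in the fastest memory tier
# with room, spilling the long tail down to SCM. All numbers hypothetical.
from dataclasses import dataclass

@dataclass
class Table:
    name: str
    size_gb: float
    accesses_per_step: int  # proxy for bandwidth demand

TIERS = [("HBM/DRAM", 256.0), ("byte-addressable SCM", 1024.0),
         ("block-addressable SCM", 8192.0)]

def place(tables: list[Table]) -> dict[str, str]:
    """Greedily assign hot tables to the fastest tier that still has
    capacity, so scarce fast memory serves the highest access rates."""
    placement, free = {}, {tier: cap for tier, cap in TIERS}
    for tbl in sorted(tables, key=lambda t: t.accesses_per_step, reverse=True):
        for tier, _ in TIERS:
            if free[tier] >= tbl.size_gb:
                free[tier] -= tbl.size_gb
                placement[tbl.name] = tier
                break
    return placement

tables = [Table("user_id", 400.0, 90_000), Table("item_id", 200.0, 60_000),
          Table("geo", 2.0, 50_000), Table("rare_cat", 900.0, 100)]
for name, tier in place(tables).items():
    print(f"{name:10s} -> {tier}")
```

Keeping the bandwidth-critical tables in fast memory while the cold tail
spills to SCM is what lets added per-node capacity stand in for additional
hosts in memory-capacity-bound training jobs.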