RecD: Deduplication for End-to-End Deep Learning Recommendation Model Training Infrastructure
We present RecD (Recommendation Deduplication), a suite of end-to-end
infrastructure optimizations across the Deep Learning Recommendation Model
(DLRM) training pipeline. RecD addresses immense storage, preprocessing, and
training overheads caused by feature duplication inherent in industry-scale
DLRM training datasets. Feature duplication arises because DLRM datasets are
generated from user interactions: while each user session can generate multiple
training samples, many feature values do not change across these samples. We
demonstrate how RecD exploits this property, end-to-end, across a deployed
training pipeline. RecD optimizes data generation pipelines to decrease dataset
storage and preprocessing resource demands and to maximize duplication within a
training batch. RecD introduces a new tensor format, InverseKeyedJaggedTensors
(IKJTs), to deduplicate feature values in each batch. We show how DLRM model
architectures can leverage IKJTs to drastically increase training throughput.
RecD improves the training and preprocessing throughput and storage efficiency
by up to 2.48x, 1.79x, and 3.71x, respectively, in an industry-scale DLRM
training system.

Comment: Published in the Proceedings of the Sixth Conference on Machine Learning and Systems (MLSys 2023).
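The core idea behind deduplicating feature values in a batch, factoring repeated values into a table of unique values plus inverse indices so downstream work (e.g., embedding lookups) runs once per unique value, can be sketched as follows. This is a minimal illustrative sketch using NumPy; the function names and layout are assumptions, not the paper's IKJT API.

```python
import numpy as np

def deduplicate_feature(values):
    """Factor a batch of feature values into (unique values, inverse indices).
    Illustrative of the deduplication principle, not the actual IKJT format."""
    unique_vals, inverse = np.unique(values, return_inverse=True)
    return unique_vals, inverse

def restore_feature(unique_vals, inverse):
    """Reconstruct the original per-sample values by scattering the
    unique values back out via the inverse indices."""
    return unique_vals[inverse]

# Many samples from the same user session share a feature value,
# so the unique-value table is much smaller than the batch.
batch = np.array([7, 7, 7, 3, 3, 9])
uniq, inv = deduplicate_feature(batch)
assert np.array_equal(restore_feature(uniq, inv), batch)
```

In a training pipeline, expensive per-value work would be applied to `uniq` (3 values here instead of 6) and the results gathered back with `inv`, which is the source of the throughput gains the abstract reports.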