Shared Microexponents: A Little Shifting Goes a Long Way
This paper introduces Block Data Representations (BDR), a framework for
exploring and evaluating a wide spectrum of narrow-precision formats for deep
learning. It enables comparison of popular quantization standards, and through
BDR, new formats based on shared microexponents (MX) are identified, which
outperform other state-of-the-art quantization approaches, including
narrow-precision floating-point and block floating-point. MX utilizes multiple
levels of quantization scaling with ultra-fine scaling factors based on shared
microexponents in hardware. The effectiveness of MX is demonstrated on
real-world models, including large-scale generative pretraining and inferencing,
and production-scale recommendation systems.
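The abstract describes MX only at a high level: multiple levels of quantization scaling with ultra-fine scaling factors based on shared microexponents. As a rough illustration of that idea, the sketch below applies a coarse power-of-two scale shared by a whole block plus a 1-bit micro-exponent shared by each small sub-block. The function name, bit widths, and sub-block size are assumptions chosen for illustration and do not reproduce the concrete MX formats defined in the paper.

```python
import numpy as np

def mx_quantize(block, mantissa_bits=4, subblock_size=2):
    """Two-level block quantization sketch: a coarse power-of-two scale
    shared by the whole block, plus a 1-bit micro-exponent shared by each
    sub-block. Illustrative only; not the paper's exact MX format."""
    block = np.asarray(block, dtype=np.float64)
    max_abs = np.max(np.abs(block))
    if max_abs == 0.0:
        return np.zeros_like(block)

    qmax = 2 ** (mantissa_bits - 1) - 1                  # e.g. 7 for 4-bit mantissas
    coarse_exp = int(np.ceil(np.log2(max_abs / qmax)))   # block-wide scale exponent
    out = np.empty_like(block)

    for start in range(0, block.size, subblock_size):
        sub = block[start:start + subblock_size]
        sub_max = np.max(np.abs(sub))
        # Micro-exponent: sub-blocks that are uniformly small relative to the
        # block maximum get one extra bit of effective precision.
        micro_exp = -1 if 0 < sub_max <= max_abs / 2 else 0
        step = 2.0 ** (coarse_exp + micro_exp)
        out[start:start + subblock_size] = (
            np.clip(np.round(sub / step), -qmax, qmax) * step
        )
    return out

# Values within one block that span a wide dynamic range.
x = np.array([0.9, -0.05, 0.003, 0.4, -0.002, 0.001, 0.6, -0.07])
print(mx_quantize(x))
```

Compared with plain block floating-point, which would round every element against the single block-wide step, the per-sub-block micro-exponent halves the rounding step for sub-blocks whose magnitudes are well below the block maximum, which is the intuition behind ultra-fine shared scaling factors.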
Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models
Deep learning recommendation models (DLRMs) are used across many
business-critical services at Facebook and are the single largest AI
application in terms of infrastructure demand in its data centers. In this
paper we discuss the SW/HW co-designed solution for high-performance
distributed training of large-scale DLRMs. We introduce a high-performance
scalable software stack based on PyTorch and pair it with the new evolution of
Zion platform, namely ZionEX. We demonstrate the capability to train very large
DLRMs with up to 12 trillion parameters and show that we can attain a 40X speedup
in terms of time to solution over previous systems. We achieve this by (i)
designing the ZionEX platform with a dedicated scale-out network, provisioned
with high bandwidth, optimal topology, and efficient transport; (ii) implementing
an optimized PyTorch-based training stack supporting both model and data
parallelism; (iii) developing sharding algorithms capable of hierarchically
partitioning the embedding tables along the row and column dimensions and load
balancing them across multiple workers; (iv) adding high-performance core
operators while retaining the flexibility to support optimizers with fully
deterministic updates; and (v) leveraging reduced-precision communications, a
multi-level memory hierarchy (HBM+DDR+SSD), and pipelining. Furthermore, we
develop and briefly comment on distributed data ingestion and other supporting
services that are required for robust and efficient end-to-end training in
production environments.
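Item (iii) in the abstract mentions sharding algorithms that load-balance embedding tables across workers. The sketch below shows only the simplest version of that step, a greedy largest-first placement onto the least-loaded worker; the function name, table names, and sizes are hypothetical, and the hierarchical row/column partitioning described in the abstract is omitted.

```python
from collections import defaultdict

def shard_embedding_tables(table_sizes, num_workers):
    """Greedy load balancing: place each embedding table (largest first) on
    the currently least-loaded worker. A minimal sketch of the balancing
    step only; row/column splitting of individual tables is not shown."""
    loads = [0] * num_workers
    assignment = defaultdict(list)
    # Sort table ids by descending size so the largest tables are placed first.
    for table_id, size in sorted(table_sizes.items(), key=lambda kv: -kv[1]):
        worker = min(range(num_workers), key=lambda w: loads[w])
        assignment[worker].append(table_id)
        loads[worker] += size
    return dict(assignment), loads

# Hypothetical table sizes in rows; the names and numbers are illustrative only.
tables = {"user_id": 10_000_000, "item_id": 2_000_000,
          "category": 50_000, "geo": 200_000}
placement, loads = shard_embedding_tables(tables, num_workers=2)
print(placement)
print(loads)
```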