Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models
Deep learning recommendation models (DLRMs) are used across many
business-critical services at Facebook and are the single largest AI
application in terms of infrastructure demand in its data centers. In this
paper we discuss the SW/HW co-designed solution for high-performance
distributed training of large-scale DLRMs. We introduce a high-performance,
scalable software stack based on PyTorch and pair it with the new evolution of
the Zion platform, namely ZionEX. We demonstrate the capability to train very
large DLRMs with up to 12 trillion parameters and show that we can attain a
40X speedup in time to solution over previous systems. We achieve this by (i)
designing the ZionEX platform with a dedicated scale-out network, provisioned
with high bandwidth, optimal topology, and efficient transport; (ii)
implementing an optimized PyTorch-based training stack supporting both model
and data parallelism; (iii) developing sharding algorithms capable of
hierarchically partitioning the embedding tables along the row and column
dimensions and load-balancing them across multiple workers; (iv) adding
high-performance core operators while retaining flexibility to support
optimizers with fully deterministic updates; and (v) leveraging
reduced-precision communications, a multi-level memory hierarchy
(HBM+DDR+SSD), and pipelining. Furthermore, we develop and briefly comment on
the distributed data ingestion and other supporting services required for
robust and efficient end-to-end training in production environments.
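As a rough illustration of the sharding idea the abstract describes (row-wise partitioning of embedding tables plus load balancing across workers), the sketch below shows one common greedy strategy: split oversized tables into row-wise shards, then place shards largest-first onto the least-loaded worker. All names here are hypothetical; this is not the authors' actual algorithm, which also supports column-wise and hierarchical partitioning.

```python
import heapq

def shard_tables(table_sizes, num_workers, max_shard_rows):
    """Greedily assign embedding-table shards to workers by load.

    table_sizes: {table_name: num_rows}. Tables larger than
    max_shard_rows are first split row-wise into smaller shards.
    Returns {worker_id: [(table_name, shard_rows), ...]}.
    """
    # Row-wise partitioning of oversized tables.
    shards = []
    for name, rows in table_sizes.items():
        start = 0
        while start < rows:
            shards.append((name, min(max_shard_rows, rows - start)))
            start += max_shard_rows

    # Largest-first greedy placement onto the least-loaded worker,
    # using shard row count as a stand-in for memory/compute cost.
    shards.sort(key=lambda s: s[1], reverse=True)
    heap = [(0, w) for w in range(num_workers)]  # (load, worker_id)
    heapq.heapify(heap)
    placement = {w: [] for w in range(num_workers)}
    for name, rows in shards:
        load, w = heapq.heappop(heap)
        placement[w].append((name, rows))
        heapq.heappush(heap, (load + rows, w))
    return placement
```

In practice the cost model would account for lookup frequency and embedding dimension, not just row count, but the greedy bin-packing skeleton is the same.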
Integer factor based SVPWM approach for multilevel inverters with continuous and discontinuous switching sequences
The most extensively employed strategy for controlling the AC output of power electronic inverters is pulse width modulation (PWM). For three decades, modulation theory has continued to draw considerable attention from researchers aiming to reduce harmonic distortion and increase output magnitude for a given switching frequency. Among the different PWM techniques, space vector modulation (SVM) is very popular. However, as the number of output levels of a multilevel inverter (MLI) increases, implementing SVM becomes more difficult: more levels mean more switches and hence more switching states, which increases both the computational complexity and the storage required for the switching states and switching pulse durations. The present work aims to reduce the complexity of implementing the space vector pulse width modulation (SVPWM) technique in multilevel inverters by using a generalized integer factor approach (IFA). The performance of the IFA is tested on a three-level inverter-fed induction motor for conventional PWM (CPWM), a continuous SVPWM method employing the 0127 sequence, and for discontinuous PWM (DPWM) methods, viz. DPWMMIN using the 012 sequence and DPWMMAX using the 721 sequence.
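The difference between the 0127, 012, and 721 sequences mentioned above comes down to how the zero-vector dwell time is distributed within a switching period. The sketch below shows the standard two-level SVPWM dwell-time calculation for one sector; it is a minimal illustration of the sequence distinction only, not the paper's integer factor approach or its three-level extension.

```python
import math

def svpwm_dwell_times(m, theta, Ts, scheme="CPWM"):
    """Dwell times within one 60-degree sector of a two-level
    space-vector hexagon.

    m:     modulation index
    theta: angle within the sector, in [0, pi/3)
    Ts:    switching period

    Returns (t0, t1, t2, t7): time spent on zero vector V0 (000),
    the two active vectors, and zero vector V7 (111). The total
    zero-vector time tz = Ts - t1 - t2 is split per scheme:
      CPWM    (0127 sequence): tz shared equally between V0 and V7
      DPWMMIN (012 sequence):  all of tz on V0; V7 never used
      DPWMMAX (721 sequence):  all of tz on V7; V0 never used
    """
    t1 = Ts * m * math.sin(math.pi / 3 - theta)
    t2 = Ts * m * math.sin(theta)
    tz = Ts - t1 - t2
    if scheme == "CPWM":
        return (tz / 2, t1, t2, tz / 2)
    if scheme == "DPWMMIN":
        return (tz, t1, t2, 0.0)
    if scheme == "DPWMMAX":
        return (0.0, t1, t2, tz)
    raise ValueError(f"unknown scheme: {scheme}")
```

Because the discontinuous schemes clamp one phase leg for part of the cycle (the unused zero vector is never switched into), they reduce switching losses relative to the continuous 0127 sequence at the same carrier frequency.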