Failure Tolerant Training with Persistent Memory Disaggregation over CXL
This paper proposes TRAININGCXL, which efficiently processes large-scale
recommendation datasets in a pool of disaggregated memory while making
training fault tolerant with low overhead. To this end, i) we integrate
persistent memory (PMEM) and the GPU into a cache-coherent domain as a CXL
Type-2 device. Enabling CXL allows PMEM to be placed directly in the GPU's
memory hierarchy, such that the GPU can access PMEM without software
intervention. TRAININGCXL introduces computing and checkpointing logic near
the CXL controller, thereby processing training data and managing persistency
in an active manner. Considering PMEM's vulnerability, ii) we utilize the
unique characteristics of recommendation models and take the checkpointing
overhead off the critical path of their training. Lastly, iii) TRAININGCXL
employs an advanced checkpointing technique that relaxes the update sequence
of model parameters and embeddings across training batches. The evaluation
shows that TRAININGCXL achieves a 5.2x training performance improvement and
76% energy savings compared to modern PMEM-based recommendation systems.
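
The relaxed checkpointing idea, taking persistence off the training critical
path and allowing parameters and embeddings to be persisted out of lockstep
across batches, can be illustrated with a small host-side sketch. This is not
the paper's mechanism (which runs in hardware near the CXL controller); it is
a minimal Python analogy, and AsyncCheckpointer, persist_fn, and write_to_pmem
are hypothetical names introduced here for illustration.

    # Minimal sketch of off-critical-path checkpointing in the spirit of
    # TRAININGCXL. All names are illustrative, not the paper's interface.
    import copy
    import queue
    import threading

    class AsyncCheckpointer:
        """Persists model snapshots on a background thread so the training
        loop never blocks on persistence (a stand-in for PMEM writes)."""

        def __init__(self, persist_fn):
            self._persist_fn = persist_fn           # e.g. writes to PMEM/disk
            self._pending = queue.Queue(maxsize=1)  # one in-flight snapshot
            self._worker = threading.Thread(target=self._drain, daemon=True)
            self._worker.start()

        def _drain(self):
            while True:
                step, snapshot = self._pending.get()
                self._persist_fn(step, snapshot)  # slow path, off critical path
                self._pending.task_done()

        def maybe_checkpoint(self, step, state):
            # Relaxed policy: if the previous snapshot is still being
            # persisted, skip this batch instead of stalling training;
            # parameters and embeddings need not be persisted every batch.
            try:
                self._pending.put_nowait((step, copy.deepcopy(state)))
            except queue.Full:
                pass

    # Usage inside a training loop, after the optimizer step
    # (write_to_pmem is a hypothetical persistence function):
    # ckpt = AsyncCheckpointer(persist_fn=write_to_pmem)
    # ckpt.maybe_checkpoint(step, {"params": params, "embeddings": emb_table})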
GraphTensor: Comprehensive GNN-Acceleration Framework for Efficient Parallel Processing of Massive Datasets
We present GraphTensor, a comprehensive open-source framework that supports
efficient parallel neural network processing on large graphs. GraphTensor
offers a set of easy-to-use programming primitives that account for both graph
and neural network execution behaviors from the beginning (graph sampling) to
the end (dense data processing). Our framework runs diverse graph neural
network (GNN) models in a destination-centric, feature-wise manner, which can
significantly shorten training execution times on a GPU. In addition,
GraphTensor rearranges multiple GNN kernels based on their system
hyperparameters in a self-governing manner, thereby further reducing the
processing dimensionality and latencies. From an end-to-end execution
viewpoint, GraphTensor significantly shortens service-level GNN latency by
applying pipeline parallelism for efficient graph dataset preprocessing. Our
evaluation shows that GraphTensor achieves 1.4x better training performance
than emerging GNN frameworks when executing large-scale, real-world graph
workloads. For end-to-end services, GraphTensor reduces the training latencies
of an advanced version of these GNN frameworks (optimized for multi-threaded
graph sampling) by 2.4x on average.
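
The pipeline parallelism GraphTensor applies to graph dataset preprocessing
amounts to overlapping CPU-side sampling with GPU-side training through a
bounded queue, so neither stage waits on the other. Below is a minimal Python
sketch of that producer-consumer overlap under that assumption; the sampler
callable and train_step are hypothetical stand-ins, not GraphTensor's actual
API.

    # Minimal sketch of pipelined graph preprocessing: a CPU sampling stage
    # runs ahead of the GPU training stage through a bounded queue.
    import queue
    import threading

    def preprocess_stage(sampler, num_batches, out_q):
        # Stage 1 (CPU): graph sampling / feature gathering, produced ahead.
        for batch_id in range(num_batches):
            out_q.put(sampler(batch_id))
        out_q.put(None)  # sentinel: no more work

    def run_pipeline(sampler, train_step, num_batches, depth=4):
        q = queue.Queue(maxsize=depth)  # bounded depth caps host memory use
        producer = threading.Thread(
            target=preprocess_stage, args=(sampler, num_batches, q))
        producer.start()
        while True:
            subgraph = q.get()  # Stage 2 (GPU): consume sampled subgraphs
            if subgraph is None:
                break
            train_step(subgraph)
        producer.join()

The bounded queue depth is the usual knob in such pipelines: deep enough to
hide sampling latency behind GPU compute, shallow enough to bound the memory
held by prefetched subgraphs.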