Demand Layering for Real-Time DNN Inference with Minimized Memory Usage
To execute a deep neural network (DNN), its model parameters must first be
loaded into GPU memory, incurring a significant GPU memory burden. Prior
studies reduce GPU memory usage by exploiting CPU memory as a swap device.
However, this approach is not applicable to most embedded systems with
integrated GPUs, where the CPU and GPU share a common memory. In this regard,
we present Demand Layering, which employs a fast solid-state drive (SSD) as a
we present Demand Layering, which employs a fast solid-state drive (SSD) as a
co-running partner of a GPU and exploits the layer-by-layer execution of DNNs.
In our approach, a DNN is loaded and executed in a layer-by-layer manner,
minimizing the memory usage to the order of a single layer.
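As a rough illustration (not the paper's implementation), the following Python
sketch mimics this demand-driven loop with toy stand-ins: load_params plays the
role of an SSD read and run_layer the role of a GPU kernel, both hypothetical.

    def load_params(layer_id):
        # Stand-in for reading one layer's parameters from the SSD.
        return (1.0, 0.1)  # hypothetical (weight, bias) pair

    def run_layer(x, params):
        # Stand-in for a GPU kernel launch; here, a toy affine transform.
        w, b = params
        return [w * v + b for v in x]

    def infer(x, num_layers):
        # Only one layer's parameters are resident at a time: each is
        # loaded on demand, consumed, and then eligible for release.
        for i in range(num_layers):
            params = load_params(i)
            x = run_layer(x, params)
        return x

    print(infer([1.0, 2.0], num_layers=4))

Loading and execution are fully serialized here, so every SSD read adds
directly to the inference delay; the pipeline described next addresses exactly
that.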
We also developed a pipeline architecture that hides most of the additional
delay caused by interleaving parameter loading with layer execution.
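Under the same toy assumptions, a minimal sketch of such a pipeline: a single
background I/O worker prefetches layer i+1's parameters while layer i
executes, so loading overlaps compute and at most two layers' parameters are
resident at once.

    from concurrent.futures import ThreadPoolExecutor

    def load_params(layer_id):
        # Stand-in for reading one layer's parameters from the SSD.
        return (1.0, 0.1)

    def run_layer(x, params):
        # Stand-in for a GPU kernel launch; a toy affine transform.
        w, b = params
        return [w * v + b for v in x]

    def infer_pipelined(x, num_layers):
        with ThreadPoolExecutor(max_workers=1) as io:
            pending = io.submit(load_params, 0)  # prefetch the first layer
            for i in range(num_layers):
                params = pending.result()        # wait for layer i's parameters
                if i + 1 < num_layers:           # start loading layer i+1 ...
                    pending = io.submit(load_params, i + 1)
                x = run_layer(x, params)         # ... while layer i computes
            return x

    print(infer_pipelined([1.0, 2.0], num_layers=4))

Prefetching more than one layer ahead would presumably trade extra memory for
lower delay, which is the kind of knob behind the memory-delay tradeoff
mentioned below.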
Our implementation shows a 96.5% memory reduction with just 14.8% delay
overhead on average for representative DNNs. Furthermore, by exploiting the
memory-delay tradeoff,
near-zero delay overhead (under 1 ms) can be achieved with a slightly increased
memory usage (still an 88.4% reduction), showing the great potential of Demand
Layering.

Comment: 14 pages, 16 figures. Accepted to the 43rd IEEE Real-Time Systems
Symposium (RTSS), 2022.