Automated Design Space Exploration for optimised Deployment of DNN on Arm Cortex-A CPUs
The spread of deep learning on embedded devices has prompted the development
of numerous methods to optimise the deployment of deep neural networks (DNNs).
Prior work has mainly focused on: i) efficient DNN architectures, ii) network
optimisation techniques such as pruning and quantisation, iii) optimised
algorithms to speed up the execution of the most computationally intensive
layers, and iv) dedicated hardware to accelerate the data flow and computation.
However, there is a lack of research on cross-level optimisation, as the space
of approaches becomes too large to test exhaustively and obtain a globally
optimised solution, leading to suboptimal deployment in terms of latency,
accuracy, and memory. In this work, we first detail and analyse the methods to improve
the deployment of DNNs across the different levels of software optimisation.
Building on this knowledge, we present an automated exploration framework to
ease the deployment of DNNs. The framework relies on a Reinforcement Learning
search that, combined with a deep learning inference framework, automatically
explores the design space and learns an optimised solution that speeds up
inference and reduces memory use on embedded CPU platforms. We then present
results for state-of-the-art DNNs on a range of Arm Cortex-A CPU platforms,
achieving up to a 4x improvement in performance and over a 2x reduction in
memory with negligible loss in accuracy with respect to the BLAS
floating-point implementation.
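
The abstract does not give implementation detail, but the shape of an RL-driven search over per-layer deployment options can be sketched. Below is a minimal, runnable illustration using a per-layer epsilon-greedy bandit, a deliberate simplification of the paper's Reinforcement Learning search; the design space, layer count, and measure() reward stub are hypothetical placeholders for the real inference framework measuring latency and memory on a Cortex-A target.

```python
# Minimal sketch of a design-space search, assuming a hypothetical per-layer
# space of (algorithm, precision) choices. Not the paper's implementation.
import random

DESIGN_SPACE = [("gemm", "fp32"), ("gemm", "int8"),
                ("winograd", "fp32"), ("winograd", "int8")]
NUM_LAYERS = 5

def measure(config):
    """Stub reward. A real system would run the inference framework on the
    target CPU and return a score combining latency, memory, and accuracy.
    Here we fabricate a deterministic value so the sketch is runnable."""
    return -sum(hash(c) % 100 for c in config) / 100.0

# Tabular epsilon-greedy learner: estimate the value of each option per layer.
q = [{opt: 0.0 for opt in DESIGN_SPACE} for _ in range(NUM_LAYERS)]
counts = [{opt: 0 for opt in DESIGN_SPACE} for _ in range(NUM_LAYERS)]

for episode in range(200):
    eps = max(0.05, 1.0 - episode / 100)  # decaying exploration rate
    config = [random.choice(DESIGN_SPACE) if random.random() < eps
              else max(q[i], key=q[i].get) for i in range(NUM_LAYERS)]
    reward = measure(tuple(config))
    for i, opt in enumerate(config):  # incremental mean value update
        counts[i][opt] += 1
        q[i][opt] += (reward - q[i][opt]) / counts[i][opt]

best = [max(q[i], key=q[i].get) for i in range(NUM_LAYERS)]
print("learned per-layer configuration:", best)
```

The key property this sketch shares with the paper's approach is that the search learns from measured feedback on the deployed network rather than enumerating the full cross-level space, which the abstract notes is too large to test exhaustively.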
DLAS: An Exploration and Assessment of the Deep Learning Acceleration Stack
Deep Neural Networks (DNNs) are extremely computationally demanding, which
presents a large barrier to their deployment on resource-constrained devices.
Since such devices are where many emerging deep learning applications lie
(e.g., drones, vision-based medical technology), significant bodies of work
from both the machine learning and systems communities have attempted to
provide optimizations to accelerate DNNs. To help unify these two perspectives,
in this paper we combine machine learning and systems techniques within the
Deep Learning Acceleration Stack (DLAS), and demonstrate how the layers of the
stack can be tightly interdependent through an across-stack perturbation study. We
evaluate the impact on accuracy and inference time when varying different
parameters of DLAS across two datasets, seven popular DNN architectures, four
DNN compression techniques, three algorithmic primitives with sparse and dense
variants, untuned and auto-scheduled code generation, and four hardware
platforms. Our evaluation highlights how perturbations across DLAS parameters
can cause significant variation and expose across-stack interactions. The
highest-level observation from our evaluation is that model size, accuracy, and
inference time are not guaranteed to be correlated. Overall, we make 13 key
observations, including that the speedups provided by compression techniques
are highly hardware-dependent, and that compiler auto-tuning can significantly
alter which algorithm is best for a given configuration. With DLAS, we aim
to provide a reference framework to aid machine learning and systems
practitioners in reasoning about the context in which their respective DNN
acceleration solutions exist. As our evaluation strongly motivates the
need for co-design, we believe that DLAS can be a valuable concept for
exploring the next generation of co-designed accelerated deep learning
solutions.
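
To make the shape of such a perturbation study concrete, here is a minimal sketch that enumerates a cross-stack grid and evaluates each point. The parameter values and the evaluate() stub are illustrative assumptions, not the DLAS tooling itself, and the grid is reduced from the paper's (which spans two datasets, seven models, four compression techniques, three primitives, two code-generation settings, and four hardware platforms); a real study would deploy each configuration and measure accuracy and inference time on the named hardware.

```python
# Minimal sketch of an across-stack perturbation grid, assuming hypothetical
# parameter values at each DLAS layer. Not the paper's actual tooling.
import itertools
import random

STACK = {
    "model":       ["resnet18", "mobilenetv2"],
    "compression": ["none", "pruning", "quantisation"],
    "primitive":   ["dense-gemm", "sparse-gemm"],
    "codegen":     ["untuned", "auto-scheduled"],
    "hardware":    ["cortex-a72", "gpu"],
}

def evaluate(point):
    """Stub: a real study would deploy the configuration and measure
    accuracy and inference time on the named hardware platform."""
    random.seed(hash(point))  # deterministic fake metrics per configuration
    return {"accuracy": round(random.uniform(0.6, 0.95), 3),
            "latency_ms": round(random.uniform(5, 200), 1)}

# Enumerate every combination so cross-layer interactions become observable,
# e.g. whether a compression technique's speedup holds across hardware.
for point in itertools.product(*STACK.values()):
    metrics = evaluate(point)
    print(dict(zip(STACK, point)), metrics)
```

Enumerating the full cross-product is what lets the study surface interactions between layers, such as the paper's observation that the benefit of a compression technique depends heavily on the hardware and code-generation choices beneath it.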