Several methods exist today to accelerate Machine Learning (ML) or
Deep Learning (DL) model performance for training and inference. However, modern
techniques that exploit various graph and operator parallelism methodologies
depend on search-space optimizations, which are costly in terms of power and
hardware usage. This is especially true for inference with a batch size of 1,
executed on CPUs or on power-constrained edge devices, where current
techniques can become costly, complicated, or inapplicable. To ameliorate this,
we present a Critical-Path-based Linear Clustering approach to exploit inherent
parallel paths in ML dataflow graphs. Our task parallelization approach further
optimizes the structure of graphs via cloning and prunes them via constant
propagation and dead-code elimination. In contrast to other work, we generate
readable and executable parallel PyTorch+Python code from input ML models in
ONNX format via a new tool that we have built called {\bf Ramiel}. This allows
us to benefit from other downstream acceleration techniques like intra-op
parallelism and potentially pipeline parallelism. Our preliminary results on
several ML graphs demonstrate up to a 1.9× speedup over serial execution
and outperform some current mechanisms in both compile time and runtime.
Lastly, our methods are lightweight and fast enough to be used
effectively on power- and resource-constrained devices, while still enabling
downstream optimizations.