Chalmers University of Technology / Department of Computer Science and Engineering
Abstract
Lifelong learning from zero (LL0) is a lifelong learning algorithm with a dynamic
neural network architecture. Many machine learning tools perform poorly on
dynamic structures due to the overhead of maintaining computational graphs that
grow as the network expands. This thesis explores whether a custom implementation
can deliver higher performance for the LL0 algorithm than the existing PyTorch
implementation. The developed solution tightly couples the LL0 algorithm to the
GPU to achieve hardware acceleration. A set of benchmarks is defined to compare
the performance of the two implementations. Furthermore, the thesis develops a
methodology to investigate potential bottlenecks and parallelism in the
GPU-mapped implementation. The thesis achieves a significant speedup of 19.48×
in the number of feedforward passes per unit of time, compared with the
equivalent PyTorch implementation, on the MNIST dataset.