The Support Vector Machine (SVM) algorithm incurs a high computational cost, in
both memory and time, because it must solve a complex quadratic programming
(QP) optimization problem during training. Consequently, SVM demands
substantial computing hardware. Central processing unit (CPU) clock frequencies
can no longer be increased, owing to physical limits of the miniaturization
process. However, the parallelism available in both multi-core CPUs and highly
scalable GPUs emerges as a promising way to enhance algorithm performance, and
thus an opportunity to reduce the high computational time SVM requires to solve
the QP optimization problem. This paper presents a comparative study that
implements the SVM algorithm on different parallel architecture frameworks. The
experimental results show that the SVM MPI-CUDA implementation achieves a
speedup over the SVM TensorFlow implementation on different datasets. Moreover,
the SVM TensorFlow implementation provides a cross-platform solution that can
be migrated to alternative hardware components, which reduces development time.
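
For reference, the QP problem that dominates SVM training cost is, in the standard soft-margin dual formulation (a sketch using the usual notation, where $C$ is the regularization parameter and $K$ the kernel function; these symbols are not defined in the abstract itself):

\[
\max_{\alpha} \;\; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{subject to} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{n} \alpha_i y_i = 0 .
\]

The $n \times n$ kernel matrix underlying the quadratic term is what drives the memory and time cost on large datasets, and its row-wise structure is what makes the problem amenable to multi-core and GPU parallelization.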