We present a versatile GPU-based parallel version of Logistic Regression
(LR), addressing the growing demand for faster binary-classification
algorithms on large datasets. Our implementation is a direct
translation of the parallel Gradient Descent Logistic Regression algorithm
proposed by X. Zou et al. [12]. Our experiments demonstrate that our GPU-based
LR outperforms existing CPU-based implementations in execution time
while maintaining a comparable F1 score. This substantial speedup on large
datasets makes our method particularly advantageous for real-time
prediction applications such as image recognition, spam detection, and
fraud detection. Our algorithm is implemented in a ready-to-use Python library
available at https://github.com/NechbaMohammed/SwiftLogisticRe
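For context, the core computation that GPU implementations of logistic regression parallelize is the batch gradient-descent update. The NumPy sketch below illustrates that update on the CPU; the function name fit_logistic_gd and its parameters are illustrative assumptions, not the library's actual API or the authors' GPU kernel.

# Minimal NumPy sketch of the batch gradient-descent update for
# logistic regression; an illustration of the underlying algorithm,
# not the SwiftLogisticRe library's actual interface.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(X, y, lr=0.1, n_iters=1000):
    """Fit binary logistic regression by batch gradient descent.

    X: (n_samples, n_features) feature matrix
    y: (n_samples,) labels in {0, 1}
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(n_iters):
        p = sigmoid(X @ w + b)               # predicted probabilities
        grad_w = X.T @ (p - y) / n_samples   # gradient w.r.t. weights
        grad_b = np.mean(p - y)              # gradient w.r.t. bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

On a GPU, the matrix products X @ w and X.T @ (p - y) are typically the operations distributed across threads, which is the main source of the speedup over CPU-based implementations.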