In supervised learning, the loss function is fundamental: it profoundly shapes
the behavior and efficacy of the learning algorithm. Traditional loss
functions, while widely used, often struggle with noisy and high-dimensional
data, impede model interpretability, and slow convergence during training. In
this paper, we address these limitations by proposing a novel robust, bounded,
sparse, and smooth (RoBoSS) loss function for supervised learning. Further, we
incorporate the RoBoSS loss function within the framework of support vector
machine (SVM) and introduce a new robust algorithm,
$\mathcal{L}_{rbss}$-SVM. For the theoretical analysis, we also present the
classification-calibration property and the generalization ability of the
proposed model. These investigations are crucial for gaining deeper insights
into the performance of the RoBoSS loss function in classification tasks and its
potential to generalize well to unseen data. To empirically demonstrate the
effectiveness of the proposed $\mathcal{L}_{rbss}$-SVM, we evaluate it on 88
real-world UCI and KEEL datasets from diverse domains. Additionally, to
illustrate its effectiveness in the biomedical realm, we evaluate it on two
medical datasets: the
electroencephalogram (EEG) signal dataset and the breast cancer (BreaKHis)
dataset. The numerical results substantiate the superiority of the proposed
$\mathcal{L}_{rbss}$-SVM model, both in terms of its generalization
performance and its training-time efficiency.
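
For concreteness, the following is a minimal sketch of how a margin-based loss
is typically incorporated into the SVM framework. The abstract leaves
$\mathcal{L}_{rbss}$ unspecified, so the loss is kept abstract here, and the
unconstrained regularized form with feature map $\phi$ is our assumption rather
than the paper's exact formulation:
\[
\min_{\mathbf{w},\,b}\;\frac{1}{2}\,\lVert \mathbf{w} \rVert^{2}
\;+\; C \sum_{i=1}^{n} \mathcal{L}_{rbss}\!\bigl(1 - y_{i}\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_{i}) + b\bigr)\bigr),
\]
where $C > 0$ trades off margin maximization against the training loss. In such
a formulation, the boundedness of the loss caps the influence of any single
(possibly noisy) sample, which is the usual source of the robustness claimed
for bounded losses.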