Evaluating XGBoost for Balanced and Imbalanced Data: Application to Fraud Detection
This paper evaluates XGBoost's performance given different dataset sizes and
class distributions, from perfectly balanced to highly imbalanced. XGBoost has
been selected for evaluation, as it stands out in several benchmarks due to its
detection performance and speed. After introducing the problem of fraud
detection, the paper reviews evaluation metrics for detection systems or binary
classifiers, and illustrates with examples how different metrics work for
balanced and imbalanced datasets. Then, it examines the principles of XGBoost.
It proposes a pipeline for data preparation and compares a Vanilla XGBoost
against a random search-tuned XGBoost. Random search fine-tuning provides
consistent improvement for large datasets of 100 thousand samples, but not
for medium and small datasets of 10 thousand and 1 thousand samples,
respectively. In addition, as expected, XGBoost's recognition performance
improves as more data becomes available and deteriorates as the datasets
become more imbalanced. Tests on distributions with 50, 45, 25, and 5 percent positive
samples show that the largest drop in detection performance occurs for the
distribution with only 5 percent positive samples. Sampling to balance the
training set does not provide consistent improvement. Therefore, future work
will include a systematic study of different techniques to deal with data
imbalance and evaluating other approaches, including graphs, autoencoders, and
generative adversarial methods, to deal with the lack of labels.
Comment: 17 pages, 8 figures, 9 tables. Presented at NVIDIA GTC, The
Conference for the Era of AI and the Metaverse, March 23, 2023. [S51129
Tree boosting methods for balanced and imbalanced classification and their robustness over time in risk assessment
Most real-world classification problems involve imbalanced datasets, posing a
challenge for Artificial Intelligence (AI), i.e., machine learning algorithms,
because the minority class, which is of extreme interest, is often difficult
to detect. This paper empirically evaluates the performance of tree boosting
methods given different dataset sizes and class distributions, from perfectly
balanced to highly imbalanced. For tabular data, tree-based methods such as
XGBoost stand out in several benchmarks due to their detection performance
and speed. Therefore, XGBoost and Imbalance-XGBoost are evaluated. After
introducing the motivation for addressing risk assessment with machine
learning, the paper reviews evaluation metrics for detection systems or
binary classifiers. It proposes a method for data preparation followed by
tree boosting, including hyper-parameter optimization. The method is
evaluated on private datasets of 1 thousand (1K), 10K, and 100K samples with
distributions of 50, 45, 25, and 5 percent positive samples. As expected,
the method's recognition performance increases as more data is provided for
training, and the F1 score decreases as the data distribution becomes more
imbalanced, yet it remains significantly above the precision-recall baseline
given by the ratio of positives to the total of positives and negatives.
Sampling to balance the training set does not provide consistent improvement
and deteriorates detection. In contrast, classifier hyper-parameter
optimization improves recognition, but it should be applied carefully
depending on data volume and distribution. Finally, the method is robust to
data variation over time up to a point; retraining can be used when
performance starts to deteriorate.
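The precision-recall baseline mentioned in the abstract can be made concrete: a no-skill classifier that labels every sample positive achieves precision equal to the positive ratio P / (P + N) and recall 1, from which its F1 score follows directly. A minimal sketch (the `no_skill_f1` helper and the counts are illustrative, not from the paper):

```python
def no_skill_f1(n_pos: int, n_neg: int) -> float:
    """F1 of a no-skill classifier that labels everything positive.

    Its precision equals the positive ratio P / (P + N); its recall is 1.
    """
    precision = n_pos / (n_pos + n_neg)
    recall = 1.0
    return 2 * precision * recall / (precision + recall)

# With 5 percent positives, one of the distributions evaluated in the paper:
print(f"{no_skill_f1(5, 95):.3f}")  # → 0.095
```

Any useful model on the 5-percent-positive distribution must therefore clear an F1 of roughly 0.095, which is the sense in which the abstract calls the developed method "significantly superior" to this baseline.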