The problem of predicting the training time of machine learning (ML) models
has become extremely relevant in the scientific community. Being able to
predict a priori the training time of an ML model would enable the automatic
selection of the best model both in terms of energy efficiency and in terms of
performance in the context of, for instance, MLOps architectures. In this
paper, we present the work we are conducting in this direction. In
particular, we present an extensive empirical study of the Full Parameter Time
Complexity (FPTC) approach by Zheng et al., which is, to the best of our
knowledge, the only approach formalizing the training time of ML models as a
function of both the dataset's and the model's parameters. We study the formulations
proposed for the Logistic Regression and Random Forest classifiers, and we
highlight the main strengths and weaknesses of the approach. Finally, we
observe that, in light of the conducted study, the prediction of training time is
strongly tied to the context (i.e., the dataset involved) and that the FPTC
approach does not generalize.