Hyperparameter tuning plays a crucial role in optimizing the performance of
predictive learners. Cross-validation (CV) is a widely adopted technique for
estimating the error of different hyperparameter settings. Repeated
cross-validation (RCV) has been commonly employed to reduce the variability of
CV errors. In this paper, we introduce a novel approach called blocked
cross-validation (BCV), where the repetitions are blocked with respect to both
the CV partition and the random behavior of the learner. Theoretical analysis and
empirical experiments demonstrate that BCV provides more precise error
estimates compared to RCV, even with a significantly reduced number of runs. We
present extensive examples using real-world data sets to showcase the
effectiveness and efficiency of BCV in hyperparameter tuning. Our results
indicate that BCV outperforms RCV in hyperparameter tuning, achieving greater
precision with fewer computations.
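A minimal sketch of the blocking idea described above, not the paper's implementation: every hyperparameter setting within a repetition is evaluated on the same CV partition and with the same learner seed, so comparisons between settings are not confounded by partition or training randomness. The data set, learner, grid, and seed choices below are illustrative assumptions using scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Illustrative data and hyperparameter grid (assumed, not from the paper).
X, y = make_regression(n_samples=200, n_features=10, random_state=0)
candidate_depths = [2, 4, 8]
n_repeats = 3

scores = {d: [] for d in candidate_depths}
for rep in range(n_repeats):
    # One fixed partition and one fixed learner seed per repetition,
    # shared by all hyperparameter settings (the "block").
    cv = KFold(n_splits=5, shuffle=True, random_state=rep)
    for d in candidate_depths:
        model = RandomForestRegressor(max_depth=d, random_state=rep)
        mse = -cross_val_score(model, X, y, cv=cv,
                               scoring="neg_mean_squared_error").mean()
        scores[d].append(mse)

for d in candidate_depths:
    print(f"max_depth={d}: mean blocked-CV MSE = {np.mean(scores[d]):.3f}")
```

Because all settings in a repetition share the same folds and seed, differences in their estimated errors reflect the hyperparameters themselves, which is the source of the variance reduction the abstract attributes to blocking.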