Derivative-free optimization algorithms play an important role in scientific
and engineering design optimization problems, especially when derivative
information is not accessible. In this paper, we study the framework of
classification-based derivative-free optimization algorithms. By introducing a
concept called the hypothesis-target shattering rate, we revisit the
computational complexity upper bound for this class of algorithms. Guided by
the revisited upper bound, we propose an algorithm named "RACE-CARS", which
adds a random region-shrinking step to "SRACOS" (Hu et al., 2017). We further
establish a theorem showing the acceleration effect of region-shrinking. Experiments
on synthetic functions, as well as on black-box tuning for
language-model-as-a-service, empirically demonstrate the efficiency of
"RACE-CARS". An ablation experiment on the introduced hyperparameters is also
conducted, revealing the mechanism of "RACE-CARS" and offering empirical
guidance for hyperparameter tuning.