Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces
Tree ensembles can be well-suited for black-box optimization tasks such as
algorithm tuning and neural architecture search, as they achieve good
predictive performance with little or no manual tuning, naturally handle
discrete feature spaces, and are relatively insensitive to outliers in the
training data. Two well-known challenges in using tree ensembles for black-box
optimization are (i) effectively quantifying model uncertainty for exploration
and (ii) optimizing over the piecewise-constant acquisition function. To
address both points simultaneously, we propose using the kernel interpretation
of tree ensembles as a Gaussian process prior to obtain model variance
estimates, and we develop a compatible optimization formulation for the
acquisition function. The latter also allows us to seamlessly integrate
known constraints, improving sampling efficiency by incorporating
domain knowledge in engineering settings and by modeling search-space symmetries,
e.g., hierarchical relationships in neural architecture search. Our framework
performs as well as state-of-the-art methods for unconstrained black-box
optimization over continuous/discrete features and outperforms competing
methods for problems combining mixed-variable feature spaces and known input
constraints.
Comment: 27 pages, 9 figures, 4 tables
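As a rough illustration of the kernel view described in the abstract, the sketch below builds a tree-agreement kernel (two points are similar in proportion to the number of trees that route them to the same leaf) from a scikit-learn random forest and plugs it into the standard Gaussian process posterior-variance formula. The helper name tree_agreement_kernel, the UCB-style acquisition, and the noise jitter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_agreement_kernel(forest, X_a, X_b):
    """K[i, j] = fraction of trees in which X_a[i] and X_b[j] fall in the
    same leaf (one common reading of a tree-ensemble kernel)."""
    leaves_a = forest.apply(X_a)  # shape (n_a, n_trees) of leaf indices
    leaves_b = forest.apply(X_b)  # shape (n_b, n_trees)
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

# Toy 1-D black-box function and training data (stand-in for real observations).
rng = np.random.default_rng(0)
X_train = rng.uniform(-3.0, 3.0, size=(30, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(30)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# GP posterior variance with the tree kernel as the prior covariance.
X_query = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
K_tt = tree_agreement_kernel(forest, X_train, X_train)
K_qt = tree_agreement_kernel(forest, X_query, X_train)
K_qq = tree_agreement_kernel(forest, X_query, X_query)

noise = 1e-3  # assumed observation-noise jitter for numerical stability
A = K_tt + noise * np.eye(len(X_train))
posterior_var = np.diag(K_qq - K_qt @ np.linalg.solve(A, K_qt.T))

# Mean from the forest itself, variance from the GP view: the ingredients
# of an exploration-aware acquisition such as an upper confidence bound.
mean = forest.predict(X_query)
ucb = mean + 2.0 * np.sqrt(np.clip(posterior_var, 0.0, None))
```

In this sketch the acquisition is simply evaluated on a grid; the paper's contribution of an optimization formulation over the piecewise-constant acquisition (with known input constraints) is not reproduced here.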