Quantifying error contributions of computational steps, algorithms and hyperparameter choices in image classification pipelines
Data science relies on pipelines organized as interdependent computational
steps. Each step offers several candidate algorithms that may be used to
perform a particular function, and each algorithm in turn has several
hyperparameters. Algorithms and hyperparameters must be optimized jointly to
produce the best performance. Machine learning pipelines typically contain
complex algorithms at each step, so the selection process is combinatorial,
and it is equally important to interpret and understand the resulting
pipelines. We propose a method to quantify the importance of the different
layers in a pipeline by computing each layer's error contribution relative to
an agnostic choice of algorithms in that layer. We
demonstrate our methodology on image classification pipelines, where it
quantifies the error contributions of the computational steps, algorithms,
and hyperparameters. We show that algorithm selection and hyperparameter
optimization methods can be used to estimate these error contributions, and
that random search quantifies them more accurately than Bayesian
optimization. Domain experts can use this methodology to understand machine
learning and data analysis pipelines in terms of their individual components,
which can help in prioritizing which components of the pipeline to improve.
Comment: arXiv admin note: substantial text overlap with arXiv:1903.0040
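To make the error-contribution idea concrete, the following is a minimal
sketch, not the authors' code. It assumes a hypothetical search space of
pipeline steps with candidate algorithms and a stand-in evaluate(config)
function in place of actually training and validating a pipeline; it runs
random search and estimates a step's error contribution as the gap between
the best error under an agnostic (averaged) choice of that step's algorithm
and the best error overall. All step and algorithm names are invented for
illustration.

import random

# Hypothetical search space: each pipeline step has candidate algorithms.
SPACE = {
    "preprocess": ["none", "pca", "whiten"],
    "features": ["hog", "sift", "raw"],
    "classifier": ["svm", "rf", "knn"],
}

def evaluate(config):
    # Stand-in for training and validating the configured pipeline;
    # seeded per configuration so repeated calls return the same error.
    rng = random.Random(str(sorted(config.items())))
    return rng.uniform(0.05, 0.5)

def random_search(n=200, seed=0):
    # Sample n full pipeline configurations uniformly at random.
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        config = {step: rng.choice(algs) for step, algs in SPACE.items()}
        results.append((config, evaluate(config)))
    return results

def error_contribution(step, results):
    # Best achievable error for each fixed choice of this step's algorithm.
    best_given = {}
    for config, err in results:
        alg = config[step]
        best_given[alg] = min(best_given.get(alg, float("inf")), err)
    # Agnostic choice: average the per-algorithm optima for this step,
    # then compare against the best error when the step is optimized.
    agnostic = sum(best_given.values()) / len(best_given)
    best_overall = min(err for _, err in results)
    return agnostic - best_overall

results = random_search()
for step in SPACE:
    print(f"{step}: {error_contribution(step, results):.4f}")

A large contribution for a step indicates that choosing its algorithm
agnostically is costly, i.e. that step deserves tuning priority.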
Bayesian optimization for modular black-box systems with switching costs
Most existing black-box optimization methods assume that all variables in the
system being optimized have equal cost and can change freely at each iteration.
However, in many real-world systems, inputs are passed through a sequence of
different operations or modules, making variables in earlier stages of
processing more costly to update. Such structure imposes a cost on switching
variables in early parts of a data processing pipeline. In this work, we
propose a new algorithm for switching-cost-aware optimization called Lazy Modular
Bayesian Optimization (LaMBO). This method efficiently identifies the global
optimum while minimizing cost through passive changes to variables in early
modules. The method is theoretically grounded and achieves vanishing regret
even when the regret is augmented with switching costs. We apply LaMBO to
multiple synthetic functions
and a three-stage image segmentation pipeline used in a neuroscience
application, where we obtain promising improvements over prevailing cost-aware
Bayesian optimization algorithms. Our results demonstrate that LaMBO is an
effective strategy for black-box optimization, capable of minimizing
switching costs in modular systems.
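To illustrate the switching-cost structure the abstract describes, here is a
toy sketch; it is not the LaMBO implementation. It assumes a three-module
pipeline where changing the variable of module i forces recomputation of
modules i onward (so early changes cost more), and uses a simple "lazy"
random search that perturbs early variables only rarely. The objective, the
per-module costs, and the stay_prob parameter are all illustrative
assumptions.

import random

# Hypothetical recompute cost per module; early modules cost more.
MODULE_COSTS = [10.0, 3.0, 1.0]

def objective(x):
    # Stand-in black-box objective over the three module variables.
    return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2 + (x[2] - 0.7) ** 2

def switch_cost(prev, new):
    # Changing module i invalidates everything downstream, so the
    # incurred cost is the total recompute cost from module i onward.
    for i, (p, n) in enumerate(zip(prev, new)):
        if p != n:
            return sum(MODULE_COSTS[i:])
    return 0.0

def lazy_search(iters=200, stay_prob=0.8, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in MODULE_COSTS]
    best_val, best_x = objective(x), list(x)
    spent = 0.0
    for _ in range(iters):
        cand = list(x)
        for i in range(len(cand)):
            # Be lazy about early modules: perturb them only rarely,
            # while the cheap last module is always free to move.
            if i == len(cand) - 1 or rng.random() > stay_prob:
                cand[i] = rng.uniform(-1.0, 1.0)
        spent += switch_cost(x, cand)
        x = cand
        val = objective(x)
        if val < best_val:
            best_val, best_x = val, list(x)
    return best_val, best_x, spent

print(lazy_search())

Comparing the accumulated switching cost of this lazy strategy against a
search that redraws all variables every iteration shows why keeping early
modules fixed is advantageous when their recompute cost dominates.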