Bayesian Neural Networks (BNNs) offer a more principled, robust, and
interpretable framework for analyzing high-dimensional data. They address
typical challenges of conventional deep learning methods, such as their
insatiable demand for data, ad hoc design choices, and susceptibility to overfitting. However,
their implementation typically relies on Markov chain Monte Carlo (MCMC)
methods, which are computationally intensive and inefficient in
high-dimensional spaces. To address this issue, we propose a
novel Calibration-Emulation-Sampling (CES) strategy to significantly enhance
the computational efficiency of BNNs. In this CES framework, during the initial
calibration stage, we collect a small set of samples from the parameter space.
These samples serve as training data for the emulator. Here, we employ a Deep
Neural Network (DNN) emulator to approximate the forward mapping, i.e., the
process by which input data pass through the network's layers to generate predictions. The
trained emulator is then used to sample from the posterior distribution at a
substantially higher speed than the original BNN. Using simulated and
real data, we demonstrate that our proposed method improves the computational
efficiency of BNNs while maintaining comparable performance in prediction
accuracy and uncertainty quantification.
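
The calibration-emulation-sampling workflow described above can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's implementation: it uses a linear forward map in place of a BNN's layer-by-layer forward pass, a scikit-learn MLPRegressor as the DNN emulator, and a plain random-walk Metropolis sampler for the sampling stage.

```python
# Minimal CES sketch (illustrative assumptions only, not the paper's code):
# calibration collects parameter samples, emulation fits a DNN surrogate of the
# forward map, sampling runs MCMC against the cheap surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy setup: a linear "forward mapping" f(theta) = X @ theta stands in for the
# BNN's pass of inputs through its layers, with Gaussian likelihood and prior.
d, n = 5, 50
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
noise_sd = 0.1
y = X @ theta_true + noise_sd * rng.normal(size=n)

def log_post_emulated(theta, emulator):
    """Log-posterior evaluated with the emulator's approximate forward map."""
    resid = y - emulator.predict(theta[None, :])[0]
    return -0.5 * np.sum(resid**2) / noise_sd**2 - 0.5 * np.sum(theta**2)

# Calibration: a small set of parameter samples and their forward-map outputs,
# which become the emulator's training data.
thetas = rng.normal(size=(200, d))
outputs = thetas @ X.T  # shape (200, n): one forward-map evaluation per sample

# Emulation: fit a DNN (here a small MLP) to approximate theta -> f(theta).
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(thetas, outputs)

# Sampling: random-walk Metropolis on the emulated posterior; the loop never
# calls the (expensive, in a real BNN) exact forward map.
theta = np.zeros(d)
lp = log_post_emulated(theta, emulator)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal(size=d)
    lp_prop = log_post_emulated(prop, emulator)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

print("emulated-posterior mean:", np.round(np.mean(samples[1000:], axis=0), 2))
print("true parameters:       ", np.round(theta_true, 2))
```

In the setting of the paper, the exact BNN forward pass and its posterior would replace the toy linear map, and the MCMC scheme only queries the trained emulator, which is the source of the claimed speed-up.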