Uncertainty quantification (UQ) is important for reliability assessment and
enhancement of machine learning models. In deep learning, uncertainties arise
not only from the data, but also from the training procedure, which often
injects substantial noise and bias. These hinder the attainment of statistical
guarantees and, moreover, impose computational challenges on UQ due to the need
for repeated network retraining. Building upon the recent neural tangent kernel
theory, we create statistically guaranteed schemes to \emph{quantify} and
\emph{remove}, in a principled manner, the procedural uncertainty of
over-parameterized neural networks with very low computational effort. In
particular, our approach, based on what we call a procedural-noise-correcting
(PNC) predictor, removes the procedural uncertainty by using only \emph{one}
auxiliary network that is trained on a suitably labeled data set, instead of
the many retrained networks employed in deep ensembles. Moreover, by combining our
PNC predictor with suitable light-computation resampling methods, we build
several approaches to construct asymptotically exact-coverage confidence
intervals using as few as four trained networks without additional overhead.
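As a point of reference for the procedural uncertainty discussed above, the following sketch estimates the seed-to-seed variability of a retrained network in the deep-ensemble style that the PNC predictor is designed to replace. It is a minimal illustration under assumed, hypothetical settings (toy data, scikit-learn's \texttt{MLPRegressor}, ten seeds), not the paper's PNC construction or its confidence-interval procedure.

\begin{verbatim}
# Illustration only: procedural (seed-to-seed) uncertainty of a retrained
# network, estimated deep-ensemble style. Hypothetical setup, not the
# authors' PNC predictor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X).ravel() + 0.1 * rng.standard_normal(200)  # noisy labels
x_test = np.linspace(-1, 1, 50).reshape(-1, 1)

# Retrain the same architecture under different seeds; only the training
# procedure (initialization / optimization noise) changes between runs.
preds = []
for seed in range(10):
    net = MLPRegressor(hidden_layer_sizes=(256,), max_iter=2000,
                       random_state=seed)
    net.fit(X, y)
    preds.append(net.predict(x_test))
preds = np.array(preds)              # shape: (n_seeds, n_test_points)

procedural_std = preds.std(axis=0)   # spread across retrainings
print("mean procedural std over test points:", procedural_std.mean())
\end{verbatim}

The PNC approach described above replaces the loop over retrained networks with a single auxiliary network trained on a suitably labeled data set; that construction is not reproduced in this sketch.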