Multimodal machine learning has achieved remarkable progress in a wide range
of scenarios. However, the reliability of multimodal learning remains largely
unexplored. In this paper, through extensive empirical studies, we identify that
current multimodal classification methods suffer from unreliable predictive
confidence, tending to rely on only partial modalities when estimating confidence.
Specifically, we find that the confidence estimated by current models can
even increase when some modalities are corrupted. To address this issue, we
introduce an intuitive principle for multimodal learning, i.e., the predictive
confidence should not increase when one modality is removed. Accordingly, we propose a
novel regularization technique, i.e., Calibrating Multimodal Learning (CML)
regularization, to calibrate the predictive confidence of previous methods.
This technique can be flexibly integrated into existing models and improves
performance in terms of confidence calibration, classification accuracy, and
model robustness.
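
As a rough illustration of the principle (not the paper's exact formulation), one way to realize such a regularizer is a hinge-style penalty that is active only when confidence estimated from a modality-removed input exceeds the confidence from the full input; the function name and tensor shapes below are assumptions made for this sketch.

```python
import torch

def cml_style_penalty(conf_full, conf_drop):
    """Illustrative penalty for the principle above (sketch, not the official CML code).

    conf_full: (batch,) predicted confidence using all modalities.
    conf_drop: (batch, num_modalities) predicted confidence when each
               single modality is removed in turn.

    Only cases where removing a modality *increases* confidence are
    penalized; the penalty is zero when the principle already holds.
    """
    violation = conf_drop - conf_full.unsqueeze(1)  # > 0 means a violation
    return torch.clamp(violation, min=0).mean()

# Usage sketch: add the penalty to the task loss with a weight lambda_cml.
# loss = task_loss + lambda_cml * cml_style_penalty(conf_full, conf_drop)
```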