Machine learning (ML) models for molecules and materials commonly rely on a
decomposition of the global target quantity into local, atom-centered
contributions. This approach is convenient from a computational perspective,
enabling large-scale ML-driven simulations with a linear-scaling cost, and also
allows for the identification and post-hoc interpretation of contributions from
individual chemical environments and motifs to complicated macroscopic
properties. However, even though there exist practical justifications for these
decompositions, only the global quantity is rigorously defined, and thus it is
unclear to what extent the atomistic terms predicted by the model can be
trusted. Here, we introduce a quantitative metric, which we call the local
prediction rigidity (LPR), which allows one to assess how robust the locally
decomposed predictions of ML models are. We investigate the dependence of LPR
on aspects of model training, particularly the composition of the training
dataset, for a range of different problems, from simple toy models to real
chemical systems. We present strategies to systematically enhance the LPR,
which can be used to improve the robustness, interpretability, and
transferability of atomistic ML models.