Recent work addressing model reliability and generalization has resulted in a
variety of methods that seek to proactively address differences between the
training and unknown target environments. While most methods achieve this by
finding distributions that are invariant across environments, we show that
these methods do not necessarily find the same distributions, which has implications for
performance. In this paper we unify existing work on prediction using stable
distributions by relating environmental shifts to edges in the graph underlying
a prediction problem, and characterize stable distributions as those which
effectively remove these edges. We then quantify the effect of edge deletion on
performance in the linear case, and corroborate the findings in simulated and
real-data experiments.
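
As a concrete illustration of this claim (not taken from the paper), the following sketch assumes a toy linear SCM E → X → Y → Z in which the environment E shifts the variance of X. The conditional P(Y | X) is stable across environments because conditioning on X blocks, i.e. effectively removes, the edge from E; the conditional P(Y | Z) depends on the marginal of X, and hence on E. All variable names and coefficients are illustrative assumptions.

```python
# Minimal sketch (not code from the paper), assuming a toy linear SCM
# E -> X -> Y -> Z, where the environment E sets the variance of X.
import numpy as np

rng = np.random.default_rng(0)

def sample(sigma_x, n=200_000):
    x = rng.normal(scale=sigma_x, size=n)  # E -> X: environment sets Var(X)
    y = 2.0 * x + rng.normal(size=n)       # X -> Y: mechanism fixed across environments
    z = y + rng.normal(size=n)             # Y -> Z: anti-causal descendant of Y
    return x, y, z

for sigma_x in (1.0, 2.0):
    x, y, z = sample(sigma_x)
    # Regressing Y on X uses P(Y | X), which does not depend on E:
    # the slope is stable (about 2.0 in both environments).
    slope_x = np.cov(x, y)[0, 1] / np.var(x)
    # Regressing Y on Z uses P(Y | Z), which depends on Var(X) and hence on E:
    # the slope shifts between environments.
    slope_z = np.cov(z, y)[0, 1] / np.var(z)
    print(f"Var(X)={sigma_x**2:.0f}: "
          f"slope(Y~X)={slope_x:.2f}, slope(Y~Z)={slope_z:.2f}")
```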