The ability of an agent to perform well in new environments is a critical aspect
of intelligence. In machine learning, this ability is known as
strong or out-of-distribution generalization. However, differences in
data distributions alone are inadequate to fully capture the differences
between learning environments. In the present paper, we
investigate out-of-variable generalization, which concerns an agent's
ability to generalize to environments containing variables that were never
jointly observed before. This skill closely reflects the process of
animate learning: we, too, explore Nature by probing, observing, and measuring
subsets of variables at any given time. Mathematically,
out-of-variable generalization requires the efficient re-use of past
marginal information, i.e., information over subsets of previously observed
variables. We study this problem, focusing on prediction tasks across
environments that contain overlapping, yet distinct, sets of causes. We show
that after fitting a classifier, the residual distribution in one environment
reveals the partial derivative of the true generating function with respect to
the unobserved causal parent in that environment. We leverage this information
and propose a method that exhibits non-trivial out-of-variable generalization
performance when facing an overlapping, yet distinct, set of causal predictors.
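
To make the residual insight concrete, consider a minimal additive-noise sketch (a simplified special case; the generating process and all names below are illustrative assumptions, not the paper's general construction). Suppose $Y = g(X_1, X_2) + c\,X_3 + \varepsilon$, where $X_3 \perp (X_1, X_2)$ has zero mean, $\varepsilon$ is symmetric noise, and $c = \partial f / \partial x_3$ is the derivative with respect to the parent unobserved in the source environment. The residual of the optimal predictor is $R = Y - \mathbb{E}[Y \mid X_1, X_2] = c\,X_3 + \varepsilon$, so its third central moment satisfies $\mathbb{E}[R^3] = c^3\,\mathbb{E}[X_3^3]$: when $X_3$ is skewed, the residual distribution identifies both the magnitude and the sign of $c$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical generating process (illustrative, not from the paper):
# Y = g(X1, X2) + c * X3 + eps, with X3 unobserved in the source environment.
c_true = 1.5                             # partial derivative of f w.r.t. x3
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = rng.exponential(1.0, size=n) - 1.0  # zero mean, skewed: E[X3^3] = 2
eps = rng.normal(0.0, 0.1, size=n)       # symmetric noise: E[eps^3] = 0
y = np.sin(x1) + x2**2 + c_true * x3 + eps

# Source environment: regress Y on (X1, X2). We use the oracle conditional
# mean for clarity; any consistent regressor plays the same role.
y_hat = np.sin(x1) + x2**2
residual = y - y_hat

# E[R^3] = c^3 * E[X3^3], and E[X3^3] is available from the target
# environment's marginal over X3, so c is identified up to Monte Carlo error.
c_est = np.cbrt(np.mean(residual**3) / 2.0)
print(f"true df/dx3 = {c_true}, estimated = {c_est:.3f}")
```

The third moment is used rather than the variance because symmetric noise contributes nothing to it, and because an odd moment preserves the sign of the derivative, which the variance would lose.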