Learning domain-invariant representations has become a popular approach to
unsupervised domain adaptation and is often justified by invoking a particular
suite of theoretical results. We argue that there are two significant flaws in
such arguments. First, the results in question hold only for a fixed
representation and do not account for information lost in non-invertible
transformations. Second, domain invariance is often a far too strict
requirement and does not always lead to consistent estimation, even under
strong and favorable assumptions. In this work, we give generalization bounds
for unsupervised domain adaptation that hold for any representation function by
acknowledging the cost of non-invertibility. In addition, we show that
penalizing the distance between densities is often wasteful and propose a bound
based on measuring the extent to which the support of the source domain covers
the target domain. We perform experiments on well-known benchmarks that
illustrate the shortcomings of current standard practice.
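For context, one canonical example of the kind of result typically invoked (shown here only as background; it is not the bound derived in this work) is that of Ben-David et al. (2010): for a fixed hypothesis class $\mathcal{H}$ and a fixed representation, the target-domain error of any $h \in \mathcal{H}$ satisfies
$$
\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \big[\epsilon_S(h') + \epsilon_T(h')\big],
$$
which motivates minimizing the divergence between source and target distributions but, because the representation is held fixed, does not account for the information destroyed when a non-invertible representation is learned to make the two distributions match.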