Images seen during test time are often not from the same distribution as
images used for learning. This problem, known as domain shift, occurs when
classifiers are trained on object-centric internet image databases and then
applied directly to scene understanding tasks. The consequence is often severe
performance degradation, which is one of the major barriers to deploying
classifiers in real-world systems. In this paper, we show how to
learn transform-based domain adaptation classifiers in a scalable manner. The
key idea is to exploit an implicit rank constraint, which originates from a
max-margin domain adaptation formulation, to make the optimization tractable.
Experiments show that the transformation between domains can be very
efficiently learned from data and easily applied to new categories. This begins
to bridge the gap between large-scale internet image collections and object
images captured in everyday life environments.
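
To make the idea of a learned cross-domain transform concrete, the sketch below is a minimal illustration, not the max-margin optimization used in the paper: it fits a linear transform W from target to source feature space by regularized least squares over class means (a deliberate simplification), then reuses source-trained linear SVM classifiers on transformed target features. The helper name fit_transform_matrix and the synthetic data are assumptions introduced only for illustration.

    # Illustrative sketch: a simplified stand-in for max-margin transform learning.
    import numpy as np
    from sklearn.svm import LinearSVC

    def fit_transform_matrix(X_src, y_src, X_tgt, y_tgt, reg=1e-2):
        """Fit W so that W @ (target class mean) approximates the source class mean.

        Hypothetical helper; solves a ridge-regularized least-squares problem,
        not the paper's max-margin objective.
        """
        classes = np.intersect1d(np.unique(y_src), np.unique(y_tgt))
        M_src = np.stack([X_src[y_src == c].mean(axis=0) for c in classes])  # (C, d)
        M_tgt = np.stack([X_tgt[y_tgt == c].mean(axis=0) for c in classes])  # (C, d)
        d = X_src.shape[1]
        # Solve min_W ||M_tgt W^T - M_src||^2 + reg * ||W||^2 in closed form.
        A = M_tgt.T @ M_tgt + reg * np.eye(d)
        B = M_tgt.T @ M_src
        W = np.linalg.solve(A, B).T  # (d, d); maps target features into source space
        return W

    # Usage: train classifiers once on source data, then adapt target data cheaply.
    rng = np.random.default_rng(0)
    X_src = rng.normal(size=(200, 32))
    y_src = rng.integers(0, 4, size=200)
    # Synthetic "shifted" target domain: same labels, perturbed features.
    X_tgt = X_src[:80] + rng.normal(scale=0.5, size=(80, 32)) + 0.5
    y_tgt = y_src[:80]

    W = fit_transform_matrix(X_src, y_src, X_tgt, y_tgt)
    clf = LinearSVC(max_iter=5000).fit(X_src, y_src)   # source-domain classifiers
    preds = clf.predict(X_tgt @ W.T)                    # apply to transformed target data

Because the source classifiers are never retrained, the same learned transform can be reused when new categories are added, which is the scalability property the abstract refers to.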