For object re-identification (re-ID), learning from synthetic data has become
a promising strategy to cheaply acquire large-scale annotated datasets and
effective models with few privacy concerns. Many interesting research problems
arise from this strategy, e.g., how to reduce the domain gap between the
synthetic source and the real-world target. To facilitate the development of new approaches to
learning from synthetic data, we introduce the Alice benchmarks: large-scale
datasets together with evaluation protocols for the research
community. The Alice benchmarks cover two object re-ID tasks:
person and vehicle re-ID. We collected and annotated two challenging real-world
target datasets, AlicePerson and AliceVehicle, captured under varying
illumination, image resolution, etc. An important feature of our real-world
targets is that the clusterability of their training sets is not manually
guaranteed, which brings them closer to realistic domain adaptation test
scenarios. As synthetic source domains, we reuse the existing PersonX and
VehicleX datasets. The primary
goal is to train models from synthetic data that can work effectively in the
real world. In this paper, we detail the settings of the Alice benchmarks, provide
an analysis of existing, commonly used domain adaptation methods, and discuss
some interesting future directions. An online server will be set up for the
community to evaluate methods conveniently and fairly.

Comment: 9 pages, 4 figures, 4 tables