Rapid progress in text-to-image generative models coupled with their
deployment for visual content creation has magnified the importance of
thoroughly evaluating their performance and identifying potential biases. In
pursuit of models that generate images that are realistic, diverse, visually
appealing, and consistent with the given prompt, researchers and practitioners
often turn to automated metrics to facilitate scalable and cost-effective
performance profiling. However, commonly used metrics often fail to account for
the full diversity of human preferences, and even in-depth human evaluations
face challenges with subjectivity, especially as interpretations of evaluation
criteria vary across regions and cultures. In this work, we conduct a large,
cross-cultural study to examine how much annotators in Africa, Europe, and
Southeast Asia vary in their perception of geographic representation, visual
appeal, and consistency in real and generated images from state-of-the-art
public APIs. We collect over 65,000 image annotations and 20 survey responses.
We contrast human annotations with common automated metrics, finding that human
preferences vary notably across geographic location and that current metrics do
not fully account for this diversity. For example, annotators in different
locations often disagree on whether exaggerated, stereotypical depictions of a
region are considered geographically representative. In addition, the utility
of automatic evaluations is dependent on assumptions about their set-up, such
as the alignment of feature extractors with human perception of object
similarity or the definition of "appeal" captured in reference datasets used to
ground evaluations. We recommend steps for improved automatic and human
evaluations.