Recently, AI-generated content image quality assessment (AIGCIQA), which aims to assess the
quality of AI-generated images (AIGIs) from a human perception perspective, has
emerged as a new topic in computer vision. Unlike conventional image quality
assessment tasks, in which images are derived from originals degraded by
distortions such as noise, blur, and compression, images in AIGCIQA tasks are
typically produced by generative models from text prompts. Considerable
efforts have been made in recent years to advance AIGCIQA. However, most
existing AIGCIQA methods regress predicted scores directly from individual
generated images, overlooking the information contained in the text prompts of
these images, which partially limits their performance. To address this issue,
we propose a text-image encoder-based regression (TIER) framework.
Specifically, TIER takes the generated images and their corresponding text
prompts as inputs, employing an image encoder and a text encoder to extract
features from the images and prompts, respectively; the extracted features are
then used jointly to regress the quality score. To demonstrate the
effectiveness of the proposed TIER method, we
conduct extensive experiments on several mainstream AIGCIQA databases,
including AGIQA-1K, AGIQA-3K, and AIGCIQA2023. The experimental results
indicate that the proposed TIER method achieves superior performance compared
to the baselines in most cases.
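
For intuition, the following PyTorch sketch illustrates one way such a text-image encoder-based regression model could be wired up. It is not the authors' implementation: the concatenation-based fusion, the regression head, and all feature dimensions are assumptions, and the encoders are placeholders for any backbones that map an image or a tokenized prompt to a fixed-size feature vector.

```python
import torch
import torch.nn as nn


class TIERSketch(nn.Module):
    """Hypothetical text-image regression model for AIGCIQA.

    `image_encoder` and `text_encoder` are assumed to return
    (batch, img_dim) and (batch, txt_dim) feature tensors.
    """

    def __init__(self, image_encoder, text_encoder,
                 img_dim=512, txt_dim=512, hidden_dim=256):
        super().__init__()
        self.image_encoder = image_encoder  # e.g., a CNN/ViT backbone
        self.text_encoder = text_encoder    # e.g., a transformer text model
        # Regression head over the concatenated text-image features.
        self.regressor = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # scalar quality score
        )

    def forward(self, images, prompt_tokens):
        img_feat = self.image_encoder(images)        # (B, img_dim)
        txt_feat = self.text_encoder(prompt_tokens)  # (B, txt_dim)
        # Fuse the two modalities by simple concatenation (an assumption).
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.regressor(fused).squeeze(-1)     # (B,) predicted scores
```

In a setup like this, the encoders would typically be pretrained vision and language backbones, and the whole model would be trained against human quality ratings with a standard regression loss such as L1 or MSE.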