VISCOUNTH: A Large-Scale Multilingual Visual Question Answering Dataset for Cultural Heritage

Abstract

Visual question answering has recently been established as a fundamental multi-modal reasoning task in artificial intelligence that allows users to obtain information about visual content by asking questions in natural language. In the cultural heritage domain, this task can help assist visitors in museums and cultural sites, thus increasing engagement. However, the development of visual question answering models for cultural heritage is hindered by the lack of suitable large-scale datasets. To meet this demand, we built a large-scale, heterogeneous, and multilingual (Italian and English) dataset for cultural heritage that comprises approximately 500K Italian cultural assets and 6.5M question-answer pairs. We propose a novel formulation of the task that requires reasoning over both the visual content and an associated natural language description, and we present baselines for this task. Results show that the current state of the art is reasonably effective but still far from satisfactory; therefore, further research in this area is recommended. Nonetheless, we also present a holistic baseline that addresses both visual and contextual questions, to foster future research on the topic.
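
To make the proposed task formulation concrete, the sketch below shows one way a dataset instance and a two-branch (visual vs. contextual) baseline could be represented. All names here (`VqaSample`, `route_question`, the keyword heuristic, and the example asset) are illustrative assumptions, not the paper's actual data schema or baseline implementation.

```python
from dataclasses import dataclass

@dataclass
class VqaSample:
    image_path: str   # photo of the cultural asset
    description: str  # associated natural-language description
    question: str     # question in Italian or English
    answer: str       # ground-truth answer
    language: str     # "it" or "en"

# Hypothetical keyword heuristic standing in for a learned question classifier.
VISUAL_CUES = {"color", "colore", "shape", "forma", "depicted", "raffigurato"}

def route_question(sample: VqaSample) -> str:
    """Decide which branch of a two-branch baseline should answer:
    'visual' (image + question) or 'contextual' (description + question)."""
    words = set(sample.question.lower().split())
    return "visual" if words & VISUAL_CUES else "contextual"

sample = VqaSample(
    image_path="assets/duomo_milano.jpg",
    description="The Duomo di Milano is a Gothic cathedral begun in 1386.",
    question="When was the construction of the cathedral started?",
    answer="1386",
    language="en",
)
print(route_question(sample))  # -> "contextual"
```

A holistic baseline of the kind the abstract mentions would replace this keyword routing with a learned component and answer both question types within a single model.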
