
    VISCOUNTH: A Large-Scale Multilingual Visual Question Answering Dataset for Cultural Heritage

    Visual question answering has recently been established as a fundamental multi-modal reasoning task of artificial intelligence that allows users to get information about visual content by asking questions in natural language. In the cultural heritage domain, this task can help assist visitors in museums and cultural sites, thus increasing engagement. However, the development of visual question answering models for cultural heritage is hindered by the lack of suitable large-scale datasets. To meet this demand, we built a large-scale, heterogeneous and multilingual (Italian and English) dataset for cultural heritage that comprises approximately 500K Italian cultural assets and 6.5M question-answer pairs. We propose a novel formulation of the task that requires reasoning over both the visual content and an associated natural language description, and present baselines for this task. Results show that the current state of the art is reasonably effective but still far from satisfactory; therefore, further research in this area is recommended. To this end, we also present a holistic baseline that addresses both visual and contextual questions, to foster future research on the topic.
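    The abstract describes question-answer pairs that require reasoning over both an image and an associated natural-language description. The sketch below shows one way such a sample could be structured; the field names and the example values are illustrative assumptions, not the actual VISCOUNTH schema.

```python
from dataclasses import dataclass

@dataclass
class VQASample:
    """One question-answer pair for a cultural asset.

    Field names are hypothetical; the real dataset layout may differ.
    """
    image_path: str     # photo of the cultural asset
    description: str    # associated natural-language description (context)
    question: str       # question in Italian or English
    answer: str         # ground-truth answer
    question_type: str  # e.g. "visual" or "contextual"
    language: str       # "it" or "en"

# A contextual question is answered from the description, a visual one from
# the image; a holistic baseline must handle both kinds.
sample = VQASample(
    image_path="assets/000123.jpg",
    description="Marble statue of David by Michelangelo, completed in 1504.",
    question="Who sculpted this statue?",
    answer="Michelangelo",
    question_type="contextual",
    language="en",
)
```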

    Visual Question Answering for Cultural Heritage

    Technology and the fruition of cultural heritage are becoming increasingly entwined, especially with the advent of smart audio guides, virtual and augmented reality, and interactive installations. Machine learning and computer vision are important components of this ongoing integration, enabling new interaction modalities between user and museum. Nonetheless, the most frequent way of interacting with paintings and statues still remains taking pictures. Yet images alone can only convey the aesthetics of the artwork, lacking the information that is often required to fully understand and appreciate it. Usually this additional knowledge comes both from the artwork itself (and therefore the image depicting it) and from an external source of knowledge, such as an information sheet. While the former can be inferred by computer vision algorithms, the latter needs more structured data to pair visual content with relevant information. Regardless of its source, this information must still be effectively transmitted to the user. A popular emerging trend in computer vision is Visual Question Answering (VQA), in which users can interact with a neural network by posing questions in natural language and receiving answers about the visual content. We believe that this will be the evolution of smart audio guides for museum visits and simple image browsing on personal smartphones. This will turn the classic audio guide into a smart personal instructor with which the visitor can interact by asking for explanations focused on specific interests. The advantages are twofold: on the one hand, the cognitive burden on the visitor decreases, limiting the flow of information to what the user actually wants to hear; on the other hand, it offers the most natural way of interacting with a guide, favoring engagement. Comment: accepted at FlorenceHeritech 202
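    As a concrete illustration of the interaction the abstract describes, the sketch below runs an off-the-shelf vision-language model on a visitor's photo and a free-form question. The checkpoint (ViLT fine-tuned on VQAv2, via the Hugging Face transformers pipeline) and the file name are assumptions for illustration only, not the model or setup used by the authors.

```python
from transformers import pipeline
from PIL import Image

# Off-the-shelf VQA pipeline; answers are ranked by confidence.
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

artwork = Image.open("primavera.jpg")  # photo taken by the visitor
question = "How many figures are in the painting?"

for candidate in vqa(image=artwork, question=question, top_k=3):
    print(f"{candidate['answer']}: {candidate['score']:.3f}")
```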

    Visual Question Answering for Cultural Heritage

    The International Conference Florence Heri-Tech is a conference on technology applied to cultural heritage. It involves different areas and topics such as engineering, materials science, and digital heritage.

    CICHMKG: a large-scale and comprehensive Chinese intangible cultural heritage multimodal knowledge graph

    Intangible Cultural Heritage (ICH) witnesses human creativity and wisdom across long histories and is composed of a variety of immaterial manifestations. The rapid development of digital technologies has accelerated the recording of ICH, generating a sheer amount of heterogeneous data, but in a state of fragmentation. To resolve this, existing studies mainly adopt knowledge graphs (KGs), which can provide rich knowledge representation. However, most KGs are text-based and text-derived, and are incapable of providing related images or empowering downstream multimodal tasks; this also makes it difficult for the public to form a visual perception of ICH and comprehend it completely, especially when they lack related ICH knowledge. Hence, taking the Chinese national-level ICH list as an example, we propose to construct a large-scale and comprehensive Multimodal Knowledge Graph (CICHMKG) combining text and image entities from multiple data sources, and we give a practical construction framework. Additionally, to select representative images for ICH entities, we propose a method composed of a denoising algorithm (CNIFA) and a series of criteria, utilizing global and local visual features of images and textual features of captions. Extensive empirical experiments demonstrate its effectiveness. Lastly, we construct the CICHMKG, consisting of 1,774,005 triples, and visualize it to facilitate interaction and help the public dive deeply into ICH.
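    To make the idea of a multimodal knowledge graph concrete, the sketch below links a text entity for an ICH item to an image entity via triples. The relation names, the example entity, and the use of rdflib are illustrative assumptions and do not reflect the authors' actual CICHMKG schema.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical namespace for ICH entities and relations.
ICH = Namespace("http://example.org/ich/")

g = Graph()
item = ICH["kunqu_opera"]              # text-derived ICH entity
image = ICH["image/kunqu_0001"]        # selected representative image entity

g.add((item, ICH.name, Literal("Kunqu Opera", lang="en")))
g.add((item, ICH.category, Literal("Traditional drama")))
g.add((item, ICH.hasRepresentativeImage, image))   # link text entity to image entity
g.add((image, ICH.caption, Literal("A Kunqu performance on stage")))

print(g.serialize(format="turtle"))
```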

    Investigating user experience and bias mitigation of the multi-modal retrieval of historical data

    Decolonisation has raised the discussion of technology having the responsibility of presenting multiple perspectives to users. This is specifically relevant to African precolonial heritage artefact data, where the data contains the bias of the curators of the artefacts and there are primary concerns surrounding the social responsibility of these systems. Historians have argued that common information retrieval algorithms may further bias the results presented to users. While research on mitigating bias in information retrieval is steered in the direction of artificial intelligence and automation, an often-neglected approach is that of user control. User control has proven to be beneficial in other research areas and is strongly aligned with the core principles of decolonisation. Thus, this study investigated the effects of adding user control and algorithmic variation to a multimodal information retrieval system containing precolonial African heritage data on user experience, bias mitigation, and retrieval effectiveness. This was done by conducting two experiments: 1) an experiment to provide a baseline offline evaluation of various algorithms for text and image retrieval, and 2) an experiment to investigate the user experience with a retrieval system that allowed users to compare algorithms. The first experiment explored the differences in retrieval effectiveness between colour-based pre-processing algorithms, shape-based pre-processing algorithms, and pre-processing algorithms based on a combination of colour and shape detection; the differences in retrieval effectiveness between stemming, stopword removal, and synonym query expansion were also evaluated for text retrieval. The second experiment explored the manner in which users experience bias in the context of common information retrieval algorithms for both the textual and image data available in typical historical archives. Users were presented with results generated by multiple algorithmic variations, in a variety of different result formats, and using a variety of different search methods, affording them the opportunity to decide what they deem provides a more relevant set of results. The results of the study show that algorithmic variation can lead to significantly improved retrieval performance with respect to image-based retrieval. The results also show that users potentially prefer shape-based image algorithms over colour-based image algorithms, and that shape-based image algorithms can lead to significantly improved retrieval of historical data. The results further show that users have justifiable preferences for multimodal query and result formats to improve user experience, and that users believe they can control bias using algorithmic variation.
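    The abstract contrasts colour-based and shape-based image pre-processing for retrieval. The sketch below illustrates one way each family of features could be computed and used for ranking; the specific descriptors (an HSV histogram and Hu moments via OpenCV) are assumptions for illustration, not the exact algorithms evaluated in the study.

```python
import cv2
import numpy as np

def colour_feature(path: str) -> np.ndarray:
    """Colour-based descriptor: normalised HSV histogram."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def shape_feature(path: str) -> np.ndarray:
    """Shape-based descriptor: Hu moments of the binarised greyscale image."""
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.HuMoments(cv2.moments(binary)).flatten()

def rank(query: np.ndarray, collection: dict[str, np.ndarray]) -> list[str]:
    """Rank artefact images by Euclidean distance to the query feature."""
    return sorted(collection, key=lambda k: float(np.linalg.norm(collection[k] - query)))
```

    Swapping `colour_feature` for `shape_feature` (or combining both) gives the kind of algorithmic variation users could be allowed to choose between.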

    Diffusion Based Augmentation for Captioning and Retrieval in Cultural Heritage

    Cultural heritage applications and advanced machine learning models are creating a fruitful synergy to provide effective and accessible ways of interacting with artworks. Smart audio guides, personalized art-related content, and gamification approaches are just a few examples of how technology can be exploited to provide additional value to artists or exhibitions. Nonetheless, from a machine learning point of view, the amount of available artistic data is often not enough to train effective models. Off-the-shelf computer vision modules can still be exploited to some extent, yet a severe domain shift exists between art images and the standard natural image datasets used to train such models, which can lead to degraded performance. This paper introduces a novel approach to address the challenges of limited annotated data and domain shift in the cultural heritage domain. By leveraging generative vision-language models, we augment art datasets by generating diverse variations of artworks conditioned on their captions. This augmentation strategy enhances dataset diversity, bridging the gap between natural images and artworks and improving the alignment of visual cues with knowledge from general-purpose datasets. The generated variations assist in training vision and language models that have a deeper understanding of artistic characteristics and are able to generate better captions with appropriate jargon. Comment: Accepted at ICCV 2023 4th Workshop on e-Heritage
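    The sketch below illustrates the general idea of caption-conditioned augmentation: an image-to-image diffusion model generates variations of an artwork guided by its caption. The checkpoint, file names, and hyper-parameters are assumptions for illustration, not the configuration used in the paper.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

artwork = Image.open("annunciation.jpg").convert("RGB").resize((512, 512))
caption = "The Annunciation, tempera on panel, early Renaissance"

# Lower strength keeps the composition close to the original artwork while
# still producing distinct variations to enlarge the training set.
variations = pipe(
    prompt=caption,
    image=artwork,
    strength=0.5,
    guidance_scale=7.5,
    num_images_per_prompt=4,
).images

for i, img in enumerate(variations):
    img.save(f"annunciation_aug_{i}.png")
```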