10 research outputs found

    SniffyArt: The Dataset of Smelling Persons

    Smell gestures play a crucial role in the investigation of past smells in the visual arts, yet their automated recognition poses significant challenges. This paper introduces the SniffyArt dataset, consisting of 1,941 individuals represented in 441 historical artworks. Each person is annotated with a tightly fitting bounding box, 17 pose keypoints, and a gesture label. By integrating these annotations, the dataset enables the development of hybrid classification approaches for smell gesture recognition. The dataset's high-quality human pose estimation keypoints are achieved by merging five separate sets of keypoint annotations per person. The paper also presents a baseline analysis evaluating representative algorithms for detection, keypoint estimation, and classification, showcasing the potential of combining keypoint estimation with smell gesture classification. The SniffyArt dataset lays a solid foundation for future research and the exploration of multi-task approaches that leverage pose keypoints and person boxes to advance the analysis of human gestures and olfactory dimensions in historical artworks. Comment: 10 pages, 8 figures
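The merging of five keypoint annotation sets per person described above could look roughly like the sketch below. The abstract does not state the exact merging rule, so the coordinate-wise median and the majority-vote visibility used here are assumptions, and the function name is illustrative; only the COCO-style 17-keypoint (x, y, visibility) layout comes from the description.

```python
import numpy as np

def merge_keypoint_sets(annotation_sets):
    """Merge several keypoint annotation sets for one person.

    annotation_sets: array-like of shape (n_annotators, 17, 3) holding
    (x, y, visibility) triples in COCO keypoint order.
    Returns a single (17, 3) array: coordinate-wise median over the
    annotators who marked the joint visible, with visibility kept only
    if at least half of the annotators saw the joint (assumed rule).
    """
    sets_arr = np.asarray(annotation_sets, dtype=float)
    merged = np.zeros((17, 3))
    for k in range(17):
        visible = sets_arr[:, k, 2] > 0  # annotators who marked joint k
        if visible.any():
            merged[k, :2] = np.median(sets_arr[visible, k, :2], axis=0)
            merged[k, 2] = 2 if visible.sum() * 2 >= len(sets_arr) else 0
    return merged
```

A robust statistic such as the median limits the influence of a single outlier annotator, which is one plausible reason to collect five annotation sets per person in the first place.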

    Multimodal and Multilingual Understanding of Smells using VilBERT and mUNITER

    We evaluate state-of-the-art multimodal models for detecting common olfactory references in multilingual text and images within the scope of the Multimodal Understanding of Smells in Texts and Images (MUSTI) task at MediaEval'22. The goal of MUSTI Subtask 1 is to classify paired texts and images as to whether they refer to the same smell source or not. We approach this task as a visual entailment problem and evaluate the performance of the English model ViLBERT and the multilingual model mUNITER on MUSTI Subtask 1. Although the base ViLBERT and mUNITER models perform worse than a dummy baseline, fine-tuning these models improves performance significantly in almost all scenarios. We find that fine-tuning mUNITER with SNLI-VE and the MUSTI training data performs better than the other configurations we implemented. Our experiments demonstrate that the task presents some challenges, but it is by no means impossible. Our code is available at https://github.com/Odeuropa/musti-eval-baselines

    MUSTI-Multimodal Understanding of Smells in Texts and Images at MediaEval 2022

    MUSTI aims to collect information about smell from digital text and image collections from the 17th to the 20th century in a multilingual setting. More precisely, MUSTI studies the relatedness of smell evocations (smell sources being identified, objects being detected, gestures being mentioned or recognized) between texts and images. The main task is a binary classification task: identifying whether a pair of an image and a text snippet refers to the same smell source, independent of what that smell source is. An optional subtask is the determination of the smell sources that make the respective pair related.

    Odeuropa Image Analysis Demonstrator Data

    This is the data required to run the Odeuropa Image Analysis Demonstrator Notebook.

    Image Demonstrator Data

    Images, annotations, and metadata for the Odeuropa image analysis demonstrator.

    The Object Detection for Olfactory References (ODOR) Dataset

    Real-world applications of computer vision in the humanities require algorithms to be robust against artistic abstraction, peripheral objects, and subtle differences between fine-grained target classes. Existing datasets provide instance-level annotations on artworks but are generally biased towards the image centre and limited with regard to detailed object classes. The ODOR dataset fills this gap, offering 38,116 object-level annotations across 4,712 images, spanning an extensive set of 139 fine-grained categories. It has challenging dataset properties, such as a detailed set of categories, dense and overlapping objects, and spatial distribution over the whole image canvas. Inspiring further research on artwork object detection and broader visual cultural heritage studies, the dataset challenges researchers to explore the intersection of object recognition and smell perception.

    How to use: To download the dataset images, run the `download_imgs.py` script in the subfolder; the images will be downloaded to the `imgs` folder. The annotations are provided in COCO JSON format. To represent the two-level hierarchy of the object classes, we make use of the supercategory field in the categories array as defined by COCO. In addition to the object-level annotations, we provide a CSV file with image-level metadata, which includes content-related fields, such as Iconclass codes or image descriptions, as well as formal annotations, such as artist, license, or creation year. For the sake of license compliance, we do not publish the images directly (although most of them are in the public domain); instead, we provide links to their source collections in the metadata file (`meta.csv`) and a Python script to download the artwork images (`download_images.py`). The mapping between the `images` array of `annotations.json` and the `metadata.csv` file can be accomplished via the `file_name` attribute of the elements of the `images` array and the unique `File Name` column of the `metadata.csv` file, respectively.
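The `file_name` / `File Name` mapping described above can be sketched as a small join. The function name and the returned structure are illustrative assumptions; only the file layout (a COCO `images` array plus a CSV with a unique `File Name` column) comes from the dataset description.

```python
import csv
import json

def join_images_with_metadata(annotations_path, metadata_path):
    """Attach per-image metadata rows to COCO-style image records.

    Matches the `file_name` attribute of each element of the
    annotations' `images` array against the `File Name` column of the
    metadata CSV, as the dataset description specifies. Images without
    a metadata row get metadata=None.
    """
    with open(annotations_path) as f:
        coco = json.load(f)
    with open(metadata_path, newline="") as f:
        meta_by_name = {row["File Name"]: row for row in csv.DictReader(f)}
    return [
        {**img, "metadata": meta_by_name.get(img["file_name"])}
        for img in coco["images"]
    ]
```

Building a dictionary keyed on `File Name` first makes the join linear in the number of images rather than quadratic.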

    Odeuropa Dataset of Smell-Related Objects

    This dataset (the Odeuropa Dataset of Olfactory Objects) is released as part of the Odeuropa project. The annotations are identical to the training set of the ICPR2022-ODOR Challenge. The dataset contains bounding box annotations for smell-active objects in historical artworks gathered from various digital collections; the annotated objects either carry smells themselves or hint at the presence of smells. It provides 15,484 bounding boxes on 2,116 artworks in 87 object categories. An additional CSV file contains further image-level metadata such as artist, collection, or year of creation.

    How to use: Due to licensing issues, we cannot provide the images directly; instead, we provide a collection of links and a download script. To get the images, run the `download_imgs.py` script, which loads the images using the links from the `metadata.csv` file. The downloaded images can then be found in the `images` subfolder; their overall size is c. 200 MB. The bounding box annotations can be found in `annotations.json` and follow the COCO JSON format. The mapping between the `images` array of `annotations.json` and the `metadata.csv` file can be accomplished via the `file_name` attribute of the elements of the `images` array and the unique `File Name` column of the `metadata.csv` file. Additional image-level metadata is available in `metadata.csv`.
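Reading the COCO-format `annotations.json` described above might look like this minimal sketch. The helper name is hypothetical; the `images`, `annotations`, and `categories` arrays, with `image_id`, `bbox`, and `category_id` fields, are part of the standard COCO layout the description cites.

```python
import json
from collections import defaultdict

def boxes_per_image(annotations_path):
    """Group COCO-format bounding boxes by image id, with category names.

    Reads the `annotations` array, resolves each `category_id` against
    the `categories` array, and returns {image_id: [{"bbox": ...,
    "category": ...}, ...]}.
    """
    with open(annotations_path) as f:
        coco = json.load(f)
    cat_names = {c["id"]: c["name"] for c in coco["categories"]}
    grouped = defaultdict(list)
    for ann in coco["annotations"]:
        grouped[ann["image_id"]].append(
            {"bbox": ann["bbox"], "category": cat_names[ann["category_id"]]}
        )
    return dict(grouped)
```

COCO `bbox` values are `[x, y, width, height]` in pixels, which is worth keeping in mind when converting to the corner format many detectors expect.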

    Nose-First. Towards an Olfactory Gaze for Digital Art History

    What are the historical smells and olfactory narratives of Europe? How can we make use of digital museum collections to trace information on olfactory heritage? In recent years, European cultural heritage institutions have invested heavily in large-scale digitization, which provides us with a wealth of object, text, and image data that can be browsed and analysed by humans and machines. However, as heritage institutes, as well as humanities and computer science scholars, have a long-standing tradition of ocular-centric thinking, it is difficult to find relevant information about smell in digital collections. The historical gaze has long been visually biased, leaving smell overlooked within many digital collections. This paper offers a roadmap towards an olfactory gaze for digital cultural heritage collections. The work we present here is part of the Odeuropa project, an action of the Horizon 2020 programme, which promotes research and innovation; it presents work in progress on olfactory heritage and sensory mining in digital art collections. First, we describe the current state of the art, showing how olfactory information is traditionally missing or even omitted from digital art collection management systems, and present baseline research that maps the gaps and biases in art thesauruses and iconographic classification systems. Next, we present two connected solutions that we are currently developing in the Odeuropa project: a) a database with olfactory information related to historical artworks, aimed at enriching existing metadata and improving search solutions, and b) computer vision methodologies for sensory mining. Finally, we pitch a new idea: a nose-first scent wheel. When integrated into current digital collection interfaces, the scent wheel would encourage audiences to develop an olfactory gaze and offer new ways to uncover the rich storylines of olfactory heritage within digital collections.