8 research outputs found

    An analysis of the transfer learning of convolutional neural networks for artistic images

    Transfer learning from huge natural image datasets, fine-tuning of deep neural networks and the use of the corresponding pre-trained networks have become the de facto core of art analysis applications. Nevertheless, the effects of transfer learning are still poorly understood. In this paper, we first use techniques for visualizing the network's internal representations in order to provide clues to the understanding of what the network has learned on artistic images. Then, we provide a quantitative analysis of the changes introduced by the learning process, using metrics in both the feature and parameter spaces, as well as metrics computed on the set of maximal activation images. These analyses are performed on several variations of the transfer learning procedure. In particular, we observed that the network can specialize some pre-trained filters to the new image modality and that higher layers tend to concentrate classes. Finally, we show that a double fine-tuning involving a medium-size artistic dataset can improve the classification on smaller datasets, even when the task changes.
    Comment: Accepted at Workshop on Fine Art Pattern Extraction and Recognition (FAPER), ICPR, 202
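    A minimal sketch of the double fine-tuning idea mentioned above, assuming a PyTorch/torchvision setup: an ImageNet-pretrained backbone is fine-tuned on a medium-size artistic dataset and then fine-tuned again on the small target dataset. The data loaders, class counts, and hyperparameters are hypothetical placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, num_classes, epochs=5, lr=1e-4):
    # Replace the classification head for the new task, then train all layers.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 0: ImageNet weights; stage 1: medium-size artistic dataset; stage 2: small target dataset.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model = fine_tune(model, medium_art_loader, num_classes=25)    # medium_art_loader: hypothetical DataLoader
model = fine_tune(model, small_target_loader, num_classes=10)  # small_target_loader: hypothetical DataLoader
```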

    SniffyArt: The Dataset of Smelling Persons

    Smell gestures play a crucial role in the investigation of past smells in the visual arts, yet their automated recognition poses significant challenges. This paper introduces the SniffyArt dataset, consisting of 1941 individuals represented in 441 historical artworks. Each person is annotated with a tightly fitting bounding box, 17 pose keypoints, and a gesture label. By integrating these annotations, the dataset enables the development of hybrid classification approaches for smell gesture recognition. The dataset's high-quality human pose estimation keypoints are achieved by merging five separate sets of keypoint annotations per person. The paper also presents a baseline analysis, evaluating the performance of representative algorithms for detection, keypoint estimation, and classification tasks, showcasing the potential of combining keypoint estimation with smell gesture classification. The SniffyArt dataset lays a solid foundation for future research and the exploration of multi-task approaches leveraging pose keypoints and person boxes to advance human gesture and olfactory dimension analysis in historical artworks.
    Comment: 10 pages, 8 figure
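    Merging five keypoint annotation sets into one consensus skeleton could, for example, be done by averaging per joint; the sketch below illustrates that idea under the assumption of COCO-style (x, y, visibility) keypoints and is not necessarily the exact procedure used for SniffyArt.

```python
import numpy as np

def merge_keypoints(annotation_sets, min_votes=3):
    """Average 17 COCO-style keypoints over annotators, joint by joint."""
    stack = np.stack(annotation_sets)        # shape (n_annotators, 17, 3)
    visible = stack[..., 2] > 0              # which annotators marked each joint
    merged = np.zeros((stack.shape[1], 3))
    for j in range(stack.shape[1]):
        votes = visible[:, j]
        if votes.sum() >= min_votes:         # keep joints most annotators agree on
            merged[j, :2] = stack[votes, j, :2].mean(axis=0)
            merged[j, 2] = 2                 # mark as labeled and visible
    return merged

# Example: five annotation sets for one person, 17 keypoints each.
sets = [np.random.rand(17, 3) for _ in range(5)]
consensus = merge_keypoints(sets)
```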

    A Data Set and a Convolutional Model for Iconography Classification in Paintings

    Iconography in art is the discipline that studies the visual content of artworks to determine their motifs and themes and to characterize the way these are represented. It is a subject of active research for a variety of purposes, including the interpretation of meaning, the investigation of the origin and diffusion in time and space of representations, and the study of influences across artists and artworks. With the proliferation of digital archives of art images, the possibility arises of applying Computer Vision techniques to the analysis of art images at an unprecedented scale, which may support iconography research and education. In this paper we introduce a novel paintings data set for iconography classification and present the quantitative and qualitative results of applying a Convolutional Neural Network (CNN) classifier to the recognition of the iconography of artworks. The proposed classifier achieves good performance (71.17% Precision, 70.89% Recall, 70.25% F1-Score and 72.73% Average Precision) in the task of identifying saints in Christian religious paintings, a task made difficult by the presence of classes with very similar visual features. Qualitative analysis of the results shows that the CNN focuses on the traditional iconic motifs that characterize the representation of each saint and exploits such hints to attain correct identification. The ultimate goal of our work is to enable the automatic extraction, decomposition, and comparison of iconography elements to support iconographic studies and automatic artwork annotation.
    Comment: Published at ACM Journal on Computing and Cultural Heritage (JOCCH) https://doi.org/10.1145/345888
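    For reference, figures like the precision, recall, and F1-score quoted above can be computed with scikit-learn; the labels below are invented stand-ins for ground-truth saints and CNN predictions, not data from the paper.

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical ground-truth and predicted saint labels.
y_true = ["st_peter", "st_paul", "st_jerome", "st_peter", "st_paul"]
y_pred = ["st_peter", "st_jerome", "st_jerome", "st_peter", "st_paul"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Precision {precision:.2%}  Recall {recall:.2%}  F1 {f1:.2%}")
```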

    INSPECTION OF INSULATORS ON ELECTRIC POWER TRANSMISSION LINES USING ARTIFICIAL INTELLIGENCE

    One of the most important processes in the inspection of electric power transmission lines is the detection of faults in electrical insulators. The most common defect found in electrical insulators is the breakage of discs within the insulator string. Traditional binarization-based segmentation methods show a poor ability to detect an insulator when its surroundings change considerably. An artificial intelligence algorithm known as You Only Look Once (YOLO) is used to detect and localize the electrical insulators in images of high-voltage transmission towers. After the electrical insulators are localized, the original insulator image is upscaled to twice its size using cubic interpolation, so that the high-voltage line inspector can properly visualize the insulators under inspection. The MobileNet convolutional neural network architecture, used with the YOLO algorithm, showed better precision and execution speed than the Full YOLO and InceptionV3 architectures
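    The post-detection step described above (cropping each detected insulator and doubling its size with cubic interpolation) could look like the following OpenCV sketch; detect_insulators stands in for the YOLO inference step and is not a real API.

```python
import cv2

def upscale_detection(image, box):
    """Crop a detected box (x, y, w, h) and upscale it to twice its size."""
    x, y, w, h = box
    crop = image[y:y + h, x:x + w]
    # fx=fy=2 doubles both dimensions; INTER_CUBIC is OpenCV's bicubic interpolator.
    return cv2.resize(crop, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

image = cv2.imread("tower.jpg")               # hypothetical input image
for box in detect_insulators(image):          # detect_insulators: placeholder for YOLO inference
    x, y, _, _ = box
    zoomed = upscale_detection(image, box)
    cv2.imwrite(f"insulator_{x}_{y}.png", zoomed)
```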

    Deep Transfer Learning for Art Classification Problems

    In this paper we investigate whether Deep Convolutional Neural Networks (DCNNs), which have obtained state-of-the-art results on the ImageNet challenge, are able to perform equally well on three different art classification problems. In particular, we assess whether it is beneficial to fine-tune the networks instead of just using them as off-the-shelf feature extractors for a separately trained softmax classifier. Our experiments show how the first approach yields significantly better results and allows the DCNNs to develop new selective attention mechanisms over the images, which provide powerful insights about which pixel regions allow the networks to successfully tackle the proposed classification challenges. Furthermore, we also show how DCNNs that have been fine-tuned on a large artistic collection outperform the same architectures pre-trained on the ImageNet dataset only, when it comes to the classification of heritage objects from a different dataset.
    INSIGHT: Intelligent neural systems as integrated heritage tool
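    The two regimes compared in the paper (a frozen off-the-shelf feature extractor with a separately trained softmax classifier versus full fine-tuning) can be set up roughly as below; the ResNet-50 backbone and class count are illustrative choices, not necessarily the architectures used in the study.

```python
import torch.nn as nn
from torchvision import models

def build(num_classes, fine_tune=True):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if not fine_tune:
        # Off-the-shelf regime: freeze all pretrained layers ...
        for p in model.parameters():
            p.requires_grad = False
    # ... so that only the new classification head (softmax via cross-entropy) is trained.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

frozen = build(num_classes=30, fine_tune=False)  # feature extractor + softmax classifier
tuned = build(num_classes=30, fine_tune=True)    # end-to-end fine-tuning
```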