31 research outputs found

    Picasso, Matisse, or a Fake? Automated Analysis of Drawings at the Stroke Level for Attribution and Authentication

    This paper proposes a computational approach for analyzing strokes in line drawings by artists. We aim at developing an AI methodology that facilitates attribution of drawings of unknown authors in a way that is not easily deceived by forged art. The methodology is based on quantifying the characteristics of individual strokes in drawings. We propose a novel algorithm for segmenting individual strokes. We designed and compared different hand-crafted and learned features for the task of quantifying stroke characteristics. We also propose and compare different classification methods at the drawing level. We experimented with a dataset of 300 digitized drawings with over 80 thousand strokes. The collection mainly consisted of drawings by Pablo Picasso, Henri Matisse, and Egon Schiele, besides a small number of representative works by other artists. The experiments show that the proposed methodology can classify individual strokes with 70%-90% accuracy and aggregate over drawings with accuracy above 80%, while being robust against deception by fakes (with 100% accuracy for detecting fakes in most settings).
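    The aggregation from stroke-level predictions to a drawing-level attribution can be sketched with a simple majority vote (one plausible aggregation strategy; the paper compares several classification methods at the drawing level, and its exact scheme may differ):

```python
from collections import Counter

def aggregate_drawing(stroke_predictions):
    """Aggregate per-stroke artist predictions into a drawing-level
    attribution by majority vote, returning the winning artist and
    the fraction of strokes that voted for it."""
    votes = Counter(stroke_predictions)
    artist, count = votes.most_common(1)[0]
    confidence = count / len(stroke_predictions)
    return artist, confidence

# A drawing whose strokes are mostly classified as Picasso:
strokes = ["picasso"] * 7 + ["matisse"] * 2 + ["schiele"] * 1
print(aggregate_drawing(strokes))  # ('picasso', 0.7)
```

    Even with per-stroke accuracy of only 70%, a vote over many strokes pushes drawing-level accuracy higher, which is consistent with the gap the paper reports between stroke-level and drawing-level results.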

    Tandem 2.0: Image and Text Data Generation Application

    First created as part of the Digital Humanities Praxis course in the spring of 2012 at the CUNY Graduate Center, Tandem explores the generation of datasets comprising text and image data by leveraging Optical Character Recognition (OCR), Natural Language Processing (NLP), and Computer Vision (CV). This project builds upon that earlier work in a new programming framework. While other developers and digital humanities scholars have created similar tools specifically geared toward NLP (e.g., Voyant Tools), as well as algorithms for image processing and feature extraction on the CV side, Tandem explores the process of developing a more robust and user-friendly web-based multimodal data generator using modern development processes, with the intention of expanding the use of the tool among interested academics. Tandem is a full-stack JavaScript in-browser web application that allows a user to log in and upload a corpus of image files for OCR, NLP, and CV-based image processing to facilitate data generation. The corpora intended for this tool include picture books, comics, and other types of image- and text-based manuscripts, and are discussed in detail. Once images are processed, the application provides some key initial insights and lightly visualized data in a dashboard view for the user. As a research question, this project explores the viability of full-stack JavaScript application development for academic end products by looking at a variety of courses and literature that inspired the work, alongside the documented process of developing the application and proposed future enhancements for the tool. For those interested in further research or development, the full codebase for this project is available for download.
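    The per-image pipeline described here (upload, OCR, then NLP features for the dashboard) can be sketched as follows. The function names are illustrative, not from the Tandem codebase, and the OCR step is a stub standing in for a real engine such as Tesseract:

```python
from collections import Counter

def ocr_stub(image):
    # Placeholder for an OCR call; a real pipeline would pass the
    # image bytes to an OCR engine and get extracted text back.
    return image["text"]

def nlp_features(text):
    # Minimal NLP features for a dashboard: token count and the
    # most frequent token in the extracted text.
    tokens = text.lower().split()
    top, _ = Counter(tokens).most_common(1)[0]
    return {"n_tokens": len(tokens), "top_token": top}

def process_corpus(images):
    # Run each uploaded image through OCR then NLP, producing one
    # record per image for the dashboard view.
    return [nlp_features(ocr_stub(img)) for img in images]

corpus = [{"text": "The cat sat on the mat"}]
print(process_corpus(corpus))  # [{'n_tokens': 6, 'top_token': 'the'}]
```

    Tandem itself is full-stack JavaScript; this Python sketch only illustrates the data flow, not the implementation stack.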

    Computational Imagination and Digital Art History

    This essay explores the parallel rise of computer vision technology and digital art history, examining some of the current possibilities and limits of computational techniques applied to the cultural and historical study of images. A fracture emerges: computer scientists seem to lack the critical approach typical of the humanities, a shortfall which sometimes condemns their attempts to remain technological curiosities. For their part, humanists lack the technical knowledge needed to directly investigate big archives of images, with the result that art historians often must limit their efforts to computer-aided inquiries into texts or metadata databases, a task that does not involve the study of the images themselves. A future dialogue between the two areas is advocated as a necessity in order to foster this new branch of knowledge.

    Deep Learning Architect: Classification for Architectural Design through the Eye of Artificial Intelligence

    This paper applies state-of-the-art techniques in deep learning and computer vision to measure visual similarities between architectural designs by different architects. Using a dataset consisting of web-scraped images and an original collection of images of architectural works, we first train a deep convolutional neural network (DCNN) model capable of achieving 73% accuracy in classifying works belonging to 34 different architects. Through examining the weights in the trained DCNN model, we are able to quantitatively measure the visual similarities between architects that are implicitly learned by our model. Using this measure, we cluster architects that are identified to be similar and compare our findings to conventional classifications made by architectural historians and theorists. Our clustering of architectural designs remarkably corroborates conventional views in architectural history, and the learned architectural features also cohere with the traditional understanding of architectural designs.
    Comment: 22 pages, 5 figures, 4 tables
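    One common way to turn trained classifier weights into a similarity measure, as this abstract describes, is to compare the per-class weight vectors of the final layer. A minimal sketch with hypothetical weight vectors (the architect names and values below are illustrative, not the paper's data, which covers 34 architects):

```python
import math

def cosine(u, v):
    """Cosine similarity between two weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical final-layer weight vectors, one per architect class
# in a trained classifier:
weights = {
    "gehry": [0.9, 0.1, 0.3],
    "hadid": [0.8, 0.2, 0.4],
    "mies":  [0.1, 0.9, 0.1],
}

# Classes whose weight vectors point in similar directions are the
# ones the model treats as visually similar:
print(round(cosine(weights["gehry"], weights["hadid"]), 3))
print(round(cosine(weights["gehry"], weights["mies"]), 3))
```

    A pairwise matrix of such similarities can then be fed to any standard hierarchical clustering routine to obtain the groupings the paper compares against architectural historians' classifications.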

    Deep convolutional embedding for digitized painting clustering

    Clustering artworks is difficult for several reasons. On the one hand, recognizing meaningful patterns in accordance with domain knowledge and visual perception is extremely hard. On the other hand, applying traditional clustering and feature-reduction techniques to the highly dimensional pixel space can be ineffective. To address these issues, we propose a deep convolutional embedding model for clustering digital paintings, in which the task of mapping the input raw data to an abstract, latent space is optimized jointly with the task of finding a set of cluster centroids in this latent feature space. Quantitative and qualitative experimental results show the effectiveness of the proposed method. The model also outperforms other state-of-the-art deep clustering approaches on the same problem. The proposed method may benefit several art-related tasks, particularly visual link retrieval and historical knowledge discovery in painting datasets.
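    The clustering side of such joint optimization typically uses a soft assignment of each embedded point to the centroids. A minimal sketch of the Student's-t soft assignment used in deep embedded clustering (an illustration of the general technique, not the paper's exact code):

```python
def soft_assign(z, centroids, alpha=1.0):
    """Soft assignment q_j of an embedded point z to each cluster
    centroid, using the Student's-t kernel from deep embedded
    clustering; returns a probability distribution over clusters."""
    q = []
    for mu in centroids:
        d2 = sum((a - b) ** 2 for a, b in zip(z, mu))
        q.append((1 + d2 / alpha) ** (-(alpha + 1) / 2))
    total = sum(q)
    return [qi / total for qi in q]

# A point sitting exactly on the first of two centroids is assigned
# to that cluster with high probability:
q = soft_assign([0.0, 0.0], [[0.0, 0.0], [3.0, 4.0]])
print([round(qi, 3) for qi in q])
```

    During training, these soft assignments are sharpened into a target distribution, and the embedding and centroids are updated together to minimize the divergence between the two; that coupling is what "optimized jointly" refers to in the abstract.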

    Visual link retrieval and knowledge discovery in painting datasets

    Visual arts are of invaluable importance for the cultural, historic, and economic growth of our societies. One of the building blocks of most analyses in visual arts is finding similarities among paintings by different artists and painting schools. To help art historians better understand visual arts, this paper presents a framework for visual link retrieval and knowledge discovery in digital painting datasets. The proposed framework is based on a deep convolutional neural network for feature extraction and on a fully unsupervised nearest-neighbor approach to retrieve visual links among digitized paintings. The fully unsupervised strategy makes the proposed method attractive, especially in cases where metadata are scarce, unavailable, or difficult to collect. In addition, the framework includes a graph analysis that makes it possible to study influences among artists, thus providing historical knowledge discovery.
    Comment: submitted to Multimedia Tools and Applications
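    The nearest-neighbor retrieval step can be sketched as below. The painting identifiers and 2-D feature vectors are illustrative stand-ins for the CNN features the paper extracts:

```python
def nearest_links(query_id, features, k=1):
    """Retrieve the k paintings most visually similar to the query,
    by squared Euclidean distance in feature space. Fully
    unsupervised: no labels or metadata are used."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    q = features[query_id]
    others = [(pid, d2(q, f)) for pid, f in features.items() if pid != query_id]
    others.sort(key=lambda t: t[1])
    return [pid for pid, _ in others[:k]]

# Toy feature vectors; in the paper these come from a deep CNN:
features = {
    "guernica":      [0.1, 0.9],
    "weeping_woman": [0.2, 0.8],
    "starry_night":  [0.9, 0.1],
}
print(nearest_links("guernica", features))  # ['weeping_woman']
```

    Treating each retrieved link as an edge between paintings (and, by extension, between their artists) yields the graph on which the paper's influence analysis is performed.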

    ARTIFICIAL INTELLIGENCE: AN ANALYSIS OF ALAN TURING’S ROLE IN THE CONCEPTION AND DEVELOPMENT OF INTELLIGENT MACHINERY

    The purpose of this thesis is to follow the thread of Alan Turing’s ideas throughout his decades of research and analyze how his predictions have come to fruition over the years. Turing’s Computing Machinery and Intelligence is the paper in which the Turing Test is described as an alternative way to answer the question “can machines think?” (Turing 433). Since the publication of Turing’s original paper, there has been a tremendous amount of advancement in the field of artificial intelligence. The field has made its way into art classification as well as the medical industry. The main question researched in this analysis is whether a machine exists that has passed the Turing Test. Should it be determined that a machine has indeed passed this test, it is important to discuss the ethical implications of this accomplishment. Turing’s paper, while raising great controversy regarding its ethical implications, offers a significant contribution to the field of artificial intelligence and technology.

    A deep learning approach to clustering visual arts

    Clustering artworks is difficult for several reasons. On the one hand, recognizing meaningful patterns based on domain knowledge and visual perception is extremely hard. On the other hand, applying traditional clustering and feature-reduction techniques to the highly dimensional pixel space can be ineffective. To address these issues, in this paper we propose DELIUS: a DEep learning approach to cLustering vIsUal artS. The method uses a pre-trained convolutional network to extract features and then feeds these features into a deep embedded clustering model, where the task of mapping the raw input data to a latent space is jointly optimized with the task of finding a set of cluster centroids in this latent space. Quantitative and qualitative experimental results show the effectiveness of the proposed method. DELIUS can be useful for several tasks related to art analysis, in particular visual link retrieval and historical knowledge discovery in painting datasets.
    Comment: Submitted to IJC
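    The centroid-finding task that DELIUS optimizes jointly with the embedding is, at its core, a k-means-style alternation between assigning points to centroids and updating the centroids. A minimal sketch of one such step on raw 2-D points (an illustration of the general idea; DELIUS performs this in the learned latent space, not pixel space):

```python
def kmeans_step(points, centroids):
    """One assignment-and-update step of centroid finding.
    Assumes every centroid retains at least one assigned point."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    # Assign each point to its nearest centroid.
    assign = [min(range(len(centroids)), key=lambda j: d2(p, centroids[j]))
              for p in points]
    # Recompute each centroid as the mean of its assigned points.
    new_centroids = []
    for j in range(len(centroids)):
        members = [p for p, a in zip(points, assign) if a == j]
        new_centroids.append([sum(c) / len(members) for c in zip(*members)])
    return assign, new_centroids

points = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
assign, cents = kmeans_step(points, [[0.0, 0.0], [5.0, 5.0]])
print(assign)  # [0, 0, 1, 1]
print(cents)   # [[0.0, 0.5], [5.0, 5.5]]
```

    In the full method, this alternation does not run on fixed features: the embedding network is updated at the same time, so the latent space reshapes itself to make the clusters easier to separate.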

    Novelty and Cultural Evolution in Modern Popular Music

    The ubiquity of digital music consumption has made it possible to extract information about modern music that allows us to perform large-scale analysis of stylistic change over time. In order to uncover underlying patterns in cultural evolution, we examine the relationship between the established characteristics of different genres and styles and the introduction of novel ideas that fuel this ongoing creative evolution. To understand how this dynamic plays out and shapes the cultural ecosystem, we compare musical artifacts to their contemporaries to identify novel artifacts, study the relationship between novelty and commercial success, and connect this to the changes in musical content that we can observe over time. Using Music Information Retrieval (MIR) data and lyrics from Billboard Hot 100 songs between 1974 and 2013, we calculate a novelty score for each song's aural attributes and lyrics. Comparing both scores to the popularity of the song following its release, we uncover key patterns in the relationship between novelty and audience reception. Additionally, we look at the link between novelty and the likelihood that a song was influential, given where its MIR and lyrical features fit within the larger trends we observed.
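    One common way to operationalize "comparing musical artifacts to their contemporaries," as this abstract describes, is to score a song by its distance to the nearest songs released around the same time. A minimal sketch under that assumption (the paper's exact novelty formulation may differ):

```python
def novelty_score(song, contemporaries, k=2):
    """Novelty as the mean Euclidean distance from a song's feature
    vector (e.g., MIR aural attributes) to its k nearest
    contemporaries: far from everything released nearby in time
    means more novel."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    ds = sorted(dist(song, c) for c in contemporaries)
    return sum(ds[:k]) / k

# Toy 2-D feature vectors; a song far from every contemporary
# scores as more novel than one close to the pack:
pack = [[0.4, 0.5], [0.5, 0.6], [0.6, 0.5]]
typical = novelty_score([0.5, 0.5], pack)
outlier = novelty_score([3.0, 3.0], pack)
print(outlier > typical)  # True
```

    Computing such a score separately for aural features and for lyrics, as the paper does, yields the two per-song novelty measures that are then compared against chart popularity.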