    A Collaborative Color Laboratory: Using 3D Modelling, Texturization, and AR to Challenge White Supremacist Uses of Ancient Classical Sculptures

    Polychromy in ancient classical sculptures is a historical fact. However, for centuries, archaeologists and museum curators have scrubbed away traces of color before public display. This omission has fostered the incorrect idea of a Greco-Roman predilection for pure whiteness, and the equation of white marble with beauty: a tendency toward chromophobia that may even verge on a system of chromoeugenics (Calvo-Quirós, 2013). Currently, white supremacist groups are using the purported aesthetics of classical white refinement for propaganda. The consequences of this use run deep, and an international rise in neo-fascism, entangled with a fear of difference, requires a re-examination of cultural heritage's connection to identity formation. In line with the principle that interaction designers should support physical engagement and the social setting (Petrelli et al., 2016), interactive technologies afford new opportunities to curb the misuse of classical sculpture. This paper discusses the power of color in ancient sculptural polychromy and new models of civic education that tap into new technological paradigms. The work draws on lessons from the humanities about the meaning and power of interpretative processes applied to cultural artifacts, such as the view of objects as social, affect-inducing beings, and then presents ColorColab, a prospective critical-thinking tool consisting of an online app and an Augmented Reality (AR) device. The tool would allow users to view ancient classical sculptures in their original or imagined colors, and would serve museums, teachers, and public officials interested in using technology for informal historical education about past and modern diversity. Initial explorations of the tool's technical development are presented, and further directions are discussed.

    Seeing the Intangible: Surveying Automatic High-Level Visual Understanding from Still Images

    The field of Computer Vision (CV) was born with a single grand goal: complete image understanding, that is, providing a full semantic interpretation of an input image. What exactly this goal entails is not immediately obvious, but theoretical hierarchies of visual understanding point toward a top level of full semantics, within which sits the most complex and subjective information humans can detect in visual data. In particular, non-concrete concepts such as emotions, social values, and ideologies seem to be protagonists of this "high-level" visual semantic understanding. While such "abstract concepts" are critical tools for image management and retrieval, their automatic recognition remains a challenge precisely because they rest at the top of the "semantic pyramid": the well-known semantic gap problem is worsened by their lack of unique perceptual referents and their reliance on less specific features than concrete concepts. Given that explicit CV work on abstract social concept (ASC) detection appears very scarce, and that many recent works seem to discuss similar non-concrete entities using different terminology, this survey provides a systematic review of CV work that explicitly or implicitly approaches the problem of abstract (specifically social) concept detection from still images. Specifically, the survey performs and provides: (1) a study and clustering of high-level visual understanding semantic elements from a multidisciplinary perspective (computer science, visual studies, and cognitive science); (2) a study and clustering of the high-level visual understanding computer vision tasks dealing with the identified semantic elements, so as to identify current CV work that implicitly deals with ASC detection.

    Semantic Integration of MIR Datasets with the Polifonia Ontology Network

    Integration across different data formats, and across data belonging to different collections, is an ongoing challenge in the MIR field. Semantic Web tools have proved promising for making different types of music information interoperable. However, the use of these technologies has so far been limited and scattered across the field. To address this, the Polifonia project is developing an ontological ecosystem that covers a wide variety of musical aspects (musical features, instruments, emotions, performances). In this paper, we present the Polifonia Ontology Network, an ecosystem that enables and fosters the transition towards semantic MIR.

    Automated multimodal sensemaking: Ontology-based integration of linguistic frames and visual data

    Frame evocation from visual data is an essential process for multimodal sensemaking, due to the multimodal abstraction provided by frame semantics. However, data-driven approaches and tools to automate it are scarce. We propose a novel approach to explainable automated multimodal sensemaking that links linguistic frames to their physical visual occurrences, using ontology-based knowledge engineering techniques. We treat the pairing of linguistic frames evoked from text with visual data as "framal visual manifestations". We present a deep ontological analysis of the implicit data model of the Visual Genome image dataset, and its formalization in the novel Visual Sense Ontology (VSO). To enhance the multimodal data of this dataset, we introduce a framal knowledge expansion pipeline that extracts linguistic frames, including values and emotions, and connects them to images, using multiple linguistic resources for disambiguation. The pipeline then produces the Visual Sense Knowledge Graph (VSKG), a novel resource: a knowledge graph, queryable via SPARQL, that enhances the accessibility and comprehensibility of Visual Genome's multimodal data. VSKG includes frame visual evocation data, enabling more advanced forms of explicit reasoning, analysis, and sensemaking. Our work represents a significant advancement in the automation of frame evocation and multimodal sensemaking, performed in a fully interpretable and transparent way, with potential applications in fields including knowledge representation, computer vision, and natural language processing.
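    The abstract above describes a knowledge graph linking images to the linguistic frames they evoke, queryable via SPARQL. The following is a minimal stdlib-only sketch of that idea; the predicate `vso:evokesFrame`, the frame names, and the image identifiers are hypothetical illustrations, not the actual VSKG vocabulary.

    ```python
    # Hypothetical frame-evocation triples: (subject, predicate, object).
    # In VSKG these would be RDF triples queried with SPARQL; here we use
    # plain tuples to illustrate the shape of the data.
    triples = [
        ("image:001", "vso:evokesFrame", "frame:Commerce_buy"),
        ("image:001", "vso:evokesFrame", "frame:Emotion_joy"),
        ("image:002", "vso:evokesFrame", "frame:Commerce_buy"),
    ]

    def images_evoking(frame, kg):
        """Return all images linked to a given linguistic frame."""
        return sorted(s for s, p, o in kg
                      if p == "vso:evokesFrame" and o == frame)

    print(images_evoking("frame:Commerce_buy", triples))
    # Against VSKG, the same question would be a SPARQL query along the
    # lines of: SELECT ?img WHERE { ?img vso:evokesFrame frame:Commerce_buy }
    ```

    The point of materializing frame evocations as explicit triples, as the paper argues, is that such queries (and reasoning over their results) remain fully interpretable.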