
    Composite Correlation Quantization for Efficient Multimodal Retrieval

    Efficient similarity retrieval from large-scale multimodal databases is pervasive in modern search engines and social networks. To support queries across content modalities, the system should enable cross-modal correlation and computation-efficient indexing. While hashing methods have shown great potential in achieving this goal, current attempts generally fail to learn isomorphic hash codes in a seamless scheme: they embed multiple modalities in a continuous isomorphic space and separately threshold the embeddings into binary codes, which incurs substantial loss of retrieval accuracy. In this paper, we approach seamless multimodal hashing by proposing a novel Composite Correlation Quantization (CCQ) model. Specifically, CCQ jointly finds correlation-maximal mappings that transform different modalities into an isomorphic latent space, and learns composite quantizers that convert the isomorphic latent features into compact binary codes. An optimization framework is devised to preserve both intra-modal similarity and inter-modal correlation by minimizing both reconstruction and quantization errors, and it can be trained from both paired and partially paired data in linear time. A comprehensive set of experiments clearly shows the superior effectiveness and efficiency of CCQ against state-of-the-art hashing methods for both unimodal and cross-modal retrieval.
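    To make the objective concrete, the following is a minimal Python/NumPy sketch of composite quantization over a shared latent space. It only illustrates the idea of mapping paired modalities into one isomorphic latent space and approximating each latent vector by a sum of codebook words; the dimensions, the random stand-in projections, and the greedy code assignment are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of composite quantization of a shared latent space,
    # loosely following the CCQ idea; all sizes and names are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    d_img, d_txt, d_lat = 512, 300, 64   # modality and latent dimensions (assumed)
    n, M, K = 1000, 4, 16                # samples, codebooks, words per codebook

    # Paired image/text features and linear mappings into the latent space
    # (random stand-ins here for the learned correlation-maximal projections).
    X_img = rng.normal(size=(n, d_img))
    X_txt = rng.normal(size=(n, d_txt))
    P_img = rng.normal(size=(d_img, d_lat)) / np.sqrt(d_img)
    P_txt = rng.normal(size=(d_txt, d_lat)) / np.sqrt(d_txt)

    Z_img = X_img @ P_img                # isomorphic latent embeddings
    Z_txt = X_txt @ P_txt

    # Composite quantizer: a latent vector is approximated by the sum of one
    # word picked from each of the M codebooks.
    codebooks = rng.normal(size=(M, K, d_lat))

    def encode(z, codebooks):
        """Greedy code assignment: pick one word per codebook to approximate z."""
        residual = z.copy()
        codes = np.empty(len(codebooks), dtype=int)
        for m, C in enumerate(codebooks):
            k = np.argmin(np.linalg.norm(residual - C, axis=1))
            codes[m] = k
            residual = residual - C[k]
        return codes

    def reconstruct(codes, codebooks):
        return sum(codebooks[m, k] for m, k in enumerate(codes))

    # Quantization error on the image side, plus the inter-modal error between
    # paired embeddings (meaningful only once the projections are learned).
    codes = np.array([encode(z, codebooks) for z in Z_img])
    recon = np.array([reconstruct(c, codebooks) for c in codes])
    quant_err = np.mean(np.sum((Z_img - recon) ** 2, axis=1))
    cross_err = np.mean(np.sum((Z_img - Z_txt) ** 2, axis=1))
    print(f"quantization error: {quant_err:.3f}, cross-modal error: {cross_err:.3f}")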

    Knowledge-rich Image Gist Understanding Beyond Literal Meaning

    We investigate the problem of understanding the message (gist) conveyed by images and their captions as found, for instance, on websites or in news articles. To this end, we propose a methodology to capture the meaning of image-caption pairs on the basis of large amounts of machine-readable knowledge that has previously been shown to be highly effective for text understanding. Our method identifies the connotation of objects beyond their denotation: where most approaches to image understanding focus on the denotation of objects, i.e., their literal meaning, our work addresses the identification of connotations, i.e., iconic meanings of objects, to understand the message of images. We view image understanding as the task of representing an image-caption pair on the basis of a wide-coverage vocabulary of concepts such as the one provided by Wikipedia, and cast gist detection as a concept-ranking problem with image-caption pairs as queries. To enable a thorough investigation of the problem of gist understanding, we produce a gold standard of over 300 image-caption pairs and over 8,000 gist annotations covering a wide variety of topics at different levels of abstraction. We use this dataset to experimentally benchmark the contribution of signals from heterogeneous sources, namely image and text. The best result, with a Mean Average Precision (MAP) of 0.69, indicates that by combining both dimensions we are able to better understand the meaning of our image-caption pairs than when using language or vision information alone. We test the robustness of our gist detection approach when receiving automatically generated input, i.e., using automatically generated image tags or generated captions, and demonstrate the feasibility of an end-to-end automated process.
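    Since the evaluation casts gist detection as concept ranking scored with Mean Average Precision, the short Python sketch below shows how MAP is computed over ranked concept lists; the concept lists and gold labels are made-up examples, not data from the paper.

    # Illustrative MAP computation for concept ranking with
    # image-caption pairs as queries (toy data, not the paper's).
    def average_precision(ranked, relevant):
        """AP for one query: mean of precision@k over the ranks of relevant hits."""
        hits, precisions = 0, []
        for k, concept in enumerate(ranked, start=1):
            if concept in relevant:
                hits += 1
                precisions.append(hits / k)
        return sum(precisions) / len(relevant) if relevant else 0.0

    queries = [
        # (ranked concepts returned for an image-caption pair, gold gist concepts)
        (["peace", "dove", "bird", "war"], {"peace", "dove"}),
        (["tower", "paris", "romance"], {"romance"}),
    ]
    map_score = sum(average_precision(r, g) for r, g in queries) / len(queries)
    print(f"MAP = {map_score:.2f}")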

    Switching Partners: Dancing with the Ontological Engineers

    Ontologies are today being applied in almost every field to support the alignment and retrieval of data of distributed provenance. Here we focus on new ontological work on dance and on related cultural phenomena belonging to what UNESCO calls the “intangible heritage.” Currently, data and information about dance, including video data, are stored in an uncontrolled variety of ad hoc ways. This not only hinders retrieval, comparison and analysis of the data, but may also impinge on our ability to preserve the data that already exists. Here we explore recent technological developments that are designed to counteract such problems by allowing information to be retrieved across disciplinary, cultural, linguistic and technological boundaries. Software applications such as the ones envisaged here will enable speedier recovery of data and facilitate its analysis in ways that will assist both the archiving of and research on dance.

    Deep Architectures for Visual Recognition and Description

    In recent times, digital media content is inherently multimedia, combining text, audio, image and video. Several outstanding Computer Vision (CV) problems are being successfully solved with the help of modern Machine Learning (ML) techniques. Plenty of research work has already been carried out in the fields of Automatic Image Annotation (AIA), Image Captioning and Video Tagging. Video Captioning, i.e., automatic description generation from digital video, is however a different and more complex problem altogether. This study compares various existing video captioning approaches available today and attempts their classification and analysis based on different parameters, viz., the type of captioning method (generation/retrieval), the type of learning models employed, the desired length of the generated description, etc. This dissertation also critically analyzes the existing benchmark datasets used in various video captioning models and the evaluation metrics for assessing the final quality of the generated video descriptions. A detailed study of important existing models, highlighting their comparative advantages as well as disadvantages, is also included. In this study, a novel approach for video captioning on the Microsoft Video Description (MSVD) and Microsoft Video-to-Text (MSR-VTT) datasets is proposed, using supervised learning techniques to train a deep combinational framework that achieves better-quality video captioning via the prediction of semantic tags. We develop simple shallow CNNs (2D and 3D) as feature extractors, Deep Neural Networks (DNNs) and Bidirectional LSTMs (BiLSTMs) as tag prediction models, and a Recurrent Neural Network (LSTM) as the language model. The aim of the work was to provide an alternative route to generating captions from videos via semantic tag prediction, and to deploy simpler, shallower deep model architectures with lower memory requirements, so that the solution is not memory intensive and the developed models prove to be stable and viable options when the scale of the data is increased. This study also successfully employed deep architectures such as the Convolutional Neural Network (CNN) for speeding up the automation of hand gesture recognition and classification for the sign language of the Indian classical dance form ‘Bharatnatyam’. This hand gesture classification work is primarily aimed at 1) building a novel dataset of 2D single-hand gestures belonging to 27 classes, collected from (i) the Google search engine (Google Images), (ii) YouTube videos (dynamic, with background considered) and (iii) professional artists under staged environment constraints (plain backgrounds); 2) exploring the effectiveness of CNNs for identifying and classifying the single-hand gestures by optimizing the hyperparameters; and 3) evaluating the impact of transfer learning and double transfer learning, a novel concept explored for achieving higher classification accuracy.
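    As a concrete illustration of the tag-then-caption pipeline described above, the following is a minimal PyTorch sketch with a shallow 2D-CNN frame encoder, a BiLSTM semantic-tag predictor, and an LSTM language model conditioned on the predicted tags. The layer sizes, vocabulary sizes and module names are illustrative assumptions, not the dissertation's exact architecture.

    # Hypothetical sketch of a shallow CNN -> BiLSTM tag predictor -> LSTM
    # language model pipeline for video captioning (sizes are assumptions).
    import torch
    import torch.nn as nn

    class ShallowFrameCNN(nn.Module):
        """Per-frame feature extractor (a deliberately small 2D CNN)."""
        def __init__(self, feat_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, feat_dim)

        def forward(self, frames):                     # (B, T, 3, H, W)
            b, t = frames.shape[:2]
            x = self.conv(frames.flatten(0, 1)).flatten(1)
            return self.fc(x).view(b, t, -1)           # (B, T, feat_dim)

    class TagPredictor(nn.Module):
        """BiLSTM over frame features -> multi-label semantic tag scores."""
        def __init__(self, feat_dim=256, hidden=128, n_tags=300):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_tags)

        def forward(self, feats):
            _, (h, _) = self.rnn(feats)
            h = torch.cat([h[-2], h[-1]], dim=1)       # final forward/backward states
            return torch.sigmoid(self.out(h))          # (B, n_tags)

    class CaptionLM(nn.Module):
        """LSTM language model whose initial state encodes the predicted tags."""
        def __init__(self, n_tags=300, vocab=5000, emb=128, hidden=256):
            super().__init__()
            self.tag_proj = nn.Linear(n_tags, hidden)
            self.embed = nn.Embedding(vocab, emb)
            self.rnn = nn.LSTM(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, tags, tokens):               # tokens: (B, L)
            h0 = self.tag_proj(tags).unsqueeze(0)      # init hidden from tag vector
            c0 = torch.zeros_like(h0)
            y, _ = self.rnn(self.embed(tokens), (h0, c0))
            return self.out(y)                         # (B, L, vocab)

    frames = torch.randn(2, 8, 3, 112, 112)            # 2 clips, 8 frames each
    tokens = torch.randint(0, 5000, (2, 12))
    tags = TagPredictor()(ShallowFrameCNN()(frames))
    logits = CaptionLM()(tags, tokens)
    print(tags.shape, logits.shape)                    # (2, 300) (2, 12, 5000)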

    Audiovisual Media Annotation Using Qualitative Data Analysis Software: A Comparative Analysis

    A variety of specialized tools designed to facilitate the analysis of audio-visual (AV) media are useful not only to media scholars and oral historians but to other researchers as well. Both Qualitative Data Analysis Software (QDAS) packages and dedicated systems created for specific disciplines, such as linguistics, can be used for this purpose. Software proliferation challenges researchers to make informed choices about which package will be most useful for their project. This paper presents an information science perspective on the scholarly use of tools in qualitative research of audio-visual sources. It provides a baseline of affordances based on functionalities, with the goal of making the types of research tasks they support more explicit (e.g., transcribing, segmenting, coding, linking, and commenting on data). We look closely at how these functionalities relate to each other, and at how system design influences research tasks.

    Notes on the Music: A social data infrastructure for music annotation

    Besides transmitting musical meaning from composer to reader, symbolic music notation affords the dynamic addition of layers of information by annotation. This allows music scores to serve as rudimentary communication frameworks. Music encodings bring these affordances into the digital realm; though annotations may be represented as digital pen-strokes upon a score image, they must be captured using machine-interpretable semantics to fully benefit from this transformation. This is challenging, as annotators’ requirements are heterogeneous, varying both across different types of user (e.g., musician, scholar) and within these groups, depending on the specific use-case. A hypothetical all-encompassing tool catering to every conceivable annotation type, even if it were possible to build, would vastly complicate user interaction. This additional complexity would significantly increase cognitive load and impair usability, particularly in dynamic real-time usage contexts, e.g., live annotation during music rehearsal or performance. To address this challenge, we present a social data infrastructure that facilitates the creation of use-case-specific annotation toolkits. Its components include a selectable-score module that supports customisable click-and-drag selection of score elements (e.g., notes, measures, directives); the Web Annotations data model, extended to support the creation of custom, Web-addressable annotation types supporting the specification and (re-)use of annotation palettes; and the Music Encoding and Linked Data (MELD) Javascript client library, used to build interfaces that map annotation types to rendering and interaction handlers. We have extended MELD to support the Solid platform for social Linked Data, allowing annotations to be privately stored in user-controlled Personal Online Datastores (Pods), or selectively shared or published. To demonstrate the feasibility of our proposed approach, we present annotation interfaces employing the outlined infrastructure in three distinct use-cases: scholarly communication; music rehearsal; and rating during music listening.
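    For readers unfamiliar with the Web Annotations data model, the Python snippet below builds a hypothetical annotation of the kind such an infrastructure might store in a Solid Pod: a textual comment targeting selected score elements. The URIs, element IDs and comment text are invented for illustration; only the top-level structure follows the W3C model.

    # Hypothetical W3C Web Annotation targeting MEI score elements.
    # All URIs and fragment identifiers below are made up for this example.
    import json

    annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "commenting",
        "body": {
            "type": "TextualBody",
            "value": "Take the crescendo earlier here.",
            "format": "text/plain",
        },
        # The selected score elements (e.g. notes or measures) are addressed
        # by fragment IDs within an MEI encoding of the score.
        "target": [
            "https://example.org/scores/sonata.mei#measure-12",
            "https://example.org/scores/sonata.mei#note-340",
        ],
    }
    print(json.dumps(annotation, indent=2))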

    Semiotic Annotation of Narrative Video Commercials: Bridging the Gap between Artifacts and Ontologies

    Drawing on semiotic theories, the paper proposes a new concept of annotation, called semiotic annotation, whose goal is to describe the multilayered articulation of meaning inscribed within narrative video commercials by their designers. The approach exploits the use of a meta-model of the narrative video genre providing the conceptualizations and the vocabulary for analysis and annotation. By explicating design knowledge embodied in the video, semiotic annotation plays the role of intermediate-level knowledge between the meta-model (an informal ontology) and practice (the concrete video artifact). In order to assess the feasibility of the approach, a test bed is presented and results are reported. A final discussion about the potential contribution of semiotic annotation in the fields of Research Through Design, Technological Mediation, and Interface Criticism concludes the study.