Using a lightweight multimedia content model for semantic annotation
In this paper we discuss the use of a multimedia content model for automatic extraction of semantic metadata from multimedia content. We developed a modular and extensible framework to model the content features of multimedia data, and we also describe how it can be integrated with other existing vocabularies. The goal of this model is to generate sufficient understanding of media content, its context and its relation to domain knowledge in order to perform multimedia reasoning. We implemented a tool that analyzes and links low-level descriptions to higher-level, domain-specific semantic concepts by means of statistical learning and clustering analysis. Experimental results show that the approach performs well in visual concept prediction for images, and that its predictions can be further augmented with other information sources such as contextual text or audio.
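The abstract gives no implementation details, but the step it describes — linking low-level descriptors to semantic concepts via clustering analysis — can be sketched roughly as follows. All names, the centroids, and the toy "sky"/"grass" concepts are illustrative assumptions, not the authors' actual system:

```python
import numpy as np

def assign_concepts(features, centroids, cluster_labels):
    """Map low-level feature vectors to semantic concepts.

    features:       (n, d) array of low-level descriptors (e.g. colour/texture)
    centroids:      (k, d) cluster centres learned offline by clustering analysis
    cluster_labels: list of length k mapping each cluster to a domain concept
    """
    # nearest-centroid assignment: each descriptor inherits the concept
    # attached to its closest cluster
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return [cluster_labels[c] for c in nearest]

# toy example: two well-separated clusters standing in for two visual concepts
centroids = np.array([[0.0, 0.9], [0.7, 0.2]])  # hypothetical learned centres
labels = ["sky", "grass"]
feats = np.array([[0.1, 0.8], [0.6, 0.3]])
concepts = assign_concepts(feats, centroids, labels)  # -> ['sky', 'grass']
```

A real pipeline would learn the centroids from training images and fuse the resulting concept predictions with contextual text or audio cues, as the abstract suggests.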
Deep Cross-Modal Correlation Learning for Audio and Lyrics in Music Retrieval
Deep cross-modal learning has successfully demonstrated excellent performance in cross-modal multimedia retrieval, with the aim of learning joint representations between different data modalities. Unfortunately, little research focuses on cross-modal correlation learning where temporal structures of different data modalities such as audio and lyrics should be taken into account. Stemming from the characteristic temporal structure of music, we are motivated to learn the deep sequential correlation between audio and lyrics. In this work, we propose a deep cross-modal correlation learning architecture involving two-branch deep neural networks for the audio modality and the text modality (lyrics). Data in different modalities are converted to the same canonical space, where inter-modal canonical correlation analysis is utilized as an objective function to calculate the similarity of temporal structures. This is the first study that uses deep architectures for learning the temporal correlation between audio and lyrics. A pre-trained Doc2Vec model followed by fully-connected layers is used to represent lyrics. Two significant contributions are made in the audio branch, as follows: i) We propose an end-to-end network to learn cross-modal correlation between audio and lyrics, where feature extraction and correlation learning are performed simultaneously and a joint representation is learned by considering temporal structures. ii) For feature extraction, we further represent an audio signal by a short sequence of local summaries (VGG16 features) and apply a recurrent neural network to compute a compact feature that better captures the temporal structure of music audio. Experimental results, using audio to retrieve lyrics or using lyrics to retrieve audio, verify the effectiveness of the proposed deep correlation learning architectures in cross-modal music retrieval.
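The objective at the heart of this architecture is canonical correlation analysis between the two branch outputs. A minimal NumPy sketch of the linear CCA core (not the paper's deep variant; matrix shapes and the toy data are assumptions for illustration):

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-6):
    """Top-k canonical correlations between two views.

    X: (n, dx) embeddings from one branch (e.g. audio),
    Y: (n, dy) embeddings from the other (e.g. lyrics).
    In the deep setting, this quantity computed on network outputs
    serves as the training objective to be maximized.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):  # S^{-1/2} via eigendecomposition of a symmetric matrix
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # singular values of the whitened cross-covariance are the
    # canonical correlations, in descending order
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False)[:k]

# toy check: both views share one latent signal, so the first
# canonical correlation should be close to 1
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))
X = np.hstack([z, rng.standard_normal((500, 3))])
Y = np.hstack([z, rng.standard_normal((500, 3))])
corrs = linear_cca(X, Y, k=2)
```

The deep version replaces the raw views with learned, temporally aware representations (VGG16 summaries fed through an RNN on the audio side, Doc2Vec plus fully-connected layers on the lyrics side) and backpropagates through this correlation.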
Research in information management at Dublin City University
The Information Management Group at Dublin City University has research themes such as digital multimedia, interoperable systems and database engineering. In the area of digital multimedia, a collaboration with our School of Electronic Engineering has formed the Centre for Digital Video Processing, a university-designated research centre whose aim is to research, develop and evaluate content-based operations on digital video information. To achieve this goal, the range of expertise in this centre covers the complete gamut from image analysis and feature extraction through to video search engine technology and interfaces for video browsing. The Interoperable Systems Group has research interests in federated databases and interoperability, object modelling and database engineering. This report describes the research activities of the major groupings within the Information Management community in Dublin City University.
Video Data Visualization System: Semantic Classification And Personalization
We present in this paper an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large-scale corpus of videos. Our work is based on semantic classification resulting from semantic analysis of video. The obtained classes are projected into the visualization space. The graph is represented by nodes and edges: the nodes are the keyframes of video documents and the edges are the relations between documents and the classes of documents. Finally, we construct the user's profile, based on the interaction with the system, to render the system more adequate to the user's preferences.
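The graph this abstract describes — keyframe nodes linked by edges to the semantic classes their documents belong to — can be sketched in a few lines. The keyframe names and class assignments below are illustrative assumptions, not the paper's data:

```python
# Hypothetical output of the semantic classification step:
# each video document's keyframe mapped to its predicted class.
classified = {
    "video_01.kf": "sports",
    "video_02.kf": "sports",
    "video_03.kf": "news",
}

# nodes are keyframes plus class labels; edges link each document
# to its class, as in the described visualization space
nodes = set(classified) | set(classified.values())
edges = [(kf, cls) for kf, cls in classified.items()]

# adjacency list, a convenient form to hand to a graph layout engine
adj = {n: [] for n in nodes}
for kf, cls in edges:
    adj[kf].append(cls)
    adj[cls].append(kf)
```

Personalization would then reweight or filter this graph using the interaction-derived user profile.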