
    Deep Cross-Modal Correlation Learning for Audio and Lyrics in Music Retrieval

    Deep cross-modal learning has demonstrated excellent performance in cross-modal multimedia retrieval, where the aim is to learn joint representations between different data modalities. Unfortunately, little research focuses on cross-modal correlation learning in which the temporal structures of different data modalities, such as audio and lyrics, are taken into account. Motivated by the inherently temporal structure of music, we set out to learn the deep sequential correlation between audio and lyrics. In this work, we propose a deep cross-modal correlation learning architecture involving two-branch deep neural networks for the audio modality and the text modality (lyrics). Data from both modalities are projected into the same canonical space, where inter-modal canonical correlation analysis is used as the objective function to measure the similarity of temporal structures. This is the first study to use deep architectures for learning the temporal correlation between audio and lyrics. Lyrics are represented by a pre-trained Doc2Vec model followed by fully-connected layers. The audio branch makes two significant contributions: i) we propose an end-to-end network that learns the cross-modal correlation between audio and lyrics, in which feature extraction and correlation learning are performed simultaneously and the joint representation is learned with temporal structures taken into account; ii) for feature extraction, we represent an audio signal as a short sequence of local summaries (VGG16 features) and apply a recurrent neural network to compute a compact feature that better captures the temporal structure of music audio. Experimental results on using audio to retrieve lyrics and lyrics to retrieve audio verify the effectiveness of the proposed deep correlation learning architecture in cross-modal music retrieval.
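    A minimal PyTorch sketch of the two-branch idea described in this abstract, assuming pre-extracted VGG16 local summaries for the audio side and pre-trained Doc2Vec vectors for the lyrics side. The layer sizes, the choice of a GRU as the recurrent network, and the simplified per-dimension correlation loss (a stand-in for the full inter-modal CCA objective) are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class AudioBranch(nn.Module):
        def __init__(self, vgg_dim=4096, hidden=256, out_dim=128):
            super().__init__()
            # recurrent network summarises the short sequence of local VGG16 summaries
            self.rnn = nn.GRU(vgg_dim, hidden, batch_first=True)
            self.proj = nn.Linear(hidden, out_dim)

        def forward(self, x):                      # x: (batch, time, vgg_dim)
            _, h = self.rnn(x)                     # final hidden state as compact audio feature
            return self.proj(h[-1])                # (batch, out_dim)

    class LyricsBranch(nn.Module):
        def __init__(self, doc2vec_dim=300, out_dim=128):
            super().__init__()
            # fully-connected layers on top of a pre-trained Doc2Vec lyric vector
            self.mlp = nn.Sequential(nn.Linear(doc2vec_dim, 256), nn.ReLU(),
                                     nn.Linear(256, out_dim))

        def forward(self, d):                      # d: (batch, doc2vec_dim)
            return self.mlp(d)

    def correlation_loss(a, t, eps=1e-6):
        # negative mean per-dimension correlation between the two projected views
        # (a simplified stand-in for the inter-modal CCA objective)
        a, t = a - a.mean(0), t - t.mean(0)
        corr = (a * t).sum(0) / (a.norm(dim=0) * t.norm(dim=0) + eps)
        return -corr.mean()

    audio_net, lyrics_net = AudioBranch(), LyricsBranch()
    audio_feats = torch.randn(8, 10, 4096)         # 8 tracks, 10 local VGG16 summaries each
    lyric_vecs = torch.randn(8, 300)               # 8 Doc2Vec lyric vectors
    loss = correlation_loss(audio_net(audio_feats), lyrics_net(lyric_vecs))

    Training both branches against such a correlation objective pushes matching audio/lyrics pairs toward correlated coordinates in the shared space, which is what makes retrieval in either direction possible.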

    Family memories in the home: contrasting physical and digital mementos

    We carried out fieldwork to characterise and compare physical and digital mementos in the home. Physical mementos are highly valued and heterogeneous, and they support different types of recollection. Contrary to expectations, we found that physical mementos are not purely representational and can involve appropriating common objects and more idiosyncratic forms. In contrast, digital mementos were initially perceived as less valuable, although participants later reconsidered this view. Digital mementos were somewhat limited in function and expression, largely involving representational photos and videos, and were infrequently accessed. We explain these digital limitations and conclude with design guidelines for digital mementos, including better techniques for accessing them and integrating them into everyday life, allowing them to acquire the symbolic associations and lasting value that characterise their physical counterparts.

    Audio-Visual Embedding for Cross-Modal Music Video Retrieval through Supervised Deep CCA

    Deep learning has shown excellent performance in learning joint representations between different data modalities. Unfortunately, little research focuses on cross-modal correlation learning in which the temporal structures of different data modalities, such as audio and video, are taken into account. Retrieving music videos from a given piece of musical audio is a natural way to search for and interact with music content. In this work, we study cross-modal music video retrieval in terms of emotion similarity; in particular, audio of arbitrary length is used to retrieve a longer or full-length music video. To this end, we propose a novel audio-visual embedding algorithm based on Supervised Deep Canonical Correlation Analysis (S-DCCA) that projects audio and video into a shared space to bridge the semantic gap between them, while preserving both the similarity between audio and visual content from different videos with the same class label and the temporal structure. The contribution of our approach is twofold: i) we propose to select the top-k audio chunks with an attention-based Long Short-Term Memory (LSTM) model, which yields a good audio summarization with local properties; ii) we propose an end-to-end deep model for cross-modal audio-visual learning in which S-DCCA is trained to learn the semantic correlation between the audio and visual modalities. Due to the lack of a suitable music video dataset, we construct a 10K music video dataset from the YouTube-8M dataset. Promising results in terms of MAP and precision-recall show that our proposed model can be applied to music video retrieval. Comment: 8 pages, 9 figures. Accepted by ISM 201
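    A hedged PyTorch sketch of the first component above: attention-based selection of the top-k audio chunks with an LSTM. The feature dimensions, the single-layer scoring network, and the value of k are assumptions made for illustration rather than the paper's exact setup.

    import torch
    import torch.nn as nn

    class ChunkAttentionSelector(nn.Module):
        def __init__(self, feat_dim=128, hidden=64, k=3):
            super().__init__()
            self.k = k
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.score = nn.Linear(hidden, 1)                 # one attention score per chunk

        def forward(self, chunks):                            # chunks: (batch, n_chunks, feat_dim)
            h, _ = self.lstm(chunks)                          # (batch, n_chunks, hidden)
            attn = torch.softmax(self.score(h).squeeze(-1), dim=1)
            idx = attn.topk(self.k, dim=1).indices            # positions of the top-k chunks
            idx = idx.unsqueeze(-1).expand(-1, -1, chunks.size(-1))
            return chunks.gather(1, idx), attn                # selected chunks form the audio summary

    selector = ChunkAttentionSelector()
    audio_chunks = torch.randn(4, 20, 128)                    # 4 tracks, 20 chunk features each
    summary, weights = selector(audio_chunks)                 # summary: (4, 3, 128)

    The selected chunk summary would then be fed to the audio side of the S-DCCA model, so that retrieval scores are driven by the most salient parts of the audio rather than a uniform average.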

    Perceptually relevant browsing environments for large texture databases

    This thesis describes the development of a large database of texture stimuli, the production of a similarity matrix reflecting human judgements of similarity over the database, and the development of three browsing models that exploit structure in the perceptual information for navigation. Rigorous psychophysical comparison experiments are carried out, and the SOM (Self-Organising Map) is found to be the fastest of the three browsing models under examination. We investigate scalable methods of augmenting a similarity matrix using the SOM browsing environment in order to introduce previously unknown textures. Further psychophysical experiments reveal that our method produces a data organisation that is as fast to navigate as the one derived from the perceptual grouping experiments. Engineering and Physical Sciences Research Council (EPSRC)
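    For reference, the core of a Self-Organising Map of the kind used for the browsing environment can be written compactly in NumPy: texture feature vectors are mapped onto a 2-D grid so that perceptually similar textures land on nearby nodes. The grid size, the learning-rate and neighbourhood schedules, and the 64-D feature representation are illustrative assumptions, not the thesis's configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 64))                 # 500 textures, 64-D descriptors (assumed)
    grid_w, grid_h, dim = 10, 10, features.shape[1]
    weights = rng.normal(size=(grid_w * grid_h, dim))     # one prototype per grid node
    coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)])

    for t in range(2000):                                 # online SOM training
        lr = 0.5 * (1 - t / 2000)                         # decaying learning rate
        sigma = 3.0 * (1 - t / 2000) + 0.5                # decaying neighbourhood radius
        v = features[rng.integers(len(features))]
        bmu = np.argmin(((weights - v) ** 2).sum(axis=1))            # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)               # grid distances to the BMU
        h = np.exp(-d2 / (2 * sigma ** 2))[:, None]                  # neighbourhood kernel
        weights += lr * h * (v - weights)                            # pull nearby nodes toward the sample

    # each texture is placed at its best-matching node, giving a 2-D grid to browse
    placement = np.argmin(((features[:, None, :] - weights) ** 2).sum(-1), axis=1)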

    Diagrammatic Reasoning and Modelling in the Imagination: The Secret Weapons of the Scientific Revolution

    Just before the Scientific Revolution, there was a "Mathematical Revolution", heavily based on geometrical and machine diagrams. The "faculty of imagination" (now called scientific visualization) was developed to allow 3D understanding of planetary motion, human anatomy and the workings of machines. The year 1543 saw the publication of the heavily geometrical works of Copernicus and Vesalius, as well as the first Italian translation of Euclid.

    Unified Pretraining Target Based Video-music Retrieval With Music Rhythm And Video Optical Flow Information

    Background music (BGM) can enhance a video's emotional impact, but selecting an appropriate BGM often requires domain knowledge, which has led to the development of video-music retrieval techniques. Most existing approaches utilize pretrained video/music feature extractors trained on different target sets to obtain averaged video-/music-level embeddings. The drawbacks are two-fold: first, pretraining the video and music extractors on different target sets can make the generated embeddings difficult to match; second, the underlying temporal correlation between video and music is ignored. In this paper, our proposed approach leverages a unified target set for video/music pretraining and produces clip-level embeddings that preserve temporal information. Downstream cross-modal matching is then based on these clip-level features, with music rhythm and optical flow information embedded. Experiments demonstrate that our proposed method outperforms state-of-the-art methods by a significant margin.
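    A hedged PyTorch sketch of the clip-level matching idea: rather than comparing single averaged vectors, per-clip video and music features (with optical-flow and rhythm information appended) are projected into a shared space and scored clip by clip. The dimensions, linear projection heads, and averaged cosine scoring rule are assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClipProjector(nn.Module):
        # projects per-clip features (visual + optical-flow stats, or audio + rhythm)
        # into a shared, L2-normalised embedding space
        def __init__(self, in_dim, out_dim=256):
            super().__init__()
            self.proj = nn.Linear(in_dim, out_dim)

        def forward(self, clips):                             # (batch, n_clips, in_dim)
            return F.normalize(self.proj(clips), dim=-1)

    def clip_level_similarity(video_emb, music_emb):
        # average cosine similarity over temporally aligned clip pairs,
        # so temporal correspondence contributes to the retrieval score
        return (video_emb * music_emb).sum(-1).mean(-1)       # (batch,)

    video_proj = ClipProjector(in_dim=512 + 2)                # e.g. visual features + optical-flow stats (assumed sizes)
    music_proj = ClipProjector(in_dim=128 + 1)                # e.g. audio features + a rhythm (tempo) value
    video_clips = torch.randn(4, 8, 514)                      # 4 videos, 8 clips each
    music_clips = torch.randn(4, 8, 129)
    scores = clip_level_similarity(video_proj(video_clips), music_proj(music_clips))

    At retrieval time, such a score would be computed between the query video's clip embeddings and every candidate music track, with the candidates ranked by score.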

    Supporting Seeking Tasks within Spoken Word Audio Collections
