Retrieval and Annotation of Music Using Latent Semantic Models
This thesis investigates the use of latent semantic models for annotation and
retrieval from collections of musical audio tracks. In particular, latent semantic
analysis (LSA) and aspect models (or probabilistic latent semantic analysis,
pLSA) are used to index words in descriptions of music drawn from hundreds
of thousands of social tags. A new discrete audio feature representation is introduced
to encode musical characteristics of automatically-identified regions
of interest within each track, using a vocabulary of audio muswords. Finally, a
joint aspect model is developed that can learn from both tagged and untagged
tracks by indexing both conventional words and muswords. This model is
used as the basis of a music search system that supports query by example and
by keyword, and of a simple probabilistic machine annotation system. The
models are evaluated by their performance in a variety of realistic retrieval
and annotation tasks, motivated by applications including playlist generation,
internet radio streaming, music recommendation and catalogue search.
Engineering and Physical Sciences Research Council
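As a rough illustration of the kind of latent semantic indexing this work builds on, the sketch below applies plain LSA (via scikit-learn's TruncatedSVD) to a small, hypothetical track-by-tag count matrix and ranks tracks by similarity in the latent space. It is only a minimal sketch under those assumptions; the thesis's muswords, aspect models, and joint training on tagged and untagged tracks are not reproduced here.

# Minimal LSA-style retrieval over social tags (illustrative only; the
# track-by-tag matrix below is invented for this example).
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize

# Hypothetical track-by-tag count matrix: rows = tracks, columns = tag vocabulary.
tag_counts = np.array([
    [4, 0, 1, 0],
    [0, 3, 0, 2],
    [3, 1, 2, 0],
], dtype=float)

# Latent semantic analysis: low-rank factorization of the count matrix.
svd = TruncatedSVD(n_components=2, random_state=0)
latent = normalize(svd.fit_transform(tag_counts))  # tracks in latent semantic space

# Query by example: rank tracks by cosine similarity to a seed track.
query = latent[0]
scores = latent @ query
print(np.argsort(-scores))  # track indices, most similar to track 0 first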
A Survey of AI Music Generation Tools and Models
In this work, we provide a comprehensive survey of AI music generation tools,
including both research projects and commercialized applications. To conduct
our analysis, we classified music generation approaches into three categories:
parameter-based, text-based, and visual-based. Our survey highlights
the diverse possibilities and functional features of these tools, which cater
to a wide range of users, from regular listeners to professional musicians. We
observed that each tool has its own set of advantages and limitations. As a
result, we have compiled a comprehensive list of these factors that should be
considered during the tool selection process. Moreover, our survey offers
critical insights into the underlying mechanisms and challenges of AI music
generation.
Music classification by low-rank semantic mappings
A challenging open question in music classification is which music representation (i.e., audio features) and which machine learning algorithm are appropriate for a specific music classification task. To address this challenge, given a number of audio feature vectors for each training music recording that capture the different aspects of music (i.e., timbre, harmony, etc.), the goal is to find a set of linear mappings from several feature spaces to the semantic space spanned by the class indicator vectors. These mappings should reveal the common latent variables that characterize a given set of classes and simultaneously define a multi-class linear classifier operating on the extracted latent common features. Such a set of mappings is obtained, building on the notion of maximum margin matrix factorization, by minimizing a weighted sum of nuclear norms. Since the nuclear norm imposes rank constraints on the learnt mappings, the proposed method is referred to as low-rank semantic mappings (LRSMs). The performance of the LRSMs in music genre, mood, and multi-label classification is assessed by conducting extensive experiments on seven manually annotated benchmark datasets. The reported experimental results demonstrate the superiority of the LRSMs over the compared classifiers. Furthermore, the best reported classification results are comparable with or slightly superior to those obtained by state-of-the-art task-specific music classification methods.
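The core optimisation can be illustrated, in a much simplified single-feature-space form, as nuclear-norm-regularised least squares solved by proximal gradient descent with singular value thresholding. The sketch below uses invented toy data and omits the paper's weighted multi-space formulation; function names and parameters are assumptions for illustration, not the authors' implementation.

# Nuclear-norm-regularised linear mapping from one feature space to a
# class-indicator ("semantic") space, solved by proximal gradient descent.
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_mapping(X, Y, lam=1.0, iters=500):
    # Solve min_W 0.5 * ||X W - Y||_F^2 + lam * ||W||_* by proximal gradient.
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)            # gradient of the squared-error term
        W = svt(W - step * grad, step * lam)
    return W

# Toy usage: 6 recordings, 5-dimensional features, 2 classes (one-hot indicators).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))
Y = np.eye(2)[[0, 0, 0, 1, 1, 1]]
W = low_rank_mapping(X, Y, lam=0.5)
print((X @ W).argmax(axis=1))               # predicted classes for the training recordings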