
    Semantic Federation of Musical and Music-Related Information for Establishing a Personal Music Knowledge Base

    Music is perceived and described very subjectively by every individual. Nowadays, people often get lost in their steadily growing, multi-placed, digital music collections. Existing music player and management applications struggle with the poor metadata that is predominant in personal music collections. Several music information services assist users by providing tools for precisely organising their music collections, or for presenting new insights into their own music libraries and listening habits. However, music consumers still cannot seamlessly interact with all these auxiliary services directly from the place where they individually access their music. To profit from the manifold music and music-related knowledge that is or can be made available via various information services, this information has to be gathered, semantically federated, and integrated into a uniform knowledge base that can present this data to users in a personalised, appropriate visualisation. This personalised semantic aggregation of music metadata from several sources is the gist of this thesis. The outlined solution concentrates particularly on users' needs regarding music collection management, which can vary strongly between individuals. The author's proposal, the personal music knowledge base (PMKB), consists of a client-server architecture with uniform communication endpoints and an ontological knowledge representation model format that is able to represent the versatile information of its use cases. The PMKB concept covers the complete information flow life cycle, including the processes of user account initialisation, information service choice, individual information extraction, and proactive update notification. The PMKB implementation makes use of Semantic Web technologies. Particularly the knowledge representation part of the PMKB vision is explained in this work.
    Several new Semantic Web ontologies are defined, or existing ones are substantially modified, to meet the requirements of a personalised semantic federation of music and music-related data for managing personal music collections. The outcome is, amongst others:
    • a new vocabulary for describing the play back domain,
    • another one for representing information service categorisations and quality ratings, and
    • one that unites the beneficial parts of the existing advanced user modelling ontologies.
    The introduced vocabularies can be utilised well in conjunction with the existing Music Ontology framework. Some RDFizers that also make use of the outlined ontologies in their mapping definitions illustrate the fitness in practice of these specifications. A social evaluation method is applied to examine the reutilisation, application and feedback of the vocabularies explained in this work. This analysis shows that it is good practice to properly publish Semantic Web ontologies with the help of Linked Data principles and further basic SEO techniques, to easily reach the searching audience, to avoid duplicates of such KR specifications, and, last but not least, to directly establish a "shared understanding". Due to their project-independence, the proposed vocabularies can be deployed in every knowledge representation model that needs their knowledge representation capacities. This thesis adds its value towards making the vision of a personal music knowledge base come true.
    Contents:
    1 Introduction and Background
      1.1 Introduction
      1.2 Personal Music Collection Use Cases
      1.3 Summary
    2 Music Information Management
      2.1 Knowledge Management
        2.1.1 Knowledge Representation
          2.1.1.1 Knowledge Representation Models
          2.1.1.2 Semantic Graphs
          2.1.1.3 Ontologies
          2.1.1.4 Summary
        2.1.2 Knowledge Management Systems
          2.1.2.1 Information Services
          2.1.2.2 Ontology-based Distributed Knowledge Management Systems
          2.1.2.3 Knowledge Management System Design Guideline
        2.1.3 Summary
      2.2 Semantic Web Technologies
        2.2.1 The Evolution of the World Wide Web
          2.2.1.1 The Hypertext Web
          2.2.1.2 The Normative Principles of Web Architecture
          2.2.1.3 The Semantic Web
        2.2.2 Common Semantic Web Knowledge Representation Languages
        2.2.3 Resource Description Levels and their Relations
        2.2.4 Semantic Web Knowledge Representation Models
          2.2.4.1 Construction
          2.2.4.2 Mapping
          2.2.4.3 Context Modelling
          2.2.4.4 Storing
          2.2.4.5 Providing
          2.2.4.6 Consuming
        2.2.5 Summary
      2.3 Music Content and Context Data
        2.3.1 Categories of Musical Characteristics
        2.3.2 Music Metadata Formats
        2.3.3 Music Metadata Services
          2.3.3.1 Audio Signal Carrier Indexing Services
          2.3.3.2 Music Recommendation and Discovery Services
          2.3.3.3 Music Content and Context Analysis Services
        2.3.4 Summary
      2.4 Personalisation and Environmental Context
        2.4.1 User Modelling
        2.4.2 Context Modelling
        2.4.3 Stereotype Modelling
      2.5 Summary
    3 The Personal Music Knowledge Base
      3.1 Foundations
        3.1.1 Knowledge Representation
        3.1.2 Knowledge Management
      3.2 Architecture
      3.3 Workflow
        3.3.1 User Account Initialisation
        3.3.2 Individual Information Extraction
        3.3.3 Information Service Choice
        3.3.4 Proactive Update Notification
        3.3.5 Information Exploration
        3.3.6 Personal Associations and Context
      3.4 Summary
    4 A Personal Music Knowledge Base
      4.1 Knowledge Representation
        4.1.1 The Info Service Ontology
        4.1.2 The Play Back Ontology and related Ontologies
          4.1.2.1 The Ordered List Ontology
          4.1.2.2 The Counter Ontology
          4.1.2.3 The Association Ontology
          4.1.2.4 The Play Back Ontology
        4.1.3 The Recommendation Ontology
        4.1.4 The Cognitive Characteristics Ontology and related Vocabularies
          4.1.4.1 The Weighting Ontology
          4.1.4.2 The Cognitive Characteristics Ontology
          4.1.4.3 The Property Reification Vocabulary
        4.1.5 The Media Types Taxonomy
        4.1.6 Summary
      4.2 Knowledge Management System
      4.3 Summary
    5 Personal Music Knowledge Base in Practice
      5.1 Application
        5.1.1 AudioScrobbler RDF Service
        5.1.2 PMKB ID3 Tag Extractor
      5.2 Evaluation
        5.2.1 Reutilisation
        5.2.2 Application
        5.2.3 Reviews and Mentions
        5.2.4 Indexing
      5.3 Summary
    6 Conclusion and Future Work
      6.1 Conclusion
      6.2 Future Work
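Knowledge in such an ontological model is expressed as RDF-style triples. The following is a minimal, pure-Python sketch of that idea; the `pbo:` namespace URI and the `PlayBackEvent`/`playedTrack`/`playCount` terms are hypothetical placeholders, not the actual vocabulary of the Play Back Ontology.

```python
# A minimal, pure-Python sketch of the kind of RDF triples such an
# ontology-based model produces. The pbo: namespace and its terms are
# hypothetical placeholders, not the real Play Back Ontology terms.
PBO = "http://example.org/playback#"   # hypothetical namespace
MO = "http://purl.org/ontology/mo/"    # Music Ontology namespace
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    ("http://example.org/tracks/1", RDF_TYPE, MO + "Track"),
    ("http://example.org/events/1", RDF_TYPE, PBO + "PlayBackEvent"),
    ("http://example.org/events/1", PBO + "playedTrack",
     "http://example.org/tracks/1"),
    ("http://example.org/events/1", PBO + "playCount", "3"),
]

def objects(subject, predicate):
    """Return all objects o such that (subject, predicate, o) is asserted."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("http://example.org/events/1", PBO + "playedTrack"))
# ['http://example.org/tracks/1']
```

A real deployment would use an RDF library and triple store rather than a Python list, but the data shape is the same: uniform subject-predicate-object statements that different services can federate.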

    Self-Supervised Representation Learning for Vocal Music Context

    In music and speech, meaning is derived at multiple levels of context. Affect, for example, can be inferred both from a short sound token and from sonic patterns over a longer temporal window, such as an entire recording. In this paper we focus on inferring meaning from this dichotomy of contexts. We show how contextual representations of short sung vocal lines can be implicitly learned from fundamental frequency (F0) and thus be used as a meaningful feature space for downstream Music Information Retrieval (MIR) tasks. We propose three self-supervised deep learning paradigms which leverage pseudo-task learning of these two levels of context to produce latent representation spaces. We evaluate the usefulness of these representations by embedding unseen vocal contours into each space and conducting downstream classification tasks. Our results show that contextual representations can enhance downstream classification by as much as 15% compared to using traditional statistical contour features.
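The "traditional statistical contour features" used as the baseline in work of this kind typically reduce an F0 contour to a handful of summary statistics. A minimal sketch follows; the exact feature set here is an assumption for illustration, not the paper's.

```python
from statistics import mean, pstdev

def contour_features(f0_hz):
    """Statistical summary of an F0 contour (Hz per frame), a common
    baseline feature set. This feature choice is illustrative only.
    Frames with f0 == 0 are treated as unvoiced and skipped."""
    voiced = [f for f in f0_hz if f > 0]
    n = len(voiced)
    x_mean = (n - 1) / 2
    y_mean = mean(voiced)
    # Least-squares slope of pitch over frame index (overall contour drift).
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(range(n), voiced)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    return {"mean": y_mean, "std": pstdev(voiced),
            "range": max(voiced) - min(voiced), "slope": slope}

feats = contour_features([0.0, 220.0, 222.0, 224.0, 0.0, 226.0, 228.0])
print(feats["slope"])  # 2.0 (pitch rises ~2 Hz per voiced frame)
```

The learned contextual representations in the paper replace such hand-picked statistics with embeddings trained on the two levels of context.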

    Framing jazz: thoughts on representation and embodiment

    Audiovisual representations of jazz performances provide us with more information than audio recordings. The camera not only allows us access to music performances, it also constructs vantage points by framing its subjects in specific ways. This chapter explores aspects of representation and embodiment in jazz performance on film. By looking at filmic technique through shot composition, levels of close-up, focus, and so on, the chapter examines how filmic representation works to mediate the way in which viewers are directed to gaze at and understand performances. Beginning with some examples of Hollywood "Soundies" (including a film of the Jimmy Dorsey Orchestra) and moving to documentary-style films of jazz performance (including examples by Chet Baker and Jim Hall), this chapter shows how filmic techniques and production can serve to highlight musical hierarchies and relationships, providing a kind of commentary on the music.

    One Deep Music Representation to Rule Them All? : A comparative analysis of different representation learning strategies

    Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective but also efficient manner, deep transfer learning has become a common approach: the output of a pre-trained neural network is reused as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g. music audio), the generated deep representation of the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained on a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we present the results of our investigation of the most important factors in generating deep representations for data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study involving multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations on multiple target datasets. The results of our experiments yield several insights on how to approach the design of methods for learning widely deployable deep data representations in the music domain. (Accepted to Neural Computing and Applications: Special Issue on Deep Learning for Music and Audio.)

    MARBLE: Music Audio Representation Benchmark for Universal Evaluation

    In the era of extensive intersection between art and Artificial Intelligence (AI), such as image generation and fiction co-creation, AI for music remains relatively nascent, particularly in music understanding. This is evident in the limited work on deep music representations, the scarcity of large-scale datasets, and the absence of a universal and community-driven benchmark. To address this issue, we introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE. It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels: acoustic, performance, score, and high-level description. We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standard assessment of the representations of all open-sourced pre-trained models developed on music recordings as baselines. In addition, MARBLE offers an easy-to-use, extendable, and reproducible suite for the community, with a clear statement on copyright issues concerning the datasets. Results suggest that recently proposed large-scale pre-trained musical language models perform best in most tasks, with room for further improvement. The leaderboard and toolkit repository are published at this https URL to promote future music AI research.
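The unified protocol such a benchmark defines usually amounts to freezing each pre-trained model and fitting a lightweight probe on its embeddings for every task. The following sketch shows that pattern with a nearest-centroid probe and toy two-dimensional "embeddings" standing in; MARBLE's actual probes, tasks, and metrics differ.

```python
# Hedged sketch of a benchmark-style probing protocol: freeze a model's
# embeddings and fit a lightweight probe per downstream task. A
# nearest-centroid classifier stands in for a trained linear probe here.

def fit_centroids(embeddings, labels):
    """Compute one mean embedding (centroid) per class label."""
    sums, counts = {}, {}
    for e, y in zip(embeddings, labels):
        sums.setdefault(y, [0.0] * len(e))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums[y], e)]
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, e):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, e))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# Toy "frozen embeddings" for a hypothetical two-class downstream task.
train_X = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]
train_y = ["speech", "speech", "singing", "singing"]
probe = fit_centroids(train_X, train_y)
print(predict(probe, [0.95, 0.85]))  # singing
```

Because the upstream model stays frozen, differences in probe accuracy across tasks can be attributed to the representations themselves, which is the point of such a benchmark.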

    Towards a Unified Model of Chords in Western Harmony

    Chord-based harmony is an important aspect of many types of Western music, across genres, regions, and historical eras. However, the consistent representation and comparison of harmony across a wide range of styles (e.g., classical music, Jazz, Rock, or Pop) is a challenging task. Moreover, even within a single musical style, multiple theories of harmony exist, each relying on its own (possibly implicit) assumptions and leading to harmonic analyses with a distinct focus (e.g., on the root of a chord vs. its bass note) or representation (e.g., spelled vs. enharmonic pitch classes). Cross-stylistic and cross-theory comparisons are therefore even more difficult, particularly in a large-scale computational setting that requires a common overarching representation. To address these problems, we propose a model which allows for the representation of chords at multiple levels of abstraction: from chord realizations on the score level (if available), to pitch-class collections (including a potential application of different equivalences, such as enharmonic or octave equivalence), to pitch- and chord-level functions and higher-order abstractions. Importantly, our proposed model is also well-defined for theories which do not specify information at each level of abstraction (e.g., some theories make no claims about harmonic function), representing only those harmonic properties that are explicitly included and inducing others where possible (e.g., deriving scale degrees from root and key information). Our model thus represents an important step towards a unified representation of harmony and its various applications. This research was supported by the Swiss National Science Foundation within the project "Distant Listening -- The Development of Harmony over Three Centuries (1700-2000)" (Grant no. 182811). This project is being conducted at the Latour Chair in Digital and Cognitive Musicology, generously funded by Mr. Claude Latour.
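One way to picture such a layered model is a record whose lower levels may be left unspecified by a given theory, while other properties are induced where possible (e.g., an enharmonic pitch class derived from a spelled root). The sketch below is a hedged illustration; all class and field names are assumptions, not the authors' actual model.

```python
# Illustrative sketch of a multi-level chord representation. Names and
# fields are hypothetical stand-ins for the abstraction levels described
# in the abstract, not the authors' published model.
from dataclasses import dataclass
from typing import Optional

PITCH_CLASSES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def spelled_to_pc(note: str) -> int:
    """Induce an enharmonic pitch class (0-11) from a spelled note,
    e.g. 'F#' -> 6, 'Bb' -> 10. Each '#' raises, each 'b' lowers."""
    pc = PITCH_CLASSES[note[0]] + note.count("#") - note.count("b")
    return pc % 12

@dataclass
class Chord:
    root: str                       # spelled root, e.g. "F#"
    bass: Optional[str] = None      # bass note, if the theory records one
    quality: str = "maj"
    function: Optional[str] = None  # harmonic function, if specified

    def pitch_class_root(self) -> int:
        # Induced property: derived from the spelled root, so it is
        # well-defined even for theories that never mention pitch classes.
        return spelled_to_pc(self.root)

c = Chord(root="F#", bass="A#")
print(c.pitch_class_root())  # 6
```

The key design point mirrored here is that absent levels are represented as `None` rather than guessed, while derivable properties (like the enharmonic root) are computed on demand.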

    Studying Music is Difficult and Important: Challenges of Music Knowledge Representation

    * Music is an art, so many musicians try to use its elements in interesting and original ways, not standardized and ordinary ways (cf. Collins 2006).
    * Music is a performing art, so we have both performances and symbolic representations (both scores and transcriptions of performances).
    * Much music, especially Western music, has synchronization requirements of a complexity equalled in no other presentation of information for human consumption -- art form or otherwise -- that we are aware of.
    * Music involves many different instruments, often in groups. No other art form we know of has anything like this, and it opens up the possibility of versions of a given work for other ensembles or at other levels of technical demand.
    * Music is often combined with text.
    * Music is extremely popular, so, for many works, numerous versions actually exist.
    For all these reasons, music is uniquely difficult, and uniquely valuable, to deal with -- especially by computer. To support the argument, we give examples in the form of conventional Western music notation that either violate -- in several cases, blatantly -- the supposed rules of music notation, or that bring up difficult issues of music representation (see Byrd 1994 and Byrd 2009). We also give examples in audio form from some unpublished work of ours to point out the astounding range of what is considered music by one culture or another.
    References:
    Byrd, Donald (1994). Music Notation Software and Intelligence. Computer Music Journal 18(1), pp. 17-20; available (in scanned form) at http://www.informatics.indiana.edu/donbyrd/Papers/MusNotSoftware+Intelligence.pdf
    Byrd, Donald (2009). Gallery of Interesting Music Notation. Available at http://www.informatics.indiana.edu/donbyrd/InterestingMusicNotation.html
    Collins, Nick (2006, Winter). Composing to Subvert Content Retrieval Engines. ICMA Array, Winter 2006, pp. 37-41

    Predicting performance difficulty from piano sheet music images

    Estimating the performance difficulty of a musical score is crucial in music education for adequately designing students' learning curricula. Although the Music Information Retrieval community has recently shown interest in this task, existing approaches mainly use machine-readable scores, leaving the broader case of sheet music images unaddressed. Building on previous work involving sheet music images, we use a mid-level representation, the bootleg score, which describes notehead positions relative to staff lines, coupled with a transformer model. This architecture is adapted to our task by introducing an encoding scheme that reduces the encoded sequence length to one-eighth of the original size. For evaluation, we consider five datasets (more than 7,500 scores with up to 9 difficulty levels), two of them compiled particularly for this work. The results obtained when pretraining the scheme on the IMSLP corpus and fine-tuning it on the considered datasets support the proposal's validity, with the best-performing model achieving a balanced accuracy of 40.34% and a mean square error of 1.33. Finally, we provide access to our code, data, and models for transparency and reproducibility.
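The abstract does not spell out the encoding scheme, but one generic way to obtain an eight-fold reduction over a binary grid such as a bootleg score (an assumption for illustration only, not the authors' exact method) is to pack each group of 8 binary cells into a single integer token:

```python
# Illustration of an 8x sequence-length reduction over binary cells.
# This packing is a hypothetical stand-in for the paper's encoding scheme.

def pack_bits(cells, group=8):
    """Pack a flat binary sequence into integer tokens, `group` bits each.
    A final partial group is packed from however many bits remain."""
    tokens = []
    for i in range(0, len(cells), group):
        value = 0
        for bit in cells[i:i + group]:
            value = (value << 1) | bit  # shift in the next cell
        tokens.append(value)
    return tokens

bootleg_cells = [1, 0, 0, 0, 0, 0, 0, 1,   # one staff-position column
                 0, 1, 1, 0, 0, 0, 0, 0]   # the next column
tokens = pack_bits(bootleg_cells)
print(tokens)       # [129, 96]
print(len(tokens))  # 2 -- one-eighth of the 16 input cells
```

Shorter sequences matter here because transformer attention cost grows quadratically with sequence length, so an 8x reduction makes whole-score inputs far cheaper to process.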

    Analysis of John Fiske's Semiotics in the Representation of the Journalist Profession in the Japanese Drama 'The Journalist'

    The journalist profession has an active role in providing information to the public, which makes it interesting material for a movie or drama story. The Journalist is a drama series that aired on Netflix in 2022, a drama-thriller adapted from the 2019 film of the same name. Michihito Fujii's drama tells of the persistence of a female journalist who is dedicated to her profession and to exposing problems in government. The purpose of this study is to determine the representation of the journalist profession in the drama series The Journalist. Based on the identification of the problem formulation, this study aims to find out how the codes that appear or are used in the drama The Journalist are interconnected so that a meaning is formed. The method used is descriptive qualitative, a method used to describe or explain situations or events. This research uses John Fiske's semiotic model, which is divided into three levels: the level of reality, the level of representation, and the level of ideology. The units of analysis chosen at the level of reality are accessories, environment, gestures, movements, expressions, and costumes; at the level of representation, camera techniques, lighting, music, dialogue, action, and characters. The reality and representation displayed form an ideology, which is the meaning contained in the drama. Some scenes show the usual activities of a journalist, such as finding news, visiting sources, attending press conferences, and making information worthy of dissemination to the public. In practice, journalists often experience difficulties in revealing the truth, especially when facing people who hold power.

    Generation of folk song melodies using Bayes transforms

    The paper introduces the `Bayes transform', a mathematical procedure for putting data into a hierarchical representation. Applicable to any type of data, the procedure yields interesting results when applied to sequences. In this case, the representation obtained implicitly models the repetition hierarchy of the source. There are then natural applications to music. Derivation of Bayes transforms can be the means of determining the repetition hierarchy of note sequences (melodies) in an empirical and domain-general way. The paper investigates application of this approach to Folk Song, examining the results that can be obtained by treating such transforms as generative models
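The abstract does not give the transform's details, but the kind of repetition hierarchy it refers to can be illustrated with a simple grammar-style rewriting step (a re-pair-like stand-in, not the Bayes transform itself): the most frequent adjacent pair of symbols in a melody is replaced by a new non-terminal, and repeating this process builds a hierarchy of motifs.

```python
from collections import Counter

# Illustration of recovering a repetition hierarchy from a note sequence.
# This re-pair-style grammar step is a stand-in; the paper's Bayes
# transform is a different (Bayesian) procedure.

def compress_once(seq, symbol):
    """Replace every occurrence of the most frequent adjacent pair with
    `symbol`. Returns the rewritten sequence and the rule applied, or
    (seq, None) if no pair repeats."""
    pair, count = Counter(zip(seq, seq[1:])).most_common(1)[0]
    if count < 2:
        return seq, None
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(symbol)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, (symbol, pair)

melody = ["C", "D", "E", "C", "D", "E", "G", "C", "D", "E"]
step1, rule1 = compress_once(melody, "X")
print(step1)  # the thrice-repeated motif is partially folded into X
```

Iterating `compress_once` with fresh symbols yields a small grammar whose rules form exactly the kind of repetition hierarchy the paper extracts, here in a purely heuristic rather than Bayesian way.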