8,899 research outputs found
Big Music Data, Musicology, and the Study of Recorded Music: Three Case Studies
This paper considers some of the interactions between Music Information Retrieval (MIR) and musicology, particularly in relation to Big Music Data and the analysis of recorded music. Since MIR is still not widely recognized within the musicological community, and the possible insights offered by analyzing Big Music Data even less so, the paper both briefly contextualizes some of this work for a musicological readership and provides three specific case studies that illustrate concrete musicological outcomes. These relate to: changing orchestral pitch over time; pulse salience beyond the Euro-American classical music tradition; and changing performance tempi in classical music. The paper concludes by considering some broader conceptual issues that arise from the relationship between Big Music Data, musicology, and recorded music.
The digital music lab: A big data infrastructure for digital musicology
In musicology and music research generally, the increasing availability of digital music, storage capacity, and computing power both enables and requires new and intelligent systems. In the transition from traditional to digital musicology, many techniques and tools have been developed for the analysis of individual pieces of music, but the large-scale music data that are increasingly becoming available require research methods and systems that work at the collection level and at scale. Although many relevant algorithms have been developed during the past 15 years of research in Music Information Retrieval, an integrated system that supports large-scale digital musicology research has so far been lacking. In the Digital Music Lab (DML) project, a collaboration among music librarians, musicologists, computer scientists, and human-computer interface specialists, a software system has been developed that fills this gap by providing intelligent large-scale music analysis with a user-friendly interactive interface supporting musicologists in their exploration and enquiry. The DML system empowers musicologists by addressing several challenges: distributed processing of audio and other music data, management of the data analysis process and results, remote analysis of data under copyright, logical inference on the extracted information and metadata, and visual web-based interfaces for exploring and querying the music collections. The DML system is scalable, is based on Semantic Web technology, and integrates into Linked Data, with the vision of a distributed system that enables music research across archives, libraries, and other providers of music data. A first DML system prototype has been set up in collaboration with the British Library and I Like Music Ltd and has been used to analyse a diverse corpus currently comprising 250,000 music tracks.
In this article, we describe the DML system requirements, design, architecture, components, and available data sources, explaining their interaction. We report use cases and applications with initial evaluations of the proposed system.
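The distributed-processing challenge the abstract mentions can be sketched in a few lines. In this illustrative example, `extract_features` is a hypothetical stand-in for a real audio analysis step, and a thread pool stands in for the distributed workers a real deployment would use:

```python
# Minimal sketch of fanning feature-extraction jobs out over a worker
# pool, in the spirit of the DML's distributed processing. The body of
# extract_features is a placeholder: a real system would decode audio
# and run extractors such as Vamp plugins here.
from concurrent.futures import ThreadPoolExecutor

def extract_features(track_id):
    # Placeholder "analysis" result keyed by track identifier.
    return {"track": track_id, "n_frames": 1000}

def analyse_collection(track_ids, workers=4):
    # Each worker handles a share of the tracks; results are gathered
    # into one list for collection-level aggregation.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_features, track_ids))

results = analyse_collection([f"track-{i}" for i in range(8)])
```

A production system would distribute these tasks across processes or cluster nodes rather than threads, but the fan-out/gather shape is the same.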
The Semantic Web MIDI Tape: An Interface for Interlinking MIDI and Context Metadata
The Linked Data paradigm has been used to publish a large number of musical datasets and ontologies on the Semantic Web, such as MusicBrainz, AcousticBrainz, and the Music Ontology. Recently, the MIDI Linked Data Cloud has been added to these datasets, representing more than 300,000 pieces in MIDI format as Linked Data and opening up the possibility of linking fine-grained symbolic music representations to existing music metadata databases. Although the dataset makes MIDI resources available in Web data standard formats such as RDF and SPARQL, the important issue of finding meaningful links between these MIDI resources and relevant contextual metadata in other datasets remains. A fundamental barrier to the provision and generation of such links is the difficulty users face in adding new MIDI performance data and metadata to the platform. In this paper, we propose the Semantic Web MIDI Tape, a set of tools and an associated interface for interacting with the MIDI Linked Data Cloud by enabling users to record, enrich, and retrieve MIDI performance data and related metadata in native Web data standards. The goal of such interactions is to find meaningful links between published MIDI resources and their relevant contextual metadata. We evaluate the Semantic Web MIDI Tape in various use cases involving user-contributed content, MIDI similarity querying, and entity recognition methods, and discuss their potential for finding links between MIDI resources and metadata.
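The idea of representing MIDI events as Linked Data can be illustrated with a small sketch. The namespace prefixes, property names, and URIs below are invented for illustration and are not the actual vocabulary of the MIDI Linked Data Cloud:

```python
# Hypothetical sketch: a recorded MIDI note-on event expressed as
# RDF-style (subject, predicate, object) triples, ready to be linked to
# contextual metadata elsewhere. All names here are illustrative.
def note_to_triples(piece_uri, event_id, pitch, velocity, tick):
    ev = f"{piece_uri}/event/{event_id}"
    return [
        (ev, "rdf:type", "midi:NoteOnEvent"),
        (ev, "midi:pitch", str(pitch)),          # MIDI note number, 0-127
        (ev, "midi:velocity", str(velocity)),    # key strike velocity
        (ev, "midi:absoluteTick", str(tick)),    # time in MIDI ticks
        (piece_uri, "midi:hasEvent", ev),        # link event to its piece
    ]

triples = note_to_triples("http://example.org/piece/42", 0, 60, 100, 480)
```

Once events are in triple form, linking a piece to, say, a MusicBrainz artist entry is just one more triple whose object is an external URI, which is what makes the cross-dataset linking described above possible.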
Interim Report: Assessing the Future Landscape of Scholarly Communication
The Center for Studies in Higher Education, with generous funding from the Andrew W. Mellon Foundation, is conducting research to understand the needs and desires of faculty for in-progress scholarly communication (i.e., forms of communication employed as research is being executed) as well as archival publication. In the interest of developing a deeper understanding of how and why scholars do what they do to advance their fields, as well as their careers, our approach focuses on fine-grained analyses of faculty values and behaviors throughout the scholarly communication lifecycle, including sharing, collaborating, publishing, and engaging with the public. Well into our second year, we have posted a draft interim report describing some of our early results and impressions based on the responses of more than 150 interviewees in the fields of astrophysics, archaeology, biology, economics, history, music, and political science. Our work to date has confirmed the important impact of disciplinary culture and tradition on many scholarly communication habits. These traditions may override the perceived "opportunities" afforded by new technologies, including those falling into the Web 2.0 category. As we have listened to our diverse informants, as well as followed closely the prognostications about the likely future of scholarly communication, we note that it is absolutely imperative to be precise about terms. That includes being clear about what is meant by "open access" publishing (i.e., using preprint or postprint servers for scholarship published in prestigious outlets, versus publishing in new, untested open access journals, or the more casual individual posting of working papers, blogs, and other non-peer-reviewed work).
Our research suggests that enthusiasm for technology development and adoption should not be conflated with the hard reality of tenure and promotion requirements (including the needs and goals of final archival publication) in highly competitive professional environments.
Using Automated Rhyme Detection to Characterize Rhyming Style in Rap Music
Imperfect and internal rhymes are two important features in rap music previously ignored in the music information retrieval literature. We developed a method of scoring potential rhymes using a probabilistic model based on phoneme frequencies in rap lyrics. We used this scoring scheme to automatically identify internal and line-final rhymes in song lyrics and demonstrated the performance of this method compared to rules-based models. We then calculated higher-level rhyme features and used them to compare rhyming styles in song lyrics from different genres, and for different rap artists. We found that these detected features corresponded to real-world descriptions of rhyming style and were strongly characteristic of different rappers, resulting in potential applications to style-based comparison, music recommendation, and authorship identification.
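The core intuition of frequency-based rhyme scoring can be sketched briefly. This is a simplified illustration, not the paper's actual model: candidate word endings are compared phoneme by phoneme from the end, and matches on rarer phonemes contribute more evidence that the rhyme is intentional:

```python
# Illustrative sketch of scoring a candidate rhyme from phoneme
# frequencies: matching a rare phoneme is more surprising by chance,
# so it earns a higher (negative-log-probability) score. The frequency
# table and formula below are invented for the example.
import math

def rhyme_score(phons_a, phons_b, phoneme_freq):
    # Walk both phoneme sequences backwards from the word ending.
    score = 0.0
    for pa, pb in zip(reversed(phons_a), reversed(phons_b)):
        if pa == pb:
            # Rarer phoneme match -> larger contribution.
            score += -math.log(phoneme_freq.get(pa, 1e-4))
    return score

# Toy ARPAbet-style frequencies (illustrative values).
freq = {"IY": 0.05, "T": 0.10, "S": 0.08, "AH": 0.12}

strong = rhyme_score(["S", "IY", "T"], ["B", "IY", "T"], freq)  # "seat"/"beat"
weak = rhyme_score(["S", "IY", "T"], ["S", "AH", "N"], freq)    # "seat"/"sun"
```

A full system would also handle imperfect rhymes (near-matching vowels, transposed consonants) and scan within lines for internal rhymes, but the rarity-weighted scoring idea carries over.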
Big Chord Data Extraction and Mining
Harmonic progression is one of the cornerstones of tonal music composition and is thereby essential to many musical styles and traditions. Previous studies have shown that musical genres and composers could be discriminated based on chord progressions modeled as chord n-grams. These studies were however conducted on small-scale datasets and using symbolic music transcriptions.
In this work, we apply pattern mining techniques to over 200,000 chord progression sequences out of 1,000,000 extracted from the I Like Music (ILM) commercial music audio collection. The ILM collection spans 37 musical genres and includes pieces released between 1907 and 2013. We developed a single program multiple data parallel computing approach whereby audio feature extraction tasks are split up and run simultaneously on multiple cores. An audio-based chord recognition model (Vamp plugin Chordino) was used to extract the chord progressions from the ILM set. To keep feature sets lightweight, the chord data were stored using a compact binary format. We used the CM-SPADE algorithm, which performs a vertical mining of sequential patterns using co-occurrence information, and which is fast and efficient enough to be applied to big data collections like the ILM set. In order to derive key-independent frequent patterns, transitions between chords are modeled by changes of qualities (e.g. major, minor, etc.) and root keys (e.g. fourth, fifth, etc.). The resulting key-independent chord progression patterns vary in length (from 2 to 16) and frequency (from 2 to 19,820) across genres. As illustrated by graphs generated to represent frequent 4-chord progressions, some patterns like circle-of-fifths movements are well represented in most genres, but in varying degrees.
These large-scale results offer the opportunity to uncover similarities and discrepancies between sets of musical pieces and therefore to build classifiers for search and recommendation. They also support the empirical testing of music theory. It is, however, more difficult to derive new hypotheses from such a dataset due to its size. This can be addressed by using pattern detection algorithms or suitable visualisations, which we present in a companion study.
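The key-independent encoding described above can be sketched as follows. The pitch-class table and tuple encoding are illustrative simplifications of whatever representation the study actually used:

```python
# Sketch of key-independent chord transitions: each progression is
# reduced to root-interval (in semitones, mod 12) and quality changes,
# so transposed progressions map to the same pattern.
PITCH_CLASS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
               "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def transitions(chords):
    # Each chord is a (root, quality) pair, e.g. ("C", "maj").
    out = []
    for (r1, q1), (r2, q2) in zip(chords, chords[1:]):
        interval = (PITCH_CLASS[r2] - PITCH_CLASS[r1]) % 12
        out.append((interval, q1, q2))
    return out

# C:maj -> G:maj and F:maj -> C:maj are the same fifths-related move,
# so both yield the identical key-independent pattern.
a = transitions([("C", "maj"), ("G", "maj")])
b = transitions([("F", "maj"), ("C", "maj")])
assert a == b == [(7, "maj", "maj")]
```

Mining n-grams over these transition tuples rather than absolute chord labels is what lets patterns such as circle-of-fifths movements be counted across all keys at once.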
Big data optical music recognition with multi images and multi recognisers
In this paper we describe work in progress towards Multi-OMR, an approach to Optical Music Recognition (OMR) which aims to significantly improve the accuracy of musical score digitisation. There are a large number of scores available in public databases, as well as a range of different commercial and open source OMR tools. Using these resources, we are exploring a Big Data approach that aligns and combines the results of multiple versions of the same score, processed with multiple technologies. It is anticipated that this approach will yield high quality results, opening up large datasets to researchers in the field of digital musicology.
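The combination step can be illustrated with a minimal sketch. This assumes the recognisers' outputs have already been aligned symbol-for-symbol (real alignment of differing sequences would need something like multiple sequence alignment), with "?" marking a symbol a tool missed:

```python
# Illustrative majority-vote combination of pre-aligned outputs from
# several OMR tools for the same score. Symbols and the "?" gap marker
# are invented for the example.
from collections import Counter

def combine(aligned_outputs):
    combined = []
    for symbols in zip(*aligned_outputs):
        # Ignore gaps, then take the most common surviving symbol.
        votes = Counter(s for s in symbols if s != "?")
        combined.append(votes.most_common(1)[0][0] if votes else "?")
    return combined

omr_a = ["C4", "E4", "?",  "C5"]
omr_b = ["C4", "E4", "G4", "C5"]
omr_c = ["C4", "F4", "G4", "B4"]
print(combine([omr_a, omr_b, omr_c]))  # ['C4', 'E4', 'G4', 'C5']
```

The appeal of the approach is visible even in this toy case: each individual recogniser makes an error or omission, yet the combined output is correct at every position.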