1,008 research outputs found

    A Cross-Version Approach for Harmonic Analysis of Music Recordings

    Get PDF
    The automated extraction of chord labels from audio recordings is a central task in music information retrieval. Here, the chord labeling is typically performed on a specific audio version of a piece of music, produced under certain recording conditions, played on specific instruments, and characterized by the individual styles of the musicians. As a consequence, the obtained chord labeling results are strongly influenced by version-dependent characteristics. In this chapter, we show that analyzing the harmonic properties of several audio versions synchronously stabilizes the chord labeling result in the sense that inconsistencies indicate version-dependent characteristics, whereas consistencies across several versions indicate harmonically stable passages in the piece of music. In particular, we show that consistently labeled passages often correspond to correctly labeled passages. Our experiments show that the cross-version labeling procedure significantly increases the precision of the result while keeping the recall at a relatively high level. Furthermore, we introduce a powerful visualization that reveals the harmonically stable passages on a musical time axis specified in bars. Finally, we demonstrate how this visualization facilitates a better understanding of classification errors and may be used by music experts as a helpful tool for exploring harmonic structures.
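    A minimal Python sketch of the consistency idea described above, assuming the per-version chord labels have already been transferred to a shared bar-wise time axis; the function name, label format, and agreement threshold are illustrative assumptions rather than the authors' implementation:

```python
from collections import Counter

def consensus_chord_labels(version_labels, min_agreement=1.0):
    """Combine bar-wise chord labels from several synchronized versions.

    version_labels: list of label sequences, one per audio version; each
    sequence holds one chord label per bar on a shared musical time axis.
    Bars whose labels agree across at least `min_agreement` of the versions
    are kept; all other bars are left unlabeled (None), trading recall for
    precision as described in the abstract.
    """
    n_versions = len(version_labels)
    consensus = []
    for bar_labels in zip(*version_labels):   # labels of one bar across versions
        label, count = Counter(bar_labels).most_common(1)[0]
        consensus.append(label if count / n_versions >= min_agreement else None)
    return consensus

# Toy example: three versions of a four-bar passage
v1 = ["C", "G", "Am", "F"]
v2 = ["C", "G", "Am", "Dm"]
v3 = ["C", "G", "Am", "F"]
print(consensus_chord_labels([v1, v2, v3]))         # full agreement only
print(consensus_chord_labels([v1, v2, v3], 2 / 3))  # majority vote
```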

    Automated methods for audio-based music analysis with applications to musicology

    Get PDF
    This thesis contributes to bridging the gap between music information retrieval (MIR) and musicology. We present several automated methods for music analysis, motivated by concrete application scenarios that are of central importance in musicology. In this context, the automated music analysis is performed on the basis of audio material. One reason is that, for a given piece of music, many different recorded performances usually exist. The availability of multiple versions of a piece of music is exploited in this thesis to stabilize analysis results. We show how the presented automated methods open up new possibilities for supporting musicologists in their work. Furthermore, we introduce novel interdisciplinary concepts which facilitate the collaboration between computer scientists and musicologists. Based on these concepts, we demonstrate how MIR researchers and musicologists may greatly benefit from each other in an interdisciplinary collaboration. Firstly, we present a fully automatic approach for the extraction of tempo parameters from audio recordings and show to what extent this approach may support musicologists in analyzing recorded performances. Secondly, we introduce novel user interfaces which are aimed at encouraging the exchange between computer science and musicology. In this context, we indicate the potential of computer-based methods in music education by testing and evaluating a novel MIR user interface at the University of Music SaarbrĂŒcken. Furthermore, we show how a novel multi-perspective user interface allows for interactively viewing and evaluating version-dependent analysis results and opens up new possibilities for interdisciplinary collaborations. Thirdly, we present a cross-version approach for harmonic analysis of audio recordings and demonstrate how this approach enables musicologists to explore harmonic structures even across large music corpora. Here, one simple yet important conceptual contribution is to convert the physical time axis of an audio recording into a performance-independent musical time axis given in bars.
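    The conversion from a physical to a musical time axis mentioned at the end of the abstract can be sketched as follows, assuming bar onset times are available for each recording (e.g., from score-to-audio synchronization); function name, data, and interpolation scheme are illustrative assumptions:

```python
import numpy as np

def physical_to_musical_time(t, bar_onsets):
    """Map a timestamp in seconds to a (fractional) bar position.

    bar_onsets: bar start times in seconds for one recording.
    Returns 1-based bar numbers; positions within a bar are interpolated
    linearly, so analysis results from different recordings can be compared
    on the same performance-independent axis.
    """
    bar_onsets = np.asarray(bar_onsets, dtype=float)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    idx = np.clip(np.searchsorted(bar_onsets, t, side="right") - 1,
                  0, len(bar_onsets) - 2)
    frac = (t - bar_onsets[idx]) / (bar_onsets[idx + 1] - bar_onsets[idx])
    return 1 + idx + np.clip(frac, 0.0, None)

# A slow and a fast performance of the same four bars map to the same bar axis
slow = [0.0, 2.1, 4.3, 6.2, 8.4]
fast = [0.0, 1.6, 3.1, 4.7, 6.3]
print(physical_to_musical_time(5.25, slow))  # middle of bar 3 in the slow version
print(physical_to_musical_time(3.9, fast))   # the corresponding spot in the fast one
```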

    Musical components important for the Mozart K448 effect in epilepsy

    Get PDF
    There is growing evidence for the efficacy of music, specifically Mozart’s Sonata for Two Pianos in D Major (K448), at reducing ictal and interictal epileptiform activity. Nonetheless, little is known about the mechanism underlying this beneficial “Mozart K448 effect” for persons with epilepsy. Here, we measured the influence that K448 had on intracranial interictal epileptiform discharges (IEDs) in sixteen subjects undergoing intracranial monitoring for refractory focal epilepsy. We found reduced IEDs during the original version of K448 after at least 30 s of exposure. Nonsignificant IED rate reductions were observed in all brain regions apart from the bilateral frontal cortices, where we also observed increased frontal theta power during transitions from prolonged musical segments. All other presented musical stimuli were associated with nonsignificant IED alterations. These results suggest that the “Mozart K448 effect” is dependent on the duration of exposure and may preferentially modulate activity in frontal emotional networks, providing insight into the mechanism underlying this response. Our findings encourage the continued evaluation of Mozart’s K448 as a noninvasive, non-pharmacological intervention for refractory epilepsy.

    Utilizing Computational Music Analysis and AI for Enhanced Music Composition: Exploring Pre- and Post-Analysis

    Get PDF
    This research paper investigates the transformative potential of computational music analysis and artificial intelligence (AI) in advancing the field of music composition. Specifically, it explores the synergistic roles of pre-analysis and post-analysis techniques in leveraging AI-driven tools to enhance the creative process and quality of musical compositions. The study encompasses a historical overview of music composition, the evolution of computational music analysis, and contemporary AI applications. It delves into pre-analysis, focusing on its role in informing composition, and post-analysis, which evaluates and augments compositions. The paper underscores the significance of these technologies in fostering creativity while addressing challenges and ethical considerations. Through case studies, evaluations, and discussions, this research offers insights into the profound impact of computational music analysis and AI on music composition, paving the way for innovative and inclusive musical expressions.

    Measuring Expressive Music Performances: a Performance Science Model using Symbolic Approximation

    Get PDF
    Music Performance Science (MPS), sometimes termed systematic musicology in Northern Europe, is concerned with designing, testing and applying quantitative measurements to music performances. It has applications in art musics, jazz and other genres. It is least concerned with aesthetic judgements or with ontological considerations of artworks that stand alone from their instantiations in performances. Musicians deliver expressive performances by manipulating multiple, simultaneous variables including, but not limited to: tempo, acceleration and deceleration, dynamics, rates of change of dynamic levels, intonation and articulation. There are significant complexities when handling multivariate music datasets of substantial scale. A critical issue in analyzing any type of large dataset is the increased likelihood of detecting meaningless relationships as more dimensions are included. One possible choice is to create algorithms that address both volume and complexity. Another, and the approach chosen here, is to apply techniques that reduce both the dimensionality and numerosity of the music datasets while assuring the statistical significance of results. This dissertation describes a flexible computational model, based on symbolic approximation of time series, that can extract time-related characteristics of music performances to generate performance fingerprints (dissimilarities from an ‘average performance’) to be used for comparative purposes. The model is applied to recordings of Arnold Schoenberg’s Phantasy for Violin with Piano Accompaniment, Opus 47 (1949), having initially been validated on Chopin Mazurkas. The results are subsequently used to test hypotheses about evolution in performance styles of the Phantasy since its composition. It is hoped that further research will examine other works and types of music in order to improve this model and make it useful to other music researchers. In addition to its benefits for performance analysis, it is suggested that the model has clear applications at least in music fraud detection, Music Information Retrieval (MIR) and in pedagogical applications for music education.
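    A compact sketch of the kind of symbolic approximation described here: a hypothetical bar-wise tempo curve is z-normalized, reduced by piecewise aggregate approximation, and discretized into a short letter string (SAX), after which the distance to an 'average performance' string acts as a crude fingerprint. Parameters and data are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Symbolic Aggregate approXimation of a 1-D performance time series.

    The series (e.g. a bar-wise tempo curve) is z-normalized, reduced by
    piecewise aggregate approximation (PAA) to `n_segments` mean values,
    and each mean is mapped to a letter via equiprobable breakpoints of
    the standard normal distribution.
    """
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    breakpoints = np.array([-0.67, 0.0, 0.67])  # breakpoints for a 4-letter alphabet
    return "".join(alphabet[i] for i in np.digitize(paa, breakpoints))

def fingerprint_distance(a, b):
    """Hamming distance between two SAX strings as a crude dissimilarity."""
    return sum(ca != cb for ca, cb in zip(a, b))

# One performance compared against an 'average' tempo curve (values in BPM)
average = sax([100, 102, 98, 95, 97, 104, 110, 108, 96, 92, 90, 94, 99, 101, 103, 100])
perf    = sax([ 98, 105, 96, 90, 95, 108, 118, 112, 94, 88, 85, 92, 100, 104, 107, 102])
print(average, perf, fingerprint_distance(average, perf))
```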

    Timbre-invariant Audio Features for Style Analysis of Classical Music

    Get PDF
    Copyright: (c) 2014 Christof Weiß et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

    Music Synchronization, Audio Matching, Pattern Detection, and User Interfaces for a Digital Music Library System

    Get PDF
    Over the last two decades, growing efforts to digitize our cultural heritage have been observed. Most of these digitization initiatives pursue one or both of the following goals: to conserve the documents, especially those threatened by decay, and to provide remote access on a grand scale. For music documents these trends are observable as well, and by now several digital music libraries are in existence. An important characteristic of these music libraries is an inherent multimodality resulting from the large variety of available digital music representations, such as scanned score, symbolic score, audio recordings, and videos. In addition, for each piece of music there exists not only one document of each type, but many. Considering and exploiting this multimodality and multiplicity, the DFG-funded digital library initiative PROBADO MUSIC aimed at developing a novel user-friendly interface for content-based retrieval, document access, navigation, and browsing in large music collections. The implementation of such a front end requires the multimodal linking and indexing of the music documents during preprocessing. As the considered music collections can be very large, an automated or at least semi-automated calculation of these structures is desirable. The field of music information retrieval (MIR) is particularly concerned with the development of suitable procedures, and it was the goal of PROBADO MUSIC to include existing and newly developed MIR techniques to realize the envisioned digital music library system. In this context, the present thesis discusses the following three MIR tasks: music synchronization, audio matching, and pattern detection. We identify particular issues in these fields and provide algorithmic solutions as well as prototypical implementations. In music synchronization, for each position in one representation of a piece of music the corresponding position in another representation is calculated. This thesis focuses on the task of aligning scanned score pages of orchestral music with audio recordings. Here, a previously unconsidered piece of information is the textual specification of transposing instruments provided in the score. Our evaluations show that neglecting such information can result in a measurable loss of synchronization accuracy. Therefore, we propose an OCR-based approach for detecting and interpreting the transposition information in orchestral scores. For a given audio snippet, audio matching methods automatically calculate all musically similar excerpts within a collection of audio recordings. In this context, subsequence dynamic time warping (SSDTW) is a well-established approach as it allows for local and global tempo variations between the query and the retrieved matches. Moving to real-life digital music libraries with larger audio collections, however, the quadratic runtime of SSDTW results in untenable response times. To improve the response time, this thesis introduces a novel index-based approach to SSDTW-based audio matching. We combine the idea of inverted file lists introduced by Kurth and MĂŒller (Efficient index-based audio matching, 2008) with the shingling techniques often used in the audio identification scenario. In pattern detection, all repeating patterns within one piece of music are determined. Usually, pattern detection operates on symbolic score documents and is often used in the context of computer-aided motivic analysis. Envisioned as a new feature of the PROBADO MUSIC system, this thesis proposes a string-based approach to pattern detection and a novel interactive front end for result visualization and analysis.
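    The quadratic-cost baseline that the index-based approach is meant to accelerate can be sketched as plain subsequence DTW over feature sequences (e.g., chroma vectors); the NumPy code below is an illustrative toy, not the thesis's implementation:

```python
import numpy as np

def subsequence_dtw_cost(query, database):
    """Subsequence DTW between a query feature sequence and a longer one.

    query, database: arrays of shape (n, d) and (m, d), e.g. chroma frames.
    Returns the accumulated cost matrix; its last row gives, for every
    database frame, the cost of the best match ending there, and the open
    first row lets the match start anywhere in the database. Runtime is
    O(n * m), which motivates the index-based speed-up discussed above.
    """
    C = np.linalg.norm(query[:, None, :] - database[None, :, :], axis=2)  # local cost
    n, m = C.shape
    D = np.full((n, m), np.inf)
    D[0, :] = C[0, :]                      # a match may start at any database frame
    for i in range(1, n):
        D[i, 0] = D[i - 1, 0] + C[i, 0]
        for j in range(1, m):
            D[i, j] = C[i, j] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D

rng = np.random.default_rng(0)
db = rng.random((200, 12))                    # e.g. 200 chroma frames
q = db[50:80] + 0.05 * rng.random((30, 12))   # a slightly perturbed excerpt as query
D = subsequence_dtw_cost(q, db)
print("best match ends near frame", int(np.argmin(D[-1])))
```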

    A Survey of Music Generation in the Context of Interaction

    Full text link
    In recent years, machine learning, and in particular generative adversarial networks (GANs) and attention-based neural networks (transformers), have been successfully used to compose and generate music, both melodies and polyphonic pieces. Current research focuses foremost on style replication (e.g., generating a Bach-style chorale) or style transfer (e.g., classical to jazz) based on large amounts of recorded or transcribed music, which in turn also allows for fairly straightforward "performance" evaluation. However, most of these models are not suitable for human-machine co-creation through live interaction, nor is it clear how such models and the resulting creations would be evaluated. This article presents a thorough review of music representation, feature analysis, heuristic algorithms, statistical and parametric modelling, and human and automatic evaluation measures, along with a discussion of which approaches and models seem most suitable for live interaction.

    Proceedings of the 2015 WA Chapter of MSA Symposium on Music Performance and Analysis

    Get PDF
    This publication, entitled Proceedings of the 2015 WA Chapter MSA Symposium on Music Performance and Analysis, is a double-blind peer-reviewed conference proceedings published by the Western Australian Chapter of the Musicological Society of Australia, in conjunction with the Western Australian Academy of Performing Arts, Edith Cowan University, edited by Jonathan Paget, Victoria Rogers, and Nicholas Bannan. The original symposium was held at the University of Western Australia, School of Music, on 12 December 2015. With the advent of performer-scholars within Australian Universities, the intersections between analytical knowledge and performance are constantly being re-evaluated and reinvented. This collection of papers presents several strands of analytical discourse, including: (1) the analysis of music recordings, particularly in terms of historical performance practices; (2) reinventions of the 'page-to-stage' paradigm, employing new analytical methods; (3) analytical knowledge applied to pedagogy, particularly concerning improvisation; and (4) so-called 'practice-led' research.

    Discovering simple rules in complex data: A meta-learning algorithm and some surprising musical discoveries

    Get PDF
    This article presents a new rule discovery algorithm named PLCG that can find simple, robust partial rule models (sets of classification rules) in complex data where it is difficult or impossible to find models that completely account for all the phenomena of interest. Technically speaking, PLCG is an ensemble learning method that learns multiple models via some standard rule learning algorithm, and then combines these into one final rule set via clustering, generalization, and heuristic rule selection. The algorithm was developed in the context of an interdisciplinary research project that aims at discovering fundamental principles of expressive music performance from large amounts of complex real-world data (specifically, measurements of actual performances by concert pianists). It will be shown that PLCG succeeds in finding some surprisingly simple and robust performance principles, some of which represent truly novel and musically meaningful discoveries. A set of more systematic experiments shows that PLCG usually discovers significantly simpler theories than more direct approaches to rule learning (including the state-of-the-art learning algorithm Ripper), while striking a compromise between coverage and precision. The experiments also show how easy it is to use PLCG as a meta-learning strategy to explore different parts of the space of rule models.
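    The overall meta-learning loop can be illustrated with a deliberately simplified sketch: rules are learned on bootstrap resamples by a toy stand-in for a standard rule learner, and the clustering and generalization steps are collapsed into grouping identical rules and keeping those that recur often and stay precise on the full data. Everything here (function names, thresholds, the stump learner) is an assumption for illustration, not the published PLCG algorithm:

```python
import numpy as np
from collections import defaultdict

def stump_rules(X, y):
    """Toy stand-in for a standard rule learner: one threshold rule per feature,
    kept only if the covered examples are mostly positive."""
    rules = []
    for f in range(X.shape[1]):
        thr = round(float(np.median(X[:, f])), 1)
        for op in (">=", "<"):
            covered = X[:, f] >= thr if op == ">=" else X[:, f] < thr
            if covered.any() and y[covered].mean() > 0.5:
                rules.append((f, op, thr))
    return rules

def plcg_like(X, y, n_models=20, min_votes=5, min_precision=0.7, seed=0):
    """Learn rule sets on bootstrap resamples, group identical rules, and keep
    rules that recur often and remain precise on the full data set."""
    rng = np.random.default_rng(seed)
    votes = defaultdict(int)
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))          # bootstrap resample
        for rule in stump_rules(X[idx], y[idx]):
            votes[rule] += 1
    selected = []
    for (f, op, thr), v in votes.items():
        covered = X[:, f] >= thr if op == ">=" else X[:, f] < thr
        precision = float(y[covered].mean()) if covered.any() else 0.0
        if v >= min_votes and precision >= min_precision:
            selected.append(((f, op, thr), round(precision, 2), v))
    return sorted(selected, key=lambda r: -r[2])

# Toy data with one simple hidden rule: class 1 iff feature 0 >= 0.5
X = np.random.default_rng(1).random((200, 3))
y = (X[:, 0] >= 0.5).astype(int)
print(plcg_like(X, y))   # should recover a rule close to (0, '>=', 0.5)
```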
