36 research outputs found
No Longer ‘Somewhat Arbitrary’: Calculating Salience in GTTM-Style Reduction
Following earlier work on the formalisation of Lerdahl and Jackendoff’s Generative Theory of Tonal Music (GTTM), we present a measure of the salience of events in a reduction tree, based on calculations relating the duration of time-spans to the structure of the tree. This allows for the proper graphical rendition of a tree on the basis of its time-spans and topology alone. It also has the potential to contribute to the development of sophisticated digital library systems able to operate on music in a musically intelligent manner. We present results of an empirical study of branch heights in the figures in GTTM which shows that salience calculated according to our proposals correlates better with branch height than alternatives. We also discuss the possible musical significance of this measure of salience. Finally, we compare some results using salience in the calculation of melodic similarity on the basis of reduction trees to earlier results using time-span. While the correlation between these measures and human ratings of the similarity of the melodies is poor, using salience shows a definite improvement. Overall, the results suggest that the proposed definition of salience gives a potentially useful measure of an event’s importance in a musical structure.
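The abstract does not state the salience formula itself, so the following Python sketch uses a hypothetical stand-in: each event's salience is taken to be the duration of the largest time-span it heads in the reduction tree. The `Node` structure and this definition are illustrative assumptions, not the paper's actual measure.

```python
# Illustrative sketch only: salience here is assumed to be the duration of
# the largest time-span an event heads in the reduction tree. This is a
# hypothetical stand-in for the measure proposed in the paper.

class Node:
    def __init__(self, span, left=None, right=None, head=None):
        self.span = span             # (start, end) of the time-span, in beats
        self.left, self.right = left, right
        self.head = head             # label of the event that heads this span

def salience(node, table=None):
    """Assign each event the duration of the largest time-span it heads."""
    if table is None:
        table = {}
    duration = node.span[1] - node.span[0]
    table[node.head] = max(table.get(node.head, 0.0), duration)
    for child in (node.left, node.right):
        if child is not None:
            salience(child, table)
    return table
```

Under this assumed definition, the event heading the root span receives the largest salience, matching the intuition that events surviving higher into the reduction are more structurally important.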
Methods and Technologies for the Analysis and Interactive Use of Body Movements in Instrumental Music Performance
List of related publications: http://www.federicovisi.com/publications/
A constantly growing corpus of interdisciplinary studies supports the idea that music is a complex multimodal medium that is experienced not only by means of sounds but also through body movement. From this perspective, musical instruments can be seen as technological objects coupled with a repertoire of performance gestures. This repertoire is part of an ecological knowledge shared by musicians and listeners alike. It is part of the engine that guides musical experience and has a considerable expressive potential.
This thesis explores technical and conceptual issues related to the analysis and creative use of music-related body movements in instrumental music performance. The complexity of this subject required an interdisciplinary approach, which includes the review of multiple theoretical accounts, quantitative and qualitative analysis of data collected in motion capture laboratories, the development and implementation of technologies for the interpretation and interactive use of motion data, and the creation of short musical pieces that actively employ the movement of the performers as an expressive musical feature.
The theoretical framework is informed by embodied and enactive accounts of music cognition as well as by systematic studies of music-related movement and expressive music performance.
The assumption that the movements of a musician are part of a shared knowledge is empirically explored through an experiment aimed at analysing the motion capture data of a violinist performing a selection of short musical excerpts. A group of subjects with no prior experience playing the violin is then asked to mime a performance following the audio excerpts recorded by the violinist. Motion data is recorded, analysed, and compared with the expert’s data. This is done both quantitatively through data analysis
and qualitatively, by relating the motion data to other high-level features and structures of the musical excerpts.
Solutions to issues regarding capturing and storing movement data and its use in real-time scenarios are proposed. For the interactive use of motion-sensing technologies in music performance, various wearable sensors have been employed, along with different approaches for mapping control data to sound synthesis and signal processing parameters. In particular, novel approaches for the extraction of meaningful features from raw sensor data and the use of machine learning techniques for mapping movement to live electronics are described.
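As an illustration of the kind of mapping described above (machine learning from motion data to sound parameters), here is a minimal sketch using a k-nearest-neighbour regressor over example gestures. The feature vectors, the parameter names (`cutoff_hz`, `grain_density`) and the choice of k-NN are assumptions for illustration only; the thesis's own models and features are not specified in this summary.

```python
# Hypothetical sketch of a machine-learned motion-to-sound mapping.
# A k-nearest-neighbour regressor maps a raw sensor feature vector to
# synthesis parameters; the actual techniques used in the thesis differ.

import math

def knn_map(train, query, k=2):
    """Average the synth parameters of the k closest training gestures."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    n_params = len(nearest[0][1])
    return [sum(ex[1][i] for ex in nearest) / k for i in range(n_params)]

# Example pairs: (accelerometer features) -> (cutoff_hz, grain_density)
training_set = [
    ((0.0, 0.1), (200.0, 1.0)),    # slow gesture -> dark, sparse sound
    ((1.0, 0.9), (4000.0, 20.0)),  # fast gesture -> bright, dense sound
]
```

A new gesture is then mapped by interpolating between its nearest training examples, which gives continuous control without hand-writing explicit mapping rules.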
To complete the framework, an essential element of this research project is the composition and performance of études that explore the creative use of body movement in instrumental music from a Practice-as-Research perspective. This works as a test bed for the proposed concepts and techniques. Mapping concepts and technologies are challenged in a scenario constrained by the use of musical instruments, and different mapping approaches are implemented and compared. In addition, techniques for notating movement in the score, and the impact of interactive motion sensor systems on instrumental music practice from the performer’s perspective, are discussed. Finally, the chapter concluding the part of the thesis dedicated to practical implementations describes a novel method for mapping movement data to sound synthesis. This technique is based on the analysis of multimodal motion data collected from multiple subjects, and its design draws from the theoretical, analytical, and practical works described throughout the dissertation.
Overall, the diverse parts and approaches that constitute this thesis work in synergy, contributing from multiple angles to the ongoing discourses on the study of musical gestures and the design of interactive music systems.
Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis
Music recordings most often consist of multiple instrument signals, which
overlap in time and frequency. In the field of Music Information Retrieval (MIR), existing algorithms for the automatic transcription and analysis of music recordings aim to extract semantic information from mixed audio signals. In recent years, it has frequently been observed that algorithm performance is limited by signal interference and the resulting loss of information. One common approach to this problem is to first apply source separation algorithms to isolate the individual instrument signals before analyzing them; however, the performance of source separation algorithms depends strongly on the number of instruments and the amount of spectral overlap. In this thesis, isolated instrumental tracks are analyzed in order to circumvent the challenges of source separation. The focus is on the development of instrument-centered signal processing algorithms for music transcription, musical analysis, and sound synthesis. The electric bass guitar is chosen as the example instrument: its sound production principles are closely investigated and reflected in the algorithmic design.

In the first part of the thesis, an automatic transcription algorithm for electric bass guitar recordings is presented. The audio signal is interpreted as a sequence of sound events, described by various parameters. In addition to the conventional score-level parameters note onset, duration, loudness, and pitch, instrument-specific parameters such as the applied playing techniques and the geometric position on the fretboard are extracted. Evaluation experiments confirm that the proposed algorithm outperforms three state-of-the-art bass transcription algorithms on realistic bass guitar recordings. The estimation of the instrument-level parameters works with high accuracy, particularly for isolated note samples.

The second part of the thesis investigates whether analyzing the bassline of a music piece alone allows its genre to be classified automatically. Score-based audio features are proposed that quantify tonal, rhythmic, and structural properties of basslines. Based on a novel data set of 520 bassline transcriptions from 13 different music genres, three approaches to music genre classification were compared. A rule-based classification system achieved a mean class accuracy of 64.8 % using only features extracted from the bassline of a piece.

The re-synthesis of bass guitar recordings from the previously extracted note parameters is studied in the third part of the thesis. Based on the physical modeling of string instruments, a novel sound synthesis algorithm tailored to the electric bass guitar is presented. The algorithm mimics different aspects of the instrument’s sound production mechanism, such as string excitation, string damping, string-fret collision, and the influence of the electromagnetic pickup. Furthermore, a parametric audio coding approach is discussed that encodes and transmits bass guitar tracks at a significantly lower bit rate than conventional audio coding algorithms. The results of several listening tests confirm that a higher perceptual quality is achieved when the original bass guitar recordings are encoded and re-synthesized with the proposed parametric audio codec rather than encoded with conventional audio codecs at very low bit rate settings.
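The transcription part estimates string and fret position alongside pitch. The sketch below only enumerates the candidate positions a MIDI pitch maps to on a standard-tuned 4-string bass (E1 A1 D2 G2), i.e. the ambiguity such an estimator has to resolve; the thesis's actual estimation method is not reproduced here, and the 20-fret range is an assumption.

```python
# Candidate (string, fret) positions for a pitch on a 4-string bass in
# standard tuning. Most pitches map to several positions, which is why
# string/fret estimation is a non-trivial transcription parameter.

OPEN_STRINGS = {"E": 28, "A": 33, "D": 38, "G": 43}  # MIDI note numbers

def fret_candidates(midi_pitch, n_frets=20):
    """Return all (string, fret) pairs that produce the given MIDI pitch."""
    return [(s, midi_pitch - open_note)
            for s, open_note in OPEN_STRINGS.items()
            if 0 <= midi_pitch - open_note <= n_frets]
```

For example, MIDI note 33 (A1) can be played either as the open A string or at the 5th fret of the E string; a fretboard estimator must choose between such candidates using timbral cues.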
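The synthesis part of the thesis models string excitation, damping, string-fret collision and pickup behaviour. As a minimal illustration of the underlying physical-modeling idea, here is a basic Karplus-Strong plucked-string loop (a delay line with a lossy averaging filter); it is far simpler than the algorithm described in the thesis and is included only to show the principle.

```python
# Minimal Karplus-Strong plucked string: a noise burst circulates in a
# delay line whose length sets the pitch, while an averaging filter with
# a decay factor damps the higher harmonics over time.

import random

def karplus_strong(freq_hz, dur_s, sr=44100, decay=0.996):
    period = int(sr / freq_hz)                            # delay-line length
    line = [random.uniform(-1, 1) for _ in range(period)] # noise-burst pluck
    out = []
    for n in range(int(dur_s * sr)):
        cur = line[n % period]
        nxt = line[(n + 1) % period]
        line[n % period] = decay * 0.5 * (cur + nxt)      # lossy average
        out.append(cur)
    return out
```

A full bass guitar model would replace the noise burst with a physically motivated excitation, add fret-collision nonlinearities, and filter the output through a pickup model.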
Measuring Expressive Music Performances: a Performance Science Model using Symbolic Approximation
Music Performance Science (MPS), sometimes termed systematic musicology in Northern Europe, is concerned with designing, testing and applying quantitative measurements to music performances. It has applications in art musics, jazz and other genres. It is least concerned with aesthetic judgements or with ontological considerations of artworks that stand alone from their instantiations in performances. Musicians deliver expressive performances by manipulating multiple, simultaneous variables including, but not limited to: tempo, acceleration and deceleration, dynamics, rates of change of dynamic levels, intonation and articulation. There are significant complexities when handling multivariate music datasets of significant scale. A critical issue in analyzing any type of large dataset is that the likelihood of detecting meaningless relationships grows with the number of dimensions included. One possible choice is to create algorithms that address both volume and complexity. Another, and the approach chosen here, is to apply techniques that reduce both the dimensionality and numerosity of the music datasets while assuring the statistical significance of results. This dissertation describes a flexible computational model, based on symbolic approximation of time series, that can extract time-related characteristics of music performances to generate performance fingerprints (dissimilarities from an ‘average performance’) to be used for comparative purposes. The model is applied to recordings of Arnold Schoenberg’s Phantasy for Violin with Piano Accompaniment, Opus 47 (1949), having initially been validated on Chopin Mazurkas. The results are subsequently used to test hypotheses about the evolution of performance styles of the Phantasy since its composition. It is hoped that further research will examine other works and types of music in order to improve this model and make it useful to other music researchers.
In addition to its benefits for performance analysis, it is suggested that the model has clear applications at least in music fraud detection, Music Information Retrieval (MIR), and pedagogical settings in music education.
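The fingerprinting above rests on symbolic approximation of performance time series. The following sketch shows the standard SAX pipeline (z-normalisation, piecewise aggregate approximation, Gaussian breakpoints) on which such models are commonly based; the dissertation's exact variant, alphabet size and segment counts are not given in this abstract, so the choices below are illustrative.

```python
# Sketch of SAX (Symbolic Aggregate approXimation): a numeric series such
# as a tempo curve is z-normalised, averaged over equal segments (PAA),
# and each segment mean is mapped to a letter via equiprobable breakpoints.

import statistics

BREAKPOINTS = [-0.67, 0.0, 0.67]  # equiprobable regions, 4-letter alphabet

def sax(series, n_segments, alphabet="abcd"):
    mu, sd = statistics.mean(series), statistics.pstdev(series)
    z = [(x - mu) / sd for x in series] if sd > 0 else [0.0] * len(series)
    seg = len(z) // n_segments
    word = ""
    for i in range(n_segments):
        avg = sum(z[i * seg:(i + 1) * seg]) / seg   # PAA segment mean
        word += alphabet[sum(avg > b for b in BREAKPOINTS)]
    return word
```

Two performances can then be compared as short symbolic words rather than long numeric series, which is what makes the reduction in dimensionality and numerosity statistically tractable.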
Brain-Computer Music Interfacing: Designing Practical Systems for Creative Applications
Brain-computer music interfacing (BCMI) presents a novel approach to music making, as it requires only the brainwaves of a user to control musical parameters. This presents immediate benefits for users with motor disabilities that may otherwise prevent them from engaging in traditional musical activities such as composition, performance or collaboration with other musicians. BCMI systems with active control, where a user can make cognitive choices that are detected within brain signals, provide a platform for developing new approaches towards accomplishing these activities. BCMI systems that use passive control present an interesting alternative to active control, where control over music is accomplished by harnessing brainwave patterns that are associated with subconscious mental states. Recent developments in brainwave measuring technologies, in particular electroencephalography (EEG), have made brainwave interaction with computer systems more affordable and accessible, and the time is ripe for research into the potential such technologies can offer for creative applications for users of all abilities.
This thesis presents an account of BCMI development that investigates active, passive and hybrid (combined) methods of control, encompassing control over electronic music, acoustic instrumental music, multi-brain systems, and the combination of multiple brainwave control methods.
In practice there are many obstacles associated with detecting useful brainwave signals, in particular when scaling systems otherwise designed for medical studies for use outside of laboratory settings. Two key areas are addressed throughout this thesis. Firstly, improving the accuracy of meaningful brain signal detection in BCMI, and secondly, exploring the creativity available in user control through ways in which brainwaves can be mapped to musical features.
Six BCMIs are presented in this thesis, each with the objective of exploring a unique aspect of user control. Four of these systems are designed for live BCMI concert performance, one evaluates a proof-of-concept through end-user testing and one is designed as a musical composition tool.
The thesis begins by exploring the field of brainwave detection and control, and identifies the steady-state visually evoked potential (SSVEP) method of eliciting brainwave control as a suitable technique for use in BCMI. In an attempt to improve the signal accuracy of the SSVEP technique, a new modular hardware unit is presented that provides accurate SSVEP stimuli suitable for live music performance. Experimental data confirm the performance of the unit in tests across three different EEG hardware platforms. Results across 11 users indicate that a mean accuracy of 96% and an average response time of 3.88 seconds are attainable with the system. These results contribute to the development of the BCMI for Activating Memory, a multi-user system. Once a stable SSVEP platform is developed, control is extended through the integration of two further brainwave control techniques: affective (emotional) state detection and motor imagery response. To ascertain the suitability of the former, a pilot study confirms the accuracy of EEG in measuring affective states in response to music.
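SSVEP control works because the flicker frequency of the stimulus a user attends to dominates the EEG spectrum over the visual cortex. The sketch below classifies a signal by comparing spectral power at the candidate stimulus frequencies; it is a bare-bones stand-in for the principle, not the detection pipeline used in the thesis, and the sample rate and frequencies are illustrative assumptions.

```python
# Toy SSVEP detector: measure single-bin DFT power at each candidate
# stimulus frequency and pick the strongest one.

import math

def power_at(signal, freq_hz, sr):
    """Spectral power of the signal at a single frequency (one DFT bin)."""
    re = sum(x * math.cos(2 * math.pi * freq_hz * n / sr)
             for n, x in enumerate(signal))
    im = sum(x * -math.sin(2 * math.pi * freq_hz * n / sr)
             for n, x in enumerate(signal))
    return re * re + im * im

def detect_ssvep(signal, stimulus_freqs, sr=256):
    """Return the candidate stimulus frequency with the most power."""
    return max(stimulus_freqs, key=lambda f: power_at(signal, f, sr))
```

In a musical application, each flickering icon is assigned its own frequency, so the detected frequency selects the corresponding musical action.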
This thesis demonstrates how a range of brainwave detection methods can be used for creative control in musical applications. Video and audio excerpts of BCMI pieces are also included in the Appendices.
Wie wissenschaftlich muss Musiktheorie sein? Chancen und Herausforderungen musikalischer Korpusforschung
Corpus-based research has long occupied a prominent position in literary studies and linguistics. In musicology, by contrast, it has begun to gain importance only fairly recently. The reasons for this delayed acceptance are manifold. Among other things, they are rooted in a deep skepticism toward applying statistical-quantitative methods to music as an object of art. This article supports musicological corpus research by pointing out general problems inherent to traditional repertoire research (intuitive statistics, methodological non-transparency, and heuristics in judgment) as well as current corpus research (e.g., biased sampling, paucity of corpora, and lack of annotation standards). These problems are discussed in reference to prominent studies in the domains of harmony, counterpoint, melody, and rhythm/meter. The article concludes by making a case for the integration of quantitative approaches in music theory into the overarching framework of a ›mixed methods‹ paradigm.