8 research outputs found
Machine Learning of Musical Gestures: Principles and Review
We present an overview of machine learning (ML) techniques
and their application in interactive music and new
digital instrument design. We first provide the non-specialist
reader with an introduction to two ML tasks, classification and
regression, that are particularly relevant for gestural interaction.
We then present a review of the literature in current
NIME research that uses ML in musical gesture analysis
and gestural sound control. We describe the ways in which
machine learning is useful for creating expressive musical interaction,
and in turn why live music performance presents
a pertinent and challenging use case for machine learning.
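The two ML tasks named in the abstract can be illustrated with a minimal sketch. The data, feature names, and gesture labels below are invented for illustration; a nearest-neighbour classifier stands in for the many classification methods the review covers.

```python
import math

# Hypothetical training data: each gesture is a small feature vector
# (here, mean velocity and peak acceleration) with a class label.
train = [
    ((0.2, 0.1), "tap"),
    ((0.3, 0.2), "tap"),
    ((0.8, 0.9), "swipe"),
    ((0.9, 0.7), "swipe"),
]

def classify(features, k=3):
    """k-nearest-neighbour classification of a gesture feature vector."""
    dists = sorted((math.dist(features, x), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(classify((0.85, 0.8)))  # → swipe
```

Regression differs only in that the target is a continuous value (e.g. a synthesis parameter) rather than a discrete gesture label.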
Musical Applications of Signal Processing: Synthesis and Prospect
This article aims at providing a synthesis of the musical applications of digital signal processing, of related research
issues, and of future directions that emerge from recent works in that field. After introducing preliminary notions related
to the music technical system and to the analysis of different digital representations of music information, it focuses on
three main function types: audio synthesis and processing, sound spatialization, and audio indexing and access technologies.
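The first function type the article names, audio synthesis, can be sketched in a few lines. This is a generic additive-synthesis illustration, not a method from the article; the sample rate and partial amplitudes are assumed values.

```python
import math

SR = 8000  # sample rate in Hz (assumed for illustration)

def tone(freq, seconds, partials=((1, 1.0), (2, 0.3))):
    """Additive synthesis: sum of (harmonic_number, amplitude) partials."""
    n = int(SR * seconds)
    return [
        sum(a * math.sin(2 * math.pi * freq * h * t / SR)
            for h, a in partials)
        for t in range(n)
    ]

samples = tone(440.0, 0.01)  # 10 ms of a 440 Hz tone with two harmonics
print(len(samples))  # → 80
```

Sound spatialization and indexing build on the same digital representation: a stream of samples that can be filtered, panned, or analysed for features.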
Effort in gestural interactions with imaginary objects in Hindustani Dhrupad vocal music
Physical effort has often been regarded as a key factor of expressivity in music performance. Nevertheless, systematic experimental approaches to the subject have been rare. In North Indian classical (Hindustani) vocal music, singers often engage with melodic ideas during improvisation by manipulating intangible, imaginary objects with their hands, such as through stretching, pulling, pushing, throwing etc. This observation suggests that some patterns of change in acoustic features allude to interactions afforded by the physical properties of real objects. The present study reports on the exploration of the relationships between movement and sound by accounting for the physical effort that such interactions require in the Dhrupad genre of Hindustani vocal improvisation.
The work follows a mixed methodological approach, combining qualitative and quantitative methods to analyse interviews, audio-visual material and movement data. Findings indicate that despite the flexibility in the way a Dhrupad vocalist might use his/her hands while singing, there is a certain degree of consistency by which performers associate effort levels with melody and types of gestural interactions with imaginary objects. However, different schemes of cross-modal associations are revealed for the vocalists analysed, which depend on the pitch space organisation of each particular melodic mode (rāga), the mechanical requirements of voice production, the macro-structure of the ālāp improvisation and morphological cross-domain analogies. Results further suggest that a good part of the variance in both physical effort and gesture type can be explained through a small set of sound and movement features. Based on the findings, I argue that gesturing in Dhrupad singing is guided by: the know-how of humans in interacting with and exerting effort on real objects of the environment, the movement–sound relationships transmitted from teacher to student in the oral music training context, and the mechanical demands of vocalisation.
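The claim that much of the effort variance is explained by a small feature set is the kind of result an ordinary least-squares fit would support. The sketch below uses a single invented predictor; the study's actual features and effort ratings are not reproduced here.

```python
# All numbers below are invented for illustration only.
xs = [0.1, 0.4, 0.5, 0.8, 1.0]   # hypothetical movement-feature values
ys = [1.2, 2.1, 2.6, 3.9, 4.4]   # hypothetical effort ratings

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Ordinary least squares for a single predictor.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(round(slope, 2), round(intercept, 2))  # → 3.72 0.76
```

With several predictors the same idea generalises to multiple regression, where the explained-variance share is reported as R².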
Technological Support for Highland Piping Tuition and Practice
This thesis presents a complete hardware and software system to support the
learning process associated with the Great Highland Bagpipe (GHB). A digital
bagpipe chanter interface has been developed to enable accurate measurement
of the player's finger movements and bag pressure technique, allowing detailed
performance data to be captured and analysed using the software components
of the system.
To address the challenge of learning the diverse array of ornamentation techniques
that are a central aspect of Highland piping, a novel algorithm is presented
for the recognition and evaluation of a wide range of embellishments
performed using the digital chanter. This allows feedback on the player's execution
of the ornaments to be generated. The ornament detection facility is
also shown to be effective for automatic transcription of bagpipe notation, and
for performance scoring against a ground truth recording in a game interface,
Bagpipe Hero.
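The thesis algorithm itself is not reproduced here, but one plausible building block of ornament recognition can be sketched: flagging very short notes that immediately precede a longer principal note. The threshold and note data below are assumptions for illustration only.

```python
GRACE_MAX = 0.06  # seconds; assumed duration threshold for a grace note

def find_grace_notes(notes):
    """notes: list of (pitch, duration_seconds) events.
    Returns indices of short notes followed by a longer principal note."""
    hits = []
    for i in range(len(notes) - 1):
        if notes[i][1] < GRACE_MAX <= notes[i + 1][1]:
            hits.append(i)
    return hits

# Invented event stream from a chanter interface.
melody = [("G", 0.30), ("A", 0.04), ("B", 0.25), ("G", 0.05), ("A", 0.03)]
print(find_grace_notes(melody))  # → [1]
```

Real Highland ornaments (grips, taorluaths, birls) are multi-note patterns, so a practical recogniser would match sequences of such short notes against known templates rather than single events.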
A graphical user interface (GUI) program provides facilities for visualisation,
playback and comparison of multiple performances, and for automatic detection
and description of piping-specific fingering and ornamentation errors. The development
of the GUI was informed by feedback from expert pipers and a small-scale
user study with students. The complete system was tested in a series of studies
examining both lesson and solo practice situations. A detailed analysis of these
sessions was conducted, and a range of usage patterns was observed in terms of
how the system contributed to the different learning environments.
This work is an example of a digital interface designed to connect to a long
established and highly formalised musical style. Through careful consideration
of the specific challenges faced in teaching and learning the bagpipes, this thesis
demonstrates how digital technologies can provide a meaningful contribution to
even the most conservative cultural traditions.
This work was funded by the Engineering and Physical Sciences Research Council
(EPSRC) as part of the Doctoral Training Centre in Media and Arts Technology
at Queen Mary University of London (ref: EP/G03723X/1).
Gesture Analysis of Violin Bow Strokes
We developed an "augmented violin", i.e. an acoustic instrument with added gesture-capture capabilities to control electronic processes. We report here the gesture analysis we performed on three different bow strokes, détaché, martelé and spiccato, using this augmented violin. Different features based on velocity and acceleration were considered. A linear discriminant analysis was performed to estimate the minimum number of pertinent features necessary to model these bow stroke classes. We found that the maximum and minimum accelerations of a given stroke were efficient for parameterizing the different bow stroke types, as well as differences in playing dynamics. Recognition rates were estimated using a kNN method with various training sets. We finally discuss how bow stroke recognition makes it possible to relate the gesture data to music notation, while a continuous bow stroke parameterization can be related to continuous sound characteristics.
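The abstract's pipeline, extracting maximum and minimum acceleration from a bow-velocity signal and classifying with nearest neighbours, can be sketched as follows. The sampling interval, reference feature values, and velocity samples are invented; a 1-nearest-neighbour rule stands in for the kNN and LDA analysis of the paper.

```python
def stroke_features(velocity, dt=0.01):
    """Finite-difference acceleration; return (max, min) over the stroke."""
    accel = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
    return (max(accel), min(accel))

# Invented reference strokes: ((max_accel, min_accel), label).
references = [
    ((40.0, -45.0), "martele"),   # sharp attack, sharp stop
    ((12.0, -10.0), "detache"),   # smoother bow changes
]

def classify(velocity):
    """Label a stroke by its nearest reference in feature space."""
    f = stroke_features(velocity)
    return min(references,
               key=lambda r: (r[0][0] - f[0]) ** 2 + (r[0][1] - f[1]) ** 2)[1]

stroke = [0.0, 0.35, 0.5, 0.5, 0.45, 0.1, 0.0]  # invented velocity samples
print(classify(stroke))  # → martele
```

The same two features also track playing dynamics, since louder strokes produce larger acceleration extremes, which is why the paper finds them efficient for both tasks.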
Gesture Analysis of Violin Bow Strokes (abstract)