21 research outputs found

    Analysis of ensemble expressive performance in string quartets: a statistical and machine learning approach

    No full text
    Computational approaches for modeling expressive music performance have produced systems that emulate human expression, but few steps have been taken in the domain of ensemble performance. Polyphonic expression and inter-dependence among voices are intrinsic features of ensemble performance and need to be incorporated at the very core of the models. For this reason, we proposed a novel methodology for building computational models of ensemble expressive performance by introducing inter-voice contextual attributes (extracted from ensemble scores) and building separate models of each individual performer in the ensemble. We focused our study on string quartets and recorded a corpus of performances in both ensemble and solo conditions, employing multi-track recording and bowing motion acquisition techniques. From the acquired data we extracted bowed-instrument-specific expression parameters performed by each musician. As a preliminary step, we investigated the difference between solo and ensemble performance from a statistical point of view and showed that the introduced inter-voice contextual attributes and extracted expression parameters are statistically sound. In a further step, we built models of expression by training machine-learning algorithms on the collected data. As a result, the introduced inter-voice contextual attributes improved the prediction of the expression parameters. Furthermore, results on attribute selection show that the models trained on ensemble recordings took more advantage of inter-voice contextual attributes than those trained on solo recordings. The obtained results show that the introduced methodology can have applications in the analysis of collaboration among musicians.
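The core idea above, inter-voice contextual attributes extracted from an aligned ensemble score, can be sketched as follows. The attribute names used here (harmonic interval to another voice, directional agreement with it) are illustrative assumptions, not the study's exact feature set.

```python
# Sketch: deriving hypothetical inter-voice contextual attributes from a
# time-aligned ensemble score. Attribute names are illustrative assumptions.

def inter_voice_attributes(own, others):
    """For each note index i in `own` (a list of MIDI pitches), compute
    attributes describing its relation to the other voices at index i."""
    feats = []
    for i, p in enumerate(own):
        row = {}
        for name, voice in others.items():
            q = voice[i]
            row[f"interval_to_{name}"] = p - q  # harmonic interval (semitones)
            if i > 0:
                # True when both voices move in the same melodic direction
                same_dir = (p - own[i - 1]) * (q - voice[i - 1]) > 0
                row[f"same_direction_as_{name}"] = same_dir
        feats.append(row)
    return feats

# Toy aligned fragment: one pitch per beat for violin 1 and cello
violin1 = [76, 77, 79, 77]
cello   = [48, 45, 43, 45]
feats = inter_voice_attributes(violin1, {"cello": cello})
print(feats[1])  # {'interval_to_cello': 32, 'same_direction_as_cello': False}
```

In the methodology described above, rows like these would be joined with each performer's own-voice score attributes and used to train one expression model per musician.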

    Automatic phrase continuation from guitar and bass guitar melodies

    No full text
    A framework is proposed for generating interesting, musically similar variations of a given monophonic melody. The focus is on pop/rock guitar and bass guitar melodies with the aim of eventual extensions to other instruments and musical styles. It is demonstrated here how learning musical style from segmented audio data can be formulated as an unsupervised learning problem to generate a symbolic representation. A melody is first segmented into a sequence of notes using onset detection and pitch estimation. A set of hierarchical, coarse-to-fine symbolic representations of the melody is generated by clustering pitch values at multiple similarity thresholds. The variance ratio criterion is then used to select the appropriate clustering levels in the hierarchy. Note onsets are aligned with beats, considering the estimated meter of the melody, to create a sequence of symbols that represent the rhythm in terms of onsets/rests and the metrical locations of their occurrence. A joint representation based on the cross-product of the pitch cluster indices and metrical locations is used to train the prediction model, a variable-length Markov chain. The melodies generated by the model were evaluated through a questionnaire by a group of experts, and received an overall positive response.
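The prediction model named above, a variable-length Markov chain, can be sketched as a counts table over contexts of every length up to a maximum order, with prediction backing off from the longest matching context. The symbol alphabet and the back-off policy here are simplifying assumptions; the paper's symbols are joint pitch-cluster/metrical-location indices.

```python
# Minimal variable-length Markov chain over melody symbols (a sketch, not
# the paper's implementation). Training counts next-symbol frequencies for
# every context up to max_order; prediction backs off to shorter contexts.
from collections import defaultdict, Counter

class VLMC:
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> next-symbol counts

    def train(self, sequence):
        for i in range(len(sequence)):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                ctx = tuple(sequence[i - k:i])
                self.counts[ctx][sequence[i]] += 1

    def predict(self, history):
        # Back off: try the longest suffix of history seen during training.
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
        return None

model = VLMC(max_order=2)
model.train(list("abcabcabd"))
print(model.predict(list("ab")))  # 'c' ('c' follows 'ab' twice, 'd' once)
```

Sampling from the counts instead of taking the most frequent symbol would yield the kind of variation generation the abstract describes.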

    Investigating the relationship between expressivity and synchronization in ensemble performance: an exploratory study

    No full text
    Paper presented at the International Symposium on Performance Science (ISPS), held in Vienna, Austria, 28-31 August 2013. We present an exploratory study on ensemble expressive performance based on the analysis of string quartet recordings. We recorded a piece with three expressive intentions: mechanical, normal, and exaggerated. We made use of bowing gesture data (bow velocity and force) acquired through a motion tracking system to devise a precise score-performance alignment. Individual contact microphone audio signals allowed extraction of a set of audio descriptors for each musician and each note. We show how tempo and loudness on a macro-scale changed across expressive intentions and score sections. The score is also taken into account in the analysis by extracting contextual attributes for each note. We show that micro-deviations were affected by note contextual attributes, whereas the effect of expressive intention varied across sections. We find sections that exhibited a lower entrainment, where individual parts tended to be freer and presented more asynchronies. This work was supported by EU FET-Open SIEMPRE and by SIEMPRE-MAS4.

    Multidimensional analysis of interdependence in a string quartet

    No full text
    Paper presented at the International Symposium on Performance Science (ISPS), held in Vienna, Austria, 28-31 August 2013. In a musical ensemble such as a string quartet, the performers can influence each other's actions in several aspects of the performance simultaneously. Based on a set of recorded string quartet exercises, we carried out a quantitative analysis of ensemble interdependence in four distinct dimensions of the performance: dynamics, intonation, tempo, and timbre. We investigated the fluctuations of interdependence across these four dimensions, and in relation to the exercise being performed. Our findings suggest that, although certain differences can be observed between the four dimensions, the most influential factor on ensemble interdependence is the musical task, shaped by the underlying score. The work presented in this document has been partially supported by the EU-FP7 FET SIEMPRE project and an AGAUR research grant from the Generalitat de Catalunya.

    Computational analysis of solo versus ensemble performance in string quartets: intonation and dynamics

    No full text
    Paper presented at the joint conference comprising the 12th International Conference on Music Perception and Cognition (ICMPC) and the 8th Triennial Conference of the European Society, held in Thessaloniki, Greece, 23-28 July 2012. Musical ensembles, such as a string quartet, are a clear case of music performance where a joint interpretation of the score as well as joint action during the performance is required of the musicians. Of the several explicit and implicit ways through which the musicians cooperate, we focus on the acoustic result of the performance (in this case, dynamics and intonation) and attempt to detect evidence of interdependence among the musicians by performing a computational analysis. We have recorded a set of string quartet exercises whose challenge lies in achieving ensemble cohesion rather than in performing one's individual part correctly, which serve as a 'ground truth' dataset; these exercises were recorded by a professional string quartet in two experimental conditions: solo, where each musician performs their part alone without having access to the full quartet score, and ensemble, where the musicians perform the exercise together following a short rehearsal period. Through an automatic analysis and post-processing of audio and motion capture data, we extract a set of low-level features, on which we apply several numerical methods of interdependence (such as Pearson correlation, mutual information, Granger causality, and nonlinear coupling) in order to measure the interdependence (or lack thereof) among the musicians during the performance. Results show that, although dependent on the underlying musical score, this methodology can be used to automatically analyze the performance of a musical ensemble. The work presented in this document has been partially supported by the EU-FP7 FET SIEMPRE project and an AGAUR research grant from the Generalitat de Catalunya.
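The simplest of the interdependence measures named in this abstract, Pearson correlation between two performers' feature series (e.g. per-note loudness), can be sketched as below. The series are fabricated for illustration; the study's remaining measures (mutual information, Granger causality, nonlinear coupling) need more machinery than shown here.

```python
# Pearson correlation between two performers' per-note feature series:
# a toy illustration of one interdependence measure, not the study's code.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two voices shaping a crescendo together vs. an unrelated series
violin = [0.2, 0.3, 0.5, 0.6, 0.8, 0.9]
viola  = [0.25, 0.35, 0.45, 0.65, 0.75, 0.95]
noise  = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]
print(pearson(violin, viola))  # close to 1: strong interdependence
print(pearson(violin, noise))  # near zero: little or none
```

A high coefficient between two parts' dynamics, as in the first pair, is the kind of evidence of interdependence the analysis looks for.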

    Timing synchronization in string quartet performance: a preliminary study

    No full text
    This work presents a preliminary study of timing synchronization phenomena in string quartet performance. Accurate timing information extracted from real recordings is used to compare timing deviations in solo and ensemble performance when executing a simple musical passage. Multi-modal data is acquired from real performance and processed towards obtaining note-level segmentation of recorded performances. From such segmentation, a series of timing deviation analyses are carried out at two different temporal levels, focusing on the exploration of significant differences between solo and ensemble performances. This paper briefly introduces, via an initial exploratory study, the experimental framework on which further, more complete analyses are to be carried out with the aim of observing and describing certain synchronization phenomena taking place in ensemble music making.
    Papiotis, Panagioti
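The timing-deviation comparison described above can be sketched as follows. The onset times are fabricated, and the solo/ensemble contrast shown is purely illustrative; the real study derives onsets from multi-modal note-level segmentation.

```python
# Sketch of a note-level timing-deviation comparison between conditions.
# Deviation = performed onset minus nominal (metronomic) onset; the spread
# of deviations is then compared across conditions. All times are made up.
import statistics

def deviations(performed, nominal):
    return [p - n for p, n in zip(performed, nominal)]

nominal  = [0.0, 0.5, 1.0, 1.5, 2.0]        # score onsets at a fixed tempo (s)
solo     = [0.00, 0.53, 0.98, 1.56, 2.04]   # hypothetical solo take
ensemble = [0.00, 0.51, 1.01, 1.52, 2.01]   # hypothetical ensemble take

spread_solo = statistics.stdev(deviations(solo, nominal))
spread_ens  = statistics.stdev(deviations(ensemble, nominal))
print(spread_solo > spread_ens)  # True in this toy example
```

Comparing such spreads per condition, at more than one temporal level, is the shape of the analysis the abstract outlines.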

    The Sense of ensemble: a machine learning approach to expressive performance modelling in string quartets

    No full text
    Computational approaches for modelling expressive music performance have produced systems that emulate musical expression, but few steps have been taken in the domain of ensemble performance. In this paper, we propose a novel method for building computational models of ensemble expressive performance and show how this method can be applied for deriving new insights about collaboration among musicians. In order to address the problem of inter-dependence among musicians we propose the introduction of inter-voice contextual attributes. We evaluate the method on data extracted from multi-modal recordings of string quartet performances in two different conditions: solo and ensemble. We used machine-learning algorithms to produce computational models for predicting intensity, timing deviations, vibrato extent, and bowing speed of each note. As a result, the introduced inter-voice contextual attributes generally improved the prediction of the expressive parameters. Furthermore, results on attribute selection show that the models trained on ensemble recordings took more advantage of inter-voice contextual attributes than those trained on solo recordings. This work was partially supported by the EU FP7 FET-Open SIEMPRE Project (FP7-ICT-2009-C-250026), by the Spanish TIN project DRIMS (TIN2009-14274-C02-01), and by the Catalan Research Funding Agency AGAUR.