
    The Speckled Cellist: Classification of Cello Bowing Styles using the Orient Specks

    Cello bowing techniques are classified by applying supervised machine learning methods to sensor data from two inertial sensors called the Orient specks – one worn on the playing wrist and the other attached to the frog of the bow. Twelve different bowing techniques were considered, including variants on a single string and across multiple strings. Results are presented for the classification of these twelve techniques when played singly, and in combination during improvisational play. The results demonstrated that even when limited to two sensors, classification accuracy in excess of 95% was obtained for the individual bowing styles, with the added advantage of a minimalist approach.
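    The abstract does not spell out the feature extraction or classifier used; purely as an illustration of this kind of two-sensor pipeline, the sketch below windows synthetic inertial data, computes simple per-channel statistics, and trains an off-the-shelf support vector machine. The window length, feature set, and classifier choice are assumptions, not the method reported in the thesis.

```python
# Hypothetical sketch of a two-IMU bowing-technique classifier.
# Window sizes, features and the classifier are assumptions, not the
# method used in the thesis; the data here is a synthetic stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def window_features(stream, win=128, hop=64):
    """Split an (n_samples, n_channels) sensor stream into windows and
    compute simple per-channel statistics (mean, std, min, max)."""
    feats = []
    for start in range(0, len(stream) - win + 1, hop):
        w = stream[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# Synthetic stand-in for wrist + bow-frog sensor streams (6 channels each).
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(12 * 2000, 12))      # 12 techniques, concatenated
y_raw = np.repeat(np.arange(12), 2000)        # one label per sample

X = window_features(X_raw)
y = y_raw[::64][:len(X)]                      # one label per window (aligned to the hop)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("window-level accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```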

    The composer as technologist: an investigation into compositional process

    This work presents an investigation into compositional process, undertaken where a study of musical gesture, certain areas of cognitive musicology, computer vision technologies and object-oriented programming provides the basis for the composer (the author) to assume the role of a technologist and acquire knowledge and skills to that end. In particular, it focuses on the application and development of a video gesture recognition heuristic for the compositional problems posed. The result is the creation of an interactive musical work with score for violin and electronics that supports the research findings. In addition, the investigative approach to developing technology to solve musical problems, exploring practical composition and aesthetic challenges, is detailed.
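    The abstract does not describe the heuristic itself; as a hedged illustration only, one very simple starting point for video-based gesture detection is frame differencing, sketched below with OpenCV. The webcam source and the threshold are arbitrary placeholders, not the system developed for the violin-and-electronics work.

```python
# Minimal frame-difference motion detector - an illustrative stand-in,
# not the gesture-recognition heuristic developed in the thesis.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # webcam; replace with a video file path
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                 # pixel-wise motion energy
    motion = float(np.mean(diff))                  # crude "amount of gesture"
    if motion > 10.0:                              # arbitrary threshold
        print("gesture activity:", motion)         # here one might trigger sound events
    prev = gray

cap.release()
```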

    Physical modelling meets machine learning: performing music with a virtual string ensemble

    This dissertation describes a new method of computer performance of bowed string instruments (violin, viola, cello) using physical simulations and intelligent feedback control. Computer synthesis of music performed by bowed string instruments is a challenging problem. Unlike instruments whose notes originate with a single discrete excitation (e.g., piano, guitar, drum), bowed string instruments are controlled with a continuous stream of excitations (i.e. the bow scraping against the string). Most existing synthesis methods utilize recorded audio samples, which perform quite well for single-excitation instruments but not continuous-excitation instruments. This work improves the realism of synthesis of violin, viola, and cello sound by generating audio through modelling the physical behaviour of the instruments. A string's wave equation is decomposed into 40 modes of vibration, which can be acted upon by three forms of external force: a bow scraping against the string, a left-hand finger pressing down, and/or a right-hand finger plucking. The vibration of each string exerts force against the instrument bridge; these forces are summed and convolved with the instrument body impulse response to create the final audio output. In addition, right-hand haptic output is created from the force of the bow against the string. Physical constants from ten real instruments (five violins, two violas, and three cellos) were measured and used in these simulations. The physical modelling was implemented in a high-performance library capable of simulating audio on a desktop computer one hundred times faster than real-time. The program also generates animated video of the instruments being performed. To perform music with the physical models, a virtual musician interprets the musical score and generates actions which are then fed into the physical model. The resulting audio and haptic signals are examined with a support vector machine, which adjusts the bow force in order to establish and maintain a good timbre. This intelligent feedback control is trained with human input, but after the initial training is completed the virtual musician performs autonomously. A PID controller is used to adjust the position of the left-hand finger to correct any flaws in the pitch. Some performance parameters (initial bow force, force correction, and lifting factors) require an initial value for each string and musical dynamic; these are calibrated automatically using the previously-trained support vector machines. The timbre judgements are retained after each performance and are used to pre-emptively adjust bowing parameters to avoid or mitigate problematic timbre for future performances of the same music. The system is capable of playing sheet music with approximately the same ability level as a human music student after two years of training. Due to the number of instruments measured and the generality of the machine learning, music can be performed with ensembles of up to ten stringed instruments, each with a distinct timbre. This provides a baseline for future work in computer control and expressive music performance of virtual bowed string instruments.
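    As a rough sketch of the modal idea described above (and not the dissertation's measured, high-performance implementation), each mode can be treated as a damped sinusoidal oscillator whose response to an external force is a convolution, with the summed bridge force then convolved with a body impulse response. All frequencies, damping values and the force signal below are placeholders.

```python
# Toy modal string synthesis: one damped oscillator per mode, driven by an
# external force signal, summed and convolved with a body impulse response.
# Mode frequencies, damping and the force signal are placeholder values,
# not the measured instrument constants used in the dissertation.
import numpy as np
from scipy.signal import fftconvolve

sr = 44100
dur = 2.0
n = int(sr * dur)
t = np.arange(n) / sr

f0 = 196.0                                  # open G string of a violin (placeholder)
n_modes = 40
force = np.random.default_rng(0).normal(scale=0.1, size=n)   # crude stand-in for bow excitation

bridge_force = np.zeros(n)
for k in range(1, n_modes + 1):
    fk = k * f0                             # ideal harmonic series (no stiffness term)
    damping = 2.0 + 0.5 * k                 # higher modes decay faster
    # Impulse response of one mode, truncated to the signal length.
    h = np.exp(-damping * t) * np.sin(2 * np.pi * fk * t)
    bridge_force += fftconvolve(force, h)[:n] / k

# Placeholder "body impulse response": a short decaying noise burst.
body_ir = np.random.default_rng(1).normal(size=2048) * np.exp(-np.arange(2048) / 300.0)
audio = fftconvolve(bridge_force, body_ir)[:n]
audio /= np.max(np.abs(audio))              # normalise to [-1, 1]
```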

    Methods and Technologies for the Analysis and Interactive Use of Body Movements in Instrumental Music Performance

    List of related publications: http://www.federicovisi.com/publications/
    A constantly growing corpus of interdisciplinary studies supports the idea that music is a complex multimodal medium that is experienced not only by means of sounds but also through body movement. From this perspective, musical instruments can be seen as technological objects coupled with a repertoire of performance gestures. This repertoire is part of an ecological knowledge shared by musicians and listeners alike. It is part of the engine that guides musical experience and has considerable expressive potential. This thesis explores technical and conceptual issues related to the analysis and creative use of music-related body movements in instrumental music performance. The complexity of this subject required an interdisciplinary approach, which includes the review of multiple theoretical accounts, quantitative and qualitative analysis of data collected in motion capture laboratories, the development and implementation of technologies for the interpretation and interactive use of motion data, and the creation of short musical pieces that actively employ the movement of the performers as an expressive musical feature. The theoretical framework is informed by embodied and enactive accounts of music cognition as well as by systematic studies of music-related movement and expressive music performance. The assumption that the movements of a musician are part of a shared knowledge is empirically explored through an experiment aimed at analysing the motion capture data of a violinist performing a selection of short musical excerpts. A group of subjects with no prior experience playing the violin is then asked to mime a performance following the audio excerpts recorded by the violinist. Motion data is recorded, analysed, and compared with the expert's data. This is done both quantitatively through data analysis as well as qualitatively by relating the motion data to other high-level features and structures of the musical excerpts. Solutions to issues regarding capturing and storing movement data and its use in real-time scenarios are proposed. For the interactive use of motion-sensing technologies in music performance, various wearable sensors have been employed, along with different approaches for mapping control data to sound synthesis and signal processing parameters. In particular, novel approaches for the extraction of meaningful features from raw sensor data and the use of machine learning techniques for mapping movement to live electronics are described. To complete the framework, an essential element of this research project is the composition and performance of études that explore the creative use of body movement in instrumental music from a Practice-as-Research perspective. This serves as a test bed for the proposed concepts and techniques. Mapping concepts and technologies are challenged in a scenario constrained by the use of musical instruments, and different mapping approaches are implemented and compared. In addition, techniques for notating movement in the score, and the impact of interactive motion sensor systems on instrumental music practice from the performer's perspective, are discussed. Finally, the chapter concluding the part of the thesis dedicated to practical implementations describes a novel method for mapping movement data to sound synthesis. This technique is based on the analysis of multimodal motion data collected from multiple subjects, and its design draws from the theoretical, analytical, and practical works described throughout the dissertation. Overall, the parts and the diverse approaches that constitute this thesis work in synergy, contributing from multiple angles to the ongoing discourses on the study of musical gestures and the design of interactive music systems.
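    One of the technical threads above is mapping features extracted from wearable-sensor data to sound-synthesis parameters with machine learning. The sketch below is a generic illustration of such a mapping layer: simple accelerometer features regressed onto two invented synthesis parameters. The features, targets, and model are assumptions for the sketch, not the mappings developed in the thesis.

```python
# Illustrative movement-to-sound mapping layer: regress synthesis parameters
# from simple wearable-sensor features. Features, targets and model are
# assumptions, not the thesis's actual mapping strategies.
import numpy as np
from sklearn.neural_network import MLPRegressor

def motion_features(acc):
    """acc: (n_frames, 3) accelerometer frames -> per-frame feature vector."""
    mag = np.linalg.norm(acc, axis=1, keepdims=True)            # overall intensity
    jerk = np.vstack([np.zeros((1, 3)), np.diff(acc, axis=0)])  # rate of change
    return np.hstack([acc, mag, np.linalg.norm(jerk, axis=1, keepdims=True)])

rng = np.random.default_rng(0)
acc = rng.normal(size=(5000, 3))                   # recorded sensor frames (synthetic here)
X = motion_features(acc)
# Example targets: filter cutoff and grain density, supplied during a training phase.
y = rng.uniform(size=(5000, 2))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
params = model.predict(motion_features(acc[:1]))   # live frame -> synthesis parameters
print("cutoff, grain density:", params[0])
```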

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, starting from a renewed understanding in neuroscience and cognitive science of multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories, research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies based on two EU ICT H2020 projects, "weDRAW" and "TELMI", on which I worked during my PhD.

    El sonido sin cuerpo y el sonido re-incorporado: una expansión del cuerpo sónico de los instrumentos (Disembodied sound and re-embodied sound: an expansion of the sonic body of instruments)

    The development of recording technologies, audio manipulation techniques, and sound synthesis opened new sonic horizons. At the same time, realising or reproducing these new sounds creates issues of disembodiment and/or a total lack of physical-gesture-to-audio relationship. Understanding the impact these issues have on our perception and comprehension of music becomes central in the light of new creative practices, in which developing hardware and software has become part of the creative process. These creative practices force us to rethink the role of performance and the medium (musical instruments) in the essence of the musical work. Building upon previous research, a set of possible configurations for hyperinstrument design is presented in this article with the aim of introducing novel ways of thinking about the relationship between the physical body of the instrument (resonant body), the sonic body (the acoustic phenomena unfolding in a physical space), and performance.

    Discriminating music performers by timbre: On the relation between instrumental gesture, tone quality and perception in classical cello performance

    Classical music performers use instruments to transform the symbolic notation of the score into sound which is ultimately perceived by a listener. For acoustic instruments, the timbre of the resulting sound is assumed to be strongly linked to the physical and acoustical properties of the instrument itself. However, rather little is known about how much influence the player has over the timbre of the sound: is it possible to discriminate music performers by timbre? This thesis explores player-dependent aspects of timbre, serving as an individual means of musical expression. With a research scope narrowed to analysis of solo cello recordings, the differences in tone quality of six performers who played the same musical excerpts on the same cello are investigated from three different perspectives: perceptual, acoustical and gestural. In order to understand how the physical actions that a performer exerts on an instrument affect spectro-temporal features of the sound produced, which then can be perceived as the player's unique tone quality, a series of experiments is conducted, starting with the creation of dedicated multi-modal cello recordings extended by performance gesture information (bowing control parameters). In the first study, selected tone samples of six cellists are perceptually evaluated across various musical contexts via timbre dissimilarity and verbal attribute ratings. The spectro-temporal analysis follows in the second experiment, with the aim of identifying acoustic features which best describe the varying timbral characteristics of the players. Finally, in the third study, individual combinations of bowing controls are examined in search of bowing patterns which might characterise each cellist regardless of the music being performed. The results show that the different players can be discriminated perceptually, by timbre, and that this perceptual discrimination can be projected back through the acoustical and gestural domains. By extending current understanding of human-instrument dependencies for qualitative tone production, this research may have further applications in computer-aided musical training and performer-informed instrumental sound synthesis. This work was supported by a UK EPSRC DTA studentship (EP/P505054/1) and the EPSRC-funded OMRAS2 project (EP/E017614/1).
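    As an illustration of the acoustical and machine-learning side of such a study (not the thesis's actual feature set or analysis), the sketch below computes a few frame-wise spectral descriptors from tone samples and cross-validates a classifier that tries to identify the player from them.

```python
# Hypothetical player-discrimination pipeline: frame-wise spectral features
# from tone samples, then a classifier over players. Feature choices are
# placeholders, not the descriptors identified in the thesis; the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def spectral_features(frame, sr=44100):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    power = spec ** 2
    centroid = np.sum(freqs * power) / (np.sum(power) + 1e-12)
    rolloff = freqs[np.searchsorted(np.cumsum(power), 0.85 * np.sum(power))]
    flatness = np.exp(np.mean(np.log(spec + 1e-12))) / (np.mean(spec) + 1e-12)
    return np.array([centroid, rolloff, flatness])

rng = np.random.default_rng(0)
# Synthetic stand-in: 6 "cellists" x 100 frames of 2048 samples each.
frames = rng.normal(size=(6, 100, 2048))
X = np.array([spectral_features(f) for player in frames for f in player])
y = np.repeat(np.arange(6), 100)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```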

    Capture, modeling and recognition of expert technical gestures in wheel-throwing art of pottery

    This research has been conducted in the context of the ArtiMuse project, which aims at the modeling and renewal of rare gestural knowledge and skills involved in traditional craftsmanship, and more precisely in the art of wheel-throwing pottery. This knowledge and these skills constitute intangible cultural heritage: the fruit of diverse expertise founded and propagated over the centuries thanks to the ingeniousness of the gesture and the creativity of the human spirit. Nowadays, this expertise is very often threatened with disappearance because of the difficulty of resisting globalization and the fact that most of these "expertise holders" are not easily accessible due to geographical or other constraints. In this paper, a methodological framework for capturing and modeling gestural knowledge and skills in wheel-throwing pottery is proposed. It is based on capturing gestures using wireless inertial sensors and on statistical modeling. In particular, we used a system that allows for online alignment of gestures using a modified Hidden Markov Model. This methodology is implemented in a Human-Computer Interface which permits both the modeling and the recognition of expert technical gestures. This system could be used to assist in the learning of these gestures by giving continuous real-time feedback based on the measured difference between expert and learner gestures. The system has been tested and evaluated on different potters with a rare expertise that is strongly related to their local identity.
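    The paper's system relies on a modified Hidden Markov Model for online gesture alignment. The sketch below is a much-simplified follower in that spirit: a left-to-right HMM whose states are the frames of an expert recording, updated with the forward algorithm as learner frames arrive. The emission model, transition probabilities, and feedback measure are assumptions, not the ArtiMuse implementation.

```python
# Simplified online gesture follower: a left-to-right HMM whose states are the
# frames of an expert recording, updated with the forward algorithm as learner
# frames arrive. A sketch in the spirit of the approach above, not the modified
# HMM actually used in the ArtiMuse system.
import numpy as np

class GestureFollower:
    def __init__(self, template, sigma=1.0, p_stay=0.4, p_move=0.6):
        self.template = np.asarray(template)       # (n_states, n_dims) expert frames
        self.sigma = sigma
        self.trans = (p_stay, p_move)
        self.alpha = np.zeros(len(template))
        self.alpha[0] = 1.0                        # start at the beginning of the gesture

    def _likelihood(self, obs):
        d2 = np.sum((self.template - obs) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def step(self, obs):
        """Consume one learner frame; return (estimated position, local distance)."""
        p_stay, p_move = self.trans
        moved = np.concatenate([[0.0], self.alpha[:-1]])
        self.alpha = (p_stay * self.alpha + p_move * moved) * self._likelihood(obs)
        self.alpha /= self.alpha.sum() + 1e-12
        pos = int(np.argmax(self.alpha))
        dist = float(np.linalg.norm(self.template[pos] - obs))
        return pos, dist

# Usage: follow a noisy copy of a 1-D expert trajectory.
expert = np.linspace(0, 1, 200).reshape(-1, 1)
follower = GestureFollower(expert, sigma=0.1)
for frame in expert + np.random.default_rng(0).normal(scale=0.02, size=expert.shape):
    pos, dist = follower.step(frame)
# 'dist' could drive the real-time feedback comparing learner and expert gestures.
print("final aligned position:", pos, "of", len(expert) - 1)
```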

    An investigation of audio signal-driven sound synthesis with a focus on its use for bowed stringed synthesisers

    This thesis proposes an alternative approach to sound synthesis. It seeks to offer traditional string players a synthesiser which will allow them to make use of their existing skills in performance. A theoretical apparatus reflecting on the constraints of formalisation is developed and used to shed light on construction-related shortcomings in the instrumental developments of related research. Historical aspects and methods of sound synthesis, and the act of musical performance, are addressed with the aim of drawing conclusions for the construction of algorithms and interfaces. The alternative approach creates an openness and responsiveness in the synthesis instrument by using implicit playing parameters without the necessity to define, specify or measure all of them. In order to investigate this approach, several synthesis algorithms are developed, sounds are designed, and a selection of them is empirically compared to conventionally synthesised sounds. The algorithms are used in collaborative projects with other musicians in order to examine their practical musical value. The results provide evidence that implementations using the approach presented can offer musically significant differences compared to similarly complex conventional implementations, and that, depending on the disposition of the musician, they can form a valuable contribution to the sound repertoire of performers and composers.
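    As a minimal illustration of audio signal-driven synthesis in the sense described above (implicit playing parameters tracked from the input signal rather than explicitly measured controls), the sketch below follows the energy and spectral centroid of an input signal and uses them to drive a simple additive oscillator. The specific mapping is an assumption for the sketch, not one of the algorithms developed in the thesis.

```python
# Illustrative audio-driven synthesis: implicit playing parameters (energy and
# brightness) are tracked from the instrument signal and used to drive a simple
# synthesiser. The mapping is an assumption, not a thesis algorithm.
import numpy as np

sr = 44100
hop = 512
rng = np.random.default_rng(0)
# Stand-in for a recorded or live string signal: amplitude-modulated noise.
input_audio = rng.normal(size=sr * 2) * np.sin(2 * np.pi * 3 * np.arange(sr * 2) / sr) ** 2

out = np.zeros_like(input_audio)
phase = 0.0
for start in range(0, len(input_audio) - hop, hop):
    block = input_audio[start:start + hop]
    rms = np.sqrt(np.mean(block ** 2))                        # implicit dynamics
    spec = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(hop, 1 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)  # implicit brightness
    # Brightness controls the harmonic count, input energy controls amplitude.
    n_harm = int(np.clip(centroid / 200.0, 1, 30))
    f0 = 220.0                                                # fixed pitch for simplicity
    t = (phase + np.arange(hop)) / sr
    osc = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harm + 1))
    out[start:start + hop] = rms * osc
    phase += hop
out /= np.max(np.abs(out)) + 1e-12
```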