7 research outputs found

    Custom Controllers and Physical Models as Enablers of Communal Performance in Two Fragments on Water and Light

    Two Fragments on Water and Light explores a communal ensemble paradigm made possible through the implementation of customized technologies. The work is for solo voice and two additional performers who use controllers built, and in some cases designed, by the composer. These controllers drive synthesis or effects-processing algorithms that generate sound, modify existing sounds, or both. The arbitrary mapping between controller data and synthesis algorithms, together with the fact that some of the synthesis algorithms function as both sound generators and sound processors, allows multiple performers to create or modify the same sound. This makes possible a communal performance environment in which the sonic identity of each performer (the way in which the performer's physical actions translate directly into sonic results) blurs into a common, ensemble sonic identity. This document shows how technology enables this communal ensemble paradigm. It first discusses the operation of the physical models and controllers, illustrating specifically how the use of technology allows the sonic identity of each performer to dissolve. It then explains how technology, and the performance environment it facilitates, are used to highlight themes found in the medieval texts set in these songs. After a few remarks evaluating the effectiveness of the songs, I present a performance score.
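    As a rough illustration of the shared-control idea described above, the sketch below (hypothetical Python, not the composer's actual software) shows two controllers mapped onto one synthesis parameter, so that either performer can shape the same sound and neither "owns" the result.

    class SharedParameter:
        """A synthesis parameter (e.g. a filter cutoff) modulated by several controllers."""

        def __init__(self, base=800.0):
            self.base = base
            self.offsets = {}  # controller id -> current contribution

        def update(self, controller_id, value):
            # Each performer's scaled sensor data updates only their own contribution.
            self.offsets[controller_id] = value

        def current(self):
            # Contributions are summed, so the audible value is jointly shaped.
            return self.base + sum(self.offsets.values())

    cutoff = SharedParameter(base=800.0)
    cutoff.update("performer_A", 150.0)
    cutoff.update("performer_B", -40.0)
    print(cutoff.current())  # 910.0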

    Physical Modeling Modular Boxes: PHOXES


    Timbre perception of cochlear implant users

    The timbre perception of cochlear implant (CI) users is poor compared to that of normal-hearing (NH) listeners. The cues that are normally transmitted to NH listeners may be less salient or even absent for CI users. From the literature, two spectral timbre parameters (brightness (Tb) and irregularity (IRR)) and two temporal timbre parameters (log rise-time (LRT) and the sustain/decay (S/D) parameter (n)) have been identified as important. Each of these parameters was extracted for a set of thirteen instruments, and sounds could be resynthesized according to a specific timbre parameter set. The variation of loudness, pitch and perceived duration as functions of the timbre parameters was investigated to provide systematic balancing methods. Just-noticeable differences (JNDs) were obtained for each of the parameters, for thirteen instruments for NH listeners and a reduced set of nine instruments for CI users, using a 1-up, 2-down, two-alternative forced-choice procedure. From the JNDs, predicted confusion matrices were constructed, and a feature information transmission analysis (FITA) on these matrices indicated the salience of each parameter, allowing NH and CI results to be compared. Dissertation (MEng), Department of Electrical, Electronic and Computer Engineering, University of Pretoria, 2011.
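    For readers unfamiliar with the adaptive procedure mentioned above, the following is a minimal generic sketch of a 1-up, 2-down two-alternative forced-choice staircase (not the thesis' code; the respond callback, step size and stopping rule are illustrative assumptions). A 1-up, 2-down rule converges near the 70.7%-correct point of the psychometric function.

    import random

    def staircase_jnd(respond, start=10.0, step=1.0, n_reversals=8):
        """Generic 1-up, 2-down 2AFC staircase.

        respond(delta) must return True if the listener correctly picked the
        interval containing the parameter change of size delta, else False.
        Returns the mean delta over the last reversals as a JND estimate.
        """
        delta = start
        correct_in_a_row = 0
        direction = 0              # -1 while getting harder, +1 while getting easier
        reversals = []
        while len(reversals) < n_reversals:
            if respond(delta):
                correct_in_a_row += 1
                if correct_in_a_row == 2:          # two correct in a row: make it harder
                    correct_in_a_row = 0
                    if direction == +1:
                        reversals.append(delta)
                    direction = -1
                    delta = max(delta - step, 0.1 * step)
            else:                                   # one error: make it easier
                correct_in_a_row = 0
                if direction == -1:
                    reversals.append(delta)
                direction = +1
                delta += step
        last = reversals[-6:]
        return sum(last) / len(last)

    # Toy listener: reliably detects changes above 3 units, otherwise guesses.
    print(round(staircase_jnd(lambda d: d > 3.0 or random.random() < 0.5), 2))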

    Chanter avec les mains : interfaces chironomiques pour les instruments de musique numériques (Singing with the hands: chironomic interfaces for digital musical instruments)

    This thesis deals with the real-time control of singing voice synthesis with a graphic tablet, based on the digital musical instrument Cantor Digitalis. The relevance of the graphic tablet for intonation control is considered first, showing that under experimental conditions the tablet provides more precise pitch control than the real voice. To extend this accuracy to any playing situation, a dynamic pitch warping method for intonation correction is developed; it allows playing below the pitch perception limen while preserving the musician's expressivity, and objective and perceptual evaluations validate its efficiency. The use of new interfaces for musical expression raises the question of the modalities involved in playing the instrument. A third study reveals a preponderance of the visual modality over auditory perception for intonation control, due to the introduction of visual cues on the tablet surface; this is nevertheless compensated by the expressive power of the interface. The writing and drawing skills acquired since early childhood enable rapid acquisition of expert control of the instrument, and a set of gestures dedicated to the control of different vocal effects is proposed. Finally, the instrument is practised intensively within the Chorus Digitalis ensemble, to test and promote this work; artistic research was conducted on both the staging and the choice of the Cantor Digitalis' musical repertoire, and a visual feedback display dedicated to the audience was developed, extending the perception of the players' pitch and articulation.
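    The dynamic pitch warping method itself is not detailed in this abstract; the sketch below only illustrates the general idea of attenuating deviations from a target pitch while keeping expressive inflections. It is a simplified, hypothetical example, not the method evaluated in the thesis, and the strength parameter is an assumed knob.

    def warp_pitch(raw_midi, strength=0.6):
        """Pull a continuous pitch toward the nearest semitone.

        raw_midi : tablet pitch in fractional MIDI note numbers.
        strength : 0.0 keeps the raw pitch, 1.0 snaps fully to the semitone.
        The deviation is scaled down rather than removed, so vibrato and
        glides survive the correction.
        """
        nearest = round(raw_midi)
        deviation = raw_midi - nearest
        return nearest + (1.0 - strength) * deviation

    def midi_to_hz(m):
        return 440.0 * 2.0 ** ((m - 69.0) / 12.0)

    # An A4 played about 30 cents sharp is pulled to roughly 12 cents sharp.
    print(round(midi_to_hz(warp_pitch(69.30)), 2))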

    Complex musical behaviours via time-variant audio feedback networks and distributed adaptation: a study of autopoietic infrastructures for real-time performance systems

    The research project presented here is a study of the application of complex adaptive systems (CASes) for live music performance and composition by fully autonomous or semi-autonomous machines. The fundamental artistic concept and the motivating idea behind this project are that complex systems are an optimal model for creative music practice as they operate at the edge of chaos: that is, a condition where there is an interplay between order and disorder, or patterns and surprise. Arguably, this is an essential element found in music regardless of its genre or style. The central research questions addressed by this project are: how to realise music systems with an abstract yet structurally coherent and contextually complex output that displays organicity and resembles aliveness? How to design music organisms so that artificial expressiveness and formal developments emerge and generate musical behaviours that contribute to the ongoing exploration at the edges of new music practices? How to create audio networks that are responsible for their own structure and organisation, and whose substantial autonomy expands the paradigm of human-machine interaction? The methodology used in this project for the implementation of the music systems applies theories from complexity and adaptation, biology, and philosophy within nonlinear, time-variant, self-modulating feedback networks. The structural coupling between system and performer is realised by following a cybernetic approach to human-machine interaction and human-machine interfacing. The music systems developed for the creative works in this research are a combination of interdependent algorithms for the processing of information and the synthesis of sound and music. A technique formulated to improve the complexity of the music systems, referred to as distributed adaptation, is related to the notion of evolvability in biology and genetic algorithms. Distributed adaptation consists of making the adaptation infrastructure itself adaptive and time-variant by employing emergent sensing mechanisms for the generation of information signals, and emergent mapping functions between information signals and state variables. This framework realises the idea that information and information processing recursively determine each other in a radical constructivist fashion, with the important consequence that the machine ultimately constructs its own reality as a self-sensing, self-performing, and context-dependent entity. This research includes seven music performances, each implementing CASes with or without human intervention. Also included is a library of original software algorithms for low-level and high-level information processing written in Faust. Chronologically ordered, the performances depict the progress of the study, starting with systems having basic adaptive characteristics and eventually revealing more advanced ones where the distributed adaptation design is applied. Through self-reflection and post-hoc analysis, case studies illustrate that the combination of CASes with cybernetic performance and interfacing, and particularly distributed adaptation systems with or without human agents, produces a sophisticated musical output whose evolutions are complex, coherent and expressive. These results suggest that the emergent behaviours of such systems can be deployed as a means for the exploration of new music in practice. Furthermore, the autonomy and contextual nature of these systems suggest that promising results can be achieved when applying them to other fields involving audio, especially where interactivity with the surrounding environment is crucial, or when using them as musical instruments for users with special needs.
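    As a toy illustration of the kind of time-variant, self-modulating feedback loop described above, the sketch below lets a delay-line feedback network sense its own output level and steer its feedback gain accordingly, so the loop self-regulates instead of simply decaying or exploding. It is a deliberately minimal, hypothetical Python example; the systems in this research are implemented as Faust algorithms and are far richer, and the delay length, target level and adaptation rate here are arbitrary assumptions.

    import math

    def adaptive_feedback(n_samples=2000, delay=64, target=0.1):
        """Delay-line feedback loop whose gain adapts to its own energy."""
        buf = [0.0] * delay          # circular delay line
        envelope = 0.0               # slow level estimate: the "sensing" mechanism
        gain = 0.9
        out = []
        for n in range(n_samples):
            excitation = 0.5 if n == 0 else 0.0      # single impulse input
            y = excitation + gain * buf[n % delay]   # read the oldest sample
            y = math.tanh(y)                         # soft saturation keeps it bounded
            buf[n % delay] = y                       # write back into the delay line
            # Adaptation: the measured level feeds back into the gain,
            # making the mapping between state and parameters time-variant.
            envelope = 0.999 * envelope + 0.001 * abs(y)
            gain = 0.9 + 0.3 * (target - envelope)
            out.append(y)
        return out

    signal = adaptive_feedback()
    print(round(max(abs(s) for s in signal), 3))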