12 research outputs found

    Instruments for Spatial Sound Control in Real Time Music Performances. A Review

    The systematic arrangement of sound in space is widely considered an important compositional design category of Western art music and acoustic media art in the 20th century. Much attention has been paid to the artistic concepts of sound in space and its reproduction through loudspeaker systems; far less has been given to live-interactive practices and tools for spatialisation as a performance practice. As a contribution to this topic, the present study conducted an inventory of controllers for the real-time spatialisation of sound as part of musical performances, and classified them both along different interface paradigms and according to their scope of spatial control. Through a literature study, we identified 31 different spatialisation interfaces presented to the public in the context of artistic performances or at relevant conferences on the subject. Considering that only a small proportion of these interfaces combines spatialisation and sound production, it seems that in most cases the projection of sound in space is not delegated to a musical performer but regarded as a compositional problem or as a separate performative dimension. With the exception of the mixing desk and its fader-board paradigm, as used for the performance of acousmatic music with loudspeaker orchestras, all devices are individual design solutions developed for a specific artistic context. We conclude that if controllers for sound spatialisation are to be perceived as musical instruments in a narrow sense, meeting certain aspects of instrumentality, immediacy, liveness, and learnability, new design strategies will be required.

    Interactive Sound in Performance Ecologies: Studying Connections among Actors and Artifacts

    This thesis’s primary goal is to investigate performance ecologies, that is, the compound of humans, artifacts, and environmental elements that contribute to the result of a performance. In particular, this thesis focuses on designing new interactive technologies for sound and music. This goal leads to the following Research Questions (RQs):
    • RQ1 How can the design of interactive sonic artifacts support joint expression across different actors (composers, choreographers, performers, musicians, and dancers) in a given performance ecology?
    • RQ2 How does each actor influence the design of different artifacts, and what impact does this have on the overall artwork?
    • RQ3 How do the different actors in the same ecology interact with, and appropriate, an interactive artifact?
    To answer these questions, a new framework named ARCAA has been created. In this framework, all the Actors of a given ecology are connected to all the Artifacts through three layers: Role, Context, and Activity. This framework is then applied to one systematic literature review, two case studies on music performance, and one case study on dance performance. The studies help to better understand the nuanced roles of composers, performers, instrumentalists, dancers, and choreographers, which is relevant for better designing interactive technologies for performances. Finally, this thesis proposes a new reflection on the blurred distinction between composing and designing a new instrument in a context that involves a multitude of actors.
    Overall, this work introduces the following contributions to the field of interaction design applied to music technology: 1) ARCAA, a framework to analyse the set of interconnected relationships in interactive (music) performances, validated through two music studies, one dance study, and one systematic literature analysis; 2) recommendations for designing interactive music systems for performance (music or dance), accounting for the needs of the various actors and for the overlap between music composition and the design of interactive technology; 3) a taxonomy of how scores have shaped performance ecologies in NIME, based on a systematic analysis of the literature on scores in the NIME proceedings; 4) a proposal for a methodological approach combining autobiographical and idiographical design approaches in interactive performances.
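    The ARCAA structure described above can be sketched as a minimal data model. The class and field names below are illustrative assumptions built only on the five ARCAA terms (Actor, Role, Context, Activity, Artifact), not the thesis's actual notation:

    ```python
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Actor:
        name: str          # e.g. a composer, performer, or dancer

    @dataclass(frozen=True)
    class Artifact:
        name: str          # e.g. an interactive sonic device

    @dataclass(frozen=True)
    class Connection:
        """One ARCAA link: an Actor reaches an Artifact through the
        three intermediate layers (Role, Context, Activity)."""
        actor: Actor
        role: str          # what the actor is in this ecology
        context: str       # where/when the interaction happens
        activity: str      # what the actor does with the artifact
        artifact: Artifact

    @dataclass
    class Ecology:
        connections: list[Connection] = field(default_factory=list)

        def artifacts_of(self, actor: Actor) -> list[Artifact]:
            return [c.artifact for c in self.connections if c.actor == actor]

    # Hypothetical example: a dancer driving a sensor-based sound system
    dancer = Actor("dancer")
    sensors = Artifact("wearable sensor system")
    eco = Ecology([Connection(dancer, "performer", "stage", "moving", sensors)])
    print(eco.artifacts_of(dancer))  # [Artifact(name='wearable sensor system')]
    ```

    The point of the three-layer link is that the same actor-artifact pair can appear in several connections with different roles or activities, which is what the framework uses to tease apart overlapping roles.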

    Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback

    The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. With the ever-increasing emergence of technology that quantifies or interprets physiological processes, we must also remain conscious of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and of how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides details not only on novel interaction, but also an understanding of how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. The understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which uses measurement of laryngeal muscular activations through surface electromyography.
    Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also called existing understanding of the body into question. The thesis also uncovers ways in which vocalists are in control of and controlled by their bodies, work with and against them, and feel as a single entity at times and as totally separate entities at others. I conclude this thesis by demonstrating a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of, and influence over, embodied understanding. This further highlights the need to understand the human experience in embodied interaction, rather than relying solely on digital interpretation, when introducing technology into these relationships.
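    A biofeedback loop of this general kind (EMG envelope in, pitch out) can be sketched roughly as follows. The envelope extraction, the exponential pitch mapping, and every parameter value are illustrative assumptions, not the actual system described in the thesis:

    ```python
    import math

    def emg_envelope(samples, window=8):
        """Rectify a raw surface-EMG signal and smooth it with a
        trailing moving average to estimate muscular activation."""
        rect = [abs(s) for s in samples]
        return [sum(rect[max(0, i - window + 1):i + 1]) /
                (i - max(0, i - window + 1) + 1) for i in range(len(rect))]

    def sonify(envelope, f_min=110.0, f_max=880.0):
        """Map normalised activation to pitch (Hz) on an exponential
        scale, so equal activation steps sound like equal intervals."""
        peak = max(envelope) or 1.0
        return [f_min * (f_max / f_min) ** (e / peak) for e in envelope]

    # Simulated burst of laryngeal activation: effort grows over time,
    # so the sonified pitch rises from f_min toward f_max.
    raw = [math.sin(i) * (i / 50.0) for i in range(50)]
    freqs = sonify(emg_envelope(raw))
    print(f"{freqs[0]:.0f} Hz -> {max(freqs):.0f} Hz")
    ```

    The exponential mapping is one common design choice for sonification; a real system would instead synthesise audio continuously at low latency rather than compute a frequency list offline.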

    Parallel Gesture Recognition with Soft Real-Time Guarantees

    Using imperative programming to process event streams, such as those generated by multi-touch devices and 3D cameras, has significant engineering drawbacks. Declarative approaches solve common problems, but so far they have not been able to scale on multicore systems while providing guaranteed response times. We propose PARTE, a parallel, scalable complex event processing engine that allows for a declarative definition of event patterns and provides soft real-time guarantees for their recognition. The proposed approach extends the classical Rete algorithm and maps event matching onto a graph of actor nodes. Using a tiered event matching model, PARTE provides upper bounds on the detection latency by relying on a combination of non-blocking message passing between Rete nodes and safe memory management techniques. The performance evaluation shows the scalability of our approach on up to 64 cores. Moreover, it indicates that PARTE's design choices lead to more predictable performance compared to a PARTE variant without soft real-time guarantees. Finally, the evaluation further indicates that gesture recognition can benefit from the exposed parallelism, with superlinear speedups.
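    The core idea of mapping a declarative event pattern onto a graph of concurrently matching nodes that exchange messages over queues can be sketched in miniature. The pattern language, the node layout, and the queue-based hand-off below are illustrative assumptions, far simpler than PARTE's actual Rete extension and its latency guarantees:

    ```python
    import queue
    import threading

    class FilterNode(threading.Thread):
        """One matching node: consumes events from its inbox and forwards
        those satisfying its predicate downstream (hand-off via a queue,
        standing in for actor-style message passing)."""
        def __init__(self, predicate, downstream):
            super().__init__(daemon=True)
            self.predicate = predicate
            self.inbox = queue.Queue()
            self.downstream = downstream
        def run(self):
            while True:
                event = self.inbox.get()
                if event is None:                 # sentinel: propagate shutdown
                    self.downstream.put(None)
                    return
                if self.predicate(event):
                    self.downstream.put(event)

    # Declarative pattern: "touches on the left half with pressure > 0.5",
    # compiled into a chain of two filter nodes feeding a results queue.
    results = queue.Queue()
    deep = FilterNode(lambda e: e["pressure"] > 0.5, results)
    left = FilterNode(lambda e: e["x"] < 0.5, deep.inbox)
    for node in (deep, left):
        node.start()

    events = [{"x": 0.2, "pressure": 0.9}, {"x": 0.8, "pressure": 0.9},
              {"x": 0.1, "pressure": 0.2}]
    for e in events:
        left.inbox.put(e)
    left.inbox.put(None)

    matched = []
    while (e := results.get()) is not None:
        matched.append(e)
    print(matched)  # only the first event satisfies both conditions
    ```

    Real Rete networks also share partial matches across patterns and join events across nodes; this sketch shows only the pipeline-parallel filtering aspect.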

    Beyond key velocity: Continuous sensing for expressive control on the Hammond Organ and Digital keyboards

    In this thesis we explore the potential of continuous key position as an expressive control in keyboard musical instruments, and how pre-existing skills can be adapted to leverage this additional control. Interaction between performer and sound generation on a keyboard instrument is often restricted to a number of discrete events on the keys themselves (note onsets and offsets), while complementary continuous control is provided via additional interfaces such as pedals, modulation wheels, and knobs. The rich vocabulary of gestures that skilled performers can achieve on the keyboard is therefore often simplified to a single, discrete velocity measurement. A limited number of acoustic and electromechanical keyboard instruments do, however, afford continuous key control, so that the role of the key is not limited to delivering discrete events; its instantaneous position is, to a certain extent, an element of expressive control. Recent developments in sensing technologies make it possible to leverage continuous key position as an expressive element in the sound generation of digital keyboard instruments. We start by exploring the expression available on the keys of the Hammond organ, where nine contacts are closed at different points of the key throw for each key onset, and we find that the velocity and percussiveness of the touch affect the way the contacts close and bounce, producing audible differences in the onset transient of each note. We develop an embedded hardware and software environment for low-latency sound generation controlled by continuous key position, which we use to create two digital keyboard instruments. The first of these emulates the sound of a Hammond and can be controlled with continuous key position, allowing for arbitrary mapping between the key position and the nine virtual contacts of the digital sound generator.
    A study with 10 musicians shows that, when exploring the instrument on their own, players can appreciate the differences between settings and tend to develop a personal preference for one of them. In the second instrument, continuous key position is the fundamental means of expression: percussiveness, key position, and multi-key gestures control the parameters of a physical model of a flute. In a study with 6 professional musicians playing this instrument, we gather insights on the adaptation process, the limitations of the interface, and the transferability of traditional keyboard playing techniques.
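    The mapping from continuous key position to the nine virtual contacts can be sketched as a simple threshold scheme. The evenly spaced thresholds and function names are illustrative assumptions, not the instrument's actual implementation (a real Hammond's contacts close at uneven points and bounce):

    ```python
    def contact_states(key_position, thresholds):
        """Return which virtual contacts are closed for a normalised key
        position (0.0 = key at rest, 1.0 = fully depressed). Each contact
        closes once the key travels past that contact's own threshold."""
        return [key_position >= t for t in thresholds]

    # Nine closure points spread along the key throw (assumed even spacing;
    # remapping these is exactly the "arbitrary mapping" the text mentions).
    THRESHOLDS = [i / 10 for i in range(1, 10)]   # 0.1, 0.2, ..., 0.9

    print(sum(contact_states(0.45, THRESHOLDS)))  # 4 contacts closed at mid-throw
    ```

    Because the sound generator sees contacts close one by one as the key travels, the speed and shape of the key trajectory, not just a single velocity value, shape the onset transient.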

    The Proceedings of the 12th International Congress on Mathematical Education: Intellectual and attitudinal challenges

    mathematics; education; curriculum

    « Extending interactivity ». Proceedings of the XXI CIM - Colloquio di Informatica Musicale
