
    Knitting music and programming: Reflections on the frontiers of source code analysis

    Source Code Analysis and Manipulation (SCAM) underpins virtually every operational software system. Despite the impact and ubiquity of SCAM principles and techniques in software engineering, there are still frontiers to be explored. Looking "inward" to existing techniques, one finds frontiers of performance, efficiency, accuracy, and usability; looking "outward," one finds new languages, new problems, and thus new approaches. This paper presents a reflective framework for characterizing source languages and domains. It draws on current research projects in music program analysis, musical score processing, and machine knitting to identify new frontiers for SCAM. The paper also identifies opportunities for SCAM to inspire, and be inspired by, problems and techniques in other domains.

    Rethinking Interaction: Identity and Agency in the Performance of “Interactive” Electronic Music

    This document investigates interaction between human performers and various interactive technologies in the performance of interactive electronic and computer music. Specifically, it observes how the identity and agency of the interactive technology are experienced and perceived by the human performer. First, a close examination of George Lewis's creation of and performance with his own historic interactive electronic and computer works reveals his disposition of interaction as improvisation. This disposition is contextualized within then-contemporary social and political issues related to African American experimental musicians, as well as an emerging culture of electronic and computer musicians concerned with interactivity. Second, an auto-ethnographic study reveals a contemporary performer's perspective via the author's own direct interactive experience with electronic and computer systems. These experiences were documented and analyzed using Actor-Network Theory, Critical Technical Practice, theories of embodiment and embodied cognition, Lewis's conceptions of improvisation, and Tracy McMullen's theory of the Improvisative. Analyses from both studies revealed that when and how performers chose to "other" interactive technologies significantly influenced their actions. The implications of this are discussed in terms of identity formation, both within performances of interactive electronic music and in relation to interactive technologies generally.

    Computational Models of Expressive Music Performance: A Comprehensive and Critical Review

    Expressive performance is an indispensable part of music making. When playing a piece, expert performers shape various parameters (tempo, timing, dynamics, intonation, articulation, etc.) in ways that are not prescribed by the notated score, thereby producing an expressive rendition that brings out dramatic, affective, and emotional qualities that may engage and affect the listeners. Given the central importance of this skill for many kinds of music, expressive performance has become an important research topic for disciplines such as musicology and music psychology. This paper focuses on a specific thread of research: work on computational music performance models. Computational models are attempts at codifying hypotheses about expressive performance in terms of mathematical formulas or computer programs, so that they can be evaluated in systematic and quantitative ways. Such models can serve at least two purposes: they permit us to systematically study certain hypotheses regarding performance, and they can be used as tools to generate automated or semi-automated performances in artistic or educational contexts. The present article presents an up-to-date overview of the state of the art in this domain. We explore recent trends in the field, such as a strong focus on data-driven (machine learning) approaches; a growing interest in interactive expressive systems, such as conductor simulators and automatic accompaniment systems; and an increased interest in exploring cognitively plausible features and models. We provide an in-depth discussion of several important design choices in such computer models, and discuss a crucial (and still largely unsolved) problem that is hindering systematic progress: the question of how to evaluate such models in scientifically and musically meaningful ways. From all this, we finally derive some research directions that should be pursued with priority in order to advance the field and our understanding of expressive music performance.

    Interactive Sonification Strategies for the Motion and Emotion of Dance Performances

    The Immersive Interactive SOnification Platform, or iISoP for short, is a research platform for the creation of novel multimedia art, as well as for exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP's dancer sonification system is to "sonify the motion and emotion" of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians provided help in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction procedures took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.