
    Tangible Distributed Computer Music for Youth

    Computer music research realizes a vision of performance by means of computational expression, linking body and space to sound and imagery through eclectic forms of sensing and interaction. This vision could dramatically impact computer science education, simultaneously modernizing the field and drawing in diverse new participants. In this article, we describe our work creating an interactive computer music toolkit for kids called BlockyTalky. This toolkit enables users to create networks of sensing devices and synthesizers, and to program the musical and interactive behaviors of these devices. We also describe our work with two middle school teachers to co-design and deploy a curriculum for 11- to 13-year-old students. We draw on work with these students to evidence how computer music can support learning about computer science concepts and change students’ perceptions of computing. We conclude by outlining some remaining questions around how computer music and computer science may best be linked to provide transformative educational experiences.
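
    The abstract does not specify BlockyTalky's wire protocol or APIs, so the sketch below is purely illustrative: it shows the core idea of a networked sensing device sending readings to a synthesizer node, with a hypothetical JSON-over-UDP message format and a simple value-to-pitch mapping on the receiving side.

        # Hypothetical sketch of a BlockyTalky-style sensor-to-synth message;
        # the message format, address, and function names are assumptions.
        import json
        import socket

        SYNTH_ADDR = ("192.168.1.42", 9000)  # assumed address of the synth node

        def send_sensor_event(sensor_name: str, value: float) -> None:
            """Send one normalized sensor reading to the synth node as a JSON datagram."""
            msg = json.dumps({"sensor": sensor_name, "value": value}).encode("utf-8")
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.sendto(msg, SYNTH_ADDR)

        def reading_to_midi_note(value: float, low: int = 48, high: int = 84) -> int:
            """On the synth node: map a reading in [0, 1] onto a MIDI note range."""
            return low + round(value * (high - low))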

    Mapping Through Listening

    Gesture-to-sound mapping is generally defined as the association between gestural and sound parameters. This article describes an approach that brings forward the perception-action loop as a fundamental design principle for gesture–sound mapping in digital musical instruments. Our approach considers the processes of listening as the foundation – and the first step – in the design of action-sound relationships. In this design process, the relationship between action and sound is derived from actions that can be perceived in the sound. Building on previous work on listening modes and gestural descriptions, we propose distinguishing between three mapping strategies: instantaneous, temporal, and metaphoric. Our approach makes use of machine learning techniques for building prototypes, from digital musical instruments to interactive installations. Four different examples of scenarios and prototypes are described and discussed.
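
    To make the strategies concrete, here is a minimal Python sketch of the simplest one, instantaneous mapping, in which each incoming gesture frame is translated directly into synthesis parameters with no temporal modeling; the feature names and ranges are illustrative, not taken from the article.

        # Instantaneous gesture-to-sound mapping: one gesture frame in,
        # one set of synthesis parameters out. Feature names are assumed.
        def instantaneous_mapping(gesture: dict) -> dict:
            """Map one frame of gesture features to synthesis parameters."""
            # Hand height in [0, 1] -> pitch over three octaves above 110 Hz.
            pitch_hz = 110.0 * (2.0 ** (3.0 * gesture["height"]))
            # Movement speed in [0, 1] -> amplitude, clipped at full scale.
            amplitude = min(1.0, gesture["speed"])
            return {"pitch_hz": pitch_hz, "amplitude": amplitude}

        # A temporal strategy would instead align the gesture's evolution over
        # time with a sound's temporal profile (e.g., via dynamic time warping),
        # and a metaphoric strategy relies on learned, non-literal associations.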

    Active Learning of Intuitive Control Knobs for Synthesizers Using Gaussian Processes

    Typical synthesizers only provide controls for the low-level parameters of sound synthesis, such as wave shapes or filter envelopes. In contrast, composers often want to adjust and express higher-level qualities, such as how ‘scary’ or ‘steady’ sounds are perceived to be. We develop a system that allows users to directly control abstract, high-level qualities of sounds. To do this, our system learns functions that map from synthesizer control settings to perceived levels of high-level qualities. Given these functions, our system can generate high-level knobs that directly adjust sounds to have more or less of those qualities. We model the functions mapping from control parameters to the degree of each high-level quality using Gaussian processes, a nonparametric Bayesian model. These models can adjust to the complexity of the function being learned, account for nonlinear interactions between control parameters, and allow us to characterize the uncertainty about the functions being learned. By tracking this uncertainty, we can use active learning to quickly calibrate the tool, querying the user about the sounds the system expects to most improve its performance. We show through simulations that this model-based active learning approach learns high-level knobs on certain classes of target concepts faster than several baselines, and give examples of the resulting automatically constructed knobs, which adjust levels of nonlinear, high-level concepts.
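
    The following sketch illustrates the kind of model-based active-learning loop the abstract describes, using scikit-learn's Gaussian process regressor as a stand-in for the authors' model and a maximum-uncertainty query rule as a simplified acquisition criterion; the synthesizer parameterization and the user-rating function are placeholders.

        # Sketch: learn a mapping from synth control settings to a perceived
        # quality ("scariness"), querying the user where the GP is least certain.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        candidates = rng.uniform(0.0, 1.0, size=(500, 4))  # 4 control parameters

        def ask_user_rating(params: np.ndarray) -> float:
            """Placeholder for playing the sound and collecting a user rating."""
            return float(np.sin(params @ np.array([3.0, -2.0, 1.0, 0.5])))

        X, y = [], []
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
        for _ in range(20):
            if X:
                gp.fit(np.array(X), np.array(y))
                _, std = gp.predict(candidates, return_std=True)
                query = candidates[int(np.argmax(std))]  # most uncertain sound
            else:
                query = candidates[0]  # arbitrary seed query
            X.append(query)
            y.append(ask_user_rating(query))

        # The fitted GP now predicts the quality level for any control setting,
        # so a "high-level knob" can move through control space toward settings
        # the model predicts to have more or less of that quality.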

    ISSE
