    The ixiQuarks: merging code and GUI in one creative space

    This paper reports on ixiQuarks, an environment of instruments and effects built on top of the audio programming language SuperCollider. The rationale of these instruments is to explore alternative ways of designing musical interaction in screen-based software, and to investigate how semiotics in interface design affects the musical output. The ixiQuarks are distributed as external libraries available to SuperCollider through the Quarks system. They are software instruments based on a non-realist design ideology that rejects the simulation of acoustic instruments or music hardware and focuses on experimentation at the level of musical interaction. In this environment we try to merge the graphical with the textual in the same instruments, allowing the user to reprogram and change parts of them at runtime. After a short introduction to SuperCollider and the Quarks system, we describe the ixiQuarks and the philosophical basis of their design. We conclude by looking at how they can be seen as epistemic tools that influence the musician in a complex hermeneutic circle of interpretation and signification.
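    As a loose illustration of the paper's central idea, an instrument whose control mapping remains live, editable code rather than a fixed implementation, consider the minimal Python sketch below. All names are ours and purely illustrative; none of them belong to ixiQuarks or SuperCollider.

```python
# Minimal sketch of a "code + GUI" instrument: the GUI drives a parameter,
# while the mapping function stays exposed for live reprogramming.
# All names here are illustrative; they are not the ixiQuarks API.

class LiveInstrument:
    def __init__(self):
        # the mapping from GUI input to synthesis parameters is plain,
        # replaceable code rather than a hidden implementation detail
        self.mapping = lambda x: {"freq": 200 + 800 * x}

    def on_gui_event(self, slider_value):
        params = self.mapping(slider_value)
        print(f"synth params: {params}")  # stand-in for an audio callback

inst = LiveInstrument()
inst.on_gui_event(0.5)   # synth params: {'freq': 600.0}

# "reprogramming at runtime": the performer swaps the mapping live,
# without stopping the instrument
inst.mapping = lambda x: {"freq": 100 * (1 + int(x * 12))}
inst.on_gui_event(0.5)   # synth params: {'freq': 700}
```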

    Multiple Media Interfaces for Music Therapy

    This article describes interfaces (and the supporting technological infrastructure) to create audiovisual instruments for use in music therapy. In considering how the multidimensional nature of sound requires multidimensional input control, we propose a model to help designers manage the complex mapping between input devices and multiple media software. We also itemize a research agenda.
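    To make the mapping problem concrete, the sketch below fans single input dimensions out to multiple media parameters. It is a hypothetical illustration, not the article's proposed model; all device and parameter names are ours.

```python
# Hypothetical sketch of a mapping layer between multidimensional input
# devices and multiple media outputs; names are illustrative only.

# each route: (device, input dimension) -> (media target, parameter, scaler)
routes = [
    (("joystick", "x"), ("audio",   "pitch",      lambda v: 220 * 2 ** v)),
    (("joystick", "y"), ("audio",   "loudness",   lambda v: v)),
    (("joystick", "y"), ("visuals", "brightness", lambda v: v ** 2)),
]

def dispatch(device, dimension, value):
    """Fan one input dimension out to every media parameter it drives."""
    for src, (target, param, scale) in routes:
        if src == (device, dimension):
            print(f"{target}.{param} <- {scale(value):.3f}")

dispatch("joystick", "y", 0.8)
# audio.loudness <- 0.800
# visuals.brightness <- 0.640
```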

    Towards musical interaction: 'Schismatics' for e-violin and computer.

    This paper discusses the evolution of the Max/MSP patch used in schismatics (2007, rev. 2010) for electric violin (Violectra) and computer, by composer Sam Hayden in collaboration with violinist Mieko Kanno. schismatics involves a standard performance paradigm of a fixed notated part for the e-violin with sonically unfixed live computer processing. Hayden was unsatisfied with the early version of the piece: the use of attack detection on the live e-violin playing to trigger stochastic processes led to an essentially reactive behaviour in the computer, resulting in a somewhat predictable one-to-one sonic relationship between them. It demonstrated little internal relationship between the two beyond an initial e-violin ‘action’ causing a computer ‘event’. The revisions in 2010, enabled by an AHRC Practice-Led research award, aimed to achieve (1) a more interactive performance situation and (2) a subtler and more ‘musical’ relationship between live and processed sounds. This was realised through the introduction of sound analysis objects, in particular machine listening and learning techniques developed by Nick Collins. One aspect of the programming was the mapping of analysis data to synthesis parameters, enabling the computer transformations of the e-violin to be directly related to Kanno’s interpretation of the piece in performance.
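    The contrast the authors describe, between discrete attack-triggered events and continuous analysis-driven mapping, can be sketched as follows. The feature names, parameter names, and ranges are illustrative assumptions on our part, not the actual Max/MSP patch.

```python
# Two interaction models, sketched side by side. Feature and parameter
# names are illustrative assumptions, not the actual patch.

import random

def reactive(attack_detected):
    """Early version: a violin attack merely triggers a stochastic event."""
    if attack_detected:
        return {"grain_rate": random.uniform(5, 50)}  # one-to-one, predictable
    return None

def interactive(features):
    """2010 revision: continuous analysis data drives synthesis parameters,
    so the processing tracks the performer's interpretation."""
    return {
        "grain_rate":  5 + 45 * features["loudness"],
        "filter_freq": features["pitch"] * 2,
        "reverb_mix":  1.0 - features["brightness"],
    }

print(interactive({"loudness": 0.6, "pitch": 440.0, "brightness": 0.3}))
# {'grain_rate': 32.0, 'filter_freq': 880.0, 'reverb_mix': 0.7}
```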

    Confessions of a live coder

    This paper describes the process involved when a live coder decides to learn a new musical programming language of another paradigm. The paper introduces the problems of running comparative experiments, or user studies, within the field of live coding. It suggests that an autoethnographic account of the process can be helpful for understanding the technological conditioning of contemporary musical tools. The author is conducting a larger research project on this theme: the part presented in this paper describes the adoption of a new musical programming environment, Impromptu, and how this affects the author’s musical practice.

    Sketching sonic interactions by imitation-driven sound synthesis

    Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred tools to produce sketches for their simplicity and immediacy. Analogue tools for sonic sketching do not yet exist, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of the human voice and gesture, allowing designers to produce more refined sonic sketches and to think about sound in a more designerly way. This paper presents two main outcomes of the project: the Sound Design Toolkit, a palette of basic sound synthesis models grounded in ecological perception and the physical description of sound-producing phenomena, and SkAT-Studio, a visual framework based on sound design workflows organized in stages of input, analysis, mapping, synthesis, and output. The integration of these two software packages provides an environment in which sound designers can go from concepts, through exploration and mocking-up, to prototyping in sonic interaction design, taking advantage of all the possibilities offered by vocal and gestural imitations in every step of the process.
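    The staged workflow can be read as a simple processing pipeline. The sketch below is a hypothetical illustration of that structure only; every stage body is a placeholder, not the SkAT-Studio implementation.

```python
# Hypothetical sketch of a five-stage sound design workflow in the spirit
# of SkAT-Studio; every stage body is a placeholder, not the real software.

def input_stage(source):   return {"signal": source}                    # capture a vocal/gestural imitation
def analysis_stage(d):     return {**d, "features": len(d["signal"])}   # extract descriptors (placeholder)
def mapping_stage(d):      return {**d, "params": d["features"] * 0.1}  # features -> model parameters
def synthesis_stage(d):    return {**d, "sound": f"tone({d['params']:.1f})"}  # drive a synthesis model
def output_stage(d):       return d["sound"]                            # audition the resulting sketch

pipeline = [input_stage, analysis_stage, mapping_stage, synthesis_stage, output_stage]

data = "aaah"  # a vocal imitation, stood in for by a string
for stage in pipeline:
    data = stage(data)
print(data)  # tone(0.4)
```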