
    Confessions of a live coder

    This paper describes the process involved when a live coder decides to learn a new musical programming language of another paradigm. The paper introduces the problems of running comparative experiments, or user studies, within the field of live coding. It suggests that an autoethnographic account of the process can be helpful for understanding the technological conditioning of contemporary musical tools. The author is conducting a larger research project on this theme: the part presented in this paper describes the adoption of a new musical programming environment, Impromptu, and how this affects the author’s musical practice.

    Computers in Support of Musical Expression


    The acoustic, the digital and the body: a survey on musical instruments

    This paper reports on a survey conducted in the autumn of 2006 with the objective of understanding people's relationship to their musical tools. The survey focused on the question of embodiment and its different modalities in the fields of acoustic and digital instruments. The questions of control, instrumental entropy, limitations and creativity were addressed in relation to people's activities of playing, creating or modifying their instruments. The approach used in the survey was phenomenological, i.e. we were concerned with the experience of playing, composing for and designing digital or acoustic instruments. At the time of analysis, we had 209 replies from musicians, composers, engineers, designers, artists and others interested in this topic. The survey was mainly aimed at instrumentalists and people who create their own instruments or compositions in flexible audio programming environments such as SuperCollider, Pure Data, ChucK, Max/MSP, CSound, etc.

    A microtonal wind controller building on Yamaha’s technology to facilitate the performance of music based on the “19-EDO” scale

    We describe a project in which several collaborators adapted an existing instrument to make it capable of playing expressively in music based on the microtonal scale characterised by equal division of the octave into 19 tones (“19-EDO”). Our objective was not just to build this instrument, however, but also to produce a well-formed piece of music which would exploit it idiomatically, in a performance which would provide listeners with a pleasurable and satisfying musical experience. Hence, consideration of the extent and limits of the playing-techniques of the resulting instrument (a “Wind-Controller”) and of appropriate approaches to the composition of music for it were an integral part of the project from the start. Moreover, the intention was also that the piece, though grounded in the musical characteristics of the 19-EDO scale, would nevertheless have a recognisable relationship with what Dimitri Tymoczko (2010) has called the “Extended Common Practice” of the last millennium. So the article goes on to consider these matters, and to present a score of the resulting new piece, annotated with comments documenting some of the performance issues which it raises. Thus, bringing the project to fruition involved elements of composition, performance, engineering and computing, and the article describes how such an inter-disciplinary, multi-disciplinary and cross-disciplinary collaboration was co-ordinated in a unified manner to achieve the envisaged outcome. Finally, we consider why the building of microtonal instruments is such a problematic issue in a contemporary (“high-tech”) society like ours.
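    The tuning itself follows directly from the scale's definition: each of the 19 equal divisions of the octave corresponds to a frequency ratio of 2^(1/19), i.e. about 63.16 cents per step. A minimal sketch of the arithmetic (the 440 Hz reference pitch and function names are illustrative assumptions, not taken from the paper):

    ```python
    # Pitches of the 19-EDO scale: every step multiplies frequency
    # by 2**(1/19), so 19 steps give exactly one octave (ratio 2).

    def edo19_frequency(step: int, reference_hz: float = 440.0) -> float:
        """Frequency of the note `step` 19-EDO steps above the reference."""
        return reference_hz * 2 ** (step / 19)

    def edo19_cents(step: int) -> float:
        """Size of `step` 19-EDO steps in cents (1200 cents = one octave)."""
        return 1200 * step / 19

    if __name__ == "__main__":
        for n in range(20):  # one full octave, inclusive of the octave note
            print(f"step {n:2d}: {edo19_frequency(n):8.2f} Hz "
                  f"({edo19_cents(n):7.2f} cents)")
    ```

    Nineteen steps land exactly on the octave (880 Hz above A440), while single steps fall between the 100-cent semitones of the familiar 12-EDO tuning, which is what makes adapting a conventional wind controller non-trivial.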

    Deep Cross-Modal Audio-Visual Generation

    Cross-modal audio-visual perception has been a long-standing topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite existing work in computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluations demonstrate that our model has the ability to generate one modality, i.e., audio/visual, from the other modality, i.e., visual/audio, to a good extent. Our experiments on various design choices along with the datasets will facilitate future research in this new problem space.
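    The conditioning mechanism at the heart of a conditional GAN — feeding side information (here, e.g. an instrument label) to both generator and discriminator — can be sketched in miniature. The NumPy forward pass, layer sizes, and one-hot labels below are illustrative assumptions, not the architecture used in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    NOISE_DIM, COND_DIM, OUT_DIM, HIDDEN = 16, 4, 32, 64

    def dense(in_dim, out_dim):
        """Random dense-layer parameters (stand-ins for trained weights)."""
        return rng.normal(0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

    # Generator maps [noise, condition] -> a sample in the target modality.
    g_w1, g_b1 = dense(NOISE_DIM + COND_DIM, HIDDEN)
    g_w2, g_b2 = dense(HIDDEN, OUT_DIM)

    # Discriminator judges [sample, condition] pairs, so it can penalise
    # samples that are realistic but mismatched with their condition.
    d_w1, d_b1 = dense(OUT_DIM + COND_DIM, HIDDEN)
    d_w2, d_b2 = dense(HIDDEN, 1)

    def generator(noise, cond):
        h = np.tanh(np.concatenate([noise, cond], axis=1) @ g_w1 + g_b1)
        return np.tanh(h @ g_w2 + g_b2)

    def discriminator(sample, cond):
        h = np.tanh(np.concatenate([sample, cond], axis=1) @ d_w1 + d_b1)
        return 1 / (1 + np.exp(-(h @ d_w2 + d_b2)))  # probability "real"

    # One-hot condition, e.g. which instrument the output should depict.
    cond = np.eye(COND_DIM)[[2]]            # batch of 1, class 2
    noise = rng.normal(size=(1, NOISE_DIM))
    fake = generator(noise, cond)
    score = discriminator(fake, cond)
    print(fake.shape, score.shape)          # (1, 32) (1, 1)
    ```

    In training (omitted here), the discriminator is optimised to score real, correctly-conditioned pairs high and generated pairs low, while the generator is optimised to fool it; the shared condition vector is what ties the generated modality to the source modality.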

    Multiple Media Interfaces for Music Therapy

    This article describes interfaces (and the supporting technological infrastructure) to create audiovisual instruments for use in music therapy. In considering how the multidimensional nature of sound requires multidimensional input control, we propose a model to help designers manage the complex mapping between input devices and multiple media software. We also itemize a research agenda.

    BitBox!: A case study interface for teaching real-time adaptive music composition for video games

    Real-time adaptive music is now well-established as a popular medium, largely through its use in video game soundtracks. Commercial packages, such as fmod, make freely available the underlying technical methods for use in educational contexts, making adaptive music technologies accessible to students. Writing adaptive music, however, presents a significant learning challenge, not least because it requires a different mode of thought, and tutor and learner may have few mutual points of connection in discovering and understanding the musical drivers, relationships and structures in these works. This article discusses the creation of ‘BitBox!’, a gestural music interface designed to deconstruct and explain the component elements of adaptive composition through interactive play. The interface was displayed at the Dare Protoplay games exposition in Dundee in August 2014. The initial proof-of-concept study proved successful, suggesting possible refinements in design and a broader range of applications.

    Music


    A view of computer music from New Zealand: Auckland, Waikato and the Asia/Pacific connection

    Dealing predominantly with ‘art music’ aspects of electroacoustic music practice, this paper looks at cultural, aesthetic, environmental and technical influences on current and emerging practices from the upper half of the North Island of New Zealand. It also discusses the influences of Asian and Pacific cultures on the idiom locally. Rather than dwell on the similarities with current international styles, the focus is largely on some of the differences.