
    Designing relational pedagogies with jam2jamXO

    This paper examines the affordances of the philosophy and practice of open source and its application in developing music education software. In particular, I examine the parallels inherent in the ‘openness’ of pragmatist philosophy in education (Dewey 1916, 1989), such as group or collaborative learning, discovery learning (Bruner 1966) and learning through creative activity with computers (Papert 1980, 1994). Primarily I am interested in ‘relational pedagogies’ (Ruthmann and Dillon, in press), which concern the ethics of the transaction between student and teacher in an ecology where technology plays a more significant role. In these contexts, relational pedagogies refers to how the music teacher manages their relationships with students and evaluates the affordances of open source technology in that process. It is concerned directly with how the relationship between student and teacher, and the capacity for music making and learning, are affected by the technological tools. In particular, technologies that have agency present the opportunity for a partnership between user and technology that enhances the capacity for expressive music making, productive social interaction and learning. In this instance, technologies with agency are defined as those that enhance the capacity to be expressive and to perform tasks with virtuosity and complexity, where the technology translates simple commands and gestures into complex outcomes. The technology enacts a partnership with the user that becomes both a cognitive and performative amplifier. Specifically, we have used this term to describe interactions with generative technologies that use procedural invention as a creative technique to produce music and visual media.

    On the design of Csound 5

    Csound has been in existence for many years and is a direct descendant of the MusicV family. For a decade, development of the system has continued via some language changes, new operations and the necessary bug fixes. Two years ago a small group of us decided that, rather than continue the incremental process, a code freeze and rethink was needed. In this paper we consider the design and aims for what has been called Csound 5, and describe the processes and achievements of the implementation.

    SameSameButDifferent v.02 – Iceland

    The history of computer music is to a great extent the history of algorithmic composition. Here, generative approaches are seen as an artistic technique. However, algorithmic music is normally generated in the studio, where it is aesthetically evaluated by the composer; the public only gets to know one, or perhaps a few, variations from the expressive scope of the algorithmic system itself. In this paper, we describe a generative music system of infinite compositions, in which the system itself is intended for distribution and use on personal computers. This system has a dual structure of a compositional score and a performer that performs the score in real time every time a piece is played. We trace the contextual background of such systems and potential future applications.
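
    The dual structure described above can be sketched in a few lines of code. The following Python fragment is an illustrative sketch, not the authors' system: a score generator produces a fresh compositional score on each run, and a performer renders it event by event in (simulated) real time. The scale, durations and all names are assumptions made for the example.

        import random
        import time

        SCALE = [60, 62, 64, 67, 69, 72]  # illustrative pitch set (MIDI note numbers)

        def generate_score(length=16, seed=None):
            """Produce a list of (pitch, duration_seconds) events."""
            rng = random.Random(seed)
            return [(rng.choice(SCALE), rng.choice([0.25, 0.5, 1.0]))
                    for _ in range(length)]

        def perform(score):
            """'Perform' the score in real time; printing stands in for sound output."""
            for pitch, dur in score:
                print(f"note on  {pitch}")
                time.sleep(dur)          # a real performer would schedule audio here
                print(f"note off {pitch}")

        if __name__ == "__main__":
            # Every performance draws a fresh score, so no two playbacks are identical.
            perform(generate_score())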

    A view of computer music from New Zealand: Auckland, Waikato and the Asia/Pacific connection

    Dealing predominantly with ‘art music’ aspects of electroacoustic music practice, this paper looks at cultural, aesthetic, environmental and technical influences on current and emerging practices from the upper half of the North Island of New Zealand. It also discusses the influences of Asian and Pacific cultures on the idiom locally. Rather than dwell on the similarities with current international styles, the focus is largely on some of the differences.

    Developing a flexible and expressive realtime polyphonic wave terrain synthesis instrument based on a visual and multidimensional methodology

    The Jitter extended library for Max/MSP is distributed with a gamut of tools for the generation, processing, storage, and visual display of multidimensional data structures. With additional support for a wide range of media types, and for the interaction between these media, the environment presents a perfect working ground for Wave Terrain Synthesis. This research details the practical development of a realtime Wave Terrain Synthesis instrument within the Max/MSP programming environment utilizing the Jitter extended library. Various graphical processing routines are explored in relation to their potential use for Wave Terrain Synthesis.
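
    As a rough illustration of the underlying technique, the following Python sketch implements Wave Terrain Synthesis in its simplest form, assuming an analytic terrain function rather than Jitter matrices; the sample rate, terrain and trajectory parameters are illustrative choices, not details taken from the paper.

        import numpy as np

        SR = 44100          # sample rate in Hz (assumed)
        DUR = 1.0           # seconds of audio to render

        def terrain(x, y):
            """Example terrain surface; any bounded 2-D function will do."""
            return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

        def wave_terrain(freq_x=220.0, freq_y=221.5, radius=0.8):
            """Trace an elliptical trajectory over the terrain and return the samples."""
            t = np.arange(int(SR * DUR)) / SR
            x = radius * np.cos(2 * np.pi * freq_x * t)   # trajectory x(t)
            y = radius * np.sin(2 * np.pi * freq_y * t)   # trajectory y(t)
            return terrain(x, y)                          # audio = terrain heights

        samples = wave_terrain()
        # The slight detuning between the x and y oscillators makes the orbit
        # precess, so the timbre evolves slowly over the course of the note.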

    On the synthesis and processing of high quality audio signals by parallel computers

    This work concerns the application of new computer architectures to the creation and manipulation of high-quality audio bandwidth signals. The configuration of both the hardware and software in such systems falls under consideration in the three major sections, which present increasing levels of algorithmic concurrency.

    In the first section, the programs which are described are distributed in identical copies across an array of processing elements; these programs run autonomously, generating data independently, but with control parameters peculiar to each copy. This type of concurrency is referred to as isonomic.

    The central section presents a structure which distributes tasks across an arbitrary network of processors; the flow of control in such a program is quasi-indeterminate, and controlled on a demand basis by the rate of completion of the slave tasks and their irregular interaction with the master. Whilst that interaction is, in principle, deterministic, it is also data-dependent; the dynamic nature of task allocation demands that no a priori knowledge of the rate of task completion be required. This type of concurrency is called dianomic.

    Finally, an architecture is described which will support a very high level of algorithmic concurrency. The programs which make efficient use of such a machine are designed not by considering flow of control, but by considering flow of data. Each atomic algorithmic unit is made as simple as possible, which results in the extensive distribution of a program over very many processing elements. Programs designed by considering only the optimum data exchange routes are said to exhibit systolic concurrency.

    Often neglected in the study of system design are those provisions necessary for practical implementations. It was intended to provide users with useful application programs in fulfilment of this study; the target group is electroacoustic composers, who use digital signal processing techniques in the context of musical composition. Some of the algorithms in use in this field are highly complex, often requiring a quantity of processing for each sample which exceeds that currently available even from very powerful computers. Consequently, applications tend to operate not in 'real-time' (where the output of a system responds to its input apparently instantaneously), but by the manipulation of sounds recorded digitally on a mass storage device.

    The first two sections adopt existing, public-domain software, and seek to increase its speed of execution significantly by parallel techniques, with the minimum compromise of functionality and ease of use. Those chosen are the general-purpose direct synthesis program CSOUND, from M.I.T., and a stand-alone phase vocoder system from the C.D.P. In each case, the desired aim is achieved: to increase speed of execution by two orders of magnitude over the systems currently in use by composers. This requires substantial restructuring of the programs, and careful consideration of the best computer architectures on which they are to run concurrently.

    The third section examines the rationale behind the use of computers in music, and begins with the implementation of a sophisticated electronic musical instrument capable of a degree of expression at least equal to its acoustic counterparts. It seems that the flexible control of such an instrument demands a greater computing resource than the sound synthesis part. A machine has been constructed with the intention of enabling the 'gestural capture' of performance information in real time; the structure of this computer, which has one hundred and sixty high-performance microprocessors running in parallel, is expounded; and the systolic programming techniques required to take advantage of such an array are illustrated in the Occam programming language.
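
    As a loose modern analogue of the demand-driven ("dianomic") task-farm structure, the Python sketch below distributes blocks of sample computation across a pool of worker processes, which pull new blocks as they finish; the block size, the test signal and all names are assumptions for illustration, not the thesis implementation.

        from multiprocessing import Pool
        import numpy as np

        SR = 44100
        BLOCK = 4096  # samples per task (assumed unit of work)

        def render_block(args):
            """One slave task: synthesise a block of a sine wave at the given frequency."""
            index, freq = args
            n = np.arange(index * BLOCK, (index + 1) * BLOCK)
            return index, np.sin(2 * np.pi * freq * n / SR)

        if __name__ == "__main__":
            tasks = [(i, 440.0) for i in range(64)]          # roughly 6 s of audio in blocks
            with Pool() as pool:
                # imap_unordered hands out tasks on demand, like a master process
                # feeding whichever slave finishes first.
                results = dict(pool.imap_unordered(render_block, tasks))
            audio = np.concatenate([results[i] for i in range(len(tasks))])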

    Real-time sound synthesis on a multi-processor platform

    Real-time sound synthesis means that the calculation and output of each sound sample for a channel of audio information must be completed within a sample period. At the broadcasting-standard sampling rate of 32,000 Hz, the maximum period available is 31.25 μs. Such requirements demand a large amount of data processing power. An effective solution to this problem is a multi-processor platform: a parallel and distributed processing system.

    The suitability of the MIDI [Musical Instrument Digital Interface] standard, published in 1983, as a controller for real-time applications is examined. Many musicians have expressed doubts about the decade-old standard's suitability for real-time performance. These have been investigated by measuring timing in various musical gestures, and by comparing these measurements with the subjective characteristics of human perception.

    The implementation and optimisation of real-time additive synthesis programs on a multi-transputer network are described. A prototype 81-polyphonic-note organ configuration was implemented. By devising and deploying monitoring processes, the network's performance was measured and enhanced, leading to more efficient usage: an 88-note configuration. Since 88 simultaneous notes are rarely necessary in most performances, a scheduling program for dynamic note allocation was then introduced to achieve further efficiency gains. To reduce calculation redundancy still further, a multi-sampling-rate approach was applied as a further step towards optimal performance.

    The theories underlying sound granulation, as a means of constructing complex sounds from grains, and the real-time implementation of this technique are outlined. The idea of sound granulation is closely related to quantum-wave theory, "acoustic quanta". Despite its conceptual simplicity, the signal processing requirements set tough demands, providing a challenge for this audio synthesis engine. Three issues arising from the results of the implementations above are discussed: the efficiency of the applications implemented, provisions for new processors, and an optimal network architecture for sound synthesis.
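
    The per-sample budget quoted above, and the additive-synthesis workload that has to fit inside it, can be illustrated with a short sketch. The Python below is a back-of-envelope illustration only; the partial count and amplitude roll-off are assumed values, and the thesis implementation ran on transputer hardware rather than NumPy.

        import numpy as np

        SR = 32000                          # broadcast-standard sample rate (Hz)
        sample_period_us = 1e6 / SR         # 31.25 microseconds per sample
        print(f"per-sample budget: {sample_period_us:.2f} us")

        def additive_note(freq, n_partials=16, dur=0.5):
            """Sum of harmonically related sinusoids with 1/k amplitude roll-off."""
            t = np.arange(int(SR * dur)) / SR
            out = np.zeros_like(t)
            for k in range(1, n_partials + 1):
                out += np.sin(2 * np.pi * k * freq * t) / k
            return out / n_partials

        # In a polyphonic engine every active note must fit inside the same budget,
        # which is why dynamic note allocation across processors pays off.
        note = additive_note(440.0)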

    Sound mosaics: a graphical user interface for sound synthesis based on audio-visual associations.

    This thesis presents the design of a Graphical User Interface (GUI) for computer-based sound synthesis to support users in the externalisation of their musical ideas when interacting with the system in order to create and manipulate sound. The approach taken consisted of three research stages.

    The first stage was the formulation of a novel visualisation framework to display perceptual dimensions of sound in visual terms. This framework was based on the findings of existing related studies and a series of empirical investigations of the associations between auditory and visual percepts that we performed for the first time in the area of computer-based sound synthesis. The results of our empirical investigations suggested associations between the colour dimensions of brightness and saturation with the auditory dimensions of pitch and loudness respectively, as well as associations between the multidimensional percepts of visual texture and timbre.

    The second stage of the research involved the design and implementation of Sound Mosaics, a prototype GUI for sound synthesis based on direct manipulation of visual representations that make use of the visualisation framework developed in the first stage. We followed an iterative design approach that involved the design and evaluation of an initial Sound Mosaics prototype. The insights gained during this first iteration assisted us in revising various aspects of the original design and visualisation framework, leading to a revised implementation of Sound Mosaics.

    The final stage of this research involved an evaluation study of the revised Sound Mosaics prototype that comprised two controlled experiments. First, a comparison experiment with the widely used frequency-domain representations of sound indicated that visual representations created with Sound Mosaics were more comprehensible and intuitive. Comprehensibility was measured as the level of accuracy in a series of sound-image association tasks, while intuitiveness was related to subjects' response times and perceived levels of confidence. Second, we conducted a formative evaluation of Sound Mosaics, in which it was exposed to a number of users with and without musical backgrounds. Three usability factors were measured: effectiveness, efficiency, and subjective satisfaction. Sound Mosaics was demonstrated to perform satisfactorily on all three factors for music subjects, although non-music subjects yielded less satisfactory results, which can be primarily attributed to their unfamiliarity with the task of sound synthesis. Overall, our research has set the necessary groundwork for empirically derived and validated associations between auditory and visual dimensions that can be used in the design of cognitively useful GUIs for computer-based sound synthesis and related areas.
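
    The reported brightness-to-pitch and saturation-to-loudness associations suggest a simple parameter mapping, sketched below in Python; the value ranges and the exponential/linear scalings are assumptions made for illustration and are not taken from the actual Sound Mosaics design.

        def colour_to_sound(brightness, saturation):
            """Map normalised colour dimensions (0..1) to synthesis parameters."""
            low_hz, high_hz = 110.0, 1760.0                       # assumed pitch range (A2..A6)
            pitch_hz = low_hz * (high_hz / low_hz) ** brightness  # brighter colour -> higher pitch
            loudness_db = -60.0 + 60.0 * saturation               # more saturated -> louder
            return pitch_hz, loudness_db

        # A bright, saturated colour patch maps to a high, loud tone.
        print(colour_to_sound(brightness=0.9, saturation=0.8))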