
    Barry Truax Riverrun (1986/2004), a case study from the TaCEM project, exploring new approaches to techniques of analysis and re-synthesis in the study of concert electroacoustic works

    At last year’s EMS in Lisbon we introduced the TaCEM project (Technology and Creativity in Electroacoustic Music), a 30-month project funded by the UK’s Arts and Humanities Research Council, and demonstrated the generic TIAALS software being produced as part of this project. This year we present an update on the project, focusing particularly on the first of our case studies, Barry Truax’s Riverrun. Eight works have been selected for the project, taking into account criteria such as historical context, the nature of the synthesis techniques employed, and the aesthetics that have underpinned their realisation. Key considerations have included the accessibility of the technical resources and composing materials used in their production, and opportunities to pursue particular lines of enquiry with the composer concerned. In selecting the eight works for detailed study, a further consideration has been the extent to which the composers explored techniques that were already available at the time in unique and distinctive ways, or alternatively developed entirely new methods of synthesis in pursuit of their creative goals. Barry Truax’s pioneering work in developing techniques of granular synthesis assigns his achievements almost exclusively to the latter classification, and the composition of Riverrun (1986/2004) is a landmark achievement in this regard.
Truax’s composing environment evolved from his early study of interactive real-time synthesis techniques at the Institute of Sonology, Utrecht (1971-73), where he explored the possibilities of Poisson-ordered distributions in the generation of microsound, to the emergence of entirely granular techniques at Simon Fraser University, British Columbia, a decade later. This work culminated in the development of his program GSX, designed specifically for waveform-based granular synthesis and first used to compose Riverrun, and its later extension, GSAMX, which extended these granular techniques to the manipulation of previously sampled sound material. At the time of composition, conventional minicomputers still lacked the capacity to generate multiple voices of granulated sound material in real time, but for Truax the acquisition in 1982 of a high-speed bit-slice array processor, the DMX-1000, provided the enhanced processing power necessary for achieving such a goal. The unique characteristics of its special hardware and associated programming environment, managed in turn via a host PDP-11/23 computer, both empowered his creative objectives and materially shaped and influenced the ways in which they could be practically achieved. The significance of such causal relationships in the evolution of the electroacoustic music repertory has yet to be widely understood, and this study of Riverrun corroborates the importance of such a line of investigation. In this case it has been possible to carry out a detailed study of the original system, still maintained in working order by Truax, leading to a reconstruction of key elements of Riverrun using a Max-based simulation of GSX; the authenticity of the results has been assessed both subjectively, by direct aural comparison, and objectively, using measurement software.
Our presentation at this year’s EMS in Berlin included a demonstration of the software we have developed to enable readers to engage with Riverrun interactively, both by analysing the original recordings and by using our emulation of the GSX system to recreate passages of the work and manipulate the techniques employed in order to learn more about them. We also gave examples of other materials we have collected in relation to this case study, including videos of the composer himself working with the GSX system and discussing the composition of Riverrun.
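The granular approach at the heart of Riverrun can be illustrated in miniature. The following Python sketch is our own illustration, not Truax’s GSX code: it renders a stream of Hann-windowed sine grains whose onset times follow a Poisson process (exponential inter-onset intervals), echoing the Poisson-ordered distributions of microsound mentioned above. All parameter names and values are assumptions chosen for clarity.

```python
import math
import random

def granular_stream(duration_s, grain_rate_hz, grain_len_s,
                    freq_hz, sample_rate=44100, seed=1):
    """Render a mono buffer of sine grains whose onsets follow a
    Poisson process: inter-onset gaps are drawn from an exponential
    distribution, so onset counts per second are Poisson-distributed."""
    rng = random.Random(seed)
    n = int(duration_s * sample_rate)
    out = [0.0] * n
    grain_len = int(grain_len_s * sample_rate)
    t = 0.0
    while t < duration_s:
        start = int(t * sample_rate)
        for i in range(grain_len):
            if start + i >= n:
                break
            # A Hann window keeps each grain free of clicks.
            env = 0.5 * (1 - math.cos(2 * math.pi * i / grain_len))
            out[start + i] += env * math.sin(
                2 * math.pi * freq_hz * i / sample_rate)
        # Exponential gap to the next grain onset.
        t += rng.expovariate(grain_rate_hz)
    return out

# One second of 440 Hz grains, ~50 grains/s, 30 ms each.
buf = granular_stream(1.0, grain_rate_hz=50, grain_len_s=0.03, freq_hz=440)
```

Raising `grain_rate_hz` so that grains overlap densely produces the continuous, stream-like textures characteristic of granular synthesis.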

    Interfacing the Network: An Embedded Approach to Network Instrument Creation

    This paper discusses the design, construction, and development of a multi-site collaborative instrument, The Loop, developed by the JacksOn4 collective during 2009-10 and formally presented in Oslo at the arts.on.wires and NIME conferences in 2011. The development of this instrument is primarily a reaction to historical network performance that either attempts to present traditional acoustic practice in a distributed format or utilises the network as a conduit to shuttle acoustic and performance data amongst participant nodes. In both scenarios the network is an integral and indispensable part of the performance; however, the network is not perceived as an instrument per se. The Loop is an attempt to create a single, distributed hybrid instrument retaining traditionally acoustic interfaces and resonant bodies that are mediated by the network. Embedding the network into the body of the instrument raises many practical and theoretical questions, which are explored in this paper through reflection on the notion of the distributed instrument and the way in which its design shapes the behaviour of the participants (performers and audiences); the mediation of musical expression across networks; the bi-directional relationship between instrument and design; and how the instrument assists in the realisation of the creators’ compositional and artistic goals.

    A Framework for Evaluating Model-Driven Self-adaptive Software Systems

    In the last few years, Model-Driven Development (MDD), Component-Based Software Development (CBSD), and context-oriented software have become interesting alternatives for the design and construction of self-adaptive software systems. In general, the ultimate goal of these technologies is to reduce development costs and effort while improving the modularity, flexibility, adaptability, and reliability of software systems. An analysis of these technologies shows that all of them embrace the principle of separation of concerns, and their further integration is a key factor in obtaining high-quality and self-adaptable software systems. Each technology identifies different concerns and deals with them separately in order to specify the design of self-adaptive applications and, at the same time, support software with adaptability and context-awareness. This research studies the development methodologies that employ the principles of model-driven development in building self-adaptive software systems. To this aim, this article proposes an evaluation framework for analysing and evaluating the features of model-driven approaches and their ability to support software with self-adaptability and dependability in highly dynamic contextual environments. Such an evaluation framework can help software developers select a development methodology that suits their software requirements and reduces the effort of building self-adaptive software systems. This study highlights the major drawbacks of the model-driven approaches proposed in the related work, and emphasises the need to consider the volatile aspects of self-adaptive software in the analysis, design, and implementation phases of the development methodologies.
In addition, we argue that the development methodologies should leave the selection of modelling languages and modelling tools to the software developers.
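The separation-of-concerns principle underlying context-oriented self-adaptive software can be sketched minimally: the adaptation logic (which behaviour applies in which context) is kept apart from the base functionality, so behaviours can be swapped at runtime. The following Python toy is our own illustration, not taken from any of the surveyed methodologies; all names are hypothetical.

```python
class ContextAwareRenderer:
    """Toy self-adaptive component: the behaviour used by render()
    is selected by the current context, keeping adaptation logic
    separate from the component's base functionality."""

    def __init__(self):
        self._strategies = {}
        self.context = "default"

    def register(self, context, strategy):
        """Associate a behaviour (any callable) with a context name."""
        self._strategies[context] = strategy

    def render(self, data):
        # Unknown contexts fall back to the default strategy.
        strategy = self._strategies.get(self.context,
                                        self._strategies["default"])
        return strategy(data)

r = ContextAwareRenderer()
r.register("default", lambda d: d)            # full-fidelity behaviour
r.register("low_bandwidth", lambda d: d[:10]) # degraded behaviour
r.context = "low_bandwidth"                   # context change at runtime
result = r.render("x" * 100)
```

The component adapts simply by reassigning `context`; no rendering code changes, which is the modularity benefit the surveyed technologies aim for.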

    Immersive Composition for Sensory Rehabilitation: 3D Visualisation, Surround Sound, and Synthesised Music to Provoke Catharsis and Healing

    There is a wide range of sensory therapies using sound, music, and visual stimuli. Some focus on soothing or distracting stimuli, such as natural sounds or classical music used as an analgesic, while other approaches emphasize the active performance of producing music as therapy. This paper proposes an immersive multi-sensory Exposure Therapy for people suffering from anxiety disorders, based on a rich, detailed surround-soundscape. This soundscape is composed to include the users’ own idiosyncratic anxiety triggers as a form of habituation, and to provoke psychological catharsis as a non-verbal, visceral, and enveloping exposure. To accurately pinpoint the most effective sounds and to optimally compose the soundscape, we will monitor the participants’ physiological responses, such as electroencephalography, respiration, electromyography, and heart rate, during exposure. We hypothesize that such physiologically optimized sensory landscapes will aid the development of future immersive therapies for various psychological conditions. Sound is a major trigger of anxiety, and auditory hypersensitivity is an extremely problematic symptom. Exposure to stress-inducing sounds can free anxiety sufferers from entrenched avoidance behaviors, teaching physiological coping strategies and encouraging resolution of the psychological issues agitated by the sound.

    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers users or performers real-time control of multimedia events through their physical movements. It is intended to be a highly dynamic real-time performance tool, sensing and tracking activities and changes in order to provide interactive multimedia performances. Starting from a straightforward definition of the TDM framework, this paper reports on several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion- and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of the mapping strategies in their context. Plausible future directions, developments, and explorations with the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping physical and non-physical changes onto multimedia control events, are also discussed.
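A mapping strategy of the kind discussed above can be reduced to a small sketch: a sensor reading is clamped to its expected range, normalised, optionally shaped by a power curve for perceptual scaling, and rescaled onto a musical control range. This Python example is our own illustration of the general idea, not the TDM framework’s actual code; the hand-height-to-pitch example is hypothetical.

```python
def make_mapping(in_lo, in_hi, out_lo, out_hi, curve=1.0):
    """Build a function mapping a sensor range [in_lo, in_hi]
    onto a control range [out_lo, out_hi]."""
    def mapping(x):
        # Clamp out-of-range readings, normalise to 0..1,
        # shape with a power curve, then rescale.
        x = min(max(x, in_lo), in_hi)
        norm = (x - in_lo) / (in_hi - in_lo)
        return out_lo + (norm ** curve) * (out_hi - out_lo)
    return mapping

# Hypothetical example: map a tracked hand height (0-2 m)
# onto a MIDI pitch range (36-96).
height_to_pitch = make_mapping(0.0, 2.0, 36, 96)
```

A `curve` other than 1.0 gives non-linear response, which is often preferable for perceptual parameters such as loudness or pitch.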

    Assessing a Collaborative Online Environment for Music Composition

    The current pilot study tested the effectiveness of an e-learning environment built to enable students to compose music collaboratively. The participants interacted online by using synchronous and asynchronous resources to develop a project in which they composed a new music piece in collaboration. After the learning sessions, individual semi-structured interviews with the participants were conducted to analyze the participants’ perspectives regarding the e-learning environment’s functionality, the resources of the e-learning platform, and their overall experience with the e-learning process. Qualitative analyses of forum discussions with respect to metacognitive dimensions, and of the semi-structured interview transcriptions, were performed. The findings showed that the participants successfully completed the composition task in the virtual environment, and that they demonstrated the use of metacognitive processes. Moreover, four themes were apparent in the semi-structured interview transcriptions: teamwork, the platform, face-to-face/online differences, and strengths/weaknesses. Overall, the participants exhibited an awareness of the potential of the online tools and the task performed. The results are discussed in consideration of metacognitive processes and the aspects that rendered the virtual activity effective for learning: the learning environment, the platform, the technological resources, the level of challenge, and the nature of the activity. The possible implications of the findings for research on online collaborative composition are also considered.

    The Original Beat: An Electronic Music Production System and Its Design

    The barrier to entry in electronic music production is high. It requires expensive, complicated software, extensive knowledge of music theory, and experience with sound generation. Digital Audio Workstations (DAWs) are the main tools used to piece together digital sounds and produce a complete song. While these DAWs are great for music professionals, they have a steep learning curve for beginners and must run natively on a user’s computer. For a novice, beginning to create music takes much more time, effort, and money than it should. We believe anyone who is interested in creating electronic music deserves a simple way to digitize their ideas and hear results. With this idea in mind, we created a web-based, co-creative system to allow beginners and professionals alike to easily create electronic digital music. We outline the requirements for such a system and detail its design and architecture. We go through the specifics of the system we implemented, covering the front-end, back-end, server, and generation algorithms. Finally, we review our development timeline, examine the challenges and risks that arose when building the system, and present future improvements.
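As a minimal illustration of what a co-creative generation algorithm might do, the following Python sketch completes a 16-step drum pattern around steps the user has already placed. It is a toy of our own devising, not the system’s actual algorithm; the function name and parameters are assumptions.

```python
import random

def suggest_pattern(user_steps, length=16, density=0.25, seed=7):
    """Complete a step-sequencer pattern co-creatively: keep every
    step the user placed, then fill the remaining steps
    probabilistically at the given density."""
    rng = random.Random(seed)  # fixed seed for reproducible suggestions
    pattern = [1 if i in user_steps else 0 for i in range(length)]
    for i in range(length):
        if pattern[i] == 0 and rng.random() < density:
            pattern[i] = 1
    return pattern

# The user places a four-on-the-floor kick; the system suggests the rest.
pattern = suggest_pattern({0, 4, 8, 12})
```

Varying `density` trades sparseness against busyness, giving the novice a one-knob way to steer the machine’s contribution.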

    Physically inspired interactive music machines: making contemporary composition accessible?

    Much of what we might call "high-art music" occupies the difficult end of listening for contemporary audiences. Concepts such as pitch, meter, and even musical instruments often have little to do with such music, where all sound is typically considered as possessing musical potential. As a result, such music can be challenging for educationalists, since students have few familiar pointers for discovering and understanding the gestures, relationships, and structures in these works. This paper describes ongoing projects at the University of Hertfordshire that adopt an approach of mapping interactions within visual spaces onto musical sound. These provide a causal explanation for the patterns and sequences heard, whilst incorporating web interoperability, thus enabling potential distance-learning applications. While so far these have mainly driven pitch-based events using MIDI or audio files, it is hoped to extend the ideas, using appropriate technology, into fully developed composition tools, aiding the teaching of both the appreciation/analysis and the composition of contemporary music.
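Mapping positions in a visual space onto pitch-based MIDI events, as described above, might look like the following Python sketch, in which vertical position selects a scale degree and horizontal position sets velocity. This is an illustrative mapping of our own, not the Hertfordshire implementation; the scale choice and ranges are assumptions.

```python
def position_to_midi(x, y, width, height, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Map a 2D position onto a MIDI (note, velocity) pair:
    vertical position picks a degree of a C-major scale
    (top = high pitch), horizontal position sets velocity."""
    octave_span = 4                       # four octaves upwards from C3
    degree_count = octave_span * len(scale)
    idx = int((1 - y / height) * (degree_count - 1))
    octave, degree = divmod(idx, len(scale))
    note = 48 + 12 * octave + scale[degree]  # MIDI 48 = C3
    velocity = int(1 + (x / width) * 126)    # left = soft, right = loud
    return note, velocity

# A point slightly above centre, horizontally in the middle.
note, vel = position_to_midi(400, 150, 800, 600)
```

Because motion through the space produces stepwise changes within a diatonic scale, causal relationships between gesture and sound remain audible, which is the pedagogical point of the mapping.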