
    A multimodal framework for interactive sonification and sound-based communication


    Interaction Design for Digital Musical Instruments

    The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician, and (3) a highly customisable multi-touch performance system that was designed in accordance with the model.

    Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:

    1. the accepted design conventions of the hardware in use;
    2. established musical systems, acoustic or digital;
    3. the physical configuration of the hardware devices and the grouping of controls that such configuration suggests.

    This thesis proposes an alternative way to approach the design of digital musical instrument behaviour: examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware. This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
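The abstract's idea of separating a sensor's "conversational ability" from its hardware body might be sketched as follows. This is a minimal illustration, not the thesis's actual software: the class names, sensor ranges, and the crossfade strategy are all assumptions made for the example.

```python
class ContinuousSensor:
    """Wrap any raw sensor reading as a normalised 0-1 signal, hiding
    the hardware body so only its communicative capacity remains."""
    def __init__(self, raw_max):
        self.raw_max = raw_max
        self.value = 0.0

    def update(self, raw):
        # Clamp and normalise, regardless of what device produced `raw`.
        self.value = min(max(raw / self.raw_max, 0.0), 1.0)

def crossfade(a, b, mix):
    """A generic interaction strategy: blend two sensor streams
    without knowing (or caring) which physical devices they came from."""
    return a.value * (1.0 - mix) + b.value * mix

# Two physically unrelated devices expose the same signal type...
touch_pressure = ContinuousSensor(raw_max=1023)  # e.g. a multi-touch surface
tilt = ContinuousSensor(raw_max=90)              # e.g. an accelerometer axis

touch_pressure.update(512)
tilt.update(45)

# ...and a generic strategy combines them into one musical parameter.
filter_cutoff = crossfade(touch_pressure, tilt, mix=0.5)
```

The point of the sketch is that the interaction layer is built from signal types and strategies, not from the physical grouping of controls on any one device.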

    Audiovisual granular synthesis: creating synergistic relationships between sound and image

    The aims of this research were to investigate how an audio processing technique known as granular synthesis can be translated to a visual processing equivalent, and to develop software that fuses audiovisual relationships for the creation of real-time audiovisual art. Two main research questions were posed: first, how can audio processing techniques such as granular synthesis be adapted and applied to influence new visual performance techniques? Second, how can computer software synergistically integrate audio and visuals to enable the real-time creation and performance of audiovisual art?

    The project at the centre of my research was the creation of a real-time audiovisual granular synthesis instrument named Kortex. The research followed a practice-based methodology and used an iterative performance cycle to evaluate and develop the Kortex prototype, including performances of successive iterations at a number of local, interstate and international events.

    Kortex facilitates the identification of shared characteristics between sound and image at the micro and macro levels. The micro level addresses individual audiovisual segments, or grains, while the macro level addresses post-processing effects applied to the stream of audiovisual grains. Audiovisual characteristics are paired together by the user at each level, enabling composition with both media simultaneously. This provides the audiovisual artist with a dynamic approach to the creation of new works.

    Creating relationships between image and sound is highly subjective, yet an artist may use a mathematical, metaphorical/intuitive or intrinsic approach to create a convincing correlation between the two media. The mathematical approach expresses the relationship between sound and image as an equation. Metaphorical/intuitive relationships are formed when the two media share similar emotional or perceptual characteristics, while intrinsic relationships occur when audio and visual media are synthesised from the same source.

    Performers need powerful control strategies to manipulate large collections of variables in real time. I found that pattern-generating modulation sources created overlapping phrases that evolved the behaviour of audiovisual relationships. Furthermore, saving interesting emergent aesthetics into banks of presets, along with the ability to slide from one preset to the next, facilitated powerful transformations during a performance.

    The project has contributed to the field of audiovisual art, specifically the performance work of DJs and VJs. Kortex provides a single audiovisual composition and performance environment that DJs and VJs can use for creative collaboration, and it has considerable potential for adoption by that community to assist in the production of tightly synchronised real-time audiovisual performances.
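The "mathematical" mapping approach described above, where an audiovisual relationship is written as an explicit equation, could be illustrated like this. The specific mappings (pitch to hue on a logarithmic scale, amplitude to brightness) are assumptions for the sake of the example, not Kortex's actual pairings.

```python
import math

def pitch_to_hue(freq_hz, low=55.0, high=1760.0):
    """Map a grain's pitch logarithmically onto the 0-360 degree hue circle,
    so each octave covers an equal arc (low/high bounds are illustrative)."""
    t = (math.log2(freq_hz) - math.log2(low)) / (math.log2(high) - math.log2(low))
    return max(0.0, min(1.0, t)) * 360.0

def amplitude_to_brightness(amp):
    """Map linear grain amplitude (0-1) directly to brightness (0-1)."""
    return max(0.0, min(1.0, amp))

# A 440 Hz grain at half amplitude yields one visual grain's colour:
hue = pitch_to_hue(440.0)
brightness = amplitude_to_brightness(0.5)
```

Each such equation pairs one audio grain parameter with one visual grain parameter; composing with both media simultaneously then amounts to driving both sides of the pairing from the same control data.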

    3D Composer: A Software for Micro-composition

    The aim of this compositional research project is to find new paradigms of expression and representation of musical information, supported by technology. This may further our understanding of how artistic intention materialises during the production of a musical work. A further aim is to create a software device which allows the user to generate, analyse and manipulate abstract musical information within a multi-dimensional environment.

    The main intent of this software and composition portfolio is to examine the process involved in the development of a compositional tool, to verify how transformations applied to the conceptualisation of musical abstraction affect the musical outcome, and to demonstrate how this transformational process can be useful in a creative context. This thesis offers a reflection upon various technological and conceptual aspects within a dynamic multimedia framework. The discussion situates the artistic work of a composer within the technological sphere, and investigates the role of technology and its influences during the creative process. Notions of space are relocated within the scope of a personal compositional direction in order to develop a new framework for musical creation, and the author establishes theoretical ramifications and suggests a definition for micro-composition.

    The main focus is the ability to establish a direct conceptual link between visual elements and their correlated musical output, ultimately leading to the design of software called 3D-Composer, a tool for the visualisation of musical information that assists composers in creating works within a new methodological and conceptual realm. Of particular importance is the ability to transform musical structures in three-dimensional space, based on the geometric properties of micro-composition. The compositions Six Electroacoustic Studies and Dada 2009 demonstrate the use of the software. The formalisation process was derived from a transposition of the influences of the early twentieth-century avant-garde to a contemporary digital studio environment, utilising new media and computer technologies for musical expression.
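The idea of transforming musical structures in three-dimensional space can be sketched in miniature: treat each note as a point, then let a geometric operation become a musical one. The representation below (time, pitch, velocity as coordinates; rotation in the time-pitch plane) is a hypothetical illustration of the general principle, not 3D-Composer's actual data model.

```python
import math

def rotate_pitch_time(notes, angle_deg):
    """Rotate each note's (time, pitch) pair about the origin;
    velocity is carried through unchanged.
    notes: list of (time, pitch, velocity) tuples."""
    a = math.radians(angle_deg)
    out = []
    for t, p, v in notes:
        out.append((t * math.cos(a) - p * math.sin(a),
                    t * math.sin(a) + p * math.cos(a),
                    v))
    return out

phrase = [(0.0, 60, 90), (1.0, 64, 80), (2.0, 67, 70)]  # a rising triad
# A 180-degree rotation geometrically inverts and reverses the phrase:
inverted = rotate_pitch_time(phrase, 180.0)
```

In a real system the rotated coordinates would be quantised back into a playable range; the sketch only shows how a spatial operation maps onto a musical transformation.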

    X Reality Networked Performance: Message Based Distributed Systems For Controlling And Presenting Multiple Realities

    X reality networked performances connect physical, fictional and computer-generated realities in a new world of performance, one that is without geographical bounds and that can include many physical locations, each with its own performers and audience members, within a single event. They explore a unique medium while drawing on historical and contemporary performing arts practices that normally occur within the confines of a single physical location.

    Such performances place a special set of requirements on the system that supports them: they need to access and integrate all the systems typically found in the physical place of performance (such as theatre lighting) with those that are unique to the medium, such as network technologies and environments for the delivery of computer-generated realities. Yet no suitable systems or frameworks have been developed to support them. Technologies are available (such as LoLA and UltraGrid) that support individual aspects, like audio/video streaming, but they do not address the wider requirements of controlling and synchronising, of integrating all these technologies into a system of systems for X reality networked performance.

    Therefore, this research investigates the creation of a systems framework whereby existing hardware and software components can be continuously integrated with bespoke components to provide a platform for the delivery of X reality networked performances. The methodological approach is informed by the author's previous experience in other fields of complex systems integration, including approaches employed in the design and integration of avionics systems. Specifically, it tests whether a systems integration approach to providing a technical platform for X reality networked performances, one that employs strongly defined interfaces and communication protocols and that is based on open and industry standards, delivers an elegant platform that can be characterised as deterministic, reliable, extendable, scalable, reconfigurable, testable and cost-effective.

    The platform for X reality networked performance has been developed iteratively, using the results of a framework investigation, and tested in four different performance projects over a period of 24 months, in ten different venues, across five countries. The research concludes that the enabling framework is well suited to the delivery of X reality networked performances, and that the approaches employed could equally be applied to the needs of other arts practitioners who rely on complex technical systems for the creation and delivery of their work.
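The "strongly defined interfaces and communication protocols" principle could look something like the sketch below: every subsystem (lighting, streaming, a game engine) speaks one versioned message schema, so components can be swapped, integrated and tested independently. The field names and schema are illustrative assumptions, not the protocol the thesis actually defines.

```python
import json

def make_message(target, command, args):
    """Serialise a control message in a fixed, versioned schema so every
    subsystem agrees on the wire format."""
    return json.dumps({"v": 1, "target": target, "command": command, "args": args})

def dispatch(raw, handlers):
    """Validate the protocol version and route the message to whichever
    handler is registered for its target subsystem."""
    msg = json.loads(raw)
    assert msg["v"] == 1, "unsupported protocol version"
    return handlers[msg["target"]](msg["command"], msg["args"])

# The sender never needs to know which concrete backend handles lighting:
handlers = {"lighting": lambda cmd, args: f"lighting:{cmd}@{args['level']}"}
raw = make_message("lighting", "fade", {"level": 0.75})
result = dispatch(raw, handlers)
```

Because the interface is the schema rather than any particular device, a bespoke component and an off-the-shelf one are interchangeable behind the same handler registration, which is what makes the platform testable and reconfigurable.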

    Scanning Spaces: Paradigms for Spatial Sonification and Synthesis

    In 1962 Karlheinz Stockhausen’s “Concept of Unity in Electronic Music” introduced a connection between the parameters of intensity, duration, pitch, and timbre using an accelerating pulse train. In 1973 John Chowning discovered that complex audio spectra could be synthesized by increasing vibrato rates past 20 Hz. In both cases the notion of acceleration to produce timbre was critical to discovery. Although both composers also utilized sound spatialization in their works, spatial parameters were not unified with their synthesis techniques.

    This dissertation examines software studies and multimedia works involving the use of spatial and visual data to produce complex sound spectra. The culmination of these experiments, Spatial Modulation Synthesis, is introduced as a novel, mathematical control paradigm for audio-visual synthesis, providing unified control of spatialization, timbre, and visual form using high-speed sound trajectories. The unique visual sonification and spatialization rendering paradigms of this dissertation necessitated the development of an original audio-sample-rate graphics rendering implementation, which, unlike typical multimedia frameworks, provides an exchange of audio-visual data without downsampling or interpolation.
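Chowning's observation, that pushing a vibrato rate past roughly 20 Hz turns a pitch modulation into new audible partials, is the basis of FM synthesis, and a minimal rendering of one such tone makes the mechanism concrete. The parameter names below are generic, not taken from the dissertation's software.

```python
import math

def fm_tone(carrier_hz, mod_hz, index, sr=44100, dur=0.01):
    """Render samples of y(t) = sin(2*pi*fc*t + index * sin(2*pi*fm*t)):
    a carrier whose phase is modulated by a second oscillator."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * carrier_hz * i / sr
                     + index * math.sin(2 * math.pi * mod_hz * i / sr))
            for i in range(n)]

# At mod_hz = 5 this is ordinary vibrato; at mod_hz = 220 the sidebands at
# carrier_hz +/- k * mod_hz land in the audible range and are heard as
# timbre rather than pitch wobble.
samples = fm_tone(carrier_hz=440.0, mod_hz=220.0, index=2.0)
```

Spatial Modulation Synthesis extends the same intuition from a pitch trajectory to a spatial one: a sound trajectory moved fast enough stops reading as motion and starts shaping the spectrum.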

    MAMI Tech Toolkit: Utilising Action Research to Develop a Technological Toolkit to Facilitate Access to Music-Making

    Music is essential to most of us: it can light up all areas of the brain, help develop communication skills, help to establish identity, and provide a unique path for expression. However, barriers to access or gaps in provision can restrict access to music-making and sound exploration for some people. Research has shown that technology can provide unique tools for accessing music-making, but that technology is underused by practitioners.

    This action research project details the design and development of a technological toolkit called MAMI (the Modular Accessible Musical Instrument technology toolkit) in conjunction with stakeholders from four research sites. Stakeholders included music therapists, teachers, community musicians, and children and young people. The overarching aims of the research were: to explore how technology was incorporated into practices of music creation and sound exploration; to explore the issues that stakeholders had with current music technology; to create novel musical tools that match criteria specified by stakeholders and address issues found in a literature review; to assess the effectiveness of these novel tools with a view to improving practices; and to navigate propagation of the practices, technologies, and methods used to allow for transferability into the wider ecology.

    Outcomes of the research include: a set of design considerations that contribute to knowledge around the design and practical use of technological tools for music-making in special educational needs settings; a series of methodological considerations to help future researchers and developers navigate the process of using action research to create new technological tools with stakeholders; and the MAMI Tech Toolkit, a suite of four bespoke hardware tools and accompanying software, as an embodiment of the themes that emerged from the cycles of action research, the design considerations, and a philosophical understanding of music creation that foregrounds it as a situated activity within a social context.
