15 research outputs found

    Guitars with Ambisonic Spatial Performance (GASP) An immersive guitar system

    The GASP project investigates the design and realisation of an Immersive Guitar System. It brings together a range of sound processing and spatialising technologies and applies them to a specific musical instrument – the Electric Guitar. GASP is an ongoing innovative audio project, fusing the musical with the technical, combining the processing of each string’s output (which we call timbralisation) with spatial sound. It is also an artistic musical project, where space becomes a performance parameter, providing new experimental immersive sound production techniques for the guitarist and music producer. Several ways of reimagining the electric guitar as an immersive sounding instrument have been considered, the primary method using Ambisonics. In addition, some complementary performance and production techniques have emerged from the use of divided pickups, supporting both immersive live performance and studio post-production. GASP Live offers performers and audiences new real-time sonic-spatial perspectives, where the guitarist or a Live GASP producer can have real-time control of timbral, spatial, and other performance features, such as timbral crossfading, switching of split-timbres across strings, spatial movement where Spatial Patterns may be selected and modulated, control of Spatial Tempo, and real-time performance re-tuning. For GASP recording and post-production, individual string note patterns may be visualised in the Reaper DAW, from which analyses and judgements can be made to inform post-production decisions for timbralisation and spatialisation. An appreciation of auditory grouping and perceptual streaming (Bregman, 1994) has informed GASP production ideas. For performance monitoring or recorded playback, the immersive audio would typically be heard over a circular array of loudspeakers, or over headphones with head-tracked binaural reproduction.
This paper discusses the design of the system and its elements, investigates other applications of divided pickups, namely GASP’s Guitarpeggiator, and reflects on productions made so far
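The Ambisonic approach named above amounts to encoding each (timbralised) string signal into directional B-format channels. A minimal sketch of standard first-order encoding in the classic Furse-Malham convention; the per-string azimuths are illustrative assumptions, not GASP's actual routing:

```python
import math

def encode_fo_ambisonics(sample, azimuth_deg, elevation_deg=0.0):
    """Encode one mono sample to first-order B-format (W, X, Y, Z).

    W carries the signal scaled by 1/sqrt(2) (omnidirectional);
    X, Y, Z weight it by the direction cosines of the source.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                  # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)   # front-back
    y = sample * math.sin(az) * math.cos(el)   # left-right
    z = sample * math.sin(el)                  # up-down
    return w, x, y, z

# e.g. six strings could each be panned to a fixed azimuth on a circle:
string_azimuths = [i * 60.0 for i in range(6)]
frames = [encode_fo_ambisonics(1.0, az) for az in string_azimuths]
```

A Spatial Pattern, in these terms, is just a trajectory of `azimuth_deg` values over time; modulating it per string yields the moving-source effects described above.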

    Music in Virtual Space: Theories and Techniques for Sound Spatialization and Virtual Reality-Based Stage Performance

    This research explores virtual reality as a medium for live concert performance. I have realized compositions in which the individual performing on stage uses a VR head-mounted display, complemented by other performance controllers, to explore a composed virtual space. Movements and objects within the space are used to influence and control sound spatialization and diffusion, musical form, and sonic content. Audience members observe this in real time, watching the performer’s journey through the virtual space on a screen while listening to spatialized audio on loudspeakers variable in number and position. The major artistic challenge I will explore through this activity is the relationship between virtual space and musical form. I will also explore and document the technical challenges of this activity, resulting in a shareable software tool called the Multi-source Ambisonic Spatialization Interface (MASI), which creates a bridge between VR technologies and associated software, ambisonic spatialization techniques, sound synthesis, and audio playback and effects, and establishes a unique workflow for working with sound in virtual space

    Audio for Virtual, Augmented and Mixed Realities: Proceedings of ICSA 2019 ; 5th International Conference on Spatial Audio ; September 26th to 28th, 2019, Ilmenau, Germany

    The ICSA 2019 focuses on a multidisciplinary bringing together of developers, scientists, users, and content creators of and for spatial audio systems and services. A special focus is on audio for so-called virtual, augmented, and mixed realities. The fields of ICSA 2019 are:
    - Development and scientific investigation of technical systems and services for spatial audio recording, processing and reproduction
    - Creation of content for reproduction via spatial audio systems and services
    - Use and application of spatial audio systems and content presentation services
    - Media impact of content and spatial audio systems and services from the point of view of media science.
    The ICSA 2019 is organized by VDT and TU Ilmenau with support of the Fraunhofer Institute for Digital Media Technology IDMT

    A distributed approach to surround sound production

    The requirement for multi-channel surround sound in audio production applications is growing rapidly. Audio processing in these applications can be costly, particularly in multi-channel systems. A distributed approach is proposed for the development of a real-time spatialization system for surround sound music production, using Ambisonic surround sound methods. The latency in the system is analyzed, with a focus on the audio processing and network delays, in order to ascertain the feasibility of an enhanced, distributed real-time spatialization system
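The feasibility question above reduces to summing per-stage delays against a real-time budget. A rough sketch of that accounting; all figures are illustrative assumptions, not measured values from the paper:

```python
def total_latency_ms(buffer_frames, sample_rate_hz, network_ms, dsp_ms):
    """Sum the main delay contributions in a distributed audio chain.

    Audio I/O latency comes from buffering (frames / sample rate);
    network transport and DSP delays are added on top.
    """
    buffer_ms = 1000.0 * buffer_frames / sample_rate_hz
    return buffer_ms + network_ms + dsp_ms

# e.g. a 256-frame buffer at 48 kHz plus 2 ms of LAN transport and
# 3 ms of spatialization DSP on a remote node:
latency = total_latency_ms(256, 48000, network_ms=2.0, dsp_ms=3.0)
```

With these assumed figures the total is roughly 10 ms, comfortably inside the latency commonly tolerated for live monitoring; the analysis in the paper itself is what establishes the real numbers.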

    Time, Space, Memory: A portfolio of acousmatic compositions

    This portfolio comprises a collection of acousmatic works which investigate the role of source bonding in music – the tendency of listeners to relate sounds to their real-world sources and the signifying implication of such a link – with a particular focus on how spatial design can contribute to source bonding in the music’s perception as a holistic spatio-sonic entity. A number of compositional strategies, multichannel formats and spatial audio technologies are investigated, with their merits assessed based on their suitability for shaping the qualities of musical space explored. The discussion in this commentary will show how these holistic spaces can have similar qualities of perceived ‘reality’ and ‘abstraction’ to the individual sounds, and how this is investigated in the musical works. I shall also show how the contrasting environmental qualities of these spaces became a source of inspiration for structuring the development of my music, and how they might evoke subsequent meaning in their experience based on the listener’s understanding of the spatial source bonds

    Simulation and analysis of spatial audio reproduction and listening area effects

    Loudspeaker-based spatial audio systems are often designed to create an auditory event or scene for a listener positioned in the optimal listening position. However, in real-world domestic listening environments, listeners can be distributed across the listening area. Any translational change from the central listening position will introduce artefacts which can be challenging to evaluate perceptually. Simulation of a loudspeaker system using non-individualised dynamic binaural synthesis is one solution to this problem. However, the validity of using such systems is not well proven. This thesis measures the limitations of using a non-individualised, dynamic binaural synthesis system to simulate the perception of loudspeaker-based panning methods across the listening area. The binaural simulation system was designed and verified in collaboration with BBC Research and Development. The equivalence of localisation errors caused by loudspeaker-based panning methods between in situ listening and binaural simulation was measured; localisation errors were found to be equivalent within a +/-7 degree boundary in 75% of the spatial audio reproduction systems tested. Results were then compared to a computational localisation model which was adapted to utilise head rotations. The equivalence of human acuity to sound colouration between in situ listening and non-individualised binaural simulation was measured using colouration detection thresholds from five directions. Thresholds were shown to be equivalent within a +/-4 dB equivalence boundary, supporting the use of the system for simulating sound colourations caused by loudspeaker-based panning methods. The binaural system was finally applied to measure the perception of multi-loudspeaker-induced colouration artefacts across the listening area. It was found that the central listening position had the lowest perceived colouration.
It is also shown that the variation in perceived colouration across the listening area is larger for reverberant reproduction conditions
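The +/-7 degree and +/-4 dB boundaries above amount to asking whether the difference between in situ and simulated responses stays inside a fixed bound. A toy illustration of that per-system check; the data are invented, and the thesis's actual procedure would use statistical equivalence testing on interval estimates rather than raw paired differences:

```python
def within_equivalence(in_situ, simulated, bound):
    """Return True if every paired difference lies inside +/-bound."""
    return all(abs(a - b) <= bound for a, b in zip(in_situ, simulated))

# localisation errors (degrees) for one reproduction system, invented data:
in_situ_err = [3.0, -2.5, 5.0, 1.0]
binaural_err = [4.5, -1.0, 9.0, 0.5]

# checked against the +/-7 degree boundary used in the thesis:
equivalent = within_equivalence(in_situ_err, binaural_err, bound=7.0)
```

Running the same check over every tested reproduction system gives the kind of proportion reported above (equivalence in 75% of systems).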

    Extended Abstracts

    Presented at the 21st International Conference on Auditory Display (ICAD 2015), July 6-10, 2015, Graz, Styria, Austria. Mark Ballora. “Two examples of sonification for viewer engagement: Hurricanes and squirrel hibernation cycles” / Stephen Barrass. “Diagnostic Singing Bowls” / Natasha Barrett, Kristian Nymoen. “Investigations in coarticulated performance gestures using interactive parameter-mapping 3D sonification” / Lapo Boschi, Arthur Paté, Benjamin Holtzman, Jean-Loïc le Carrou. “Can auditory display help us categorize seismic signals?” / Cédric Camier, François-Xavier Féron, Julien Boissinot, Catherine Guastavino. “Tracking moving sounds: Perception of spatial figures” / Coralie Diatkine, Stéphanie Bertet, Miguel Ortiz. “Towards the holistic spatialization of multiple sound sources in 3D, implementation using ambisonics to binaural technique” / S. Maryam FakhrHosseini, Paul Kirby, Myounghoon Jeon. “Regulating Drivers’ Aggressiveness by Sonifying Emotional Data” / Wolfgang Hauer, Katharina Vogt. “Sonification of a streaming-server logfile” / Thomas Hermann, Tobias Hildebrandt, Patrick Langeslag, Stefanie Rinderle-Ma. “Optimizing aesthetics and precision in sonification for peripheral process-monitoring” / Minna Huotilainen, Matti Gröhn, Iikka Yli-Kyyny, Jussi Virkkala, Tiina Paunio. “Sleep Enhancement by Sound Stimulation” / Steven Landry, Jayde Croschere, Myounghoon Jeon. “Subjective Assessment of In-Vehicle Auditory Warnings for Rail Grade Crossings” / Rick McIlraith, Paul Walton, Jude Brereton. “The Spatialised Sonification of Drug-Enzyme Interactions” / George Mihalas, Minodora Andor, Sorin Paralescu, Anca Tudor, Adrian Neagu, Lucian Popescu, Antoanela Naaji. “Adding Sound to Medical Data Representation” / Rainer Mittmannsgruber, Katharina Vogt. “Auditory assistance for timing presentations” / Joseph W. Newbold, Andy Hunt, Jude Brereton. “Chemical Spectral Analysis through Sonification” / S. Camille Peres, Daniel Verona, Paul Ritchey. “The Effects of Various Parameter Combinations in Parameter-Mapping Sonifications: A Pilot Study” / Eva Sjuve. “Metopia: Experiencing Complex Environmental Data Through Sound” / Benjamin Stahl, Katharina Vogt. “The Effect of Audiovisual Congruency on Short-Term Memory of Serial Spatial Stimuli: A Pilot Test” / David Worrall. “Realtime sonification and visualisation of network metadata (The NetSon Project)” / Bernhard Zeller, Katharina Vogt. “Auditory graph evolution by the example of spurious correlations”.
The compiled collection of extended abstracts included in the ICAD 2015 Proceedings. Extended abstracts include, but are not limited to, late-breaking results, works in early stages of progress, novel methodologies, unique or controversial theoretical positions, and discussions of unsuccessful research or null findings

    Sonic Choreosophia: a cross-disciplinary investigation on sound and movement practices

    This thesis is the account of cross-disciplinary research that explores spatial audio experiences in multimodal contexts. The practice of arranging dynamic modifications of the spatial attributes of sound to create impressions of movement through sound has been applied to dance choreography and theatre. Using wave field synthesis and ambisonics technologies for spatial audio playback, two projects have been created: Stranded (2013), a joint choreography for three dancers and sonic movement in collaboration with choreographer Jalianne Li, and I Hear You See Me (2014), an audiovisual installation featuring participatory theatre, sonic movement, and motion graphics, in collaboration with theatre artist Silvia Mercuriali and visual artist Simon Wilkinson. These works are the outcome of a complex collaborative exchange between the author and the mentioned artists, and of a comparison at multiple levels (aesthetic, technical, cultural) between the different disciplines involved; they propose alternative reflections on spatial audio composition. For example, the choreographic ideas of Li, the aesthetics and movement studies of Rudolf Laban, and the works and writing of choreographers Mary Wigman, Merce Cunningham and Pina Bausch have all been used to evaluate the kinetic power of sonic movement, its strengths measured against the clarity and immediacy of a dancing body. The participatory strategies of Mercuriali’s theatre, the composite works of Len Lye, Oskar Fischinger’s audiovisual experiments, and historical and contemporary examples from kinetic and installation art have all helped to bring forward a further reflection on a shift in the function of sound, from the essence of a composition to an instrument for realising a kinetic idea.
Highlighting the necessity of a multimodal context when using spatial audio, but limiting the idea of a Sonic Choreosophia to a simple suggestion, this thesis thus documents a novel approach of using sound to create movement per se, and its potential for further development