3,732 research outputs found

    3D Time-Based Aural Data Representation Using D4 Library’s Layer Based Amplitude Panning Algorithm

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D4 library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and support a broad array of configurations, with a particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through sound in HDLA scenarios. The ensuing D4 ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data and offers unique opportunities for furthering research in spatial data audification and sonification, as well as in transportable and scalable spatial media creation and production.
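
    As a hedged illustration of the general idea behind layer-based amplitude panning (a sketch only, not the D4 library's actual implementation), the following Python example crossfades a source between the two loudspeaker layers that bracket its target elevation using a constant-power law. The layer elevations and function names are assumptions made for this example.

        import numpy as np

        def layer_gains(elevation, layer_elevations):
            """Constant-power crossfade between the two loudspeaker layers
            that bracket the target elevation (a sketch of the general
            layer-based panning idea, not the D4 algorithm itself)."""
            layers = np.asarray(layer_elevations, dtype=float)
            if elevation <= layers[0]:
                return {0: 1.0}
            if elevation >= layers[-1]:
                return {len(layers) - 1: 1.0}
            upper = int(np.searchsorted(layers, elevation))
            lower = upper - 1
            frac = (elevation - layers[lower]) / (layers[upper] - layers[lower])
            # The sine/cosine law keeps the summed power constant while fading.
            return {lower: float(np.cos(frac * np.pi / 2)),
                    upper: float(np.sin(frac * np.pi / 2))}

        # A source at 25 degrees elevation in a three-layer array (0, 40, 90 degrees)
        print(layer_gains(25.0, [0.0, 40.0, 90.0]))

    Within each layer, the resulting gain would then scale a conventional horizontal pan across that layer's loudspeakers.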

    Virtual Audio - Three-Dimensional Audio in Virtual Environments

    Three-dimensional interactive audio has a variety of potential uses in human-machine interfaces. After lagging seriously behind the visual components, the importance of sound is now becoming increasingly accepted. This paper mainly discusses the background and techniques needed to implement three-dimensional audio in computer interfaces. A case study of a system for three-dimensional audio, implemented by the author, is described in detail. The audio system was also integrated with a virtual reality system, and conclusions from user tests and from use of the audio system are presented along with proposals for future work at the end of the paper. The thesis begins with a definition of three-dimensional audio and a survey of the human auditory system, giving the reader the knowledge needed to understand what three-dimensional audio is and how human auditory perception works.
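
    Since the survey hinges on how human auditory perception localises sound, a short worked example of one primary localisation cue may help: the interaural time difference (ITD) predicted by Woodworth's spherical-head formula, ITD = (a/c)(theta + sin(theta)). The head radius below is an assumed average, not a value from the paper.

        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s
        HEAD_RADIUS = 0.0875    # m; an assumed average head radius

        def woodworth_itd(azimuth_deg):
            """Interaural time difference for a rigid spherical head
            (Woodworth's formula); 0 degrees is straight ahead, and the
            formula applies for azimuths from 0 to 90 degrees."""
            theta = np.radians(azimuth_deg)
            return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

        for az in (0, 30, 60, 90):
            print(f"{az:3d} deg -> ITD = {woodworth_itd(az) * 1e6:6.1f} microseconds")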

    Lost Oscillations: Exploring a City’s Space and Time With an Interactive Auditory Art Installation

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). Lost Oscillations is a spatio-temporal sound art installation that allows users to explore the past and present of a city's soundscape. Participants are positioned in the center of an octophonic speaker array; situated in the middle of the array is a touch-sensitive user interface. The interface is a stylized map of Christchurch, New Zealand, with electrodes placed throughout. Upon touching an electrode, one of many sound recordings made at the electrode's real-world location is chosen and played; users must stay in contact with the electrode for the sound to continue playing, requiring commitment from users as they explore the soundscape. The recordings were chosen to represent Christchurch's development throughout its history, allowing participants to explore the evolution of the city from the early 20th century through to its post-earthquake reconstruction. This paper discusses the motivations for Lost Oscillations before presenting the installation's design, development, and presentation.
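
    As a rough sketch of the interaction logic described above, with hypothetical electrode identifiers and placeholder file names (the installation's actual mapping is not given in the abstract):

        import random

        # Hypothetical mapping from map electrodes to recordings made at the
        # corresponding real-world locations; file names are placeholders.
        RECORDINGS = {
            "cathedral_square": ["cs_1910.wav", "cs_1974.wav", "cs_2012.wav"],
            "new_brighton_pier": ["nb_1935.wav", "nb_2005.wav"],
        }

        active = {}  # electrode id -> file currently playing

        def on_touch(electrode_id):
            """On contact, choose one of that location's recordings at random."""
            active[electrode_id] = random.choice(RECORDINGS[electrode_id])
            print(f"playing {active[electrode_id]}")

        def on_release(electrode_id):
            """Playback stops when contact is lost, so sustained touch is
            required to keep exploring the soundscape."""
            print(f"stopping {active.pop(electrode_id)}")

        on_touch("cathedral_square")
        on_release("cathedral_square")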

    The Effect of Interchannel Time Difference on Localisation in Vertical Stereophony

    Listening tests were conducted to analyse the localisation of band-limited stimuli in vertical stereophony. The test stimuli were seven octave bands of pink noise, with centre frequencies ranging from 125 Hz to 8000 Hz, as well as broadband pink noise. Stimuli were presented from vertically arranged loudspeakers either monophonically or as vertical phantom images, created with the upper loudspeaker delayed with respect to the lower by 0, 0.5, 1, 5, and 10 ms (i.e. interchannel time difference). The experimental data showed that localisation under these conditions is generally governed by the so-called "pitch-height" effect, with high-frequency stimuli localised significantly higher than low-frequency stimuli across all conditions. The effect of interchannel time difference on localisation judgments was found to be significant for both the 1000-4000 Hz octave bands and the broadband pink noise; it is suggested that this was related to the effects of comb filtering. Additionally, no evidence could be found to support the existence of the precedence effect in vertical stereophony.
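
    To make the comb-filtering explanation concrete: summing a signal with a copy of itself delayed by tau seconds produces spectral notches at f = (2k + 1) / (2 * tau). The sketch below (an illustration, not the authors' analysis code) prints where those notches fall for the non-zero delays tested; the 0.5 ms and 1 ms delays place notches throughout the 1000-4000 Hz region, consistent with the bands where the effect was significant.

        def comb_notches(delay_ms, f_max=8000.0):
            """Notch frequencies of the comb filter formed when a signal is
            summed with a copy delayed by delay_ms milliseconds."""
            tau = delay_ms / 1000.0
            notches, k = [], 0
            while (2 * k + 1) / (2 * tau) <= f_max:
                notches.append((2 * k + 1) / (2 * tau))
                k += 1
            return notches

        for d in (0.5, 1.0, 5.0, 10.0):
            print(f"{d:4.1f} ms: first notches {comb_notches(d)[:4]} Hz")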

    Using immersive audio and vibration to enhance remote diagnosis of mechanical failure in uncrewed vessels.

    There is increasing interest in the maritime industry in the potential use of uncrewed vessels to improve the efficiency and safety of maritime operations. This raises a number of questions relating to the maintenance and repair of mechanical systems, in particular critical propulsion systems, whose failure could endanger the vessel. While control data is commonly monitored remotely, engineers on board ship also employ a wide variety of sensory feedback, such as sound and vibration, to diagnose the condition of systems, and these cues are often not replicated in remote monitoring. To assess the potential for enhancing remote monitoring and diagnosis, this project simulated an engine room (ER) based on a real vessel in Unreal Engine 4 for the HTC Vive VR headset. Audio was recorded from the vessel, with mechanical faults synthesized to create a range of simulated failures. To simulate operational requirements, the system was remotely fed data from an external server. The system allowed users to view normal control room data, listen to the overall sound of the space presented spatially over loudspeakers, isolate the sound of particular machinery components, and feel the vibration of machinery through a body-worn vibration transducer. Users could scroll through a 10-hour time history of system performance, including audio, vibration, and data snapshots at hourly intervals. Seven experienced marine engineers were asked to assess several scenarios for potential faults in different elements of the ER. They were assessed both quantitatively, on correct fault identification, and qualitatively, on their perception of the system's usability. Users were able to diagnose simulated mechanical failures with a high degree of accuracy, mainly utilising the audio and vibration stimuli, and reported specifically that the immersive audio and vibration improved realism and increased their ability to diagnose system failures from a remote location.
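
    As a rough sketch of how such an hourly time history might be organised for scrubbing (field and file names here are illustrative assumptions, not the project's actual data model):

        from dataclasses import dataclass

        @dataclass
        class Snapshot:
            """One hourly snapshot of the simulated engine room's state."""
            hour: int            # position in the 10-hour history
            control_data: dict   # e.g. {"rpm": 1450, "oil_pressure_bar": 3.2}
            audio_file: str      # spatial recording of the space
            vibration_file: str  # signal for the body-worn transducer

        history = [
            Snapshot(h, {"rpm": 1450, "oil_pressure_bar": 3.2},
                     f"audio_h{h:02d}.wav", f"vib_h{h:02d}.dat")
            for h in range(10)
        ]

        def scrub_to(hour):
            """Scrolling through the time history returns that hour's snapshot."""
            return history[hour]

        print(scrub_to(7).audio_file)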

    Sonic interaction with a virtual orchestra of factory machinery

    This paper presents an immersive application in which users receive audio and visual feedback on their interactions with a virtual environment. In this application, users play the part of conductors of an orchestra of factory machines: each of their actions on the interaction devices triggers a paired visual and audio response. Audio stimuli were spatialized around the listener. The application was exhibited during the 2013 Science and Music day and was designed for use in a large immersive system with head tracking, shutter glasses, and a 10.2 loudspeaker configuration. Comment: Sonic Interaction for Virtual Environments, Minneapolis, United States (2014).
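
    A minimal sketch of the action-to-response pairing described above, with hypothetical machine names and placeholder assets (the exhibited system's actual mapping is not given):

        # Each interaction-device event triggers a paired visual and audio
        # response, tagged with an azimuth so the sound can be spatialized
        # around the listener.
        MACHINES = {
            "press":    {"azimuth_deg": 45,  "sound": "press_hit.wav", "visual": "press_anim"},
            "conveyor": {"azimuth_deg": 180, "sound": "belt_loop.wav", "visual": "belt_anim"},
        }

        def play_spatialized(sound, azimuth_deg):
            print(f"{sound} panned to {azimuth_deg} degrees around the listener")

        def show(visual):
            print(f"showing {visual}")

        def trigger(machine):
            m = MACHINES[machine]
            play_spatialized(m["sound"], m["azimuth_deg"])
            show(m["visual"])

        trigger("press")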

    SMART-I²: A Spatial Multi-user Audio-visual Real Time Interactive Interface

    The SMART-I² aims at creating a precise and coherent virtual environment by providing users with accurate audio and visual localization cues. It is known that Wave Field Synthesis for audio rendering and Tracked Stereoscopy for visual rendering each permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches through the use of a large Multi-Actuator Panel serving both as a loudspeaker array and as a projection screen, considerably reducing audio-visual incoherence. The system's performance has been confirmed by an objective validation of the audio interface and a perceptual evaluation of the audio-visual rendering.
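
    As a simplified, hedged illustration of the Wave Field Synthesis idea mentioned above: each loudspeaker reproduces the source signal with a delay and attenuation determined by its distance from the virtual source. Real WFS driving functions involve additional filtering and geometry terms, and the array layout below is assumed for the example.

        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s

        def wfs_delays_and_gains(source_xy, speaker_xys):
            """Per-loudspeaker delay and 1/sqrt(r) amplitude weight for a
            virtual point source behind the array (a simplified sketch)."""
            src = np.asarray(source_xy, dtype=float)
            spk = np.asarray(speaker_xys, dtype=float)
            r = np.linalg.norm(spk - src, axis=1)  # source-to-speaker distances
            delays = r / SPEED_OF_SOUND            # wavefront reaches farther speakers later
            gains = 1.0 / np.sqrt(r)               # simple distance attenuation
            return delays, gains

        # Virtual source 2 m behind a linear array of 8 speakers spaced 0.5 m apart
        speakers = [(x * 0.5, 0.0) for x in range(8)]
        delays, gains = wfs_delays_and_gains((1.75, -2.0), speakers)
        print(np.round(delays * 1000, 2), "ms")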