
    Development of a soundscape simulator tool

    This paper discusses the development of an interactive soundscape simulator that enables users to manipulate a series of parameters, in order to investigate whether there is group correlation between factors such as source selection, positioning and level. The simulator is grounded in fieldwork and recordings carried out in London and Manchester. Using an enhanced form of soundwalking, respondents are led on a walk around an urban space focusing on the soundscape while answering questions in a semi-structured interview. The data collected are then used to inform the ecological validity of the simulator. The laboratory-based tests use simulations of spaces recorded in a series of urban locations, as well as an ‘idealised’ soundscape simulation featuring data from all recorded locations. The sound sources used are drawn from user-highlighted selections across all locations, based on preferences extracted from the soundwalk field data. Preliminary results show the simulator is effective in obtaining numerical data based on subjective choices, as well as qualitative data that provides insight into the reasoning behind respondents’ choices. This work forms part of the Positive Soundscape Project.

    EigenScape: A Database of Spatial Acoustic Scene Recordings

    The classification of acoustic scenes and events is an emerging area of research in the field of machine listening. Most of the research conducted so far uses spectral features extracted from monaural or stereophonic audio rather than spatial features extracted from multichannel recordings. This is partly due to the lack, thus far, of a substantial body of spatial recordings of acoustic scenes. This paper formally introduces EigenScape, a new database of fourth-order Ambisonic recordings of eight different acoustic scene classes. The potential applications of a spatial machine listening system are discussed before detailed information on the recording process and dataset is provided. A baseline spatial classification system using directional audio coding (DirAC) techniques is detailed, and results from this classifier are presented. The classifier is shown to give good overall scene classification accuracy across the dataset, with 7 of the 8 scenes classified with an accuracy greater than 60% and an 11% improvement in overall accuracy compared to using Mel-frequency cepstral coefficient (MFCC) features. Further analysis of the results suggests potential improvements to the classifier. It is concluded that the results validate the new database and show that spatial features can characterise acoustic scenes, and as such are worthy of further investigation.
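    The DirAC-based spatial classifier itself is not reproduced here, but the MFCC baseline the abstract compares against can be sketched in a few lines. The snippet below is a hypothetical illustration using librosa and scikit-learn; the feature settings, the Gaussian-mixture back end, and all function names are assumptions rather than the paper's actual pipeline.

        # Minimal sketch of an MFCC-based acoustic scene classifier, the kind of
        # spectral baseline the abstract compares against. Library choices and
        # parameters are illustrative assumptions, not the paper's pipeline.
        import numpy as np
        import librosa
        from sklearn.mixture import GaussianMixture

        def mfcc_features(path, n_mfcc=20):
            """Load a recording and return per-frame MFCC vectors (frames x n_mfcc)."""
            audio, sr = librosa.load(path, sr=None, mono=True)  # spectral baseline uses mono audio
            mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
            return mfcc.T

        def train_scene_models(files_by_class, n_components=8):
            """Fit one Gaussian mixture model per scene class on pooled MFCC frames."""
            models = {}
            for scene, paths in files_by_class.items():
                frames = np.vstack([mfcc_features(p) for p in paths])
                models[scene] = GaussianMixture(n_components=n_components).fit(frames)
            return models

        def classify(path, models):
            """Assign the scene whose model gives the highest average log-likelihood."""
            frames = mfcc_features(path)
            scores = {scene: gmm.score(frames) for scene, gmm in models.items()}
            return max(scores, key=scores.get)

    A spatial system in the spirit of the paper would replace the mono MFCC front end with direction-of-arrival and diffuseness statistics estimated from the Ambisonic channels.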

    In Car Audio

    This chapter presents implementations of advanced in-car audio applications. The system is composed of three main applications addressing the in-car listening and communication experience. Starting from a high-level description of the algorithms, several implementations at different levels of hardware abstraction are presented, along with empirical results on both the design process undertaken and the performance achieved.

    Three-Dimensional Acoustic Displays In A Museum Employing WFS (Wave Field Synthesis) And HOA (High Order Ambisonics)

    The paper describes the sound systems and the listening rooms installed in the new "museum of reproduced sound", currently being built in Parma in a restored ancient church. The museum is devoted to the exhibition of a large collection of antique radios and gramophones, but it will also explore the frontiers of modern methods for immersive surround reproduction: WFS and HOA. In the main hall, a large planar WFS loudspeaker array invites visitors into the world of sound reproduction, providing stunning effects and emotionally engaging sounds enveloping them from many directions. At the end of the exhibition path, a special HOA space demonstrates the recent developments in recording/reproduction methods that grew from the Ambisonics concept, capable of natural reproduction of sport events, live music and other immersive acoustical experiences; a binaural/transaural system is also available in this room. A second, larger listening room seating 30 is equipped with a horizontal WFS array covering the complete perimeter of the room. The paper describes the technology employed, the problems encountered due to the difficult acoustical conditions (the museum was formerly a church), and the novel software tools developed for the purpose on LINUX platforms.
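    As a point of reference for the HOA material mentioned above, the sketch below shows first-order Ambisonic (B-format) encoding of a mono source. It is an illustrative simplification only: the installation described uses higher orders and dedicated WFS/HOA decoders, and the sampling rate and function names here are assumptions.

        # Minimal sketch of first-order Ambisonic (B-format) encoding of a mono
        # source, to illustrate the representation behind HOA reproduction.
        import numpy as np

        def encode_first_order(signal, azimuth_deg, elevation_deg):
            """Encode a mono signal into traditional B-format channels (W, X, Y, Z)."""
            az = np.radians(azimuth_deg)
            el = np.radians(elevation_deg)
            w = signal * (1.0 / np.sqrt(2.0))      # omnidirectional component
            x = signal * np.cos(az) * np.cos(el)   # front-back figure-of-eight
            y = signal * np.sin(az) * np.cos(el)   # left-right figure-of-eight
            z = signal * np.sin(el)                # up-down figure-of-eight
            return np.stack([w, x, y, z])

        # Example: a 1 kHz tone placed 45 degrees to the left, on the horizontal plane.
        t = np.arange(48000) / 48000.0
        bformat = encode_first_order(np.sin(2 * np.pi * 1000 * t), azimuth_deg=45, elevation_deg=0)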

    Advanced automatic mixing tools for music

    This thesis presents research on several independent systems that, when combined, can generate an automatic sound mix from an unknown set of multi-channel inputs. The research explores the possibility of reproducing the mixing decisions of a skilled audio engineer with minimal or no human interaction. The research is restricted to non-time-varying mixes for large room acoustics. This research has applications in dynamic sound music concerts, remote mixing, recording and post-production, as well as live mixing for interactive scenes. Currently, automated mixers are capable of saving a set of static mix scenes that can be loaded for later use, but they lack the ability to adapt to a different room or to a different set of inputs; in other words, they cannot automatically make mixing decisions. The automatic mixer research presented here distinguishes between the engineering and the subjective contributions to a mix. It aims to automate the technical tasks of audio mixing while freeing the audio engineer to perform the fine-tuning involved in generating an aesthetically pleasing sound mix. Although the system mainly deals with the technical constraints involved in generating an audio mix, it takes advantage of common practices performed by sound engineers wherever possible. The system also makes use of inter-dependent channel information to control signal processing tasks while aiming to maintain system stability at all times. A working implementation of the system is described, and subjective evaluation comparing a human mix with the automatic mix is used to measure the success of the automatic mixing tools.
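    One way to picture the cross-adaptive idea described above is a gain stage that listens to every channel before setting any single fader. The sketch below equalises rough RMS loudness across channels; it is a simplified stand-in for illustration, not the thesis's actual algorithms, which rely on perceptual loudness measures and explicit stability constraints.

        # Illustrative cross-adaptive auto-mixing sketch: each channel's gain
        # depends on statistics gathered from all channels, using RMS as a
        # crude loudness proxy. Simplified stand-in, not the thesis's method.
        import numpy as np

        def auto_gain_mix(channels, eps=1e-12):
            """channels: array of shape (n_channels, n_samples). Returns (gains, mix)."""
            rms = np.sqrt(np.mean(channels ** 2, axis=1)) + eps
            target = np.exp(np.mean(np.log(rms)))       # geometric mean as the common target level
            gains = target / rms                        # cross-adaptive: every gain sees every channel
            mix = np.sum(gains[:, None] * channels, axis=0)
            peak = np.max(np.abs(mix)) + eps
            if peak > 1.0:                              # simple output protection against clipping
                mix /= peak
            return gains, mix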

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants’ speech was measured with a nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners (50.7% errors, mean nasalance 31.3%), errorless learners displayed fewer errors (17.7%) and a higher mean nasalance score (46.7%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
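    The error measure described above can be made concrete with a short sketch: per-frame nasalance is the nasal share of total acoustic energy (in %), and an error frame is one falling below the current threshold. The frame length and the RMS-based energy estimate below are illustrative assumptions, not the nasometer's exact processing.

        # Sketch of the nasalance-based error measure: nasal energy as a share
        # of total (nasal + oral) energy, and the proportion of frames that
        # fall below the practice threshold. Parameters are illustrative.
        import numpy as np

        def frame_rms(signal, frame_len=2048):
            """Per-frame RMS energy of a signal, using non-overlapping frames."""
            n_frames = len(signal) // frame_len
            frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
            return np.sqrt(np.mean(frames ** 2, axis=1))

        def nasalance_percent(nasal, oral, frame_len=2048, eps=1e-12):
            """Per-frame nasalance: nasal energy / (nasal + oral energy) * 100."""
            n = frame_rms(nasal, frame_len)
            o = frame_rms(oral, frame_len)
            return 100.0 * n / (n + o + eps)

        def error_proportion(nasal, oral, threshold_percent):
            """Proportion of frames whose nasalance falls below the current threshold."""
            scores = nasalance_percent(nasal, oral)
            return float(np.mean(scores < threshold_percent))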