
    Evaluation of Psychoacoustic Sound Parameters for Sonification

    Sonification designers have little theory or experimental evidence to guide the design of data-to-sound mappings. Many mappings use acoustic representations of data values that do not correspond with the listener's perception of how that data value should sound during sonification. This research evaluates data-to-sound mappings based on psychoacoustic sensations, in an attempt to move towards mappings that align with the listener's perception of the data value's auditory connotations. Multiple psychoacoustic parameters were evaluated over two experiments, which were designed in the context of a domain-specific problem: detecting the level of focus of an astronomical image through auditory display. Recommendations for designing sonification systems with psychoacoustic sound parameters are presented based on our results.
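    As a concrete illustration of this kind of psychoacoustic mapping (a minimal sketch, not the experiments' actual stimuli), the Python fragment below maps a normalized data value to perceived roughness by amplitude-modulating a pure tone. The carrier and modulator frequencies are assumptions, chosen near the region where amplitude modulation is heard as roughness.

```python
import numpy as np

def roughness_tone(value, duration=1.0, sr=44100, carrier_hz=440.0, mod_hz=70.0):
    """Map a data value in [0, 1] to auditory roughness via amplitude modulation.

    A modulator around 70 Hz on a mid-frequency carrier is heard as
    roughness; modulation depth, and hence perceived roughness, scales
    with `value`. Illustrative sketch only, not the paper's stimuli.
    """
    t = np.arange(int(duration * sr)) / sr
    depth = float(np.clip(value, 0.0, 1.0))
    modulator = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / (1.0 + depth)
    return modulator * np.sin(2 * np.pi * carrier_hz * t)

# e.g. a well-focused image could map to low roughness, a defocused one to high
samples = roughness_tone(0.8)
```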

    Investigating Perceptual Congruence Between Data and Display Dimensions in Sonification

    The relationships between sounds and their perceived meaning and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification, and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and can cause confusion. To investigate this issue, we conducted a magnitude estimation experiment to map how roughness, noise and pitch relate to the perceived magnitude of stress, error and danger. These parameters were chosen due to previous findings that suggest perceptual congruency between these auditory sensations and conceptual variables. Results from this experiment show that polarity and scaling preference are dependent on the data:sound mapping. This work provides polarity and scaling values that may be directly utilised by sonification designers to improve auditory displays in areas such as accessible and mobile computing, process-monitoring and biofeedback.
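    Such polarity and scaling results are typically applied through a Stevens-style power law when building the display. The sketch below shows the general shape of that mapping; the exponent and polarity arguments are placeholders, not the values reported by this experiment.

```python
def map_to_sound(x, exponent, polarity=1, lo=0.0, hi=1.0):
    """Power-law (Stevens-style) mapping from a normalized data value x
    in [0, 1] to a normalized acoustic parameter.

    `exponent` and `polarity` would be taken from magnitude-estimation
    results; the values used here are placeholders, not the paper's.
    """
    x = min(max(x, 0.0), 1.0)
    y = x ** exponent              # perceptual scaling
    if polarity < 0:               # negative polarity: data up, parameter down
        y = 1.0 - y
    return lo + (hi - lo) * y

# e.g. perceived danger driving roughness depth with a hypothetical exponent
depth = map_to_sound(0.6, exponent=0.8)
```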

    Investigating perceptual congruence between information and sensory parameters in auditory and vibrotactile displays

    A fundamental interaction between a computer and its user(s) is the transmission of information between the two, and there are many situations where it is necessary for this interaction to occur non-visually, such as using sound or vibration. To design successful interactions in these modalities, it is necessary to understand how users perceive mappings between information and acoustic or vibration parameters, so that these parameters can be designed to be perceived as congruent. This thesis investigates several data-sound and data-vibration mappings by using psychophysical scaling to understand how users perceive the mappings. It also investigates the impact that using these methods during design has when they are integrated into an auditory or vibrotactile display. To investigate acoustic parameters that may provide more perceptually congruent data-sound mappings, Experiments 1 and 2 explored several psychoacoustic parameters for use in a mapping. These studies found that applying amplitude modulation (roughness) or broadband noise to a signal resulted in performance similar to conducting the task visually. Experiments 3 and 4 used scaling methods to map how a user perceived a change in an information parameter for a given change in an acoustic or vibrotactile parameter. Experiment 3 showed that increases in acoustic parameters generally considered undesirable in music were perceived as congruent with information parameters with negative valence, such as stress or danger. Experiment 4 found that data-vibration mappings were more generalised: a given increase in a vibrotactile parameter was almost always perceived as an increase in an information parameter, regardless of the valence of the information parameter. Experiments 5 and 6 investigated the impact that using results from the scaling methods of Experiments 3 and 4 had on users' performance with an auditory or vibrotactile display. These experiments also explored the impact that the complexity of the context in which the display was placed had on user performance. These studies found that using mappings based on scaling results did not significantly affect users' performance with a simple auditory display, but it did reduce response times in a more complex use-case.
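    A companion sketch of the broadband-noise mapping described above (constants are invented; the thesis's stimulus parameters are not reproduced here): a normalized data value controls how much white noise is mixed into a pure tone.

```python
import numpy as np

def noise_mapped_tone(value, duration=1.0, sr=44100, carrier_hz=440.0, seed=0):
    """Mix broadband noise into a pure tone, with the mix ratio driven
    by a normalized data value in [0, 1]. Carrier frequency, noise
    spectrum, and mix law are assumptions for illustration only."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * sr)) / sr
    tone = np.sin(2 * np.pi * carrier_hz * t)
    noise = rng.standard_normal(t.size)
    noise /= np.max(np.abs(noise))
    mix = float(np.clip(value, 0.0, 1.0))
    return (1.0 - mix) * tone + mix * noise
```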

    Using sound to understand protein sequence data: New sonification algorithms for protein sequences and multiple sequence alignments

    Funding: This work was supported by the UKRI Biotechnology and Biological Sciences Research Council (BBSRC), grant number BB/M010996/1. Background: The use of sound to represent sequence data (sonification) has great potential as an alternative and complement to visual representation, exploiting features of human psychoacoustic intuitions to convey nuance more effectively. We have created five parameter-mapping sonification algorithms that aim to improve knowledge discovery from protein sequences and small protein multiple sequence alignments. For two of these algorithms, we investigated their effectiveness at conveying information, focussing on subjective assessments of user experience through a focus group session and a questionnaire survey of individuals engaged in bioinformatics research. Results: For single protein sequences, the success of our sonifications in conveying features was supported by both the survey and focus group findings. For protein multiple sequence alignments, there was limited evidence that the sonifications successfully conveyed information, and additional work is required to identify effective algorithms that make multiple sequence alignment sonification useful to researchers. Feedback from both our survey and focus groups suggests future directions for sonification of multiple alignments: animated visualisation indicating the column in the multiple alignment as the sonification progresses, user control of sequence navigation, and customisation of the sound parameters. Conclusions: The sonification approaches undertaken in this work have shown some success in conveying information from protein sequence data, and feedback points out future directions to build on them. The effectiveness assessment process implemented in this work proved useful, giving detailed feedback and key approaches for improvement based on end-user input. The uptake of similar user-experience-focussed effectiveness assessments could also help in other areas of bioinformatics, for example in visualisation.
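    To make the parameter-mapping idea concrete for a single protein sequence, here is a hypothetical residue-to-pitch mapping. The ordering by Kyte-Doolittle hydrophobicity and the two-octave pitch range are illustrative choices, not one of the paper's five published algorithms.

```python
# Amino acids ordered from hydrophilic to hydrophobic (Kyte-Doolittle);
# the ordering and pitch range are illustrative, not the paper's algorithms.
HYDROPHOBICITY_ORDER = "RKDENQHPYWSTGAMCFLVI"

def sequence_to_midi(seq, base_note=48):
    """Map each amino-acid letter of a protein sequence to a MIDI note,
    spreading the twenty residues over two octaves."""
    step = 24 / (len(HYDROPHOBICITY_ORDER) - 1)
    return [round(base_note + HYDROPHOBICITY_ORDER.index(aa) * step)
            for aa in seq.upper() if aa in HYDROPHOBICITY_ORDER]

notes = sequence_to_midi("MKTAYIAKQR")  # one note per residue
```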

    A perceptual sound space for auditory displays based on sung-vowel synthesis

    When designing displays for the human senses, perceptual spaces are of great importance to give intuitive access to physical attributes. Similar to how perceptual spaces based on hue, saturation, and lightness were constructed for visual color, research has explored perceptual spaces for sounds of a given timbral family based on timbre, brightness, and pitch. To promote an embodied approach to the design of auditory displays, we introduce the Vowel-Type-Pitch (VTP) space, a cylindrical sound space based on human sung vowels, whose timbres can be synthesized by the composition of acoustic formants and can be categorically labeled. Vowels are arranged along the circular dimension, while voice type and pitch of the vowel correspond to the remaining two axes of the cylindrical VTP space. The decoupling and perceptual effectiveness of the three dimensions of the VTP space are tested through a vowel labeling experiment, whose results are visualized as maps on circular slices of the VTP cylinder. We discuss implications for the design of auditory and multi-sensory displays that account for human perceptual capabilities.
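    To illustrate the formant-composition idea, the sketch below synthesizes a sung vowel additively, weighting harmonics of the pitch by proximity to the vowel's formant centres. The formant values are textbook averages for an adult male voice, assumptions rather than the VTP space's actual timbres.

```python
import numpy as np

# Approximate first/second formant centres (Hz) for three vowels;
# textbook averages, not values taken from the VTP space itself.
FORMANTS = {"a": (730, 1090), "i": (270, 2290), "u": (300, 870)}

def sung_vowel(vowel, pitch_hz=220.0, duration=1.0, sr=44100):
    """Additive synthesis of a sung vowel: harmonics of the fundamental,
    weighted by Gaussian distance to the vowel's formant centres."""
    t = np.arange(int(duration * sr)) / sr
    f1, f2 = FORMANTS[vowel]
    out = np.zeros_like(t)
    n = 1
    while n * pitch_hz < sr / 2:
        f = n * pitch_hz
        amp = np.exp(-((f - f1) / 120) ** 2) + 0.7 * np.exp(-((f - f2) / 180) ** 2)
        out += amp * np.sin(2 * np.pi * f * t)
        n += 1
    return out / np.max(np.abs(out))

voice = sung_vowel("a", pitch_hz=196.0)  # a sung /a/ near G3
```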

    Perceptual sound field synthesis concept for music presentation

    A perceptual sound field synthesis approach for music is presented. Its signal processing implements critical bands, the precedence effect, and the integration times of the auditory system by technical means, as well as the radiation characteristics of musical instruments. Furthermore, interaural coherence, masking, and auditory scene analysis principles are considered. As a result, the conceptualized sound field synthesis system creates a natural, spatial sound impression for listeners in an extended listening area, even with a low number of loudspeakers. A novel technique, the "precedence fade", as well as the interaural cues provided by the sound field synthesis approach, allow for precise and robust localization. Simulations and a listening test provide a proof of concept. The method is particularly robust for signals with impulsive attacks and long quasi-stationary phases, as is the case for many instrumental sounds. It is compatible with many loudspeaker setups, from 5.1 and 22.2 to ambisonics systems and loudspeaker arrays for wave front synthesis. The perceptual sound field synthesis approach is an alternative to physically centered wave field synthesis concepts and to conventional, perceptually motivated stereophonic sound, and benefits from both paradigms.
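    The role of the precedence effect can be sketched generically: the loudspeaker nearest the intended source direction fires first, and the others follow within the echo threshold, attenuated, so localization locks to the first wavefront. The delay and gain values below are invented; the paper's actual "precedence fade" technique is not specified here.

```python
import numpy as np

def precedence_feeds(signal, sr, lead_idx, n_speakers=5,
                     delay_ms=5.0, trail_gain=0.7):
    """Loudspeaker feeds exploiting the precedence effect: the lead
    speaker plays undelayed, all others a few milliseconds later and
    quieter, so the first wavefront dominates localization.
    Delay and gain are illustrative assumptions."""
    d = int(sr * delay_ms / 1000)
    feeds = np.zeros((n_speakers, len(signal) + d))
    for i in range(n_speakers):
        if i == lead_idx:
            feeds[i, :len(signal)] = signal       # first wavefront
        else:
            feeds[i, d:] = trail_gain * signal    # delayed, attenuated copy
    return feeds
```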