    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Sound localization accuracy in the blind population

    The ability to accurately locate a sound source is crucial for the blind population to orient and move independently in the environment. Sound localization is accomplished by detecting binaural differences in the intensity and arrival time of incoming sound waves, along with phase differences and spectral cues. It depends on auditory sensitivity and processing; however, localization ability cannot be predicted from the audiogram or an auditory processing evaluation. Auditory information is received not only from objects making sound but also from objects reflecting sound. Auditory information used in this manner is called echolocation. Echolocation significantly enhances localization in the absence of vision, and research has shown that it is an important form of localization used by the blind to facilitate independent mobility. However, the ability to localize sound is not evaluated in the blind population. Given the importance of localization and echolocation for independent mobility in the blind, it would seem appropriate to evaluate the accuracy of this skill set. Echolocation depends on the same auditory processes as localization; more specifically, localization is a precursor to echolocation. Therefore, localization ability will be evaluated in two normal-hearing groups: a young normal-vision population and a young blind population. Both groups will have normal hearing and auditory processing verified by an audiological evaluation that includes a central auditory screening. The localization assessment will be performed using a 24-speaker array in a sound-treated chamber with four testing conditions: (1) low-pass broadband stimuli in quiet, (2) low-pass broadband stimuli in noise, (3) high-pass broadband stimuli in quiet, and (4) high-pass broadband speech stimuli in noise. It is hypothesized that blind individuals may exhibit keener localization skills than their normal-vision counterparts, particularly if they are experienced, independent travelers. Results of this study may lead to future research in localization assessment, and possibly localization training, for blind individuals.
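
    As a rough illustration of the binaural time cue mentioned above, the following sketch estimates the interaural time difference (ITD) for a far-field source using the spherical-head (Woodworth) approximation; the head radius, the speed of sound, and the function name are illustrative assumptions and are not taken from the study.

```python
import numpy as np

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def woodworth_itd(azimuth_deg):
    """Interaural time difference (in seconds) for a far-field source at a
    given azimuth (0-90 degrees), using the spherical-head Woodworth model."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (np.sin(theta) + theta)

# Example: a source 45 degrees off the midline reaches the near ear roughly
# 0.4 ms before the far ear.
print(f"ITD at 45 degrees: {woodworth_itd(45.0) * 1e3:.2f} ms")
```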

    Aspects of spatiotemporal integration in bat sonar

    Bat sonar is an active sense based on the common mammalian auditory system. Bats emit echolocation calls in the high-frequency range and extract information about their surroundings by listening to the returning echoes. These echoes carry spatial cues about object location in three-dimensional space (azimuth, elevation, and distance). Distance information, for example, is obtained from a temporal cue: the interval between the emission of an echolocation call and the returning echo (echo delay). But echoes also carry information about spatial object properties such as shape, orientation, or size (in terms of height, width, and depth). To achieve a reliable internal representation of the environment, bats need to integrate spatial and temporal echo information. This cumulative thesis addresses different aspects of spatiotemporal integration in bat sonar, beginning with the perception and neural encoding of object size.

    Object width, as a size-relevant dimension, is encoded by the intensity of its echo. Additionally, the sonar aperture (the spread of angles of incidence from which the echoes impinge on the ears) co-varies proportionally with width. In the first study, using a combined psychophysical and electrophysiological approach (including the presentation of virtual objects), it was investigated which of these two acoustic cues echolocating bats (Phyllostomus discolor) employ for the estimation of object width. Interestingly, the results showed that bats can discriminate object width using sonar-aperture information alone. This was reflected in the responses of a population of units in the auditory midbrain and cortex that responded most strongly to echoes from objects with a specific sonar aperture, independent of variations in echo intensity. The study revealed that the sonar aperture is a behaviorally relevant and reliably encoded spatial perceptual cue for object size. It furthermore supported the theory that the mammalian central nervous system principally aims to find modality-independent representations of spatial object properties. We therefore suggested that the sonar aperture, as an echo-acoustic equivalent of the visual aperture (also referred to as the visual angle), could be one of these object properties.

    In the visual system, object size is encoded by the visual aperture, the extent of the image on the retina. This depends on object distance, which is not explicitly encoded; thus, for reliable size perception at different distances, higher computational mechanisms are needed. This phenomenon is termed 'size constancy' or 'size-distance invariance' and is assumed to reflect an automatic re-scaling of the visual aperture with perceived object distance. In echolocating bats, however, object width (sonar aperture) and object distance (echo delay) are accurately perceived and explicitly neurally encoded. In the second study we investigated whether bats spontaneously combine these spatial and temporal cues to determine absolute width, i.e. whether they show sonar size constancy (SSC). This was addressed using the same setup and species as in the psychophysical approach of the first study. SSC could not be verified as an important feature of sonar perception in bats. This lack of SSC could result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required, as has been discussed for visual size constancy. But size constancy is found in many sensory modalities and, more importantly, SSC was recently found in a blind human echolocator and was discussed to be based on the same spatial and temporal cues as presented in our study. Thus, this topic should be readdressed in bats in a more natural context, as size constancy could be a general mechanism for object normalization.

    As the spatiotemporal layout of the environment and the objects within it changes with locomotion, the third study addressed spatiotemporal integration in bat biosonar in a natural and naturalistic context. Trawling bat species hunt above water and capture fish or insects directly from, or close to, the surface. Here water acts as an acoustic mirror that can reduce clutter by reflecting sonar emissions away from the bat, whereas objects on the water lead to echo enhancement. In a combined laboratory and field study we tested and quantified the effect of surface type (smooth versus cluttered, i.e. with different reflection properties) and object height on object detection and discrimination in the trawling bat Myotis daubentonii. The bats had to detect a mealworm presented above these surfaces and discriminate it from an inedible PVC disk. At low heights above the clutter surface, the bats' detection performance was worse than above a smooth surface; at a height of 50 cm, the surface structure had no influence on target detection. Above the clutter surface, object discrimination decreased with decreasing height. The study revealed different perceptual strategies that could allow efficient object detection and discrimination. When approaching objects above clutter, echolocation calls showed a significantly higher peak frequency, possibly reflecting a strategy for temporal separation of object echoes from clutter. Flight-path reconstruction showed that the bats attacked objects from below over water but from above over clutter. These results are consistent with the hypothesis that trawling bats exploit an echo-acoustic ground effect, in terms of a spatiotemporal integration of direct object reflections with indirect reflections from the water surface. This could optimize prey detection and discrimination not only for prey on the water surface but also above it. Additionally, the bats could employ a precedence-like strategy to avoid misleading spatial cues that signal the wrong object elevation, by using only the first (and thus direct) echo for object localization.
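
    The distance and width cues described above can be written down compactly. Below is a minimal sketch, assuming the standard two-way travel-time relation for echo delay and simple geometry for the sonar aperture; the function names and example values are hypothetical and only illustrate the cue relationships, not the thesis's methods.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air; assumed constant here

def distance_from_echo_delay(delay_s):
    """Target distance from the call-to-echo delay (sound travels out and back)."""
    return SPEED_OF_SOUND * delay_s / 2.0

def sonar_aperture(width_m, distance_m):
    """Angle (radians) subtended at the bat by an object of a given width."""
    return 2.0 * math.atan(width_m / (2.0 * distance_m))

def width_from_aperture(aperture_rad, distance_m):
    """Size-constancy-style inversion: absolute width from aperture plus distance."""
    return 2.0 * distance_m * math.tan(aperture_rad / 2.0)

# Example: a 10 cm wide object at 1 m returns its echo after about 5.8 ms
# and subtends a sonar aperture of about 5.7 degrees.
delay = 2.0 * 1.0 / SPEED_OF_SOUND
d = distance_from_echo_delay(delay)      # 1.0 m
a = sonar_aperture(0.10, d)              # ~0.1 rad
print(round(math.degrees(a), 1), round(width_from_aperture(a, d), 2))
```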

    Studies on binaural and monaural signal analysis methods and applications

    Sound signals can carry a great deal of information about the environment and the sound sources present in it. This thesis presents novel contributions to the analysis of binaural and monaural sound signals. Some new applications are introduced in this work, but the emphasis is on analysis methods. The three main topics of the thesis are computational estimation of sound source distance, analysis of binaural room impulse responses, and applications intended for augmented reality audio. A novel method for binaural sound source distance estimation is proposed. The method is based on learning the coherence between the sounds entering the left and right ears, and comparisons to an earlier approach are also made. It is shown that these kinds of learning methods can correctly recognize the distance of a speech sound source in most cases. Methods for analyzing binaural room impulse responses are also investigated. These methods are able to locate the early reflections in time and to estimate their directions of arrival. This challenging problem could not be tackled completely, but this part of the work is an important step towards accurate estimation of the individual early reflections from a binaural room impulse response. As the third part of the thesis, applications of sound signal analysis are studied. The most notable contributions are a novel eyes-free user interface controlled by finger snaps and an investigation of the importance of features in audio surveillance. The results of this thesis are steps towards building machines that can obtain information about the surrounding environment based on sound. In particular, the research into sound source distance estimation serves as important basic research in this area. The applications presented could be valuable in future telecommunication scenarios, such as augmented reality audio.
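
    As an illustration of the kind of binaural coherence feature such learning methods could build on, the sketch below computes the magnitude-squared coherence between the left- and right-ear signals with SciPy; the function name and the Welch parameters are assumptions for illustration and do not reproduce the thesis's actual feature pipeline.

```python
import numpy as np
from scipy.signal import coherence

def interaural_coherence_features(left, right, fs, nperseg=1024):
    """Magnitude-squared coherence between the left- and right-ear signals.

    In a reverberant room the direct-to-reverberant ratio falls with source
    distance, so interaural coherence tends to drop as the source moves away;
    the per-band coherence values can therefore serve as input features for a
    learned distance estimator.
    """
    freqs, cxy = coherence(left, right, fs=fs, nperseg=nperseg)
    return freqs, cxy

# Hypothetical usage with a binaural recording (left/right as 1-D arrays):
# freqs, cxy = interaural_coherence_features(left, right, fs=44100)
# The vector cxy could then be fed to any classifier or regressor.
```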

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely upon audition for a variety of purposes, among them the use of sound to identify the position of objects in their surrounding environment. This is not limited to localising sound-emitting objects: thanks to their ability to extract information from reverberation and sound reflections, visually impaired listeners can also detect obstacles and environmental boundaries, all of which can contribute to effective and safe navigation, as well as serving a function in certain assistive technologies with the advent of binaural auditory virtual reality. It is known that head movements in the presence of sound elicit changes in the acoustical signals arriving at each ear, and these changes can mitigate common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether the visually impaired naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation, a questionnaire assessing the self-reported use of head movement in auditory perception by visually impaired individuals (each comparing visually impaired and sighted participants), and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. It is found that visually impaired people self-report using head movement for auditory distance perception. This is supported by the head movements observed during the field study, whilst the acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener decreased as head angle or source distance increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed.
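
    The interaural cross-correlation analysis referred to above can be approximated generically as the peak of the normalized cross-correlation between the two ear signals within about +/-1 ms of lag; the sketch below is such a generic computation, with a hypothetical function name, and is not the exact procedure used in the thesis.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the peak of the normalized
    cross-correlation between equal-length ear signals within +/- max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    left = left - np.mean(left)
    right = right - np.mean(right)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    full = np.correlate(left, right, mode="full")
    zero_lag = len(full) // 2          # centre index corresponds to zero lag
    window = full[zero_lag - max_lag: zero_lag + max_lag + 1]
    return float(np.max(window) / norm)

# Hypothetical usage: compute the IACC of binaural impulse responses measured
# at several head angles and source distances, then compare how the peak falls
# off with head rotation or with increasing distance.
```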

    Exploring the use of speech in audiology: A mixed methods study

    This thesis aims to advance the understanding of how speech testing is, and can be, used for hearing device users within the audiological test battery. To address this, I engaged with clinicians and patients to understand the current role that speech testing plays in audiological testing in the UK, and developed a new listening test, which combined speech testing with localisation judgments in a dual-task design. Normal-hearing listeners and hearing aid users were tested, and a series of technical measurements was made to understand how advanced hearing aid settings might affect task performance. A questionnaire was completed by public- and private-sector hearing healthcare professionals in the UK to explore the use of speech testing. Overall, the results revealed that this assessment tool is underutilised by UK clinicians, but that it is used significantly more in the private sector. Through a focus group and semi-structured interviews with hearing aid users, I identified a mismatch between their common listening difficulties and the assessment tools used in audiology, and highlighted a lack of deaf awareness in UK adult audiology. The Spatial Speech in Noise Test (SSiN) is a dual-task paradigm that simultaneously assesses relative localisation and word identification performance. Testing normal-hearing listeners to investigate the impact of the dual-task design found that the SSiN increased cognitive load and therefore better reflected challenging listening situations. A comparison of relative localisation and word identification performance showed that hearing aid users benefitted less than normal-hearing listeners from spatially separating speech and noise in the SSiN. To investigate how the SSiN could be used to assess advanced hearing aid features, a subset of hearing aid users were fitted with the same hearing aid type and completed the SSiN once with adaptive directionality and once with omnidirectionality. The SSiN results differed between conditions, but a larger sample size is needed to confirm these effects. Hearing aid technical measurements were used to quantify how hearing aid output changed in response to the SSiN paradigm.
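
    The SSiN's dual-task structure (a word-identification response plus a relative-localisation judgment on every trial) could be scored along the lines of the sketch below; the trial fields and the scoring function are hypothetical and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class SsinTrial:
    # Hypothetical record of one dual-task trial: the listener repeats the
    # word and reports whether it came from the left or right of the previous
    # loudspeaker position.
    target_word: str
    reported_word: str
    true_shift: str       # "left" or "right"
    reported_shift: str   # "left" or "right"

def score_ssin(trials):
    """Percent correct for the two concurrent tasks."""
    n = len(trials)
    word_pc = 100.0 * sum(t.target_word == t.reported_word for t in trials) / n
    loc_pc = 100.0 * sum(t.true_shift == t.reported_shift for t in trials) / n
    return word_pc, loc_pc
```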