10 research outputs found

    Optimizing the spatial configuration of a seven-talker speech display

    Get PDF
    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. Although there is substantial evidence that performance in multitalker listening tasks can be improved by spatially separating the apparent locations of the competing talkers, very little effort has been made to determine the best locations and presentation levels for the talkers in a multichannel speech display. In this experiment, a call-sign-based color and number identification task was used to evaluate the effectiveness of three different spatial configurations and two different level normalization schemes in a seven-channel binaural speech display. When only two spatially adjacent channels of the seven-channel system were active, overall performance was substantially better with a geometrically spaced configuration (far-field talkers at -90°, -30°, -10°, 0°, +10°, +30°, and +90° azimuth) or a hybrid near-far configuration (far-field talkers at -90°, -30°, 0°, +30°, and +90° azimuth and near-field talkers at ±90°) than with a more conventional linearly spaced configuration (far-field talkers at -90°, -60°, -30°, 0°, +30°, +60°, and +90° azimuth). When all seven channels were active, performance was generally better with a "better-ear" normalization scheme that equalized the levels of the talkers in the more intense ear than with a default normalization scheme that equalized the levels of the talkers at the center of the head. The best overall performance in the seven-talker task occurred when the hybrid near-far spatial configuration was combined with the better-ear normalization scheme. This combination resulted in a 20% increase in the number of correct identifications relative to the baseline condition with linearly spaced talker locations and no level normalization. Although this is a relatively modest improvement, it should be noted that it could be achieved at little or no cost simply by reconfiguring the HRTFs used in a multitalker speech display.
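    The two level-normalization schemes compared above can be made concrete with a small sketch. The Python snippet below is a minimal illustration, not the authors' implementation: it assumes each talker has already been spatialized into left/right ear signals, and it scales each talker so that either its level at the more intense ear (better-ear scheme) or a rough center-of-head level (default scheme) matches a common reference.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def normalize_talkers(binaural_signals, scheme="better-ear", ref=0.1):
    """Scale each talker's (left, right) ear signals to a common reference level.

    binaural_signals: list of (left, right) numpy arrays, one pair per talker,
    already filtered through that talker's HRTF.
    scheme: 'better-ear' equalizes the level in the more intense ear;
            'center' equalizes an approximate level at the center of the head.
    """
    out = []
    for left, right in binaural_signals:
        if scheme == "better-ear":
            level = max(rms(left), rms(right))   # level at the more intense ear
        else:
            level = rms(0.5 * (left + right))    # rough center-of-head level
        gain = ref / level if level > 0 else 1.0
        out.append((left * gain, right * gain))
    return out

# Example with two hypothetical talkers (white noise stands in for speech):
rng = np.random.default_rng(0)
talkers = [(rng.normal(size=8000) * 0.5, rng.normal(size=8000) * 0.1),
           (rng.normal(size=8000) * 0.2, rng.normal(size=8000) * 0.2)]
equalized = normalize_talkers(talkers, scheme="better-ear")
```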

    Spatial Auditory Display: Comments on Shinn-Cunningham et al.

    Get PDF
    Spatial auditory displays have received a great deal of attention in the community investigating how to present information through sound. This short commentary discusses our 2001 ICAD paper (Shinn-Cunningham, Streeter, and Gyss), which explored whether it is possible to provide enhanced spatial auditory information in an auditory display. The discussion provides some historical context and describes how work on representing information in spatial auditory displays has progressed over the last five years.
    HISTORICAL CONTEXT: The next time you find yourself in a noisy, crowded environment like a cocktail party, plug one ear. Suddenly, your ability to sort out and understand the sounds in the environment collapses. This simple demonstration of the importance of spatial hearing to everyday behavior has motivated research in spatial auditory processing for decades. Perhaps unsurprisingly, spatial auditory displays have received a great deal of attention in the ICAD community. Sound source location is one stimulus attribute that can be easily manipulated; thus, spatial information can be used to represent arbitrary information in an auditory display. In addition to being used directly to encode data in an auditory display, spatial cues are also important in allowing a listener to focus attention on a source of interest when multiple sound sources compete for auditory attention. Although it is theoretically easy to produce accurate spatial cues in an auditory display, the signal processing required to render natural spatial cues in real time (and the amount of care required to render realistic cues) is prohibitive even with current technologies. Given both the important role that spatial auditory information can play in conveying acoustic information to a listener and the practical difficulties encountered when trying to include realistic spatial cues in a display, spatial auditory perception and technologies for rendering virtual auditory space have both been well-represented areas of research at every ICAD conference held to date (e.g., see ...). Even with a good virtual auditory display, the amount of spatial auditory information that a listener can extract is limited compared to other senses. For instance, auditory localization accuracy is orders of magnitude worse than visual spatial resolution. The study reprinted here, originally reported at ICAD 2001, was motivated by a desire to increase the amount of spatial information a listener could extract from a virtual auditory display. The original idea was to see whether spatial resolution could be improved in a virtual auditory display by emphasizing spatial acoustic cues. The questions we were interested in were: 1) Can listeners learn to accommodate a new mapping between exocentric location and acoustic cues, so that they do not mislocalize sounds after training? and 2) Do such remappings lead to improved spatial resolution, or is there some other factor limiting performance?
    RESEARCH PROCESS: The reprinted study was designed to test a model that accounted for results from previous experiments investigating remapped spatial cues. The model predicted that spatial performance is restricted by central memory constraints, not by a low-level sensory limitation on spatial auditory resolution. However, the model failed for the experiments reported: listeners actually achieved better-than-normal spatial resolution following training with the remapped auditory cues (unlike in any previous studies). These results were encouraging on the one hand, as they suggested ...
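    As a rough illustration of the remapped spatial cues discussed above, the sketch below expands azimuth differences before a virtual source is rendered, so that small physical separations produce larger acoustic separations. The slope and the sine compression are assumptions chosen for illustration, not the mapping actually used in the study.

```python
import math

def remap_azimuth(azimuth_deg, slope=2.0):
    """Hypothetical remapping that exaggerates azimuth before spatial rendering.

    A source at a physical azimuth of `azimuth_deg` is rendered with the acoustic
    cues of a larger azimuth, compressed back toward +/-90 degrees at the sides
    so the output stays in a renderable frontal range.
    """
    expanded = slope * azimuth_deg
    # Soft-limit to the +/-90 degree frontal range via a sine compression.
    return 90.0 * math.sin(math.radians(max(-90.0, min(90.0, expanded))))

# A source 10 degrees off-center would be rendered with the cues of ~31 degrees.
print(round(remap_azimuth(10.0), 1))
```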

    Developing Models for Multi-Talker Listening Tasks using the EPIC Architecture: Wrong Turns and Lessons Learned

    Full text link
    This report describes the development of a series of computational cognitive architecture models for the multi-channel listening task studied in the fields of audition and human performance. The models can account for the phenomenon in which humans can respond to a designated spoken message in the context of multiple simultaneous speech messages from multiple speakers - the so-called "cocktail party effect." They are the first models of a new class that combine psychoacoustic perceptual mechanisms with production-system cognitive processing to account for end-to-end performance in an important empirical literature.
    Funding: Office of Naval Research, Cognitive Science Program, under grant numbers N00014-10-1-0152 and N00014-13-1-0358, and the U.S. Air Force 711 HW Chief Scientist Seedling program.
    http://deepblue.lib.umich.edu/bitstream/2027.42/108165/1/Kieras_Wakefield_TR_EPIC_17_July_2014.pdf
    Description of Kieras_Wakefield_TR_EPIC_17_July_2014.pdf: Technical report content
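    To give a flavor of what production-system cognitive processing looks like when applied to a call-sign listening task, the toy sketch below runs a two-rule recognize-act cycle in Python. It is purely illustrative: EPIC uses its own rule notation, and the working-memory items here are invented.

```python
# Toy production-system cycle: rules fire when their conditions match working memory.
# Illustrative only; this is not EPIC's notation or its actual rule set.

working_memory = {
    ("goal", "listen-for", "Baron"),
    ("heard", "stream-2", "callsign", "Baron"),   # perceptual input
    ("heard", "stream-2", "color", "blue"),
    ("heard", "stream-2", "number", "4"),
}

def attend_callsign(wm):
    """IF the designated call sign is heard in a stream, THEN attend to that stream."""
    new = set()
    for item in wm:
        if item[0] == "heard" and item[2] == "callsign" and ("goal", "listen-for", item[3]) in wm:
            new.add(("attend", item[1]))
    return new

def report_response(wm):
    """IF attending a stream whose color and number are in memory, THEN respond."""
    new = set()
    for item in wm:
        if item[0] != "attend":
            continue
        stream = item[1]
        colors = {i[3] for i in wm if i[:3] == ("heard", stream, "color")}
        numbers = {i[3] for i in wm if i[:3] == ("heard", stream, "number")}
        if colors and numbers:
            new.add(("respond", colors.pop(), numbers.pop()))
    return new

# Run recognize-act cycles until no rule adds anything new.
for _ in range(5):
    additions = attend_callsign(working_memory) | report_response(working_memory)
    if additions <= working_memory:
        break
    working_memory |= additions

print([i for i in working_memory if i[0] == "respond"])   # [('respond', 'blue', '4')]
```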

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Get PDF
    Recent advancements in the field of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Current preliminary implementations of such systems use only visual interfaces, which makes them inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This makes it necessary to develop and evaluate non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clock-face angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicated that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode, head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance across these modes and found no significant performance differences between them. These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning by blindfolded participants using an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications as they support the development of accessible, perceptually driven interfaces for smartphones.
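    As an illustration of the non-perceptual, language-based mode described above, the sketch below converts a target's azimuth and distance into a clock-face description. The exact phrasing and conventions used in the study are not given here, so the wording is an assumption.

```python
def clockface_description(azimuth_deg, distance_m):
    """Describe a target via spatial language using clock-face angles.

    azimuth_deg: bearing relative to the listener's facing direction,
    measured clockwise (0 = straight ahead, 90 = right, -90 = left).
    """
    hour = round((azimuth_deg % 360) / 30) % 12
    if hour == 0:
        hour = 12
    return f"target at {hour} o'clock, {distance_m:.1f} meters"

print(clockface_description(0, 2.0))     # target at 12 o'clock, 2.0 meters
print(clockface_description(-60, 3.5))   # target at 10 o'clock, 3.5 meters
```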

    Tactile Working Memory And Multimodal Loading

    Get PDF
    This work explored the roles of spatial grouping, set size, and stimulus probe modality using a recall task for visual, auditory, and tactile information. The effects of different working memory (WM) loading task modalities were also examined. Grouping stimuli according to the Gestalt spatial organizing principle improved response times for visual and tactile stimulus probes with large set sizes, apparently allowing participants to chunk the information effectively. This research suggests that tactile information may use spatial characteristics typically associated with visual information, as well as sequential characteristics normally associated with verbal information. Based on these results, a reformulation of WM is warranted to remove the constraints of input modality on processing types: the input modalities appear to access both a spatial sketchpad and a temporally based sequence loop. Implications for multisensory research and display design are discussed.

    Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation

    Get PDF
    This thesis provides an overview of work conducted to investigate human spatial hearing in situations involving multiple concurrent sound sources. Much is known about spatial hearing with single sound sources, including the acoustic cues to source location and the accuracy of localisation under different conditions. However, more recently interest has grown in the behaviour of listeners in more complex environments. Concurrent sound sources pose a particularly difficult problem for the auditory system, as their identities and locations must be extracted from a common set of sensory receptors and shared computational machinery. It is clear that humans have a rich perception of their auditory world, but just how concurrent sounds are processed, and how accurately, are issues that are poorly understood. This work attempts to fill a gap in our understanding by systematically examining spatial resolution with multiple sound sources. A series of psychophysical experiments was conducted on listeners with normal hearing to measure performance in spatial localisation and discrimination tasks involving more than one source. The general approach was to present sources that overlapped in both frequency and time in order to observe performance in the most challenging of situations. Furthermore, the role of two primary sets of location cues in concurrent source listening was probed by examining performance in different spatial dimensions. The binaural cues arise due to the separation of the two ears, and provide information about the lateral position of sound sources. The spectral cues result from location-dependent filtering by the head and pinnae, and allow vertical and front-rear auditory discrimination. Two sets of experiments are described that employed relatively simple broadband noise stimuli. In the first of these, two-point discrimination thresholds were measured using simultaneous noise bursts. It was found that the pair could be resolved only if a binaural difference was present; spectral cues did not appear to be sufficient. In the second set of experiments, the two stimuli were made distinguishable on the basis of their temporal envelopes, and the localisation of a designated target source was directly examined. Remarkably robust localisation was observed, despite the simultaneous masker, and both binaural and spectral cues appeared to be of use in this case. Small but persistent errors were observed, which in the lateral dimension represented a systematic shift away from the location of the masker. The errors can be explained by interference in the processing of the different location cues. Overall these experiments demonstrated that the spatial perception of concurrent sound sources is highly dependent on stimulus characteristics and configurations. This suggests that the underlying spatial representations are limited by the accuracy with which acoustic spatial cues can be extracted from a mixed signal. Three sets of experiments are then described that examined spatial performance with speech, a complex natural sound. The first measured how well speech is localised in isolation. This work demonstrated that speech contains high-frequency energy that is essential for accurate three-dimensional localisation. In the second set of experiments, spatial resolution for concurrent monosyllabic words was examined using similar approaches to those used for the concurrent noise experiments. 
It was found that resolution for concurrent speech stimuli was similar to resolution for concurrent noise stimuli. Importantly, listeners were limited in their ability to concurrently process the location-dependent spectral cues associated with two brief speech sources. In the final set of experiments, the role of spatial hearing was examined in a more relevant setting containing concurrent streams of sentence speech. It has long been known that binaural differences can aid segregation and enhance selective attention in such situations. The results presented here confirmed this finding and extended it to show that the spectral cues associated with different locations can also contribute. As a whole, this work provides an in-depth examination of spatial performance in concurrent source situations and delineates some of the limitations of this process. In general, spatial accuracy with concurrent sources is poorer than with single sound sources, as both binaural and spectral cues are subject to interference. Nonetheless, binaural cues are quite robust for representing concurrent source locations, and spectral cues can enhance spatial listening in many situations. The findings also highlight the intricate relationship that exists between spatial hearing, auditory object processing, and the allocation of attention in complex environments
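    For readers unfamiliar with the binaural cues examined in this thesis, a standard spherical-head approximation (Woodworth's formula) relates source azimuth to the interaural time difference. The sketch below uses a typical textbook head radius, not a value taken from the thesis.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a distant source.

    Woodworth's spherical-head model: ITD = (a / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians, a the head radius,
    and c the speed of sound.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source 45 degrees to the side produces an ITD of roughly 0.38 ms.
print(f"{itd_woodworth(45) * 1e3:.2f} ms")
```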

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Get PDF
    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method allows people with visual impairments to hear exactly what is on the screen, but it has significant usability problems in a multitasking environment: screen reader users must infer the state of on-going tasks spanning multiple graphical windows from a single, serial stream of speech. In this dissertation, I explore a new approach to enabling auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort. To develop this method, I studied the literature on existing auditory displays, working user behavior, and theories of human auditory perception and processing. I then conducted a user study to observe the problems encountered and the techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations on Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6):
    1. Faster, more accurate access to speech utterances through concurrent speech streams.
    2. Better awareness of peripheral information via concurrent speech and sound streams.
    3. Increased information bandwidth through concurrent streams.
    4. More efficient information seeking enabled by ubiquitous tools for browsing and searching.
    5. Greater accuracy in describing unfamiliar applications learned using a consistent, task-based user interface.
    6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
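    The scripts mentioned above map GUI components to task definitions; the fragment below is a hypothetical sketch of what such a mapping could look like. The application name, field names, and stream labels are invented for illustration and are not taken from Clique.

```python
# Hypothetical script mapping GUI components of an email client to task definitions.
# Each task names the widgets it draws on and the audio stream used to present it.

email_script = {
    "application": "ExampleMail",          # invented application name
    "tasks": [
        {
            "name": "browse messages",
            "widgets": ["message_list"],    # GUI components the task reads
            "stream": "primary speech",     # spoken in the main speech stream
        },
        {
            "name": "monitor new mail",
            "widgets": ["status_bar"],
            "stream": "peripheral sound",   # cue in a concurrent background stream
        },
    ],
}

def streams_for(script):
    """Group task names by the audio stream that presents them."""
    grouped = {}
    for task in script["tasks"]:
        grouped.setdefault(task["stream"], []).append(task["name"])
    return grouped

print(streams_for(email_script))
# {'primary speech': ['browse messages'], 'peripheral sound': ['monitor new mail']}
```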