
    Understanding concurrent earcons: applying auditory scene analysis principles to concurrent earcon recognition

    Two investigations into the identification of concurrently presented, structured sounds, called earcons, were carried out. The first experiment investigated how varying the number of concurrently presented earcons affected their identification. Varying the number had a significant effect on the proportion of earcons identified: reducing the number of concurrently presented earcons led to a general increase in the proportion successfully identified. The second experiment investigated how modifying the earcons and their presentation, using techniques influenced by auditory scene analysis, affected earcon identification. Both modifying the earcons so that each was presented with a unique timbre, and altering their presentation so that there was a 300 ms onset-to-onset delay between earcons, significantly increased identification. Guidelines were drawn from this work to assist future interface designers in incorporating concurrently presented earcons.
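
    As a concrete illustration of the guideline, here is a minimal Python sketch of scheduling concurrent earcons with unique timbres and a 300 ms onset-to-onset delay. The Earcon class and schedule_earcons function are hypothetical names for illustration, not from the paper.

        # Sketch: scheduling concurrently presented earcons per the guidelines above.
        from dataclasses import dataclass

        ONSET_DELAY_S = 0.300  # 300 ms onset-to-onset delay between successive earcons

        @dataclass
        class Earcon:
            name: str
            timbre: str   # each earcon gets a unique timbre to aid stream segregation
            onset_s: float = 0.0

        def schedule_earcons(names, timbres):
            """Assign a unique timbre and a staggered onset to each earcon."""
            if len(set(timbres)) < len(names):
                raise ValueError("each concurrent earcon should have a unique timbre")
            return [Earcon(n, t, i * ONSET_DELAY_S)
                    for i, (n, t) in enumerate(zip(names, timbres))]

        # Example: three concurrent earcons, staggered 0 ms / 300 ms / 600 ms
        for e in schedule_earcons(["new-mail", "low-battery", "meeting"],
                                  ["piano", "marimba", "organ"]):
            print(f"{e.name}: timbre={e.timbre}, onset={e.onset_s:.1f}s")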

    Flying by Ear: Blind Flight with a Music-Based Artificial Horizon

    Two experiments were conducted in actual flight operations to evaluate an audio artificial horizon display that imposed aircraft attitude information on pilot-selected music. The first experiment examined a pilot's ability to identify, with vision obscured, a change in aircraft roll or pitch, with and without the audio artificial horizon display. The results suggest that the audio horizon display improves the accuracy of attitude identification overall but differentially affects response time across conditions. In the second experiment, subject pilots performed recoveries from displaced aircraft attitudes using either standard visual instruments or, with vision obscured, the audio artificial horizon display. The results suggest that subjects were able to maneuver the aircraft to within its safety envelope. Overall, pilots were able to benefit from the display, suggesting that such a display could help to improve overall safety in general aviation.
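
    A minimal sketch of one plausible attitude-to-audio mapping, assuming roll is encoded as stereo pan and pitch as overall level; this is an illustrative assumption, not necessarily the encoding used in the study, and attitude_to_gains is a hypothetical name.

        # Sketch of an assumed attitude-to-audio mapping (not the study's actual
        # encoding): roll pans the music left/right, pitch scales overall level.
        import math

        def attitude_to_gains(roll_deg, pitch_deg, max_deg=60.0):
            """Return (left_gain, right_gain) for a music-based artificial horizon."""
            # Roll right -> music shifts right; clamp to the displayable range.
            pan = max(-1.0, min(1.0, roll_deg / max_deg))
            # Constant-power pan law keeps perceived loudness steady across roll angles.
            theta = (pan + 1.0) * math.pi / 4.0
            left, right = math.cos(theta), math.sin(theta)
            # Nose-down lowers overall level, nose-up raises it (illustrative choice).
            level = 1.0 + max(-0.5, min(0.5, pitch_deg / max_deg))
            return left * level, right * level

        print(attitude_to_gains(roll_deg=30.0, pitch_deg=-10.0))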

    The Ursinus Weekly, May 7, 1962

    Curtain Club's The Girls in 509 scheduled Friday and Saturday • Fraser & Dingman selected head soph rulers to enliven '62 frosh customs program • Canterbury Club features speaker, CBS church film • Awards highlight waiters' banquet • Dennis recovering from heart attack • Moyer attends ISC meeting at E-town • World traveler slated to address IRC tonight • Yippee-i-o theme of Spring Festival featuring queen, festivities Saturday • Campus politicers reconcile riffs at annual banquet • Hudnut & students plan jazz seminar • Cub & Key Society meets at Staigers • Editorial: Beauty versus popularity; Turning the tables • Shenanigan '62 senior show theme • Dozing technique requires practice • Sesquicentennial fetes continue in nearby Norristown • Obscurity, neglect & confusion mark UC's undeveloped college museum • Tennismen win over LaSalle, drop PMC heartbreaker • Lacrossers swamp West Chesterettes • Baseballers smash Haverford, F&M, lose to E-Towners • Trackmen trounce PMC & Dickinson, upset by Hopkins • UC & West Chester swap wins to open softball season • Greek gleanings

    Distance information transmission using first order reflections

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 114-116). By Douglas S. Brungart.

    Detection and localization of speech in the presence of competing speech signals

    Presented at the 12th International Conference on Auditory Display (ICAD), London, UK, June 20-23, 2006. Auditory displays are often used to convey important information in complex operational environments. One problem with these displays is that potentially critical information can be corrupted or lost when multiple warning sounds are presented at the same time. In this experiment, we examined a listener's ability to detect and localize a target speech token in the presence of one to five simultaneous competing speech tokens. Two conditions were examined: one in which all of the speech tokens were presented from the same location (the 'co-located' condition) and one in which the speech tokens were presented from different random locations (the 'spatially separated' condition). The results suggest that both detection and localization degrade as the number of competing sounds increases. However, the changes in detection performance were surprisingly small, and there appeared to be little or no benefit of spatial separation for detection. Localization, on the other hand, degraded substantially and systematically as the number of competing speech tokens increased. Overall, these results suggest that listeners are able to extract substantial information from these speech tokens even when the target is presented with five competing simultaneous sounds.
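
    A minimal Python sketch of how the two spatial conditions could be assigned, assuming a fixed grid of candidate azimuths; assign_azimuths and the 30-degree grid are illustrative assumptions, not the experiment's actual parameters.

        # Sketch of the two spatial conditions described above: 'co-located' puts
        # every token at one azimuth; 'separated' draws distinct random locations.
        import random

        def assign_azimuths(n_tokens, condition, azimuths=range(-90, 91, 30)):
            """Return one azimuth (degrees) per speech token for a given condition."""
            if condition == "co-located":
                loc = random.choice(list(azimuths))
                return [loc] * n_tokens          # target and maskers share a location
            elif condition == "separated":
                # Distinct random locations so no two tokens overlap spatially.
                return random.sample(list(azimuths), n_tokens)
            raise ValueError(condition)

        # Example: a target plus 5 competing tokens (6 total) in each condition
        print(assign_azimuths(6, "co-located"))
        print(assign_azimuths(6, "separated"))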

    Optimizing the spatial configuration of a seven-talker speech display

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. Although there is substantial evidence that performance in multitalker listening tasks can be improved by spatially separating the apparent locations of the competing talkers, very little effort has been made to determine the best locations and presentation levels for the talkers in a multichannel speech display. In this experiment, a call-sign based color and number identification task was used to evaluate the effectiveness of three different spatial configurations and two different level normalization schemes in a seven-channel binaural speech display. When only two spatially adjacent channels of the seven-channel system were active, overall performance was substantially better with a geometrically spaced configuration (with far-field talkers at -90°, -30°, -10°, 0°, +10°, +30°, and +90° azimuth) or a hybrid near-far configuration (with far-field talkers at -90°, -30°, 0°, +30°, and +90° azimuth and near-field talkers at ±90°) than with a more conventional linearly spaced configuration (with far-field talkers at -90°, -60°, -30°, 0°, +30°, +60°, and +90° azimuth). When all seven channels were active, performance was generally better with a 'better-ear' normalization scheme that equalized the levels of the talkers in the more intense ear than with a default normalization scheme that equalized the levels of the talkers at the center of the head. The best overall performance in the seven-talker task occurred when the hybrid near-far spatial configuration was combined with the better-ear normalization scheme. This combination resulted in a 20% increase in the number of correct identifications relative to the baseline condition with linearly spaced talker locations and no level normalization. Although this is a relatively modest improvement, it should be noted that it could be achieved at little or no cost simply by reconfiguring the HRTFs used in a multitalker speech display.
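
    A minimal sketch of the three talker layouts and a simple better-ear normalization, assuming per-talker levels at each ear are already known (e.g., from HRTF filtering); better_ear_gains and the example numbers are illustrative, not the paper's exact procedure.

        # Sketch of the three talker layouts and better-ear normalization described
        # above; the gain math is an illustrative simplification.
        LINEAR    = [-90, -60, -30, 0, +30, +60, +90]            # far-field, degrees
        GEOMETRIC = [-90, -30, -10, 0, +10, +30, +90]            # far-field, degrees
        HYBRID    = [(-90, "far"), (-30, "far"), (0, "far"),     # five far-field plus
                     (+30, "far"), (+90, "far"),                 # two near-field
                     (-90, "near"), (+90, "near")]               # talkers at +/-90

        def better_ear_gains(levels_left, levels_right, target_db=60.0):
            """Scale each talker so its level in its more intense ear hits target_db."""
            return [target_db - max(l, r) for l, r in zip(levels_left, levels_right)]

        # Example: per-talker dB levels at each ear (hypothetical HRTF outputs)
        left  = [66, 63, 61, 60, 59, 57, 54]
        right = [54, 57, 59, 60, 61, 63, 66]
        print(better_ear_gains(left, right))   # dB gain to apply to each channel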

    On the Emergence and Awareness of Auditory Objects

    How do humans successfully navigate the sounds of music and the voice of a friend in the midst of a noisy cocktail party? Two recent articles in PLoS Biology provide psychoacoustic and neuronal clues about where to search for the answers.

    Egocentric and allocentric representations in auditory cortex

    Get PDF
    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate-frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves, but that a minority of cells can represent sound location in the world independent of our own position.
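
    A minimal sketch of the coordinate-frame distinction, assuming 2-D positions and a head-direction angle; egocentric_azimuth is a hypothetical helper for illustration, not from the study.

        # Sketch of the coordinate-frame distinction above: the same world-fixed
        # sound source yields different egocentric (head-centered) azimuths as the
        # head moves, while its allocentric (world) position stays constant.
        import math

        def egocentric_azimuth(source_xy, head_xy, head_dir_deg):
            """Azimuth of a sound source relative to the head, in degrees.

            0 = straight ahead, positive = to the head's right."""
            dx = source_xy[0] - head_xy[0]
            dy = source_xy[1] - head_xy[1]
            world_angle = math.degrees(math.atan2(dx, dy))   # bearing in world frame
            az = world_angle - head_dir_deg                  # subtract head direction
            return (az + 180.0) % 360.0 - 180.0              # wrap to [-180, 180)

        # A world-fixed source at (1, 1): an allocentric unit would respond the same
        # at both head poses below; an egocentric unit sees 45 deg then -45 deg.
        print(egocentric_azimuth((1, 1), (0, 0), head_dir_deg=0))    # 45.0
        print(egocentric_azimuth((1, 1), (0, 0), head_dir_deg=90))   # -45.0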