14 research outputs found

    Collaboratively Identifying and Referring to Sounds with Words and Phrases

    Presented at the 20th International Conference on Auditory Display (ICAD2014), June 22-25, 2014, New York, NY. Machine classification of underwater sounds remains an important focus of U.S. Naval research due to physical and environmental factors that increase false alarm rates. Human operators tend to be reliably better at this auditory task than automated methods, but the attentional properties of this cognitive discrimination skill are not well understood. In the study presented here, pairs of isolated listeners, who were only allowed to talk to each other, were given a collaborative sound-ordering task in which only words and phrases could be used to refer to and identify a set of impulsive sonar echoes. The outcome supports the premise that verbal descriptions of unfamiliar sounds are often difficult for listeners to immediately grasp. The method of “collaborative referring” used in the study is proposed as a new technique for obtaining a verified perceptual vocabulary for a given set of sounds and for studying human aural identification and discrimination skills.

    Word Spotting in a Multichannel Virtual Auditory Display at Normal and Accelerated Rates of Speech

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). The demands of concurrent radio communications in Navy shipboard command centers contribute to the problem of operator information overload and impede personnel optimization goals for new platforms. Motivations for serializing this task, and human performance research with virtual, multichannel, rate-accelerated speech in support of this idea, are briefly reviewed, and the results of a recent listening study in which participants carried out a Navy-relevant word-spotting task in this context are reported.

    To What Extent Do Listeners Use Aural Information When It Is Present?

    Presented at the 17th International Conference on Auditory Display (ICAD2011), 20-23 June, 2011 in Budapest, Hungary. In three of the manipulations in a 2009 dual-task performance study, virtual auditory cues were used to alert participants to the onset of three kinds of decision events in the secondary, but more demanding, of the two tasks. Two aural parameters, the manner in which the cues were spatially presented and the level of task-related information they carried, were systematically altered to compare the impact of these factors on performance. In the work presented here, we focus on performance measures that can be correlated with participants’ use of aurally-based task-related information in the study. Although secondary task decision response times were nominally the same in each manipulation, an analysis of head-tracking data shows that, when they were cued, participants turned their attention from the primary to the secondary task significantly sooner when a single sound (always the same) was used to announce decision events. In contrast, when a different sound was used to signal each kind of decision event, participants, after being cued, spent less time (but not significantly so) examining the secondary task before entering their responses. The nature of this tradeoff and its implications for information design in auditory cueing are discussed.

    Sonification of NRL Dual-Task Data

    Presented at the 16th International Conference on Auditory Display (ICAD2010) on June 9-15, 2010 in Washington, DC.

    Comprehension of Speech Presented at Synthetically Accelerated Rates: Evaluating Training and Practice Effects

    Presented at the 16th International Conference on Auditory Display (ICAD2010) on June 9-15, 2010 in Washington, DC. The ability to monitor multiple sources of concurrent auditory information is an integral component of Navy watchstanding operations, but it makes for an attentionally demanding environment. The present study tested the utility of a potential solution to listening to multiple speech communications in an auditory display environment: presenting speech serially at synthetically accelerated rates. Comprehension performance on short auditory narratives was compared at seven accelerated speech rates, and practice and training effects were examined. An optimum acceleration rate for comprehension performance was determined, and training was found to be effective when synthetic speech was presented at slow to moderately accelerated rates.
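
    The acceleration technique these studies examine can be illustrated with a pitch-preserving time stretch. A minimal sketch, assuming the librosa and soundfile Python libraries; the abstracts do not name the actual speech-acceleration tool used, and the file names here are hypothetical:

        import librosa
        import soundfile as sf

        # Load a spoken narrative (hypothetical file) at its native sample rate.
        speech, sr = librosa.load("narrative.wav", sr=None)

        # rate > 1.0 accelerates the speech without shifting its pitch;
        # rate=1.5 plays the narrative in two-thirds of its original duration.
        accelerated = librosa.effects.time_stretch(speech, rate=1.5)

        sf.write("narrative_1p5x.wav", accelerated, sr)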

    Comprehending Synthetically Accelerated Speech: The Relationship Between Performance and Self-confidence

    Presented at the 17th International Conference on Auditory Display (ICAD2011), 20-23 June, 2011 in Budapest, Hungary. The present study examines the ability to comprehend speech presented at synthetically accelerated rates in an auditory display environment. We sought to determine whether listeners could accurately predict their own performance when listening to accelerated speech. Comprehension performance and self-confidence ratings were compared at seven different rates of presentation. Self-confidence and accurate comprehension were related at slow to moderately accelerated rates of speech; however, listeners demonstrated an overconfidence effect at higher accelerated speech rates.

    Evaluating the utility of auditory perspective-taking in robot speech presentations

    Presented at the 15th International Conference on Auditory Display (ICAD2009), Copenhagen, Denmark, May 18-22, 2009. In speech interactions, people routinely reason about each other’s auditory perspective and adjust their manner of speaking accordingly, raising their voice to overcome noise or distance, and sometimes pausing and resuming when conditions are more favorable for their listener. In this paper we report the findings of a listening study motivated by both this observation and a prototype auditory interface for a mobile robot that monitors the aural parameters of its environment to infer its user’s listening requirements. The results provide significant empirical evidence of the utility of simulated auditory perspective-taking and the inferred use of loudness and/or pauses to overcome the potential of ambient noise to mask synthetic speech.
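
    The sense-and-adapt behavior such an interface implies can be pictured as a loop that samples ambient loudness and then either raises the voice or waits. A minimal sketch, assuming the sounddevice Python library for microphone capture; the dB estimate is uncalibrated, and the tts object, its methods, and the thresholds are hypothetical stand-ins rather than the prototype's actual interface:

        import time
        import numpy as np
        import sounddevice as sd

        SAMPLE_RATE = 16_000
        PAUSE_THRESHOLD_DB = 65.0  # hypothetical level at which speech would be masked

        def ambient_level_db(duration=0.25):
            # Estimate ambient loudness from a short microphone capture.
            frames = sd.rec(int(duration * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                            channels=1, blocking=True)
            rms = np.sqrt(np.mean(frames ** 2)) + 1e-12
            return 20 * np.log10(rms) + 94  # uncalibrated offset; illustration only

        def speak(phrases, tts):
            # Pause while noise would mask the speech; otherwise scale the
            # output level with the ambient level before each phrase.
            for phrase in phrases:
                level = ambient_level_db()
                while level > PAUSE_THRESHOLD_DB:
                    time.sleep(0.5)  # resume when conditions are more favorable
                    level = ambient_level_db()
                tts.set_volume(min(1.0, 0.5 + max(0.0, level - 40.0) / 50.0))
                tts.say(phrase)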

    Evaluating listeners’ attention to and comprehension of serially interleaved, rate-accelerated speech

    Presented at the 18th International Conference on Auditory Display (ICAD2012) on June 18-21, 2012 in Atlanta, Georgia. Reprinted by permission of the International Community for Auditory Display, http://www.icad.org. In Navy command operations, individual watchstanders must often monitor two or more channels of spoken communications at a time, which in turn can undermine information awareness and decision performance. Recent basic work on this operational challenge has shown that a virtual auditory display solution, in which competing messages are presented one at a time at faster rates of speech, can achieve large and significant improvements over the diminished listening performance observed when equivalent materials are monitored concurrently at normal speaking rates. In the third of a series of experiments developed to address the performance questions this framework raises for listeners, dependent measures of attention and comprehension were compared in a two-factor design that manipulated how serial turns among four talkers were organized and their rate of speech. Although both factors had significant impacts on performance, the resulting measures remained substantially higher than performance in concurrent-talker conditions.
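
    The serialization this framework describes amounts to a producer/consumer queue: each channel enqueues its messages, and a single display plays them back one at a time at an accelerated rate. A minimal Python sketch under assumed helpers (receive and play_accelerated are hypothetical stand-ins for real radio I/O and rate-accelerated playback), not the study apparatus:

        import queue
        import threading

        message_queue = queue.Queue()

        def channel_listener(channel_id, receive):
            # Producer: a monitored channel enqueues each incoming message
            # instead of playing it over the other talkers.
            while True:
                message_queue.put((channel_id, receive()))

        def display_loop(play_accelerated):
            # Consumer: present queued messages one at a time, at a faster
            # speech rate, so no two talkers ever overlap.
            while True:
                channel_id, audio = message_queue.get()
                play_accelerated(audio, rate=1.5)

        def start(channels, play_accelerated):
            # One listener thread per talker (the study used four).
            for channel_id, receive in channels:
                threading.Thread(target=channel_listener,
                                 args=(channel_id, receive), daemon=True).start()
            display_loop(play_accelerated)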