
    Flying by Ear: Blind Flight with a Music-Based Artificial Horizon

    Two experiments were conducted in actual flight operations to evaluate an audio artificial horizon display that imposed aircraft attitude information on pilot-selected music. The first experiment examined a pilot's ability to identify, with vision obscured, a change in aircraft roll or pitch, with and without the audio artificial horizon display. The results suggest that the audio horizon display improves the accuracy of attitude identification overall, but differentially affects response time across conditions. In the second experiment, subject pilots performed recoveries from displaced aircraft attitudes using either standard visual instruments or, with vision obscured, the audio artificial horizon display. The results suggest that subjects were able to maneuver the aircraft to within its safety envelope. Overall, pilots were able to benefit from the display, suggesting that such a display could help to improve overall safety in general aviation.
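
    The abstract does not specify how attitude is imposed on the music, so the following Python sketch is only one plausible mapping: roll shown as a left/right level imbalance (constant-power pan) and pitch shown as an overall gain change on the music stream. The function name, parameter ranges, and mapping are illustrative assumptions, not the encoding used in the study.

        import numpy as np

        def apply_attitude_to_music(stereo_block, roll_deg, pitch_deg,
                                    max_roll=60.0, max_pitch=30.0):
            """Hypothetical mapping of aircraft attitude onto a music stream.

            Assumptions (not the paper's actual encoding):
              - roll is conveyed as a left/right level imbalance (constant-power pan)
              - pitch is conveyed as an overall gain change (nose up -> louder)
            `stereo_block` is an (N, 2) float array of music samples in [-1, 1].
            """
            # Roll: map [-max_roll, +max_roll] deg to a pan position in [-1, +1].
            pan = np.clip(roll_deg / max_roll, -1.0, 1.0)
            theta = (pan + 1.0) * np.pi / 4.0            # 0 .. pi/2
            left_gain, right_gain = np.cos(theta), np.sin(theta)

            # Pitch: map [-max_pitch, +max_pitch] deg to an overall gain of 0.5 .. 1.5.
            overall = 1.0 + 0.5 * np.clip(pitch_deg / max_pitch, -1.0, 1.0)

            out = stereo_block.copy()
            out[:, 0] *= left_gain * overall
            out[:, 1] *= right_gain * overall
            return out

        # Example: a block of low-level noise standing in for music,
        # aircraft banked 30 deg right and pitched 10 deg nose up.
        block = np.random.uniform(-0.1, 0.1, size=(4800, 2))
        shaped = apply_attitude_to_music(block, roll_deg=30.0, pitch_deg=10.0)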

    In Ear to Out There: A Magnitude Based Parameterization Scheme for Sound Source Externalization

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). While several potential auditory cues responsible for sound source externalization have been identified, less work has gone into providing a simple and robust way of manipulating perceived externalization. The current work describes a simple approach for parametrically modifying individualized head-related transfer function spectra that results in a systematic change in the perceived externalization of a sound source. Methods and results from a subjective evaluation validating the technique are presented, and further discussion relates the current method to previously identified cues for auditory distance perception.
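
    The abstract does not give the exact parameterization, so the sketch below shows only one generic way a single parameter could scale individualized HRTF magnitude spectra: compressing the log-magnitude toward a flat spectrum while preserving phase. The function name, the choice of a flat reference, and the interpretation of the parameter are assumptions for illustration, not the published scheme.

        import numpy as np

        def externalization_scaled_hrtf(hrir, alpha, n_fft=None):
            """Minimal sketch of a single-parameter magnitude manipulation.

            Assumption: the individualized HRTF magnitude is compressed toward a
            flat (0 dB) spectrum by a factor `alpha` in the log-magnitude domain,
            with the measured phase preserved. alpha = 1.0 keeps the measured HRTF;
            alpha = 0.0 flattens the magnitude entirely.
            """
            n_fft = n_fft or len(hrir)
            H = np.fft.rfft(hrir, n_fft)
            log_mag = np.log(np.abs(H) + 1e-12)
            scaled_mag = np.exp(alpha * log_mag)              # shrink spectral detail toward flat
            H_scaled = scaled_mag * np.exp(1j * np.angle(H))  # keep the original phase
            return np.fft.irfft(H_scaled, n_fft)

        # Example: a toy 256-tap "HRIR" rendered with the parameter swept.
        rng = np.random.default_rng(0)
        hrir = rng.normal(size=256) * np.hanning(256)
        for alpha in (0.0, 0.5, 1.0):
            modified = externalization_scaled_hrtf(hrir, alpha)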

    Virtual-Audio Aided Visual Search on a Desktop Display

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). As visual display complexity grows, visual cues and alerts may become less salient and therefore less effective. Although the auditory system's resolution is rather coarse relative to the visual system, there is some evidence that virtual spatialized audio can benefit visual search over a small frontal region, such as a desktop monitor. Two experiments examined whether search times could be reduced compared to visual-only search through spatial auditory cues rendered using one of two methods: individualized or generic head-related transfer functions. Results showed that cue type interacted with display complexity, with larger reductions compared to visual-only search as set size increased. For larger set sizes, individualized cues were significantly better than generic cues overall. Across all set sizes, individualized cues were better than generic cues for cueing eccentric elevations (> ±8°). Where performance must be maximized, designers should use individualized virtual audio if at all possible, even in a small frontal region within the field of view.
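
    As a rough illustration of the rendering step implied by the abstract, the sketch below converts an on-screen target position into a direction relative to the viewer and spatializes a cue by convolving it with an HRIR pair (individualized or generic). The viewing geometry, placeholder HRIRs, and function names are assumptions, not the apparatus used in the experiments.

        import numpy as np

        def screen_position_to_direction(x_cm, y_cm, viewing_distance_cm=60.0):
            """Convert a target's offset from screen centre (cm) into azimuth and
            elevation (deg) relative to a viewer centred in front of the display.
            A simplification for illustration only."""
            az = np.degrees(np.arctan2(x_cm, viewing_distance_cm))
            el = np.degrees(np.arctan2(y_cm, viewing_distance_cm))
            return az, el

        def spatialize_cue(cue, hrir_left, hrir_right):
            """Render a mono cue at the direction of the chosen HRIR pair
            (individualized or generic) by direct convolution."""
            left = np.convolve(cue, hrir_left)
            right = np.convolve(cue, hrir_right)
            return np.stack([left, right], axis=-1)

        # Example: cue a target 10 cm right of and 6 cm above screen centre.
        az, el = screen_position_to_direction(10.0, 6.0)
        cue = np.random.uniform(-1, 1, 2205)         # 50 ms of noise at 44.1 kHz
        hl = hr = np.r_[1.0, np.zeros(127)]          # placeholder HRIRs; measured ones in practice
        stereo_cue = spatialize_cue(cue, hl, hr)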

    Do you hear where I hear?: Isolating the individualized sound localization cues.

    It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study, we decomposed the HRTF at each location into average, lateral, and intraconic spectral components, along with an interaural time difference (ITD), in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test in which brief 250-ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra, and that localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into which inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
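
    The decomposition into average, lateral, and intraconic spectral components can be sketched as in the Python below: the average is the direction-independent mean log-magnitude spectrum, the lateral component is the mean within each cone of confusion relative to that average, and the intraconic component is the residual that varies within a cone. The grouping by lateral angle, the data layout, and the omission of ITD handling are simplifying assumptions for illustration; the paper's exact processing may differ.

        import numpy as np

        def decompose_hrtf_log_mags(log_mags, lateral_angles):
            """Sketch of the average / lateral / intraconic decomposition.

            `log_mags` is an (n_positions, n_freqs) array of log-magnitude HRTF
            spectra for one ear; `lateral_angles` gives each position's lateral
            angle so positions on the same cone of confusion can be grouped.
            """
            average = log_mags.mean(axis=0, keepdims=True)     # direction-independent part

            lateral = np.zeros_like(log_mags)
            for lat in np.unique(lateral_angles):
                idx = lateral_angles == lat
                # mean spectrum on this cone of confusion, relative to the overall average
                lateral[idx] = log_mags[idx].mean(axis=0, keepdims=True) - average

            intraconic = log_mags - average - lateral           # variation within a cone
            return average, lateral, intraconic

        # Toy example: 8 directions (2 cones x 4 intraconic positions), 64 frequency bins.
        rng = np.random.default_rng(1)
        log_mags = rng.normal(size=(8, 64))
        lateral_angles = np.array([-30, -30, -30, -30, 30, 30, 30, 30])
        avg, lat, intra = decompose_hrtf_log_mags(log_mags, lateral_angles)
        # Swapping in another listener's `avg`/`lat` while keeping this listener's `intra`
        # mirrors the individualized-vs-generic reconstructions tested in the study.
        assert np.allclose(avg + lat + intra, log_mags)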

    Evaluating Spatial-Auditory Symbology for Improved Performance in Low-Fidelity Spatial Audio Displays

    For decades, spatial auditory displays have been considered to be a promising technology to help fight pilot disorientation and loss of situation awareness. Inherently heads-up, these displays can provide time-critical spatial information to pilots about navigational targets, air and runway traffic, wingman location, and even the attitude of one's aircraft without placing additional demands on the already over-tasked visual system. Unfortunately, currently-fielded auditory displays often suffer from poor spatial fidelity, particularly in elevation, due to their use of a one-size-fits-all (i.e., non-personalized) head-related transfer function (HRTF), the set of filters responsible for creating the spatial impression. The current study investigated the utility of combining a spatial cue (non-personalized HRTF) with one of two auditory symbologies, one providing both object and location information, and the other only location information. In one case, ecologically-valid sounds were paired with a particular class of visual object, and spatial cues indicated a plausible target elevation (e.g., a squeak indicated the target was a rat on the floor). In the other condition, the cue was a broadband sound, the repetition rate of which indicated target elevation (i.e., the cue provided only location information, not object information). Results indicate that target acquisition times were lower when meaningful (i.e., ecologically-valid) cues were added to non-personalized spatial cues than when the source-based cues provided no information about the target source. These results indicate that careful construction of auditory symbology could improve performance of cockpit-based spatial auditory displays when personalized, high-fidelity spatial processing is not practical.
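
    For the location-only symbology, the abstract says only that repetition rate encoded target elevation. The sketch below shows one way such a cue could be generated: a broadband pulse train whose rate increases with elevation. The rate range, elevation limits, burst duration, and function name are assumptions, not the study's actual stimulus parameters.

        import numpy as np

        def elevation_to_pulse_train(elevation_deg, min_rate=2.0, max_rate=10.0,
                                     el_range=(-40.0, 40.0), dur_s=1.0, fs=44100):
            """Illustrative location-only cue: a broadband pulse train whose
            repetition rate encodes target elevation. Parameter values are
            assumptions for the sketch, not the study's values."""
            lo, hi = el_range
            frac = np.clip((elevation_deg - lo) / (hi - lo), 0.0, 1.0)
            rate = min_rate + frac * (max_rate - min_rate)    # pulses per second

            n = int(dur_s * fs)
            out = np.zeros(n)
            pulse = np.random.uniform(-1, 1, int(0.02 * fs))  # 20 ms broadband burst
            period = int(fs / rate)
            for start in range(0, n - len(pulse), period):
                out[start:start + len(pulse)] = pulse
            return out

        # Example: a cue for a target well above eye level repeats faster
        # than one for a target near the floor.
        high_cue = elevation_to_pulse_train(+30.0)
        low_cue = elevation_to_pulse_train(-30.0)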

    Searching for the model of common ground in human-computer dialogue

    Natural language dialogue is a desirable method for human-robot interaction and human-computer interaction. Critical to the success of dialogue is the underlying model for common ground and the grounding process that establishes, adds to, and repairs shared understanding. The model of grounding for human-computer interaction should be informed by human-human dialogue. However, the processes involved in human-human grounding are under dispute within the research community. Three models have been proposed: alignment, a simple model that has been influential on dialogue system development; interpersonal synergy, an automatic coordination emerging from interaction; and perspective taking, a strategic interaction based on intentional coordination. Few studies have simultaneously evaluated these models. We tested the models' ability to account for human-human performance in a complex collaborative task that stressed the grounding process. The results supported the perspective taking model over the synergy model and the alignment model, indicating the need to reassess the alignment model as a foundation for human-computer interaction.

    A Comparison of Head-Tracked and Vehicle-Tracked Virtual Audio Cues in an Aircraft Navigation Task

    Since the earliest conception of virtual audio displays in the 1980s, two basic principles that have guided their development have been 1) that virtual audio cues are ideal for providing information to pilots in aviation applications; and 2) that head-tracked virtual audio displays provide more accurate and more intuitive directional information than non-tracked displays. However, despite the obvious potential utility of spatial audio cues in the cockpit, very little quantitative data has been collected to evaluate the in-flight performance of pilots using virtual audio displays. In this study, sixteen pilots maneuvered a general aviation aircraft through a series of ten waypoints using only direction cues provided by a virtual audio display system. Each pilot repeated the task twice: once with the virtual audio display slaved to the direction of the pilot's head, and once with it slaved to the direction of the aircraft. Both configurations provided audio cues that were sufficient for successful aircraft navigation, with pilots on average guiding their aircraft to within 0.25 miles of the desired waypoints. However, performance was significantly better in the plane-slaved condition, primarily due to a leftward bias in the head-slaved flight paths. This result suggests how important frame-of-reference considerations can be in the design of virtual audio displays for vehicle navigation.
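
    The difference between the two conditions comes down to the reference frame used when computing the direction cue. The Python sketch below illustrates that distinction with a flat-earth bearing calculation: a plane-slaved display references the aircraft heading, while a head-slaved display references the aircraft heading plus the pilot's head yaw. The geometry, coordinates, and function name are illustrative assumptions, not the study's display implementation.

        import math

        def relative_bearing(own_lat, own_lon, wpt_lat, wpt_lon, reference_heading_deg):
            """Bearing to a waypoint, in degrees, relative to a chosen reference heading.

            For a head-slaved display the reference is aircraft heading plus head yaw;
            for a plane-slaved display it is aircraft heading alone. Flat-earth
            approximation for short legs; an illustration only.
            """
            d_north = wpt_lat - own_lat
            d_east = (wpt_lon - own_lon) * math.cos(math.radians(own_lat))
            true_bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
            return (true_bearing - reference_heading_deg + 180.0) % 360.0 - 180.0  # -180 .. +180

        # Example: aircraft heading 090, pilot's head turned 20 deg left of the nose.
        aircraft_heading, head_yaw = 90.0, -20.0
        plane_slaved = relative_bearing(39.00, -84.00, 39.05, -83.95, aircraft_heading)
        head_slaved = relative_bearing(39.00, -84.00, 39.05, -83.95, aircraft_heading + head_yaw)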