
    Auditory Decision Aiding in Supervisory Control of Multiple Unmanned Aerial Vehicles

    This paper investigates the effectiveness of sonification, a continuous auditory alert mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. Background: UAV supervisory control requires monitoring each UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., a patient's) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). Method: An experiment was conducted with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs, and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Results: Regardless of the number of UAVs supervised, the course deviation sonification resulted in 1.9 s faster reactions to course deviations, a 19% improvement over discrete alerts. However, course deviation sonification interfered with the effectiveness of discrete late arrival alerts in general, and with operator response to late arrivals when supervising multiple vehicles. Conclusions: Sonifications can outperform discrete alerts when designed to help operators predict future states of monitored tasks. However, sonifications may mask other auditory alerts, and may interfere with other monitoring tasks that require divided attention.
    Funded by the US Army through a Small Business Innovation Research effort led by Charles River Analytics, Inc.

    Visual Analytics: Computational AND Representational Data Processing to Support Analytic Rigor


    Examining the learnability of auditory displays: Music, earcons, spearcons, and lyricons

    Auditory displays are a useful platform for conveying information to users. The present study examined the use of different types of sounds that can be used in auditory displays—music, earcons, spearcons, and lyricons—to determine which sounds have the highest learnability when presented in sequences. Participants were self-trained on sound meanings and then asked to recall meanings after listening to sequences of varying lengths. The relatedness of sounds and their attributed meanings, or the intuitiveness of the sounds, was also examined. The results show that participants learned and recalled lyricons and spearcons best, and that related meaning is an important contributing variable to the learnability and memorability of all sound types. This should open the door for future research and experimentation with lyricons and spearcons presented in auditory streams.

    The Effects of Design on Performance for Data-based and Task-based Sonification Designs: Evaluation of a Task-based Approach to Sonification Design for Surface Electromyography

    The goal of this work was to evaluate a task-analysis-based approach to sonification design for surface electromyography (sEMG) data. A sonification is a type of auditory display that uses sound to convey information about data to a listener. Sonifications work by mapping changes in a parameter of sound (e.g., pitch) to changes in data values, and they have been shown to be useful in biofeedback and movement analysis applications. However, research that investigates and evaluates sonifications has been difficult due to the highly interdisciplinary nature of the field. Progress has been made, but to date many sonification designs have not been empirically evaluated, and some have been described as annoying, confusing, or fatiguing. Sonification design decisions have also often been based on characteristics of the data being sonified, and not on the listener’s data analysis task. The hypothesis for this thesis was that focusing on the listener’s task when designing sonifications could result in sonifications that were more readily understood and less annoying to listen to. Task analysis methods have been developed in fields like Human Factors and Human Computer Interaction, and their purpose is to break tasks down into their most basic elements so that products and software can be developed to meet user needs. Applying this approach to sonification design, a type of task analysis focused on Goals, Operators, Methods, and Selection rules (GOMS) was used to analyze two sEMG data evaluation tasks and to identify design criteria that a sonification would need to meet to allow a listener to perform these tasks; two sonification designs were created to facilitate accomplishment of these tasks. These Task-based sonification designs were then empirically compared to two Data-based sonification designs.
The Task-based designs resulted in better listener performance for both sEMG data evaluation tasks, demonstrating the effectiveness of the Task-based approach and suggesting that sonification designers may benefit from adopting a task-based approach to sonification design.
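The parameter-mapping principle described in the abstract above (changes in a data value drive changes in a sound parameter such as pitch) can be sketched as follows. The value ranges, frequency bounds, and function names are illustrative assumptions, not the designs evaluated in the thesis.

```python
import math

# A minimal sketch of parameter-mapping sonification: a data value is mapped
# linearly onto a pitch range, and a short tone is synthesized at that pitch.
# All ranges and names here are illustrative assumptions.

def amplitude_to_pitch(value, data_min=0.0, data_max=1.0,
                       freq_min=220.0, freq_max=880.0):
    """Linearly map a data value (e.g., rectified sEMG amplitude) to a frequency in Hz."""
    value = min(max(value, data_min), data_max)       # clamp to the data range
    t = (value - data_min) / (data_max - data_min)    # normalize to [0, 1]
    return freq_min + t * (freq_max - freq_min)

def render_tone(freq_hz, dur_s=0.1, sample_rate=8000):
    """Synthesize one sine-tone segment at the mapped frequency."""
    n = int(dur_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

# A rising data value produces a rising pitch contour.
print(amplitude_to_pitch(0.0))  # 220.0
print(amplitude_to_pitch(0.5))  # 550.0
print(amplitude_to_pitch(1.0))  # 880.0
```

A task-based design would choose the mapped parameter and its range to answer the listener's specific question (e.g., "is the muscle activating?"), rather than mirroring every property of the data.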

    Using a sequence of earcons to monitor multiple simulated patients

    Objective: The aim of this study was to determine whether a sequence of earcons can effectively convey the status of multiple processes, such as the status of multiple patients in a clinical setting. Background: Clinicians often monitor multiple patients. An auditory display that intermittently conveys the status of multiple patients may help. Method: Nonclinician participants listened to sequences of 500-ms earcons that each represented the heart rate (HR) and oxygen saturation (SpO2) levels of a different simulated patient. In each sequence, one, two, or three patients had an abnormal level of HR and/or SpO2. In Experiment 1, participants reported which of nine patients in a sequence were abnormal. In Experiment 2, participants identified the vital signs of one, two, or three abnormal patients in sequences of one, five, or nine patients, where the interstimulus interval (ISI) between earcons was 150 ms. Experiment 3 used the five-patient sequence condition of Experiment 2, but the ISI was either 150 ms or 800 ms. Results: Participants reported which patient(s) were abnormal with a median accuracy of 95%. Identification accuracy for vital signs decreased as the number of abnormal patients increased from one to three, p < .001, but accuracy was unaffected by the number of patients in a sequence. Overall, identification accuracy was significantly higher with an ISI of 800 ms (89%) than with an ISI of 150 ms (83%), p < .001. Conclusion: A multiple-patient display can be created by cycling through earcons that represent individual patients. Application: The principles underlying the multiple-patient display can be extended to other vital signs, designs, and domains.
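The cyclic display described above can be sketched as a simple schedule of earcon onsets. The 500-ms earcon duration and 150-ms/800-ms ISIs come from the abstract; the function name and output format are assumptions for illustration.

```python
# Sketch of a cyclic multiple-patient auditory display: one earcon per patient,
# played in sequence with a fixed inter-stimulus interval (ISI).
# Durations follow the abstract (500 ms earcons, 150 ms or 800 ms ISI);
# everything else is an illustrative assumption.

def earcon_schedule(n_patients, earcon_ms=500, isi_ms=150):
    """Return (patient_index, onset_ms) pairs for one pass through the cycle."""
    period = earcon_ms + isi_ms
    return [(p, p * period) for p in range(n_patients)]

# Nine patients at a 150 ms ISI: each slot occupies 650 ms.
sched = earcon_schedule(9)
print(sched[:3])           # [(0, 0), (1, 650), (2, 1300)]
print(sched[-1][1] + 500)  # 5700 -> one full cycle ends after 5.7 s
```

The study's finding that an 800-ms ISI improved identification accuracy corresponds here to lengthening `isi_ms`, trading a slower update cycle for more processing time per earcon.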

    The Bird's Ear View: Audification for the Spectral Analysis of Heliospheric Time Series Data.

    The sciences are inundated with a tremendous volume of data, and the analysis of rapidly expanding data archives presents a persistent challenge. Previous research in the field of data sonification suggests that auditory display may serve a valuable function in the analysis of complex data sets. This dissertation uses the heliospheric sciences as a case study to empirically evaluate the use of audification (a specific form of sonification) for the spectral analysis of large time series. Three primary research questions guide this investigation, the first of which addresses the comparative capabilities of auditory and visual analysis methods in applied analysis tasks. A number of controlled within-subject studies revealed a strong correlation between auditory and visual observations, and demonstrated that auditory analysis provided heightened sensitivity and accuracy in the detection of spectral features. The second research question addresses the capability of audification methods to reveal features that may be overlooked through visual analysis of spectrograms. A number of open-ended analysis tasks quantitatively demonstrated that participants using audification regularly discovered a greater percentage of embedded phenomena such as low-frequency wave storms. In addition, four case studies document collaborative research initiatives in which audification contributed to the acquisition of new domain-specific knowledge. The final question explores the potential benefits of audification when introduced into the workflow of a research scientist. A case study is presented in which a heliophysicist incorporated audification into their working practice, and the “Think-Aloud” protocol is applied to gain a sense of how audification augmented the researcher’s analytical abilities. Auditory observations are demonstrated to make significant contributions to ongoing research, including the detection of previously unidentified equipment-induced artifacts.
This dissertation provides three primary contributions to the field: 1) an increased understanding of the comparative capabilities of auditory and visual analysis methods, 2) a methodological framework for conducting audification that may be transferred across scientific domains, and 3) a set of well-documented cases in which audification was applied to extract new knowledge from existing data archives. Collectively, this work presents a “bird’s ear view” afforded by audification methods—a macro understanding of time series data that preserves micro-level detail.
PhD, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111561/1/rlalexan_1.pd
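The core idea of audification, direct playback of time-series samples as audio, can be sketched as follows. Because the samples are replayed far faster than they were recorded, sub-audible oscillations shift into the audible range. The sample cadence, playback rate, and function names are illustrative assumptions, not the dissertation's specific pipeline.

```python
import math

# A minimal audification sketch: normalize the raw samples and compute the
# frequency compression implied by the playback speed-up. Real pipelines would
# also write the result to an audio file; that step is omitted here.

def audify(samples, original_hz=1.0, playback_hz=44100.0):
    """Normalize samples to [-1, 1] and return the playback speed-up factor."""
    peak = max(abs(s) for s in samples) or 1.0
    audio = [s / peak for s in samples]
    speedup = playback_hz / original_hz   # every data-domain frequency is
    return audio, speedup                 # multiplied by this factor on playback

# A 0.01 Hz oscillation in 1 Hz-sampled data becomes an audible tone
# when replayed at a 44.1 kHz audio rate.
data = [math.sin(2 * math.pi * 0.01 * t) for t in range(1000)]
audio, speedup = audify(data)
print(speedup)                 # 44100.0
print(round(0.01 * speedup))   # 441 -> roughly an A above concert pitch
```

This directness is what distinguishes audification from parameter-mapping sonification: the data waveform itself becomes the audio waveform, preserving fine spectral structure.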

    Assessing the Impact of Auditory Peripheral Displays for UAV Operators

    A future implementation of unmanned aerial vehicle (UAV) operations is having a single operator control multiple UAVs. The research presented here explores possible avenues for enhancing the audio cues of UAV interfaces in support of this future control of multiple UAVs by a single operator. This project specifically evaluates the value of continuous and discrete audio cues as indicators of course deviations or late arrivals to targets during UAV missions, in both single- and multiple-UAV scenarios. To this end, an experiment was carried out with 44 military participants on the Multiple Autonomous Unmanned Vehicle Experimental (MAUVE) test bed developed in the Humans and Automation Laboratory at the Massachusetts Institute of Technology. Two continuous audio alerts were mapped to two human supervisory tasks within MAUVE. The first, an oscillating course deviation alert, was mapped to UAV course deviations, which occur over a continuous scale. The second, a modulated late arrival alert, notified the operator when a UAV was going to be late to a target; in this case the continuous audio was mapped to a discrete event, in that the UAV was either on time or late. The audio was continuous in that it played at all times, with one tone indicating the UAV was on time to a target and another indicating it was late. These continuous alerts were tested against more traditional single-beep alerts, which acted as discrete alerts: for course deviations, a single beep was played when the UAV reached a specific threshold off course, and for late arrivals, a single beep was played when the UAV became late.
The results show that the use of continuous audio alerts enhances a single operator’s performance in monitoring single and multiple semi-autonomous vehicles. However, the results also emphasize the need to properly integrate the continuous audio with the other auditory alarms and visual representations in a display, as it is possible for discrete audio alerts to be masked by the aural saliency of the continuous audio, leaving operators reliant on the visual aspects of the display.
Prepared for Charles River Analytics, Inc.
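The contrast between the two alerting schemes described above can be sketched in a few lines: the continuous alert conveys graded state at all times, while the discrete alert fires once at a fixed threshold crossing. The threshold and rate constants are illustrative assumptions, not the MAUVE parameters.

```python
# Sketch of continuous vs. discrete alerting for course deviations.
# Continuous: an oscillation rate that scales with deviation magnitude.
# Discrete: a single beep only at the moment a threshold is crossed.
# All constants are illustrative assumptions.

def continuous_alert_rate(deviation, max_deviation=10.0, max_rate_hz=8.0):
    """Oscillation rate (Hz) grows continuously with course deviation."""
    frac = min(abs(deviation) / max_deviation, 1.0)
    return frac * max_rate_hz

def discrete_alert(deviation, prev_deviation, threshold=5.0):
    """True only on the sample where the deviation crosses the threshold."""
    return abs(prev_deviation) < threshold <= abs(deviation)

print(continuous_alert_rate(2.5))  # 2.0  -> graded information before the threshold
print(discrete_alert(5.2, 4.8))    # True -> one-shot event at the crossing
print(discrete_alert(6.0, 5.2))    # False -> already past the threshold, no new beep
```

The experiment's masking result maps onto this sketch directly: a tone that is always playing can swamp the salience of a one-shot beep unless the two are deliberately separated in pitch, timbre, or timing.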

    Auditory Alarm Design for NASA CEV Applications

    This monograph reviews current knowledge in the design of auditory caution and warning signals, and sets criteria for the development of best practices for designing new signals for NASA's Crew Exploration Vehicle (CEV) and other future spacecraft, as well as for extra-vehicular operations. A design approach is presented that is based on a cross-disciplinary examination of psychoacoustic research, human factors experience, aerospace practices, and acoustical engineering requirements. Existing alarms currently in use on the NASA Space Shuttle flight deck are analyzed, and alternative designs are proposed that comply with ISO 7731, "Danger signals for work places – Auditory danger signals", and that correspond to methods suggested in the literature to ensure discrimination and audibility. Future incorporation of auditory "sonification" techniques into the design of alarms will allow auditory signals to be extremely subtle, yet extremely useful for indicating trends or the root causes of failures. A summary of best-practice engineering guidelines is given, followed by the results of an experiment involving subjective classification of alarms by ten subjects.

    Informative Vibrotactile Displays to Support Attention and Task Management in Anesthesiology.

    The task set of an anesthesiologist, like that of operators in many complex, data-rich domains, requires effective management of attention, which must be divided among multiple tasks and task-relevant data sources. The inefficient allocation of attentional resources can lead to errors in monitoring a patient’s physiology, which constitute a significant portion of preventable medical errors. To better support attention management and multitasking performance without additionally loading the visual or auditory channels, this dissertation describes work to develop novel “continuously-informing” vibrotactile displays of physiological data. These displays use coded vibration patterns to communicate blood pressure and respiration data in real time. A theory-based approach was taken in the design of these displays to support the properties of “preattentive reference”: the signals can be processed in parallel without interfering with ongoing tasks, include partial information to support efficient task-switching, and can be processed in a mentally economical way. A series of research activities identified: 1) types of information that could best support anesthesiologists in task management decisions; 2) how to display this information via vibrotactile signals in ways that minimize perceptual interference from effects such as vibrotactile adaptation, masking, and tactile “change blindness”; 3) how to encode the information in vibrotactile patterns to minimize interference with concurrent tasks at cognitive processing stages; and 4) mappings between signal modulations and the represented data that best support economical processing. 
An evaluation study, set in a high-fidelity clinical simulation, showed substantial improvements in anesthesiologists’ multitasking performance with the continuously-informing tactile displays compared to traditional (visual/auditory) display configurations, including faster detection and correction of serious health events and fewer unnecessary interruptions of ongoing tasks. This work contributes to theories and models of tactile and multimodal information processing, specifically concerning the performance effects of perceptual and cognitive interference when information is processed via two or more sensory channels concurrently. It also demonstrates how a vibrotactile display designed to support the properties of preattentive reference can improve attention management and multitask performance, thus showing promise for reducing the prevalence of monitoring errors and system-awareness issues in anesthesiology and other complex, data-rich domains.
Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78911/1/ferrist_1.pd