
    Using a sequence of earcons to monitor multiple simulated patients

    Get PDF
    Objective: The aim of this study was to determine whether a sequence of earcons can effectively convey the status of multiple processes, such as the status of multiple patients in a clinical setting. Background: Clinicians often monitor multiple patients, and an auditory display that intermittently conveys the status of multiple patients may help. Method: Nonclinician participants listened to sequences of 500-ms earcons, each representing the heart rate (HR) and oxygen saturation (SpO2) levels of a different simulated patient. In each sequence, one, two, or three patients had an abnormal level of HR and/or SpO2. In Experiment 1, participants reported which of nine patients in a sequence were abnormal. In Experiment 2, participants identified the vital signs of one, two, or three abnormal patients in sequences of one, five, or nine patients, where the interstimulus interval (ISI) between earcons was 150 ms. Experiment 3 used the five-patient condition of Experiment 2, but the ISI was either 150 ms or 800 ms. Results: Participants reported which patient(s) were abnormal with a median accuracy of 95%. Identification accuracy for vital signs decreased as the number of abnormal patients increased from one to three, p < .001, but accuracy was unaffected by the number of patients in a sequence. Overall, identification accuracy was significantly higher with an ISI of 800 ms (89%) than with an ISI of 150 ms (83%), p < .001. Conclusion: A multiple-patient display can be created by cycling through earcons that represent individual patients. Application: The principles underlying the multiple-patient display can be extended to other vital signs, designs, and domains.
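    As a minimal illustration of the cycling scheme described in this abstract (the function and variable names are illustrative, not from the paper), the playback timeline for one cycle of the display can be sketched in Python:

```python
# Sketch (assumption, not the authors' implementation): compute the start/end
# times of each patient's earcon in one cycle of a multiple-patient display.
# Per the study, each earcon lasts 500 ms and earcons are separated by an
# inter-stimulus interval (ISI) of either 150 ms or 800 ms.

EARCON_MS = 500  # duration of each earcon, per the study


def sequence_timeline(num_patients: int, isi_ms: int) -> list[tuple[int, int]]:
    """Return (start_ms, end_ms) for each patient's earcon in one cycle."""
    timeline = []
    t = 0
    for _ in range(num_patients):
        timeline.append((t, t + EARCON_MS))
        t += EARCON_MS + isi_ms  # next earcon starts after the ISI
    return timeline


# One nine-patient cycle with the 150-ms ISI used in Experiments 1 and 2:
cycle = sequence_timeline(9, 150)
total_ms = cycle[-1][1]  # the cycle ends when the last earcon finishes
```

    Note that lengthening the ISI from 150 ms to 800 ms nearly doubles the cycle duration for nine patients, which is the trade-off behind the accuracy improvement reported in Experiment 3.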

    Evaluation of preview cues to enhance recall of auditory sequential information

    Full text link
    Background: In previous work, an auditory vital-sign display for five patients was developed. Sounds denoting the vital signs of each patient were delivered in order, with a special sound for any patient whose vital signs were all normal. Although the display was effective, accuracy decreased as the number of abnormal patients increased. We wondered whether accuracy would improve with a preview sound that indicates the number of patients with abnormal vital signs in the upcoming sequence, thereby reducing working memory load. We also wondered whether the preview sound would affect performance on a concurrent task. Methods: A 3 (preview cue type) x 4 (number of abnormal patients) mixed-factorial design was adopted. Preview cue type (between-subjects) was either time-compressed speech or an abstract sound containing white-noise pulses to indicate the upcoming number of abnormal patients, or no preview cue. The number of abnormal patients (within-subjects) was zero, one, two, or three. Results: The preview cue did not improve non-clinician participants' ability to identify the location in the sequence, or the vital signs, of patients with abnormal vital signs. Response accuracy dropped as the number of patients with abnormal vital signs increased. The preview cue types did not affect accuracy on the concurrent task; however, participants tended to ignore the concurrent task when the preview cue was the abstract sound with white-noise pulses. Conclusion: The current preview cue neither improved nor hurt performance in identifying abnormal patients' locations and vital signs, but it could degrade concurrent task performance. Therefore, the current design of the preview cue can be eliminated from future auditory display designs.

    Instructional eLearning technologies for the vision impaired

    Get PDF
    Vision is the principal sensory modality employed in learning, which makes it difficult for vision-impaired students to access not only existing educational media but also the new, mostly visiocentric learning materials offered through online delivery mechanisms. Using the Certified Cisco Network Associate (CCNA) and IT Essentials courses as a reference, a study was made of tools that can access such online systems and transcribe the materials into a form suitable for vision-impaired learning. The modalities employed included haptic, tactile, audio, and descriptive text. How such a multi-modal approach can achieve equivalent success for the vision impaired is demonstrated. However, the study also shows the limits of the current understanding of human perception, especially with respect to comprehending two- and three-dimensional objects and spaces when there is no recourse to vision.

    Understanding and Enhancing Customer-Agent-Computer Interaction in Customer Service Settings

    Get PDF
    Providing good customer service is crucial to many commercial organizations. There are different means through which the service can be provided, such as e-commerce, call centres, or face-to-face. Although some service is provided through electronic or telephone-based interaction, it is common for the service to be provided through human agents. In addition, many customer service interactions also involve a computer, for example, an information system where a travel agent finds suitable flights. This thesis seeks to understand the three channels of customer service interactions between the agent, customer, and computer: Customer-Agent-Computer Interaction (CACI). A set of ethnographic studies was conducted at call centres to gain an initial understanding of CACI and to investigate the customer-computer channel. The findings revealed that CACI is more complicated than traditional CHI because there is a second person, the customer, involved in the interaction. For example, the agent provides a lot of feedback about the computer to the customer, such as “I am waiting for the computer.” Laboratory experiments were conducted to investigate the customer-computer channel by adding non-verbal auditory feedback about the computer directly for the customers. The findings showed only a small, non-significant difference in task completion time and subjective satisfaction, with indications of an improvement in the flow of communication. Experiments were then conducted to investigate how the two humans interact over two different communication modes: face-to-face and telephone. Findings showed a significantly shorter task completion time via telephone. There was also a difference in style of communication: face-to-face interaction involved more single activities, such as talking only, while the telephone condition involved more dual activities, such as talking while also searching. There was only a small difference in subjective satisfaction.
To investigate whether the findings from the laboratory experiments also held in a real situation, and to identify potential areas for improvement, a series of studies was conducted: observations and interviews at multiple travel agencies, one focus group, and a proof-of-concept study at one travel agency. The findings confirmed the results from the laboratory experiments. A number of potential interface improvements were also identified, such as a history mechanism and sharing part of the computer screen with the customer at the agent's discretion. The results of the work in this thesis suggest that telephone interaction, although containing fewer cues, is not necessarily an impoverished mode of communication: it is less time consuming and more task-focused. Further, adding non-verbal auditory feedback did not enhance the interaction. The findings also suggest that customer service CACI is inherently different in nature and brings additional complications to traditional CHI issues.

    From signal to substance and back: Insights from environmental sound research to auditory display design

    Get PDF
    Presented at the 15th International Conference on Auditory Display (ICAD2009), Copenhagen, Denmark, May 18-22, 2009.
    A persistent concern in the field of auditory display design has been how to effectively use environmental sounds, which are naturally occurring, familiar non-speech, non-musical sounds. Environmental sounds represent physical events in the everyday world, and thus they have a semantic content that enables learning and recognition. Unless used appropriately, however, they may cause problems in auditory displays. One of the main considerations in using environmental sounds as auditory icons is how to ensure the identifiability of the sound sources. The identifiability of an auditory icon depends both on the intrinsic acoustic properties of the sound it represents and on the semantic fit of the sound to its context, i.e., whether the context is one in which the sound naturally occurs or would be unlikely to occur. Relatively recent research has yielded some insights into both of these factors. A second major consideration is how to use the source properties to represent events in the auditory display. This entails parameterizing the environmental sounds so that the acoustics will both relate to source properties familiar to the user and convey meaningful new information. Finally, particular considerations come into play when designing auditory displays for special populations, such as hearing-impaired listeners, who may not have access to all the acoustic information available to a normal-hearing listener, or elderly or other individuals whose cognitive resources may be diminished. Some guidelines for designing displays for these populations are outlined.