2,839 research outputs found

    Peripheral Notifications: Effects of Feature Combination and Task Interference

    Visual notifications are integral to interactive computing systems. The design of visual notifications entails two main considerations: first, visual notifications should be noticeable, as they usually aim to attract a user's attention to a location away from their main task; second, their noticeability has to be moderated to prevent user distraction and annoyance. Although notifications have been around for a long time on standard desktop environments, new computing environments such as large screens add new factors that have to be taken into account when designing notifications. With large displays, much of the content is in the user's visual periphery, where the human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user is working on an attention-demanding task. We addressed these questions by conducting two studies. First, a laboratory study tested people's ability to detect popout targets that used combinations of three visual variables. After determining that the noticeability of feature combinations was approximately equal to that of the better of the individual features, we designed an experiment to investigate peripheral noticeability and distraction when a user is focusing on a primary task. Our results suggest that there can be interference between the demands of primary tasks and the visual features in the notifications. Furthermore, primary task performance is adversely affected by motion effects in peripheral notifications. Our studies contribute to a better understanding of how visual features operate when used as peripheral notifications, and provide new insights both in terms of combining features and in terms of interactions with primary tasks.

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of techniques, from the graphical to the audio domain, for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.

    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when engaged in non-driving-related tasks. This research project expands on existing work regarding uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties under consideration of non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety. In a first step, the impact of visually conveying uncertainties was investigated with respect to workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, in order to reduce monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework. Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual uncertainty display in a driving simulator study. Eye tracking and subjective workload data indicate that the peripheral awareness display reduces the monitoring effort relative to the visual display, while driving performance and trust data highlight that the benefits of uncertainty communication are maintained. Further, this research project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload, in particular, should be considered when increasing the functional detail of uncertainty information. Expanding upon this approach, an augmented reality display concept was developed and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed applying a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.

    Basic and Applied Studies of Human Visual Function: Implications for Visually Demanding Occupations

    Color vision is a complex process providing important information about objects within our environment. Color vision deficiency, whether congenital or acquired, can impact real-world performance. Current working environments either require normal color vision or utilize color as a tool to highlight critical information, and the use of color in the workplace provides several advantages. Hence, color vision screening is required for entry into certain professions and occupational certifications. Acquired color vision deficiency may also impact job performance, likewise requiring clinical screening. The present dissertation focused on the considerations outlined by the Commission on Behavioral and Social Sciences when choosing a clinical test for occupational purposes. To address these considerations, I conducted a series of four studies. The first study compared and contrasted three different computerized color vision tests for contrast sensitivity and analyzed how the minimum cutoff score differed between the tests. The results indicated that while log CS values were similar, there were enough differences between the values that caution should be applied when using the tests interchangeably for occupational screening. The second study assessed the Color Vision Field Test and found that it has excellent sensitivity and specificity for occupational screening when appropriate protocols are followed. The third study determined whether the Cone Contrast Test could predict performance on the FM-100 Hue, thereby providing a potential alternative to the FM-100. Results indicated that the CCT may be an effective substitute for the FM-100 for certifying jewelry appraisers, but the small sample size warrants additional comparative validation to support sole utilization of the CCT. This study also revealed exceptional hue discrimination in jewelry appraisers, a possible effect of perceptual learning. The last study expanded previous research on cell phone distraction to auditory distraction with a navigational system. Delayed response times were found, posing a threat to safety.

    Auditory cues for attention management

    An exhaustible supply of mental resources necessitates that we are selective about what we attend to. Attention prioritizes what ought to be processed and what ignored, allocating valuable resources to selected information at the cost of unattended information elsewhere. For this purpose, it is necessary to know the conditions that help the brain decide when attention should be paid, where, and to what information. The question central to this dissertation is how auditory cues can support the management of limited attentional resources based on auditory characteristics. Auditory cues can (1) increase overall alertness, (2) orient attention to unattended information, or (3) manage attentional resources by informing of an upcoming task switch and, therefore, indicate when to pay attention to which task. The first study of this dissertation investigated whether different population groups might process auditory cues differently, thus resulting in different levels of alertness (1). The second study examined more specifically whether the type of auditory cue (verbal command or auditory icon) used for in-vehicle notifications can influence the level of alertness (1). Studies three and four investigated the use of a special auditory cue characteristic, the looming intensity profile, for directing attention to regions of interest (2). Here, attention orienting to peripheral events was tested within a dual-task paradigm that required attention shifts between the two tasks (3). Throughout the studies, I show that electroencephalography (EEG) is an indispensable tool for evaluating auditory cues and their influence on crossmodal attention. Using EEG measurements, I was able to demonstrate that auditory cues evoked the same level of alertness across different populations and that differences in behavioral responses are not due to subjective differences in cue processing (Chapter 2). More importantly, I was able to show that verbal commands and auditory icons can be functionally discriminated by the brain: while both sounds are alerting, they ought to be used in a complementary manner, depending on the intended goal (Chapter 3). The studies that employed the looming sound to redirect spatial attention to an unattended visual target showed a robust benefit in response times at longer cue-target intervals (Chapters 4 and 5). The looming benefit in processing visual targets is also apparent as enhanced neural activity in the right posterior hemisphere 280 ms after target onset. Source-estimation results suggest that preferential activation of frontal and parietal areas, which are involved in attention orienting, gives rise to this looming benefit (Chapter 5). Finally, auditory cues improved performance for unattended targets but might also benefit the central visuo-motor task by directing attention to the periphery without moving the eyes away from the visuo-motor task. This demonstrates that auditory cues also help manage attention by preparing for task switches, such that covert attention is allocated to the respective task when it has to be performed. Overall, this dissertation demonstrates that the careful selection of auditory cues can go a long way in supporting attention management.

    Chameleon Devices: Investigating More Secure and Discreet Mobile Interactions via Active Camouflaging

    Many users value the ability to have quick and frequent sight of their mobiles when in public settings. However, in doing so, they expose themselves to potential risks, ranging from being targets of robbery to the more subtle social losses incurred through being seen to be rude or inattentive to those around them. In nature, some animals can blend into their environments to avoid being eaten or to reduce their impact on the ecosystem around them. Taking inspiration from these evolved systems, we investigate the notion of chameleon approaches for mobile interaction design. Our probes were motivated, inspired and refined through extended interactions with people drawn from contexts with differing ranges of security and privacy concerns. Through deployments on users’ own devices, our prototypes show the value of the concept. The encouraging results motivate further research into materials and form factors that can provide more effective automatic plain-sight hiding.

    Age-Related Differences in Multimodal Information Processing and Their Implications for Adaptive Display Design.

    In many data-rich, safety-critical environments, such as driving and aviation, multimodal displays (i.e., displays that present information in visual, auditory, and tactile form) are employed to support operators in dividing their attention across numerous tasks and sources of information. However, the limitations of this approach are not well understood. Specifically, most research on the effectiveness of multimodal interfaces has examined the processing of only two concurrent signals in different modalities, primarily in vision and hearing. Also, nearly all studies to date have involved young participants only. The goals of this dissertation were therefore to (1) determine the extent to which people can notice and process three unrelated concurrent signals in vision, hearing and touch, (2) examine how aging modulates this ability, and (3) develop countermeasures to overcome observed performance limitations. Adults aged 65+ years were of particular interest because they represent the fastest growing segment of the U.S. population, are known to suffer from various declines in sensory abilities, and experience difficulties with divided attention. Response times and incorrect response rates to singles, pairs, and triplets of visual, auditory, and tactile stimuli were significantly higher for older adults compared to younger participants. In particular, elderly participants often failed to notice the tactile signal when all three cues were combined. They also frequently falsely reported the presence of a visual cue when presented with a combination of auditory and tactile cues. These performance breakdowns were observed both in the absence and in the presence of a concurrent visual/manual (driving) task. Also, performance on the driving task suffered the most for older adult participants and with combined visual-auditory-tactile stimulation. Introducing a half-second delay between two stimuli significantly increased response accuracy for older adults. This work adds to the knowledge base in multimodal information processing, the perceptual and attentional abilities and limitations of the elderly, and adaptive display design. From an applied perspective, these results can inform the design of multimodal displays and enable aging drivers to cope with increasingly data-rich in-vehicle technologies. The findings are expected to generalize and thus contribute to improved overall public safety in a wide range of complex environments.
    PhD thesis, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133203/1/bjpitts_1.pd

    Subtle, intimate interfaces for mobile human computer interaction

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 113-122).
    The mobile phone is always carried with the user and is always active: it is a very personal device. It fosters and satisfies a need to be constantly connected to one's significant other, friends or business partners. At the same time, mobile devices are often used in public, where one is surrounded by others not involved in the interaction. This private interaction in public is often a cause of unnecessary disruption and distraction, both for the bystanders and even for the user. Nevertheless, mobile devices do fulfill an important function, informing of important events and urgent communications, so turning them off is often neither practical nor possible. This thesis introduces Intimate Interfaces: discreet interfaces that allow subtle private interaction with mobile devices in order to minimize disruption in public and gain social acceptance. Intimate Interfaces are inconspicuous to those around the users, while still allowing them to communicate. The concept is demonstrated through the design, implementation and evaluation of two novel devices:
    * Intimate Communication Armband - a wearable device, embedded in an armband, that detects motionless gestures through electromyographic (EMG) sensing for subtle input and provides tactile output;
    * Notifying Glasses - a wearable notification display embedded in eyeglasses; it delivers subtle cues to the peripheral field of view of the wearer, while being invisible to others. The cues can convey a few bits of information and can be designed to meet specific levels of visibility and disruption.
    Experimental results show that both interfaces can be reliably used for subtle input and output. Therefore, Intimate Interfaces can be profitably used to improve mobile human-computer interaction.
    by Enrico Costanza. S.M.