Factors influencing visual attention switch in multi-display user interfaces: a survey
Multi-display User Interfaces (MDUIs) enable people to take advantage of the differing characteristics of multiple display categories. For example, combining mobile and large displays within the same system lets users interact with interface elements locally while retaining a large display space to show data. Although MDUIs offer a large potential gain in performance and comfort, at least one main drawback can override these benefits: the visual and physical separation between displays requires users to perform visual attention switches between them. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switches in MDUIs. Our analysis and taxonomy draw attention to the often-ignored implications of visual attention switches and collect existing evidence to facilitate the research and implementation of effective MDUIs.
Pilots’ visual scan pattern and situation awareness in flight operations
Introduction: Situation awareness (SA) is considered an essential prerequisite for safe flying. If the impact of visual scanning patterns on a pilot’s situation awareness could be identified in flight operations, then eye-tracking tools could be integrated with flight simulators to improve training efficiency. Method: Participating in this research were 18 qualified, mission-ready fighter pilots. The equipment included high-fidelity, fixed-base flight simulators and mobile head-mounted eye-tracking devices to record a subject’s eye movements and SA while performing air-to-surface tasks. Results: There were significant differences in pilots’ percentage of fixation across three operating phases: preparation (M = 46.09, SD = 14.79), aiming (M = 24.24, SD = 11.03), and release and break-away (M = 33.98, SD = 14.46). There were also significant differences in pilots’ pupil sizes, which were largest in the aiming phase (M = 27,621, SD = 6390.8), followed by release and break-away (M = 27,173, SD = 5830.46), then preparation (M = 25,710, SD = 6078.79), which was the smallest. Furthermore, pilots with better SA performance showed lower perceived workload (M = 30.60, SD = 17.86), and pilots with poorer SA performance showed higher perceived workload (M = 60.77, SD = 12.72). Pilots’ percentage of fixation and average fixation duration across five different areas of interest also showed significant differences. Discussion: Eye-tracking devices can capture pilots’ visual scan patterns and SA performance in a way that traditional flight simulators cannot. Integrating eye-tracking devices into the simulator may therefore be a useful method for promoting SA training in flight operations, and can provide an in-depth understanding of the mechanisms of visual scan patterns and information processing to improve training effectiveness in aviation.
Pointing Without a Pointer
We present a method for performing selection tasks based on continuous control of multiple, competing agents that try to determine the user's intentions from the user's control behaviour, without requiring an explicit pointer. The entropy in the selection process decreases in a continuous fashion; we provide experimental evidence of selection from 500 initial targets. The approach allows adaptation over time to make best use of the multimodal communication channel between the human and the system. This general approach is well suited to mobile and wearable applications, shared displays, and security-conscious settings.
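The abstract's core idea, selection as continuous entropy reduction over many candidate targets, can be illustrated with a small probabilistic sketch. This is not the authors' method; the Bayes-update likelihood model below is invented purely to show how evidence from control behaviour shrinks the entropy of a 500-target distribution until one target dominates.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bayes_update(probs, likelihoods):
    """Reweight each target's probability by how well it explains the input."""
    posterior = [p * l for p, l in zip(probs, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# 500 equally likely targets, as in the reported experiment.
n = 500
probs = [1 / n] * n
print(round(entropy(probs), 2))  # uniform start: log2(500) ~ 8.97 bits

# Simulated control evidence repeatedly favouring target 0
# (the likelihood values here are arbitrary illustration).
for _ in range(8):
    likelihoods = [0.9 if i == 0 else 0.1 for i in range(n)]
    probs = bayes_update(probs, likelihoods)

print(probs.index(max(probs)))  # target 0 now dominates the distribution
```

Each update cuts the entropy continuously rather than in one discrete "click", which is the property the abstract highlights.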
An Empirical Evaluation On Vibrotactile Feedback For Wristband System
With the rapid development of mobile computing, wearable wrist-worn devices are becoming increasingly popular. However, the vibrotactile feedback patterns of most current wrist-worn devices are too simple to enable effective interaction in nonvisual scenarios. In this paper, we propose a wristband system with four vibrating motors placed at different positions in the band, providing multiple vibration patterns that transmit multi-semantic information to users in eyes-free scenarios. After a comparative analysis of nine candidate patterns in a pilot experiment, we applied five vibrotactile patterns in the main experiments: positional up, positional down, horizontal diagonal, clockwise circular, and total vibration. Two experiments with the same 12 participants followed the same procedure in the lab and outdoors. According to the experimental results, users can effectively distinguish the five patterns both in the lab and outside with approximately 90% accuracy (except the clockwise circular vibration in the outdoor experiment), demonstrating that these five vibration patterns can be used to convey multi-semantic information. The system can be applied to eyes-free interaction scenarios for wrist-worn devices.
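A system like the one described would need to map each named pattern to a timed activation schedule over the four motors. The sketch below is hypothetical: the motor placement, indices, and timings are invented for illustration, since the abstract does not specify the hardware mapping.

```python
# Assumed motor layout around the wristband (indices are illustrative only).
MOTORS = ("top", "right", "bottom", "left")

# Each pattern is a sequence of steps: (motor indices to fire, duration in ms).
# The five pattern names follow the abstract; the schedules are guesses.
PATTERNS = {
    "up":        [((0,), 300)],                # single pulse on the top motor
    "down":      [((2,), 300)],                # single pulse on the bottom motor
    "diagonal":  [((1, 3), 300)],              # horizontal pair fired together
    "clockwise": [((0,), 150), ((1,), 150),
                  ((2,), 150), ((3,), 150)],   # sequential circular sweep
    "all":       [((0, 1, 2, 3), 300)],        # total vibration
}

def render(pattern_name):
    """Expand a pattern into a driver-agnostic, human-readable schedule."""
    steps = PATTERNS[pattern_name]
    return [([MOTORS[i] for i in motors], ms) for motors, ms in steps]

print(render("clockwise"))
```

Separating the symbolic schedule from the motor driver keeps the pattern vocabulary easy to extend, which matters if five patterns were distilled from nine candidates as reported.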