
    Can Gaze Beat Touch? A Fitts' Law Evaluation of Gaze, Touch, and Mouse Inputs

    Gaze input has been a promising substitute for mouse input in point-and-select interactions. Individuals with severe motor and speech disabilities rely primarily on gaze input for communication, and gaze also serves as a hands-free modality in scenarios of situationally induced impairments and disabilities (SIIDs). Hence, the performance of gaze input has often been compared to mouse input through standardized evaluation procedures such as the Fitts' Law task. With the proliferation of touch-enabled devices such as smartphones, tablet PCs, and other computing devices with a touch surface, it is also important to compare gaze input to touch input. In this study, we conducted an ISO 9241-9 Fitts' Law evaluation to compare multimodal gaze- and foot-based input to touch input in a standard desktop environment, using mouse input as the baseline. In a study involving 12 participants, gaze input had the lowest throughput (2.55 bits/s) and the highest movement time (1.04 s) of the three inputs. Although touch input involves the most physical movement, it achieved the highest throughput (6.67 bits/s), the lowest movement time (0.5 s), and was the most preferred input. While gaze and touch move the pointer from source to target at comparable speeds, target selection consumes the most time with gaze input. Hence, with a throughput over 160% higher than gaze, touch proves to be the superior input modality.
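    The throughput figures above (bits/s) come from the ISO 9241-9 Fitts' Law measure: an index of difficulty divided by movement time. A minimal sketch of that computation, with hypothetical distance, width, and movement-time values; this is a simplification, not the paper's analysis code, and it uses the nominal target width rather than the standard's effective width derived from endpoint scatter.

        import math

        def index_of_difficulty(distance, width):
            """Shannon formulation of Fitts' index of difficulty, in bits."""
            return math.log2(distance / width + 1)

        def throughput(distance, width, movement_time_s):
            """ISO 9241-9 style throughput in bits/s (simplified: nominal
            target width, not effective width from endpoint scatter)."""
            return index_of_difficulty(distance, width) / movement_time_s

        # Hypothetical trial: 512 px amplitude, 64 px target, 1.04 s movement time
        print(round(throughput(512, 64, 1.04), 2))  # 3.05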

    Effectiveness of Eye-Gaze Input System: Identification of Conditions that Assure High Pointing Accuracy and Movement Directional Effect

    The conditions under which high accuracy is assured when using an eye-gaze input system were identified, and the effect of the direction of eye movement on performance was also investigated. Age, the arrangement of targets (vertical or horizontal), the size of a target, and the distance between adjacent rectangles were selected as experimental factors. The difference in pointing velocity between a mouse and an eye-gaze input system was larger for older adults than for young adults; thus, an eye-gaze input system was found to be especially effective for older adults and might compensate for their declined motor functions. The pointing accuracy of an eye-gaze input system was higher in the horizontal arrangement than in the vertical arrangement. A distance between targets of more than 20 pixels was found to be desirable for both the vertical and horizontal arrangements. For both arrangements, a target size of more than 40 pixels led to higher accuracy and faster pointing for both young and older adults. For both age groups, the pointing time for the downward direction tended to be longer than that for the other directions.
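    Read as design guidelines, the reported thresholds are straightforward to encode. A minimal sketch that checks a target layout against them; the constants and function name are illustrative, not from the paper.

        MIN_TARGET_SIZE_PX = 40  # reported threshold for higher accuracy and speed
        MIN_TARGET_GAP_PX = 20   # reported minimum spacing between adjacent targets

        def meets_gaze_layout_guidelines(target_size_px: int, gap_px: int) -> bool:
            """True if a layout exceeds the size/spacing thresholds above."""
            return target_size_px > MIN_TARGET_SIZE_PX and gap_px > MIN_TARGET_GAP_PX

        print(meets_gaze_layout_guidelines(48, 24))  # True
        print(meets_gaze_layout_guidelines(32, 24))  # False: targets too small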

    Cross-device gaze-supported point-to-point content transfer

    Within a pervasive computing environment, we see content on shared displays that we wish to acquire and use in a specific way, i.e., with an application on a personal device, transferring it from point to point. The eyes as input can indicate the intention to interact with a service, providing implicit pointing as a result. In this paper we investigate the use of gaze and manual input for positioning gaze-acquired content on personal devices. We evaluate two main techniques: (1) Gaze Positioning, where content is transferred using gaze and manual input only confirms actions, and (2) Manual Positioning, where content is selected with gaze but final positioning is performed by manual input, involving a switch of modalities from gaze to manual input. A first user study compared these techniques in direct and indirect manual input configurations: a tablet with touch input and a laptop with mouse input. A second user study evaluated the techniques in an application scenario involving distractor targets. Our overall results showed general acceptance and understanding of all conditions, although there were clear individual preferences depending on familiarity with and preference toward gaze, touch, or mouse input.
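    The difference between the two techniques comes down to which modality supplies the final drop position. A minimal sketch of that dispatch, assuming a hypothetical place_content API; the paper's actual implementation is not shown here.

        from enum import Enum, auto

        class Technique(Enum):
            GAZE_POSITIONING = auto()    # gaze points, manual input only confirms
            MANUAL_POSITIONING = auto()  # gaze selects, manual input positions

        def place_content(technique, gaze_point, manual_point, confirmed):
            """Return the drop position for transferred content, or None if
            the manual confirmation has not happened yet."""
            if not confirmed:
                return None
            if technique is Technique.GAZE_POSITIONING:
                return gaze_point    # position from gaze; touch/mouse confirms
            return manual_point      # gaze acquired the content; manual input places it

        print(place_content(Technique.GAZE_POSITIONING, (120, 80), (300, 200), True))    # (120, 80)
        print(place_content(Technique.MANUAL_POSITIONING, (120, 80), (300, 200), True))  # (300, 200)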