49 research outputs found

    A Human Motor Behavior Model for Direct Pointing at a Distance

    Models of human motor behavior are well known as an aid in the design of user interfaces (UIs). Most current models apply primarily to desktop interaction, but with the development of non-desktop UIs, new types of motor behaviors need to be modeled. Direct pointing at a distance is one such motor behavior. A model of direct pointing at a distance would be particularly useful for comparing interaction techniques, because the performance of such techniques is highly dependent on user strategy, making controlled studies difficult to perform. Inspired by Fitts’ law, we studied four candidate models and concluded that movement time for a direct pointing task is best described as a function of the angular amplitude of the movement and the angular size of the target. Contrary to Fitts’ law, our model shows that the angular size has a much larger effect on movement time than the angular amplitude, and that task difficulty grows quadratically rather than linearly. We estimated the model’s parameters experimentally, obtaining a correlation coefficient of 96%.
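
    The abstract above contrasts the proposed angular model with Fitts’ law but does not reproduce the fitted equation, so the following Python sketch only illustrates the reported structure: movement time driven mainly by angular target size, with quadratically growing difficulty. The model form and every coefficient below are assumptions, not the authors’ fitted values.

        import math

        def fitts_mt(amplitude, width, a=0.1, b=0.1):
            """Classic Fitts' law for reference: MT = a + b * log2(A/W + 1)."""
            return a + b * math.log2(amplitude / width + 1)

        def angular_mt(ang_amplitude_deg, ang_size_deg, a=0.2, b=0.005, c=0.004):
            """Hypothetical angular model: the target's angular size dominates,
            and difficulty grows quadratically rather than linearly."""
            difficulty = (ang_amplitude_deg / ang_size_deg) ** 2  # assumed quadratic term
            return a + b * ang_amplitude_deg + c * difficulty

        # Example: a 30-degree movement toward a target 2 degrees wide.
        print(f"Fitts MT:   {fitts_mt(30.0, 2.0):.2f} s")
        print(f"Angular MT: {angular_mt(30.0, 2.0):.2f} s")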

    Human Performance Modeling For Two-Dimensional Dwell-Based Eye Pointing

    Recently, Zhang et al. (2010) proposed an effective performance model for dwell-based eye pointing. However, their model was based on a specific circular target condition and cannot predict the performance of acquiring conventional rectangular targets, which limits its applicability. In this paper, we extend their one-dimensional model to two-dimensional (2D) target conditions. Through two experiments, we evaluated several candidate models to identify the most appropriate one. The new index of difficulty we redefine for 2D eye pointing (IDeye) properly reflects the asymmetrical impact of target width and height, with the latter exceeding the former, and consequently the IDeye model can accurately predict performance for 2D targets. Importantly, we also find that this asymmetry holds across movement directions. Based on the results of our study, we provide useful implications and recommendations for gaze-based interaction.
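
    A minimal Python sketch of the idea described above, assuming a hypothetical asymmetric index of difficulty in which target height is weighted more heavily than width; the actual IDeye formulation and all coefficients here are placeholders, not the paper’s fitted model.

        import math

        def id_eye_2d(amplitude, width, height, w_weight=0.4, h_weight=0.6):
            """Hypothetical 2D index of difficulty: height contributes more than
            width to the effective target size, reflecting the reported asymmetry."""
            effective_size = w_weight * width + h_weight * height
            return math.log2(amplitude / effective_size + 1)

        def dwell_selection_time(amplitude, width, height, a=0.3, b=0.2, dwell=0.4):
            """Predicted selection time: an aiming component driven by the index of
            difficulty plus the fixed dwell threshold that confirms the selection."""
            return a + b * id_eye_2d(amplitude, width, height) + dwell

        # Example: under this weighting, a wide-but-short target is predicted to be
        # harder to acquire than a narrow-but-tall target of the same area.
        print(dwell_selection_time(400, 120, 40))
        print(dwell_selection_time(400, 40, 120))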

    Implementing a Quantitative Analysis Design Tool for Future Generation Interfaces

    The implementation of Multi-Aircraft Control (MAC) for use with Remotely Piloted Aircraft (RPA) has resulted in the need for a platform to evaluate interface designs. The Vigilant Spirit Control Station (VSCS), developed by the Air Force Research Laboratory, addresses this need by permitting the rapid prototyping of different interface concepts for future MAC-enabled systems. A human-computer interaction (HCI) Index, originally applied to multi-function displays, was applied to the prototype Vigilant Spirit interface. A modified version of the HCI Index was successfully applied to perform a quantitative analysis of the baseline VSCS interface and two modified interface designs. The modified HCI Index incorporates Hick-Hyman decision time, Fitts' Law time, and the physical actions calculated by the Keystroke-Level Model. The analysis indicates that the average task time for the modified interfaces is statistically less than that of the original VSCS interface. These results demonstrate the effectiveness of the tool and its applicability to the design of future-generation interfaces or the modification of existing ones.
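
    As a rough illustration of how such a composite index can be computed, the sketch below sums a Hick-Hyman decision time, a Fitts’ law pointing time, and Keystroke-Level Model operator times for one interface step. The coefficients and operator times are textbook-style placeholder values, not those used in the analysis above.

        import math

        HICK_A, HICK_B = 0.2, 0.15      # decision-time coefficients (s), illustrative
        FITTS_A, FITTS_B = 0.1, 0.1     # pointing coefficients (s), illustrative
        KLM_TIMES = {"K": 0.28, "B": 0.1, "H": 0.4}  # keystroke, button press, homing (s)

        def hick_hyman(n_choices):
            """Hick-Hyman decision time: T = a + b * log2(n + 1)."""
            return HICK_A + HICK_B * math.log2(n_choices + 1)

        def fitts(amplitude, width):
            """Fitts' law pointing time: T = a + b * log2(A/W + 1)."""
            return FITTS_A + FITTS_B * math.log2(amplitude / width + 1)

        def task_time(n_choices, amplitude, width, klm_ops):
            """Estimated time for one interface step: decide, point, then act."""
            return hick_hyman(n_choices) + fitts(amplitude, width) + sum(KLM_TIMES[op] for op in klm_ops)

        # Example: choose among 8 menu items, move 300 px to a 40 px target, click once.
        print(f"{task_time(8, 300, 40, ['B']):.2f} s")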

    Steering in layers above the display surface

    Interaction techniques that use the layers above the display surface to extend the functionality of pen-based digitized surfaces continue to emerge. In such techniques, stylus movements are constrained by the bounds of a layer inside which the interaction is active, as well as by constraints on the direction of movement within the layer. The problem addressed in this thesis is that designers currently have no model to predict movement time (MT), or to quantify the difficulty, of movement (steering) in layers above the display surface constrained by the thickness of the layer, its height above the display, and the width and length of the path. The problem has two main parts: first, how to model steering in layers, and second, how to visualize the layers to provide feedback for the steering task. The solution described is a model that predicts movement time and quantifies the difficulty of steering through constrained and unconstrained paths in layers above the display surface. Through a series of experiments, we validated the derivation and applicability of the proposed models. A predictive model is necessary because it serves as the basis for designing interaction techniques in this design space, and because predictive models allow researchers to evaluate potential solutions quantitatively, independent of experimental conditions. Addressing the second part of the problem, we describe four visualization designs using cursors and evaluate their effectiveness in a controlled experiment.
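
    For context, the classic Accot-Zhai steering law predicts movement time for steering along a straight tunnel as MT = a + b * (A / W). The layer-aware variant sketched below is purely illustrative, with an assumed extra term for layer thickness and height above the display rather than the model actually derived in the thesis.

        def steering_mt(path_length, path_width, a=0.2, b=0.08):
            """Classic steering law for a straight tunnel: MT = a + b * (A / W)."""
            return a + b * (path_length / path_width)

        def layer_steering_mt(path_length, path_width, layer_thickness,
                              layer_height, a=0.2, b=0.08, c=0.05):
            """Hypothetical layer-constrained variant: difficulty also grows as the
            layer gets thinner and sits higher above the display surface."""
            planar_id = path_length / path_width
            layer_id = layer_height / layer_thickness  # assumed out-of-plane difficulty
            return a + b * planar_id + c * layer_id

        # Example: a 200 mm path, 20 mm wide, steered in a 10 mm-thick layer 30 mm up.
        print(f"{layer_steering_mt(200, 20, 10, 30):.2f} s")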

    Prediction of user action in moving-target selection tasks

    Selection of moving targets is a common task in human–computer interaction (HCI), and more specifically in virtual reality (VR). In spite of the increased number of applications involving moving-target selection, HCI and VR studies have largely focused on static-target selection. Compared to its static-target counterpart, however, moving-target selection poses special challenges, including the need to continuously and simultaneously track the target and plan the reach for it, which may be difficult depending on the user’s reactiveness and the target’s movement. Action prediction has proven to be the most comprehensive enhancement for addressing moving-target selection challenges. Current predictive techniques, however, rely heavily on continuous tracking of user actions, without considering the possibility that target-reaching actions may have a dominant pre-programmed component, a theory known as pre-programmed control. Based on this theory, this research explores the possibility of predicting moving-target selection prior to action execution. Specifically, three levels of action prediction are investigated: action performance, prospective action difficulty, and intention. The proposed performance models predict the movement time (MT) required to reach a moving target in 2-D and 3-D space, and are useful for comparing users and interfaces objectively. The prospective difficulty (PD) models predict the subjective effort required to reach a moving target without actually executing the action, and can therefore be measured when performance cannot. Finally, the intention models predict the target that the user plans to select, and can therefore be used to facilitate selection of the intended target. Intention prediction models are developed using decision trees and scoring functions, and evaluated in two VR studies: the first investigates undirected selection (i.e., tasks in which users are free to select an object among multiple others), and the second directed selection (i.e., the more common experimental task in which users are instructed to select a specific object). PD models for 1-D and 2-D moving-target selection tasks are developed based on Fitts’ law and evaluated in an online experiment. Finally, MT models with the same structural form as the aforementioned PD models are evaluated in a 3-D moving-target selection experiment deployed in VR. Aside from intention prediction in directed selection, all of the explored models yield relatively high accuracies: up to ~78% when predicting intended targets in undirected tasks, R^2 = .97 when predicting PD, and R^2 = .93 when predicting MT.
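
    The fitted MT and PD models are not reproduced in the abstract, so the Python sketch below only illustrates the general idea of a Fitts-style model augmented with a target-motion term; the structural form, the speed penalty, and the coefficients are assumptions for illustration.

        import math

        def moving_target_mt(amplitude, width, target_speed, a=0.2, b=0.15, c=0.05):
            """Hypothetical movement-time model for moving-target selection:
            MT grows with the classic index of difficulty and, additionally,
            with how fast the target moves relative to its size."""
            id_static = math.log2(amplitude / width + 1)
            id_motion = target_speed / width  # assumed speed-penalty term
            return a + b * id_static + c * id_motion

        # Example: a target 40 px wide, 300 px away, moving at 120 px/s.
        print(f"{moving_target_mt(300, 40, 120):.2f} s")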

    Personalised tiling paradigm for motor impaired users


    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life, people use their mobile phones on the go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters such as error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, indicating the speed-specific nature of the models. Models identified using a specific input technique also did not perform well when tested in other conditions, demonstrating that offset models are only valid for a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data: the error rate was reduced by between 0.05% and 5.3% for landscape-based methods and by between 5.3% and 11.9% for portrait-based methods.
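
    As a simplified illustration of the kind of offset model discussed above, the sketch below fits a least-squares mapping from recorded tap positions to intended target positions and then applies it to new taps. The paper’s machine-learned model and its feature set are not specified here, so this linear formulation and the synthetic calibration data are assumptions.

        import numpy as np

        def fit_offset_model(taps, targets):
            """Least-squares fit of corrected = [taps, 1] @ coef.
            taps, targets: (n, 2) arrays of recorded and intended positions (px)."""
            X = np.hstack([taps, np.ones((len(taps), 1))])      # add bias column
            coef, *_ = np.linalg.lstsq(X, targets, rcond=None)  # coef has shape (3, 2)
            return coef

        def apply_offset_model(coef, taps):
            """Correct new tap positions with the fitted model."""
            X = np.hstack([taps, np.ones((len(taps), 1))])
            return X @ coef

        # Example: calibrate on data with a systematic tap offset, then correct taps.
        rng = np.random.default_rng(0)
        targets = rng.uniform(0, 100, size=(200, 2))
        taps = targets + rng.normal([1.5, -2.0], 1.0, size=(200, 2))
        model = fit_offset_model(taps, targets)
        print(apply_offset_model(model, taps[:3]))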

    Viewport- and World-based Personal Device Point-Select Interactions in Augmented Reality

    Personal smart devices have demonstrated a variety of efficient techniques for pointing and selecting on physical displays. However, when migrating these input techniques to augmented reality, it is unclear both what the relative performance of different techniques will be, given the immersive nature of the environment, and how viewport-based versus world-based pointing methods will impact performance. To better understand the impact of device and viewing perspective on pointing in augmented reality, in this thesis we present the results of two controlled experiments that compare pointing conditions leveraging various smartphone- and smartwatch-based external display pointing techniques, and that examine viewport-based versus world-based target acquisition paradigms. Our results demonstrate that viewport-based techniques offer faster selection, and that both smartwatch- and smartphone-based pointing techniques represent high-performance options for distant target acquisition tasks in augmented reality.