
    Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection

    Full text link
    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood that each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best-performing model reduces error rate by 50% compared to a 100 ms uniform dwell time while maintaining a similar response time. It reduces response time by 60% compared to a 300 ms uniform dwell time while maintaining a similar error rate.
    Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on 30 March 2018, available online: http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of the final published article, please access: https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
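    As a rough illustration of the idea of assigning shorter dwell times to more likely hyperlinks, the sketch below maps an estimated selection probability to a dwell time. The function name, the linear mapping, and the bounds are assumptions for illustration only, not the mapping used in the paper.

```python
def dwell_time_ms(p_select, t_min=100, t_max=800):
    """Hypothetical mapping from an estimated selection probability to a
    dwell time in milliseconds: more likely hyperlinks get shorter dwell
    times, clamped to [t_min, t_max]. In the paper, a probabilistic model
    of natural gaze behavior would supply p_select; here it is an input."""
    p = min(max(p_select, 0.0), 1.0)                    # clamp the probability
    return round(t_min + (t_max - t_min) * (1.0 - p))   # linear interpolation

# A likely link gets a short dwell, an unlikely one a long dwell.
print(dwell_time_ms(0.9))    # 170
print(dwell_time_ms(0.05))   # 765
```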

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    Get PDF
    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday life are expanding their restricted interaction capabilities and providing functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects

    Research on Image Retrieval Optimization Based on Eye Movement Experiment Data

    Get PDF
    Satisfying a user's actual underlying needs in the image retrieval process is a difficult challenge facing image retrieval technology. The aim of this study is to improve the performance of a retrieval system and provide users with optimized search results using eye-movement feedback. We analyzed the eye movement signals of the user's image retrieval process from cognitive and mathematical perspectives. Data collected from 25 designers in eye-tracking experiments were used to train and evaluate the model. In the statistical analysis, eight eye movement features differed significantly between the selected and unselected groups of images (p < 0.05). An optimal selection of input features resulted in an overall accuracy of 87.16% for the support vector machine prediction model. Judging the user's requirements in the image retrieval process from eye movement behaviors was shown to be effective
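    For context, a minimal sketch of the kind of pipeline described here: a support vector machine trained on eye-movement features to predict whether an image is selected. It assumes scikit-learn and placeholder data; the feature count, sizes, and setup are illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per viewed image with eight eye-movement
# features (e.g. fixation count, fixation duration, saccade amplitude)
# and a label indicating whether the image was selected.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Standardize features, then fit an RBF-kernel SVM classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```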

    The Use of Eye-tracking in Information Systems Research: A Literature Review of the Last Decade

    Get PDF
    Eye-trackers provide continuous information on individuals' gaze behavior. Due to the increasing popularity of eye-tracking in the information systems (IS) field, we reviewed how past research has used eye-tracking in order to inform future research. Accordingly, we conducted a literature review describing the use of eye-tracking in IS research based on a sample of 113 empirical papers published since 2008 in IS journals and conference proceedings. Specifically, we examined the methodologies and experimental settings used in eye-tracking IS research and how eye-tracking can be used to inform the IS field. We found that IS research using eye-tracking varies in its methodological and theoretical complexity. Research on pattern analysis shows promise since it develops a broader range of analysis methodologies. The potential of eye-tracking remains unfulfilled in the IS field since past research has mostly focused on attention-related constructs and used fixation-count metrics on desktop computers. We call for researchers to utilize eye-tracking more broadly in IS research by extending the types of metrics they use, the analyses they perform, and the constructs they investigate

    Eye-tracking assistive technologies for individuals with amyotrophic lateral sclerosis

    Get PDF
    Amyotrophic lateral sclerosis, also known as ALS, is a progressive nervous system disorder that affects nerve cells in the brain and spinal cord, resulting in the loss of muscle control. For individuals with ALS whose mobility is limited to the movement of the eyes, eye-tracking-based applications can be used to accomplish basic tasks through certain digital interfaces. This paper presents a review of existing eye-tracking software and hardware and sketches their application as an assistive technology for coping with ALS. Eye-tracking also provides a suitable alternative means of controlling game elements. Furthermore, artificial intelligence has been used to improve eye-tracking technology, with significant improvements in calibration and accuracy. Gaps in the literature are highlighted in the study to offer a direction for future research

    Do You Need Instructions Again? Predicting Wayfinding Instruction Demand

    Get PDF
    The demand for instructions during wayfinding, defined as the frequency of requesting instructions at each decision point, can be considered an important indicator of the internal cognitive processes during wayfinding. This demand can be a consequence of the mental state of feeling lost, being uncertain, mind wandering, having difficulty following the route, etc. It can therefore be of great importance for theoretical cognitive studies on human perception of the environment. From an application perspective, this demand can be used as a measure of the effectiveness of a navigation assistance system. It is therefore worthwhile to be able to predict this demand and to know what factors trigger it. This paper takes a step in this direction by reporting a successful prediction of instruction demand (accuracy of 78.4%) in a real-world wayfinding experiment with 45 participants, and by interpreting the environmental, user, instructional, and gaze-related features that caused it
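    A minimal sketch of how such a prediction could be set up, assuming a tabular classifier over per-decision-point features. The feature names, the random-forest model, and the placeholder data below are assumptions for illustration, since the abstract does not specify the method.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per decision point with hypothetical
# environmental, user, instructional, and gaze-related features and a
# binary label for whether instructions were requested again.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "junction_complexity": rng.integers(2, 6, 300),
    "route_familiarity":   rng.random(300),
    "instruction_length":  rng.integers(5, 40, 300),
    "fixations_on_map":    rng.integers(0, 30, 300),
    "requested_again":     rng.integers(0, 2, 300),
})
X, y = df.drop(columns="requested_again"), df["requested_again"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
# Feature importances hint at which factors trigger the demand.
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```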

    Detecting Relevance during Decision-Making from Eye Movements for UI Adaptation

    Full text link
    This paper proposes an approach to detect information relevance during decision-making from eye movements in order to enable user interface adaptation. This is a challenging task because gaze behavior varies greatly across individual users and tasks, and ground-truth data is difficult to obtain. Thus, prior work has mostly focused on simpler target-search tasks or on establishing general interest, where gaze behavior is less complex. From the literature, we identify six metrics that capture different aspects of gaze behavior during decision-making and combine them in a voting scheme. We empirically show that this accounts for the large variations in gaze behavior and outperforms standalone metrics. Importantly, it offers an intuitive way to control the amount of detected information, which is crucial for different UI adaptation schemes to succeed. We show the applicability of our approach by developing a room-search application that changes the visual saliency of content detected as relevant. In an empirical study, we show that it detects up to 97% of relevant elements with respect to user self-reporting, which allows us to meaningfully adapt the interface, as confirmed by participants. Our approach is fast, does not need any explicit user input, and can be applied independently of task and user.
    Comment: The first two authors contributed equally to this work
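    A minimal sketch of a voting scheme of this kind, assuming each gaze metric casts a vote when its score exceeds a threshold and an element is flagged as relevant when enough metrics agree. The metric names, thresholds, and vote count below are assumptions, not the paper's six metrics.

```python
from typing import Dict

def vote_relevant(scores: Dict[str, float],
                  thresholds: Dict[str, float],
                  min_votes: int = 4) -> bool:
    """Flag a UI element as relevant when at least min_votes metrics
    exceed their thresholds. Raising min_votes detects fewer elements,
    lowering it detects more, which controls the amount of detected
    information."""
    votes = sum(scores[m] >= t for m, t in thresholds.items())
    return votes >= min_votes

# Hypothetical gaze metrics for one UI element.
scores = {
    "total_fixation_time": 2.3,      # seconds on the element
    "fixation_count": 7,
    "revisit_count": 3,
    "mean_fixation_duration": 0.31,  # seconds
    "last_fixation_recency": 0.8,    # normalized
    "pupil_dilation": 0.12,          # relative change
}
thresholds = {
    "total_fixation_time": 1.5,
    "fixation_count": 5,
    "revisit_count": 2,
    "mean_fixation_duration": 0.25,
    "last_fixation_recency": 0.5,
    "pupil_dilation": 0.10,
}
print(vote_relevant(scores, thresholds))  # True: all six metrics vote yes
```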

    Getting the Most from Eye-Tracking: User-Interaction Based Reading Region Estimation Dataset and Models

    Full text link
    A single digital newsletter usually contains many messages (regions). Users' reading time spent on, and read level (skip/skim/read-in-detail) of, each message is important for platforms to understand their users' interests, personalize their content, and make recommendations. Based on accurate but expensive-to-collect eye-tracker-recorded data, we built models that predict per-region reading time from easy-to-collect JavaScript browser tracking data. With eye-tracking, we collected 200k ground-truth datapoints from participants reading news in browsers. We then trained machine learning and deep learning models to predict message-level reading time from user interactions such as mouse position, scrolling, and clicking. Against the eye-tracking ground truth, a two-tower neural network based on user interactions alone reached a 27% percentage error in reading-time estimation, while heuristic baselines had around 46%. We also found benefits in replacing per-session models with per-timestamp models and in adding user-pattern features. We conclude with suggestions for developing message-level reading estimation techniques based on available data.
    Comment: Ruoyan Kong, Ruixuan Sun, Charles Chuankai Zhang, Chen Chen, Sneha Patri, Gayathri Gajjela, and Joseph A. Konstan. Getting the most from eyetracking: User-interaction based reading region estimation dataset and models. In Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, ETRA '23, New York, NY, USA, 2023. Association for Computing Machinery
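    For illustration, a minimal sketch of a two-tower regressor of the kind mentioned here, assuming PyTorch, with one tower for interaction features and one for message/region features. The layer sizes and feature dimensions are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoTowerReadingTime(nn.Module):
    """Illustrative two-tower regressor: one tower encodes interaction
    features (mouse position, scrolling, clicks), the other encodes
    message/region features; the concatenated embeddings are mapped to
    a predicted reading time in seconds."""

    def __init__(self, n_interaction=32, n_region=16, hidden=64):
        super().__init__()
        self.interaction_tower = nn.Sequential(
            nn.Linear(n_interaction, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.region_tower = nn.Sequential(
            nn.Linear(n_region, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, interaction, region):
        z = torch.cat([self.interaction_tower(interaction),
                       self.region_tower(region)], dim=-1)
        return self.head(z).squeeze(-1)

# Dummy batch of 8 messages with random interaction and region features.
model = TwoTowerReadingTime()
pred = model(torch.randn(8, 32), torch.randn(8, 16))
loss = nn.L1Loss()(pred, torch.rand(8) * 30.0)  # absolute-error training signal
```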