
    Inferring Intent from Interaction with Visualization

    Today's state-of-the-art analysis tools combine the human visual system and domain knowledge with the machine's computational power. The human performs the reasoning, deduction, hypothesis generation, and judgment. The entire burden of learning from the data usually rests squarely on the human user's shoulders. This model, while successful in simple scenarios, is neither scalable nor generalizable. In this thesis, we propose a system that integrates advancements from artificial intelligence within a visualization system to detect the user's goals. At a high level, we use hidden, unobservable states to represent the goals/intentions of users. We automatically infer these goals from passive observations of the user's actions (e.g., mouse clicks), thereby allowing accurate predictions of future clicks. We evaluate this technique with a crime map and demonstrate that, depending on the type of task, users' clicks appear in our prediction set 79%–97% of the time. Further analysis shows that we can achieve high prediction accuracy after only a short period (typically after three clicks). Altogether, we show that passive observations of interaction data can reveal valuable information about users' high-level goals, laying the foundation for next-generation visual analytics systems that can automatically learn users' intentions and support the analysis process proactively.
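    The idea of hidden goal states inferred from passive click observations can be sketched as a forward-filtered hidden Markov model. Everything below is illustrative: the goal names, click categories, and probabilities are made up for the sketch and are not the thesis's actual model or parameters.

    ```python
    STATES = ["browse", "compare", "drill_down"]          # hypothetical analyst goals
    OBS = ["map_click", "filter_click", "detail_click"]   # coarse click categories

    # Illustrative (hand-set) transition and emission probabilities.
    TRANS = {
        "browse":     {"browse": 0.7, "compare": 0.2, "drill_down": 0.1},
        "compare":    {"browse": 0.1, "compare": 0.7, "drill_down": 0.2},
        "drill_down": {"browse": 0.1, "compare": 0.1, "drill_down": 0.8},
    }
    EMIT = {
        "browse":     {"map_click": 0.7, "filter_click": 0.2, "detail_click": 0.1},
        "compare":    {"map_click": 0.2, "filter_click": 0.6, "detail_click": 0.2},
        "drill_down": {"map_click": 0.1, "filter_click": 0.2, "detail_click": 0.7},
    }

    def filter_goals(clicks, prior=None):
        """Forward filter: return P(goal | clicks so far) after each click."""
        belief = prior or {s: 1.0 / len(STATES) for s in STATES}
        history = []
        for click in clicks:
            # Predict: push the current belief through the transition model.
            predicted = {
                s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES
            }
            # Update: weight each goal by how well it explains the click.
            unnorm = {s: predicted[s] * EMIT[s][click] for s in STATES}
            z = sum(unnorm.values())
            belief = {s: v / z for s, v in unnorm.items()}
            history.append(dict(belief))
        return history

    beliefs = filter_goals(["detail_click", "detail_click", "detail_click"])
    top_goal = max(beliefs[-1], key=beliefs[-1].get)
    print(top_goal)  # after three detail clicks, "drill_down" dominates
    ```

    The belief over goals can in turn rank candidate next clicks (those most probable under the dominant goal's emission model), which mirrors the abstract's observation that useful predictions emerge after only a few clicks.
    
    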

    Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best performing model reduces error rate by 50% compared to 100 ms uniform dwell time while maintaining a similar response time. It reduces response time by 60% compared to 300 ms uniform dwell time while maintaining a similar error rate.

    Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on 30 March 2018, available online: http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of the final published article, please access: https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
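    The core policy — shorter dwell for likelier hyperlinks — can be sketched as a mapping from per-link selection probabilities to per-link dwell times. The linear interpolation between a minimum and maximum dwell below is an illustrative choice for the sketch, not the authors' actual policy, and the bounds are hypothetical.

    ```python
    def dwell_times(link_probs, min_ms=100, max_ms=600):
        """Assign each hyperlink a dwell time: likelier links get dwell
        times closer to min_ms, unlikely links closer to max_ms."""
        total = sum(link_probs.values())
        times = {}
        for link, p in link_probs.items():
            p_norm = p / total  # normalize in case probabilities don't sum to 1
            times[link] = max_ms - (max_ms - min_ms) * p_norm
        return times

    # Hypothetical link probabilities from a gaze/behavior model:
    probs = {"home": 0.6, "news": 0.3, "archive": 0.1}
    t = dwell_times(probs)
    # "home" (most likely) receives the shortest dwell time.
    ```

    A real system would cap the mapping so that even a near-certain link keeps a nonzero dwell time, preserving the user's ability to veto a wrong prediction.
    
    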

    Mouse Gesture Recognition for Human Computer Interaction

    In the field of Computer Science and Information Technology, the focus has shifted from system-oriented software to user-oriented software. For such applications, the importance of user experience has accordingly grown. This paper highlights the significance of seamless interaction between the user and the computer by proposing a reliable algorithm for performing basic operations by drawing gestures with a mouse. It aims for simplicity and quick access through gestures, providing effortless interaction for differently abled users. The core of the algorithm is the Hidden Markov Model, a probabilistic approach to gesture recognition. DOI: 10.17762/ijritcc2321-8169.15050
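    Before an HMM can classify a mouse gesture, the raw trajectory is typically quantized into discrete symbols. A common preprocessing step is an 8-direction codebook over successive cursor displacements; this sketch shows that step only, and the codebook choice is a standard convention rather than something taken from the paper.

    ```python
    import math

    def quantize(points, n_dirs=8):
        """Turn a list of (x, y) cursor samples into direction symbols
        0..n_dirs-1, where 0 = right, 2 = up, 4 = left, 6 = down."""
        symbols = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
            symbols.append(int(round(angle / (2 * math.pi / n_dirs))) % n_dirs)
        return symbols

    # A rightward stroke followed by an upward stroke:
    stroke = [(0, 0), (10, 0), (20, 0), (20, 10), (20, 20)]
    print(quantize(stroke))  # → [0, 0, 2, 2]
    ```

    The resulting symbol sequence is what gets fed to per-gesture HMMs; the gesture whose model assigns the sequence the highest likelihood wins.
    
    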

    Human desire inference process and analysis

    Ubiquitous computing has become a fascinating research area because it may offer an unobtrusive way to help users in environments that integrate surrounding objects and activities. To date, numerous studies have focused on how a user's activity can be identified and predicted, without considering the motivation driving an action. However, understanding the underlying motivation is key to activity analysis. Users' desires, in turn, often generate the motivations to engage in activities that fulfill those desires. Thus, we must study users' desires in order to provide proper services that make their lives more comfortable. In this study, we present how to design and implement a computational model for inferring a user's desire. First, we devised a hierarchical desire inference process based on Bayesian Belief Networks (BBNs) that considers the affective states, behavior contexts, and environmental contexts of a user at given points in time to infer the user's desire. The inferred desire with the highest probability from the BBNs is then used in subsequent decision making. Second, we extended a probabilistic framework based on Dynamic Bayesian Belief Networks (DBBNs), which model the observation sequences, and information theory. A generic hierarchical probabilistic framework for desire inference is introduced to model the context information and the visual sensory observations. This framework also evolves dynamically to account for temporal changes in context information along with changes in the user's desire. Third, we described which factors are relevant for determining a user's desire. To achieve this, a full-scale experiment was conducted. Raw data from sensors were interpreted as context information; we observed the users' activities and recorded their emotions as part of the input parameters.
Throughout the experiment, a complete analysis was conducted in which 30 factors were considered, and the most relevant factors were selected using correlation coefficients and delta values. Our results show that 11 factors (3 emotions, 7 behaviors, and 1 location factor) are relevant to inferring a user's desire. Finally, we established an evaluation environment within the Smart Home Lab to validate our approach. To train and verify the desire inference model, multiple stimuli were provided to induce users' desires, and pilot data were collected during the experiments. For evaluation, we used the basic measures of recall and precision. As a result, average precision was calculated to be 85% for human desire inference and 81% for Think-Aloud.
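    The BBN-based step — combining context evidence to pick the desire with the highest posterior probability — can be sketched with a tiny network in which each evidence node is conditioned on the desire. All node names, desire labels, and probabilities here are illustrative, not values from the study.

    ```python
    DESIRES = ["rest", "eat"]
    PRIOR = {"rest": 0.5, "eat": 0.5}
    P_EMOTION = {  # P(emotion | desire), hypothetical
        "rest": {"tired": 0.8, "neutral": 0.2},
        "eat":  {"tired": 0.3, "neutral": 0.7},
    }
    P_LOCATION = {  # P(location | desire), hypothetical
        "rest": {"sofa": 0.7, "kitchen": 0.3},
        "eat":  {"sofa": 0.2, "kitchen": 0.8},
    }

    def posterior(emotion, location):
        """P(desire | emotion, location) by enumeration over the tiny network."""
        unnorm = {
            d: PRIOR[d] * P_EMOTION[d][emotion] * P_LOCATION[d][location]
            for d in DESIRES
        }
        z = sum(unnorm.values())
        return {d: v / z for d, v in unnorm.items()}

    belief = posterior("tired", "sofa")
    # The desire with the highest posterior ("rest" here) would drive
    # the subsequent decision-making step, as the abstract describes.
    ```

    The full study layers this over many more factors (the 11 selected ones) and, in the DBBN extension, links beliefs across time steps so the inferred desire can track changing context.
    
    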