
    Modelling human-computer interaction

    Human-computer interaction (HCI) can effectively be understood as a continuous cycle of interaction between the user and the environment: the user's action changes the system or the environment, the user evaluates those changes, the evaluation revises the user's goals, and the revised goals prompt the next action. To describe this cyclic process, a notation is needed with which a user interface designer can reason about interactivity. This paper argues that a cyclic notation can account for the intimate connection between goal, action and environment, allowing a designer to make explicit both what a process achieves and what triggers it. Designers can then build interactive versions of their designs in order to assess the assumptions made about the interaction between the user and the system.
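
    A minimal, self-contained sketch of the goal -> action -> environment change -> evaluation cycle the abstract describes. The toy task (stepping a numeric state toward a target) and every function name are illustrative assumptions, not the notation proposed in the paper.

        def choose_action(goal, state):
            """User picks an action based on the current goal and environment state."""
            return 1 if state < goal else -1

        def apply_action(action, state):
            """The action changes the environment (here, a single numeric state)."""
            return state + action

        def evaluate(goal, state):
            """User evaluates the new state against the goal."""
            return abs(goal - state)

        def interaction_cycle(goal, state, max_steps=20):
            for step in range(max_steps):
                action = choose_action(goal, state)
                state = apply_action(action, state)
                if evaluate(goal, state) == 0:   # goal satisfied; in general the
                    break                        # evaluation would instead revise the goal
            return state, step

        print(interaction_cycle(goal=5, state=0))  # -> (5, 4)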

    Model of Coordination Flow in Remote Collaborative Interaction

    © 2015 IEEE. We present an information-theoretic approach for modelling coordination in human-human interaction and for measuring coordination flow in a remote collaborative tracking task. Building on Shannon's mutual information, coordination flow measures, for stochastic collaborative systems, how much influence the environment has on the joint control exercised by the collaborating parties. We demonstrate the approach on interactive human data recorded in a user study and report the effort required to create rigorous models. Our initial results suggest the potential of coordination flow, as an objective, task-independent measure, in supporting designers of human collaborative systems and in providing better theoretical foundations for the science of Human-Computer Interaction.
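
    A hedged sketch of the information-theoretic ingredient the abstract builds on: Shannon mutual information estimated from discrete samples. The variable roles below (an environment signal versus the pair's joint control signal) are assumptions for illustration; the paper's coordination flow measure is derived from such quantities but is not reproduced here.

        import numpy as np

        def mutual_information(x, y):
            """Estimate I(X; Y) in bits from two aligned sequences of discrete symbols."""
            x, y = np.asarray(x), np.asarray(y)
            xs, ys = np.unique(x), np.unique(y)
            joint = np.zeros((len(xs), len(ys)))
            for i, xi in enumerate(xs):
                for j, yj in enumerate(ys):
                    joint[i, j] = np.mean((x == xi) & (y == yj))   # empirical joint probability
            px = joint.sum(axis=1, keepdims=True)                  # marginal of X
            py = joint.sum(axis=0, keepdims=True)                  # marginal of Y
            nz = joint > 0
            return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

        # Example: environment state vs. the collaborators' joint control signal
        env     = [0, 0, 1, 1, 0, 1, 0, 1]
        control = [0, 0, 1, 1, 0, 1, 1, 1]
        print(mutual_information(env, control))  # larger values indicate stronger coupling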

    Computers that smile: Humor in the interface

    When we consider research on the role of human characteristics in the user interface of computers, it is certainly not the case that humor has been ignored. However, compared with the efforts and experiments that attempt to demonstrate the positive role of general emotion modelling in the user interface, attention to humor remains limited. As we all know, the computer is sometimes a source of frustration rather than a source of enjoyment, and indeed there are research projects that aim at recognizing a user's frustration rather than his or her enjoyment. Rather than detecting frustration and perhaps reacting to it in a humorous way, however, we would like to prevent frustration by making interaction with a computer more natural and more enjoyable. For that reason we are working on multimodal interaction and embodied conversational agents, in whose interaction verbal and nonverbal communication are equally important. Multimodal emotion display and detection are among our advanced research issues, and investigating the role of humor in human-computer interaction is one of them.

    Multi-agent evolutionary systems for the generation of complex virtual worlds

    Modern films, games and virtual reality applications depend on convincing computer graphics. Highly complex models are a requirement for the successful delivery of many scenes and environments. While workflows such as rendering, compositing and animation have been streamlined to accommodate increasing demands, building complex models is still a laborious task. This paper introduces the computational benefits of an Interactive Genetic Algorithm (IGA) to computer graphics modelling while compensating for the effects of user fatigue, a common issue in Interactive Evolutionary Computation. An intelligent agent is used in conjunction with the IGA, offering the potential to reduce user fatigue by learning from the choices made by the human designer and directing the search accordingly. This workflow accelerates the layout and distribution of basic elements to form complex models; it captures the designer's intent through interaction and encourages playful discovery.
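
    A minimal sketch of an interactive GA loop in which a learned surrogate stands in for the human designer once a rating budget is exhausted, the fatigue-reduction idea the abstract describes. The toy genome (a list of floats), the nearest-neighbour surrogate, and all parameter values are assumptions for illustration, not the paper's agent or encoding.

        import random

        def surrogate_score(genome, rated):
            """Predict the designer's rating from the closest previously rated genome."""
            if not rated:
                return random.random()
            nearest = min(rated, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], genome)))
            return nearest[1]

        def interactive_ga(ask_user, genome_len=8, pop_size=12, generations=5, user_budget=10):
            population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
            rated = []  # (genome, user_rating) pairs the surrogate learns from
            for _ in range(generations):
                scored = []
                for g in population:
                    if user_budget > 0:                 # ask the human while budget remains
                        score = ask_user(g)
                        rated.append((g, score))
                        user_budget -= 1
                    else:                               # otherwise fall back on the surrogate
                        score = surrogate_score(g, rated)
                    scored.append((score, g))
                scored.sort(key=lambda s: s[0], reverse=True)
                parents = [g for _, g in scored[: pop_size // 2]]
                population = [
                    [random.choice(pair) + random.gauss(0, 0.05)
                     for pair in zip(*random.sample(parents, 2))]   # crossover + mutation
                    for _ in range(pop_size)
                ]
            return scored[0][1]

        # Stand-in "designer" that prefers genomes with every gene close to 0.7
        best = interactive_ga(lambda g: -sum(abs(x - 0.7) for x in g))
        print(best)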

    On human motion prediction using recurrent neural networks

    Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality. Following the success of deep learning methods in several computer vision tasks, recent work has focused on using deep recurrent neural networks (RNNs) to model human motion, with the goal of learning time-dependent representations that perform tasks such as short-term motion prediction and long-term human motion synthesis. We examine recent work, with a focus on the evaluation methodologies commonly used in the literature, and show that, surprisingly, state-of-the-art performance can be achieved by a simple baseline that does not attempt to model motion at all. We investigate this result, and analyze recent RNN methods by looking at the architectures, loss functions, and training procedures used in state-of-the-art approaches. We propose three changes to the standard RNN models typically used for human motion, which result in a simple and scalable RNN architecture that obtains state-of-the-art performance on human motion prediction. Comment: Accepted at CVPR 1
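
    A hedged sketch of the kind of trivially simple baseline the abstract alludes to: predicting that the pose simply stays at the last observed frame (a constant, "zero-velocity" forecast). The array shapes, toy data, and error metric below are illustrative assumptions, not the paper's exact evaluation protocol.

        import numpy as np

        def zero_velocity_baseline(observed, horizon):
            """Repeat the last observed pose for `horizon` future frames.

            observed: array of shape (T_obs, D) holding D pose features per frame.
            returns:  array of shape (horizon, D).
            """
            return np.tile(observed[-1], (horizon, 1))

        # Toy usage: 50 observed frames of a 54-D pose, predict 25 future frames
        rng = np.random.default_rng(0)
        past = np.cumsum(rng.normal(scale=0.01, size=(50, 54)), axis=0)
        pred = zero_velocity_baseline(past, horizon=25)
        future = past[-1] + np.cumsum(rng.normal(scale=0.01, size=(25, 54)), axis=0)
        print(np.mean((pred - future) ** 2))  # mean squared error of the constant forecast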

    Psycho-Physiologically-Based Real Time Adaptive General Type 2 Fuzzy Modelling and Self-Organising Control of Operator's Performance Undertaking a Cognitive Task

    This paper presents a new fuzzy-based modelling and control framework validated with real-time experiments on human participants experiencing stress, induced via mental arithmetic cognitive tasks and identified through psycho-physiological markers. The ultimate aim of the modelling/control framework is to prevent performance breakdown in human-computer interactive systems, with a special focus on human performance. Two modelling/control experiments, consisting of arithmetic operations of varying difficulty levels, were performed by 10 participants (operators). Modelling is achieved through a new adaptive, self-organizing and interpretable framework based on General Type-2 Fuzzy sets, which learns in real time through a re-structured performance-learning algorithm that identifies important features in the data without the need for prior training. The information learnt by the model is then exploited by an Energy Model Based Controller that infers adequate control actions by changing the difficulty level of the arithmetic operations in the human-computer interaction system, based on the most recent psycho-physiological state of the subject under study. The real-time implementation of the proposed modelling and control configurations shows superior performance compared to other forms of modelling and control, with minimal intervention in terms of model re-training or parameter re-tuning to deal with uncertainties, disturbances and inter/intra-subject parameter variability.
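
    A deliberately simplified stand-in for the closed loop described above: read a psycho-physiological "stress" signal and adjust task difficulty to hold the operator near a target level. The proportional update rule, the simulated sensor, and all thresholds are assumptions for illustration; the paper's General Type-2 fuzzy model and energy-model-based controller are far richer than this sketch.

        import random

        def read_stress(difficulty):
            """Hypothetical psycho-physiological reading in [0, 1]; here a noisy function of load."""
            return min(1.0, max(0.0, 0.15 * difficulty + random.gauss(0, 0.05)))

        def adapt_difficulty(target_stress=0.5, steps=30, gain=2.0):
            difficulty = 3.0                                     # e.g. size of the arithmetic operands
            for _ in range(steps):
                stress = read_stress(difficulty)
                difficulty -= gain * (stress - target_stress)    # back off when stress is too high
                difficulty = min(9.0, max(1.0, difficulty))      # keep the task within sensible bounds
            return difficulty

        print(adapt_difficulty())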