    Gameplay experience in a gaze interaction game

    Assessing gameplay experience for gaze interaction games is a challenging task. For this study, a gaze interaction Half-Life 2 game modification was created that allowed eye tracking control. The mod was deployed during an experiment at Dreamhack 2007, where participants had to play with gaze navigation and afterwards rate their gameplay experience. The results show low tension and negative affect scores on the gameplay experience questionnaire as well as high positive challenge, immersion and flow ratings. The correlation between spatial presence and immersion for gaze interaction was high and warrants further investigation. It is concluded that gameplay experience can be correctly assessed with the methodology presented in this paper.
    Comment: pages 49-54, The 5th Conference on Communication by Gaze Interaction - COGAIN 2009: Gaze Interaction For Those Who Want It Most, ISBN: 978-87-643-0475-
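
    The reported link between spatial presence and immersion can be checked with a standard Pearson correlation over per-participant questionnaire subscale scores. A minimal sketch in Python, using illustrative ratings rather than the study's data:

        from scipy import stats

        # Illustrative per-participant subscale means (not the study's data).
        spatial_presence = [3.8, 4.2, 2.9, 4.5, 3.6, 4.1, 3.3, 4.0]
        immersion        = [3.5, 4.4, 3.0, 4.6, 3.4, 4.3, 3.1, 3.9]

        # Pearson's r quantifies the linear relationship between the subscales.
        r, p = stats.pearsonr(spatial_presence, immersion)
        print(f"r = {r:.2f}, p = {p:.3f}")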

    VRpursuits: Interaction in Virtual Reality Using Smooth Pursuit Eye Movements

    Gaze-based interaction using smooth pursuit eye movements (Pursuits) is attractive given that it is intuitive and overcomes the Midas touch problem. At the same time, eye tracking is becoming increasingly popular for VR applications. While Pursuits was shown to be effective in several interaction contexts, it had not previously been explored in depth for VR. In a user study (N=26), we investigated how parameters that are specific to VR settings influence the performance of Pursuits. For example, we found that Pursuits is robust against different sizes of virtual 3D targets. However, performance improves when the trajectory size (e.g., radius) is larger, particularly if the user is walking while interacting. While walking, selecting moving targets via Pursuits is generally feasible, albeit less accurate than when stationary. Finally, we discuss the implications of these findings and the potential of smooth pursuits for interaction in VR by demonstrating two sample use cases: 1) gaze-based authentication in VR, and 2) a space meteors shooting game.
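
    Pursuits-style selection is conventionally implemented by correlating the gaze trajectory with each moving target's trajectory over a sliding window and selecting the best match above a threshold. A minimal sketch of that mechanism (the window length and threshold are illustrative, not the paper's values):

        import numpy as np

        def pursuits_select(gaze, targets, threshold=0.8):
            """Pick the moving target whose trajectory best matches the gaze.

            gaze:    (N, 2) array of gaze samples over a time window
            targets: dict mapping target id -> (N, 2) positions over the
                     same window
            """
            best_id, best_corr = None, threshold
            for tid, traj in targets.items():
                # Correlate gaze and target motion on each axis; both axes
                # must match for a confident selection.
                rx = np.corrcoef(gaze[:, 0], traj[:, 0])[0, 1]
                ry = np.corrcoef(gaze[:, 1], traj[:, 1])[0, 1]
                if np.isnan(rx) or np.isnan(ry):
                    continue  # e.g. a target not moving along one axis
                corr = min(rx, ry)
                if corr > best_corr:
                    best_id, best_corr = tid, corr
            return best_id

    In VR, the same logic applies after projecting 3D target positions into the gaze coordinate frame; the study's finding that larger trajectory radii help suggests enlarging the trajectory, rather than lowering the threshold, when selection while walking proves unreliable.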

    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expanded VR UI design space, and performed various user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. Then, we further experimented with various user interfaces such as binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap were also explored to find the appropriate selection technique. As a low-cost, lightweight, and low-power technology, a touch sensor can make an ideal interface for mobile headsets. Also, the front touch area can be large enough to allow a wide range of interaction types such as multi-finger interactions. With this novel front touch interface, we pave the way for new virtual reality interaction methods.
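
    One way to picture the interface is a cursor driven by normalized touchpad coordinates, with a short, low-travel touch counting as a selection tap. The sketch below is hypothetical: the event model and thresholds are assumptions, and it only loosely mirrors the paper's Drag-n-Tap technique.

        import time

        class FrontTouchCursor:
            """Hypothetical front-touchpad cursor for a VR headset UI."""

            TAP_MAX_SECONDS = 0.25   # assumed tap-duration threshold
            TAP_MAX_TRAVEL = 0.02    # assumed travel threshold (pad fraction)

            def __init__(self):
                self.cursor = (0.5, 0.5)       # normalized [0, 1] coordinates
                self._down_pos = None
                self._down_time = None

            def touch_down(self, x, y):
                self._down_pos, self._down_time = (x, y), time.monotonic()
                self.cursor = (x, y)

            def touch_move(self, x, y):
                self.cursor = (x, y)           # dragging repositions the cursor

            def touch_up(self, x, y):
                """Return True if this touch counts as a selection tap."""
                dt = time.monotonic() - self._down_time
                travel = abs(x - self._down_pos[0]) + abs(y - self._down_pos[1])
                self._down_pos = self._down_time = None
                return dt < self.TAP_MAX_SECONDS and travel < self.TAP_MAX_TRAVEL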

    Human factors in entertainment computing: designing for diversity

    Although several casual gaming systems have been developed in recent years, little research examining the impact of human factors on the design and use of digital games has been carried out, and commercially available games are only partially suitable for audiences with special needs. The research project described within this paper aims to analyze and explore design guidelines for diverse audiences and results of focus group gaming sessions to develop a research toolbox allowing for the easy creation of adaptable and accessible game scenarios. Thereby, a controllable environment for the detailed evaluation of the interrelations between human factors and entertainment systems is provided. Results obtained by further testing will be integrated in the toolbox, and may foster the development of accessible games, thus opening up new opportunities for diverse audiences and allowing them to further engage in digital games.

    Learn to Interpret Atari Agents

    Deep Reinforcement Learning (DeepRL) agents surpass human-level performance in a multitude of tasks. However, the direct mapping from states to actions makes it hard to interpret the rationale behind the decision making of agents. In contrast to previous a posteriori methods of visualizing DeepRL policies, we propose an end-to-end trainable framework based on Rainbow, a representative Deep Q-Network (DQN) agent. Our method automatically learns important regions in the input domain, which enables characterizations of the decision making and interpretations for non-intuitive behaviors. Hence we name it Region Sensitive Rainbow (RS-Rainbow). RS-Rainbow utilizes a simple yet effective mechanism to incorporate visualization ability into the learning model, not only improving model interpretability but also leading to improved performance. Extensive experiments on the challenging platform of Atari 2600 demonstrate the superiority of RS-Rainbow. In particular, our agent achieves state-of-the-art performance using just 25% of the training frames. Demonstrations and code are available at https://github.com/yz93/Learn-to-Interpret-Atari-Agents
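
    The abstract does not spell out the region-sensitive mechanism, but a generic spatial-attention block over the DQN's convolutional features captures the idea: score every location, softmax-normalize, and reweight the features so the attention map doubles as a saliency visualization. A sketch in PyTorch (an illustration of the general technique, not the authors' exact layer):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class RegionAttention(nn.Module):
            """Spatial attention over a conv feature map (B, C, H, W)."""

            def __init__(self, channels):
                super().__init__()
                self.score = nn.Conv2d(channels, 1, kernel_size=1)

            def forward(self, feats):
                b, _, h, w = feats.shape
                logits = self.score(feats).view(b, 1, h * w)
                attn = F.softmax(logits, dim=-1).view(b, 1, h, w)
                weighted = feats * attn    # emphasize the important regions
                return weighted, attn      # attn can be rendered as a heatmap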

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention and emotion, the multimodal approach can reach up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
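
    The unimodal-versus-multimodal comparison boils down to feature fusion: concatenate the per-modality descriptors and train one classifier. A minimal sketch with synthetic stand-in features (the feature sizes and classifier choice are assumptions, not the study's pipeline):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 120                                   # synthetic trials
        posture = rng.normal(size=(n, 8))         # body-posture descriptors
        gaze    = rng.normal(size=(n, 6))         # visual-attention descriptors
        emotion = rng.normal(size=(n, 4))         # emotion descriptors
        labels  = rng.integers(0, 2, size=n)      # e.g. expert vs. novice

        def accuracy(features):
            clf = RandomForestClassifier(random_state=0)
            return cross_val_score(clf, features, labels, cv=5).mean()

        print("gaze only :", accuracy(gaze))
        print("multimodal:", accuracy(np.hstack([posture, gaze, emotion])))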

    Interaction Histories and Short-Term Memory: Enactive Development of Turn-Taking Behaviours in a Childlike Humanoid Robot

    In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviours while playing interaction games with a human partner. The robot’s action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioural synchronisation. We demonstrate that the system can acquire and switch between behaviours learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short-term memory of the interaction was experimentally investigated. Results indicate that feedback based only on the immediate experience was insufficient to learn longer, more complex turn-taking behaviours. Therefore, some history of the interaction must be considered in the acquisition of turn-taking, which can be efficiently handled through the use of short-term memory.
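
    The core finding, that immediate feedback alone cannot shape longer turn-taking patterns, maps onto a simple reinforcement rule where the reward averages over a short window of recent social feedback. A sketch under assumed parameters (the memory length, learning rate and reward shaping are illustrative, not the paper's architecture):

        import random
        from collections import deque, defaultdict

        class TurnTakingLearner:
            def __init__(self, actions, memory_len=5, alpha=0.1, epsilon=0.1):
                self.q = defaultdict(float)             # value per primitive action
                self.memory = deque(maxlen=memory_len)  # recent social feedback
                self.actions = actions
                self.alpha, self.epsilon = alpha, epsilon

            def choose(self):
                if random.random() < self.epsilon:
                    return random.choice(self.actions)  # explore
                return max(self.actions, key=lambda a: self.q[a])

            def feedback(self, action, attention, synchrony):
                # Reward reflects the recent interaction history, not just the
                # last event; with memory_len=1 this reduces to the immediate
                # feedback that the paper found insufficient.
                self.memory.append(attention + synchrony)
                reward = sum(self.memory) / len(self.memory)
                self.q[action] += self.alpha * (reward - self.q[action])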

    Virtual world users evaluated according to environment design, task-based and affective attention measures

    This paper presents research that engages with users of virtual worlds for education in order to understand how to design these applications for their needs. An in-depth multi-method investigation of 12 virtual world participants was undertaken in three stages: initially, a small-scale within-subjects eye-tracking comparison was made between the role-playing game 'RuneScape' and the virtual social world 'Second Life'; secondly, an in-depth evaluation of eye-tracking data for Second Life tasks (i.e. avatar, object and world based) was conducted; finally, a qualitative evaluation of Second Life tutorials in comparative 3D situations (i.e. environments ranging from realistic to surreal, enclosed to open, and formal to informal) was conducted. Initial findings identified increased user attention within comparable gaming and social world interactions. Further analysis identified that 3D world-focused interactions increased participants' attention more than object and avatar tasks. Finally, different 3D situation designs altered levels of task engagement and distraction through perceptions of comfort, fun and fear. Ultimately, goal-based and environment interaction tasks can increase attention and potentially immersion. However, affective perceptions of 3D situations can negatively impact attention. An objective discussion of the limitations and benefits of virtual world immersion for student learning is presented.
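
    A common way to quantify the task-based attention measures used here is dwell time per area of interest (AOI), computed from fixation logs. A minimal sketch with illustrative coordinates (not the study's data or AOI definitions):

        # Fixations as (x, y, duration_ms); AOIs as named rectangles.
        fixations = [(120, 340, 210), (500, 90, 450), (510, 110, 300)]
        aois = {
            "avatar": (100, 300, 200, 420),   # (x_min, y_min, x_max, y_max)
            "world":  (450, 50, 640, 200),
        }

        def dwell_times(fixations, aois):
            """Total fixation duration falling inside each area of interest."""
            totals = {name: 0 for name in aois}
            for x, y, dur in fixations:
                for name, (x0, y0, x1, y1) in aois.items():
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        totals[name] += dur
            return totals

        print(dwell_times(fixations, aois))   # {'avatar': 210, 'world': 750}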