9,414 research outputs found

    Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)


    CGAMES'2009


    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method for tracking user satisfaction without devices that monitor users' physiological condition. User satisfaction is key to any product's acceptance, and computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to estimate user emotion with a software-only method.
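    As a rough illustration only (this is not the authors' model or El-Nasr's actual FLAME implementation), a software-only estimator of this kind can map observable game events to fuzzy emotion intensities; every input name, membership range, and rule below is a hypothetical placeholder.

```python
# Minimal sketch of fuzzy, event-driven emotion estimation in the spirit of
# a FLAME-style appraisal. All inputs, ranges, and rules are hypothetical
# placeholders, not the model described in the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_frustration(damage_taken, kills_per_min):
    """Map two observable game statistics to a frustration level in [0, 1]."""
    # Fuzzify the crisp inputs (the numeric ranges are arbitrary examples).
    high_damage = tri(damage_taken, 40, 100, 160)
    low_kills = tri(kills_per_min, -1, 0, 2)
    high_kills = tri(kills_per_min, 1, 4, 8)

    # Mamdani-style rules: AND as min; each rule yields an activation strength.
    frustrated = min(high_damage, low_kills)  # taking damage, scoring little
    satisfied = high_kills                    # scoring well

    # Defuzzify with a simple weighted average of the two rule activations.
    total = frustrated + satisfied
    return frustrated / total if total else 0.0

print(estimate_frustration(damage_taken=90, kills_per_min=0.5))  # ~1.0
```

    The appeal of such an approach, as the abstract argues, is that every input is already available in the game's event stream, so no physiological sensor is required.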

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we express ourselves not only with the face but also with body movements, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.

    Where Do You Look? Relating Visual Attention to Learning Outcomes and URL Parsing

    Visual behavior provides a dynamic trail of where attention is directed. It is considered the behavioral interface between engagement and information gain, and researchers have used it for several decades to study user behavior. This thesis employs visual attention to understand user behavior in two contexts: 3D learning and gauging URL safety. Such understanding is valuable for improving interactive tools and interface designs. In the first chapter, we present results from studying learners' visual behavior while engaging with tangible and virtual 3D representations of objects. This is a replication of a recent study, which we extended using eye tracking. By analyzing the visual behavior, we confirmed the original study's results and added quantitative explanations for the corresponding learning outcomes. Among other things, our results indicated that users allocate similar visual attention when analyzing virtual and tangible learning material. In the next chapter, we present the outcomes of a user study in which participants were instructed to classify a set of URLs while wearing an eye tracker. Much effort is spent on teaching users how to detect malicious URLs; significantly less attention has gone to understanding exactly how and why users routinely fail to vet URLs properly. This user study aims to fill that void by shedding light on the underlying processes users employ to gauge a URL's trustworthiness at the time of scanning. Our findings suggest that users have a cap on the amount of cognitive resources they are willing to expend on vetting a URL. They also tend to believe that the presence of www in the domain name indicates that the URL is safe.
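    The www misconception reported above is easy to demonstrate programmatically: ownership of a URL is determined by the registrable domain, not by a leading www label. A minimal sketch using only the Python standard library, with a hypothetical phishing-style hostname:

```python
from urllib.parse import urlparse

# A deceptive URL: "www" and a familiar brand name both appear, yet the
# registrable domain (the part that determines ownership) is evil.example.
url = "https://www.paypal.com.evil.example/login"

host = urlparse(url).hostname
print(host)  # www.paypal.com.evil.example

# Crude approximation of the registrable domain (a real check would consult
# the Public Suffix List). Everything left of it is attacker-chosen, so a
# leading "www" says nothing about safety.
print(".".join(host.split(".")[-2:]))  # evil.example
```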

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
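    The radial-shift manipulation is straightforward to picture in code. In this sketch the eight items and the ±1 degree shift come from the abstract, while the baseline eccentricity and the simple ring layout are assumptions for illustration:

```python
import math
import random

def shifted_positions(n_items=8, eccentricity_deg=6.0, shift_deg=1.0):
    """Place n_items evenly on a ring around fixation, then displace each
    one inward or outward along its own spoke by +/- shift_deg of visual
    angle. The 6-degree baseline eccentricity is an assumed example value."""
    positions = []
    for i in range(n_items):
        angle = 2 * math.pi * i / n_items  # direction of this item's spoke
        radius = eccentricity_deg + random.choice([-shift_deg, shift_deg])
        positions.append((radius * math.cos(angle), radius * math.sin(angle)))
    return positions

print(shifted_positions())  # eight (x, y) positions in degrees of visual angle
```

    Because each item moves only along its own spoke, the global arrangement is perturbed while each item's bearing from fixation is preserved, which is what makes the manipulation a test of Gestalt grouping strategies.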