4,096 research outputs found

    Feasibility analysis study of battlefield distributed simulation - developmental (BDS-D) Version 1.0 system testbed extension: Fidelity and verification, validation, and accreditation

    Issued as Report, Project E-16-M96 (subproject: A-9606).

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed: we show that a software-only method can estimate user emotion.
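    The core idea of the abstract above — estimating emotion from fuzzy appraisals of game events rather than from physiological sensors — can be sketched as a minimal Mamdani-style fuzzy system. The membership functions, emotion labels, and rules below are illustrative assumptions, not FLAME's actual rule base:

    ```python
    def triangular(x, a, b, c):
        """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 when x == b."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    def estimate_emotion(desirability, expectation):
        """Estimate emotion intensities from a game event's appraisal.

        desirability in [-1, 1], expectation in [0, 1].
        Labels and rules are hypothetical stand-ins for FLAME's.
        """
        # Fuzzify the two appraisal variables
        undesirable = triangular(desirability, -2.0, -1.0, 0.0)
        desirable   = triangular(desirability,  0.0,  1.0, 2.0)
        unexpected  = triangular(expectation,  -1.0,  0.0, 1.0)
        expected    = triangular(expectation,   0.0,  1.0, 2.0)

        # Mamdani-style rules: AND = min, OR = max
        return {
            "joy":      min(desirable, expected),               # desirable AND expected
            "distress": min(undesirable, expected),             # undesirable AND expected
            "surprise": min(max(desirable, undesirable),        # any salient event
                            unexpected),                        # ...that was unexpected
        }

    # A strongly desirable, well-anticipated event should read mostly as joy
    print(estimate_emotion(0.8, 0.9))
    ```

    In the real system such intensities would be updated per game event and decayed over time; this sketch shows only the appraisal-to-emotion inference step.
    
    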

    Eye tracking observers during color image evaluation tasks

    This thesis investigated eye movement behavior of subjects during image-quality evaluation and chromatic adaptation tasks. Specifically, the objectives focused on learning where people center their attention during color preference judgments, examining the differences between paired comparison, rank order, and graphical rating tasks, and determining what strategies are adopted when selecting or adjusting achromatic regions on a soft-copy display. In judging the most preferred image, measures of fixation duration showed that observers spend about 4 seconds per image in the rank order task, 1.8 seconds per image in the paired comparison task, and 3.5 seconds per image in the graphical rating task. Spatial distributions of fixations across the three tasks were highly correlated in four of the five images. Peak areas of attention gravitated toward faces and semantic features. Introspective report was not always consistent with where people foveated, implying broader regions of importance than eye movement plots. Psychophysical results across these tasks generated similar, but not identical, scale values for three of the five images. The differences in scales are likely related to statistical treatment and image confusability, rather than eye movement behavior. In adjusting patches to appear achromatic, about 95% of the total adjustment time was spent fixating only on the patch. This result shows that even when participants are free to move their eyes in this kind of task, central adjustment patches can discourage normal image viewing behavior. When subjects did look around (less than 5% of the time), they did so early during the trial. Foveations were consistently directed toward semantic features, not shadows or achromatic surfaces. This result shows that viewers do not seek out near-neutral objects to ensure that their patch adjustments appear achromatic in the context of the scene. They also do not scan the image in order to adapt to a gray world average. 
As demonstrated in other studies, the mean chromaticity of the image influenced observers' patch adjustments. Adaptation to the D93 white point was about 65% complete from D65. This result agrees reasonably with the time course of adaptation occurring over a 20 to 30 second exposure to the adapting illuminant. In selecting the most achromatic regions in the image, viewers spent 60% of the time scanning the scene. Unlike the achromatic patch adjustment task, foveations were consistently directed toward achromatic regions and near-neutral objects, as would be expected. Eye movement records show behavior similar to what is expected from a visual search task.
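    The "65% complete" figure above is a degree-of-adaptation measure: the fraction of the chromaticity shift from the old white point to the new one that the observer's achromatic setting actually covered. A minimal sketch of that computation, using CIE 1931 xy chromaticities (the D93 coordinates are approximate, and the observer setting is a hypothetical value chosen for illustration, not data from the thesis):

    ```python
    import math

    def adaptation_completeness(start, target, achieved):
        """Fraction of the chromaticity shift from start to target
        covered by the observer's achromatic setting (achieved)."""
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        return dist(start, achieved) / dist(start, target)

    D65 = (0.3127, 0.3290)          # CIE 1931 xy white point
    D93 = (0.2831, 0.2970)          # approximate xy for D93
    setting = (0.2935, 0.3082)      # hypothetical mean achromatic adjustment

    print(round(adaptation_completeness(D65, D93, setting), 2))
    ```

    A value of 1.0 would mean the observer's neutral point shifted all the way to the new illuminant; the hypothetical setting here lands at roughly 0.65, matching the kind of partial adaptation the abstract reports.
    
    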