    Smoothness perception: investigation of beat rate effect on frame rate perception

    Despite the complexity of the Human Visual System (HVS), research over the last few decades has highlighted a number of its limitations. These limitations can be exploited in computer graphics to significantly reduce computational cost, and thus the required rendering time, without a viewer perceiving any difference in the resultant image quality. Furthermore, cross-modal interaction between different modalities, such as the influence of audio on visual perception, has been shown to be significant in both psychology and computer graphics. In this paper we investigate the effect of beat rate on temporal visual perception, i.e. frame rate perception. To evaluate visual quality and perception, a series of psychophysical experiments was conducted and the data analysed. The results indicate that beat rates in some cases do affect temporal visual perception and that certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction, which could potentially be exploited in high-fidelity interactive multi-sensory virtual environments.
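
    As a purely illustrative sketch of how such a finding might be applied, the hypothetical function below picks a render frame rate from the soundtrack's beat rate; the BPM thresholds and frame rates are invented placeholders, not values measured in the paper.
```python
# Purely illustrative: the paper reports that certain beat rates alter
# frame rate perception; it does not prescribe this mapping. The BPM
# thresholds and frame rates below are invented placeholders.

def target_frame_rate(beat_rate_bpm: float, base_fps: int = 60) -> int:
    """Pick a render frame rate given the soundtrack's beat rate (BPM)."""
    if beat_rate_bpm <= 60:          # slow beat: tolerate fewer frames
        return max(24, base_fps // 2)
    if beat_rate_bpm <= 120:         # moderate beat: modest saving
        return max(30, (3 * base_fps) // 4)
    return base_fps                  # fast beat: render at full rate

if __name__ == "__main__":
    for bpm in (50, 100, 150):
        print(f"{bpm} BPM -> render at {target_frame_rate(bpm)} fps")
```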

    Auditory-visual interaction in computer graphics

    Generating high-fidelity images in real time at reasonable frame rates still remains one of the main challenges in computer graphics. Furthermore, visuals are only one of the multiple sensory cues that must be delivered simultaneously in a multi-sensory virtual environment. Besides vision, the sense most frequently used in virtual environments and entertainment is audio. While the rendering community focuses on solving the rendering equation more quickly using various algorithmic and hardware improvements, the exploitation of human limitations to assist in this process remains largely unexplored. Many findings in the research literature demonstrate physical and psychological limitations of humans, including attentional and perceptual limitations as well as limitations of the Human Sensory System (HSS). Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant difference in image quality. Furthermore, cross-modal effects, that is, the influence of one sensory input on another, for example sound and visuals, have also recently been shown to have a substantial impact on viewer perception of virtual environments. In this thesis, auditory-visual cross-modal interaction research findings have been investigated and adapted for graphics rendering purposes. The results from five psychophysical experiments, involving 233 participants, showed that, even in the realm of computer graphics, there is a strong relationship between vision and audition in both the spatial and temporal domains. The first experiment, investigating auditory-visual cross-modal interaction within the spatial domain, showed that unrelated sound effects reduce the perceived rendering quality threshold. In the following experiments, the effect of audio on temporal visual perception was investigated. The results obtained indicate that audio with certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. Furthermore, introducing the sound effect of footsteps to walking animations increased perceived visual smoothness. These results suggest that, under certain conditions, the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of the reduction. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction and its use in high-fidelity interactive multi-sensory virtual environments.

    Multi-Modal Perception for Selective Rendering

    A major challenge in generating high-fidelity virtual environments (VEs) is to provide realism at interactive rates. The high-fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance through a series of novel exploitations: parts of the scene that are not currently being attended to by the viewer can be rendered at a much lower quality without the difference being perceived. This paper investigates the effect spatialised directional sound has on the visual attention of a user viewing rendered images. These perceptual effects are utilised in selective rendering pipelines via the use of multi-modal maps. The multi-modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi-modal virtual environments.
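
    A minimal sketch of the multi-modal map idea follows, assuming NumPy and a single sound source projected to screen coordinates; the Gaussian audio term and the blending weight are illustrative assumptions, not the maps the paper derives from psychophysical data.
```python
# A sketch of blending an image saliency map with audio-driven attention.
# The Gaussian bump and the 50/50 weighting are assumptions for
# illustration; the paper derives its maps from psychophysical data.
import numpy as np

def audio_attention_map(h, w, src_xy, sigma=40.0):
    """Gaussian attention bump centred on the sound source's screen position."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - src_xy[0]) ** 2 + (ys - src_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def multimodal_map(saliency, src_xy, audio_weight=0.5):
    """Blend image saliency with audio attention, normalised to [0, 1]."""
    h, w = saliency.shape
    m = (1.0 - audio_weight) * saliency \
        + audio_weight * audio_attention_map(h, w, src_xy)
    return m / m.max()

# High-attention regions receive more samples in a fixed-cost renderer.
saliency = np.random.rand(240, 320)        # stand-in for a real saliency map
quality = multimodal_map(saliency, src_xy=(160, 120))
samples_per_pixel = 1 + np.round(quality * 15).astype(int)   # 1..16 spp
```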

    The influence of olfaction on the perception of high-fidelity computer graphics

    The computer graphics industry is constantly demanding more realistic images and animations. However, producing such high-quality scenes can take a long time, even days, if rendering on a single PC. One approach that can be used to speed up rendering is visual perception, which exploits the limitations of the Human Visual System, since the viewers of the results will be humans. Although there is an increasing body of research into how haptics and sound may affect a viewer's perception in a virtual environment, the influence of smell has been largely ignored. The aim of this thesis is to address this gap and make smell an integral part of multi-modal virtual environments. In this work, we have performed four major experiments, with a total of 840 participants. In the experiments we used still images and animations, related and unrelated smells, and finally a multi-modal environment combining smell, sound and temperature. Besides this, we also investigated how long it takes an average person to adapt to a smell and what effect there may be when performing a task in the presence of a smell. The results of this thesis clearly show that a smell present in the environment firstly affects the perception of object quality within a rendered image, and secondly enables parts of the scene, or the whole animation, to be selectively rendered in high quality while the rest is rendered in lower quality without the viewer noticing the drop in quality. Such selective rendering in the presence of smell yields significant computational performance gains without any loss in the perceived quality of the images or animations.

    Animation in relational information visualization

    In order to navigate the world without memorizing every detail, the human brain builds a mental map of its environment. The mental map is a distorted and abstracted representation of the real environment: unimportant areas tend to be collapsed into a single entity, while important landmarks are overemphasized. When working with visualizations of data, we build a mental map of the data which is closely linked to the particular visualization. If the visualization changes significantly, due to changes in the data or in the way it is presented, we lose the mental map and have to rebuild it from scratch. The purpose of the research underlying this thesis was to investigate and devise methods for creating smooth transformations between visualizations of relational data that help users maintain, or quickly update, their mental map.
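
    One common way to realise such transitions is to interpolate node positions between the old and new layouts under an easing curve, as in the hypothetical sketch below; the smoothstep easing is a standard choice, not necessarily the method devised in the thesis.
```python
# A sketch of a smooth layout transition: each layout maps a node id to an
# (x, y) position. Smoothstep easing is a common choice, not necessarily
# the method devised in the thesis.

def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow start and end avoid abrupt motion."""
    return t * t * (3.0 - 2.0 * t)

def interpolate_layouts(old: dict, new: dict, t: float) -> dict:
    """Positions of nodes shared by both layouts at eased time t in [0, 1]."""
    s = ease_in_out(t)
    return {n: ((1 - s) * old[n][0] + s * new[n][0],
                (1 - s) * old[n][1] + s * new[n][1])
            for n in old.keys() & new.keys()}

# Animating over a few hundred milliseconds lets viewers track each node,
# updating the mental map instead of rebuilding it from scratch.
frames = [interpolate_layouts({"a": (0, 0)}, {"a": (100, 50)}, i / 10)
          for i in range(11)]
```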

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological condition. User satisfaction is key to any product's acceptance; computer applications and video games offer a unique opportunity to provide a tailored environment that better suits each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
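
    The sketch below is a toy fuzzy-logic estimate in the spirit of FLAME's emotion component; the membership functions, input signals and the single rule are illustrative assumptions, not the model the authors implemented for Unreal Tournament 2004.
```python
# A toy fuzzy-logic estimate in the spirit of FLAME's emotion component.
# The membership functions and the single rule are illustrative
# assumptions, not the model implemented for Unreal Tournament 2004.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(deaths_per_min: float, hit_ratio: float) -> float:
    """Fuzzy frustration estimate in [0, 1] from observable gameplay signals."""
    dying_often = tri(deaths_per_min, 1.0, 3.0, 6.0)
    missing_lots = tri(1.0 - hit_ratio, 0.3, 0.7, 1.0)
    # Rule: IF dying often AND missing lots THEN frustrated (min as AND).
    return min(dying_often, missing_lots)

print(estimate_frustration(deaths_per_min=4.0, hit_ratio=0.2))  # ~0.67
```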

    The Importance of Multimodal Emotion Conditioning and Affect Consistency for Embodied Conversational Agents

    Previous studies regarding the perception of emotions in embodied virtual agents have shown the effectiveness of virtual characters in conveying emotions through interactions with humans. However, creating an autonomous embodied conversational agent with expressive behaviors presents two major challenges. The first is the difficulty of synthesizing conversational behaviors for each modality that are as expressive as real human behaviors. The second is that affects are modeled independently, which makes it difficult to generate multimodal responses with consistent emotions across all modalities. In this work, we propose a conceptual framework, ACTOR (Affect-Consistent mulTimodal behaviOR generation), that aims to increase the perception of affects by generating multimodal behaviors conditioned on a consistent driving affect. We conducted a user study with 199 participants to assess how the average person judges the affects perceived from multimodal behaviors that are consistent or inconsistent with respect to a driving affect. The results show that, among all model conditions, our affect-consistent framework receives the highest Likert scores for the perception of driving affects. Our statistical analysis suggests that making a modality affect-inconsistent significantly decreases the perception of driving affects. We also observe that multimodal behaviors conditioned on consistent affects are more expressive than behaviors with inconsistent affects. We therefore conclude that multimodal emotion conditioning and affect consistency are vital to enhancing the perception of affects in embodied conversational agents.
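
    Conceptually, affect-consistent conditioning means feeding one driving affect to every modality generator rather than letting each modality model affect independently. The sketch below illustrates only that idea, with hypothetical stand-in functions; it is not the ACTOR implementation.
```python
# Hypothetical stand-ins illustrating affect-consistent conditioning: every
# modality generator receives the same driving affect. This is not the
# ACTOR implementation, only the conditioning idea it describes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Affect:
    valence: float   # negative..positive, in [-1, 1]
    arousal: float   # calm..excited, in [0, 1]

def generate_speech(text: str, affect: Affect) -> str:
    tone = "warm" if affect.valence > 0 else "flat"
    return f"{tone} delivery of {text!r}"

def generate_gesture(affect: Affect) -> str:
    return "broad gesture" if affect.arousal > 0.5 else "subtle gesture"

def generate_face(affect: Affect) -> str:
    return "smile" if affect.valence > 0 else "frown"

def generate_behavior(text: str, driving_affect: Affect) -> dict:
    """All modalities share one driving affect, keeping emotions consistent."""
    return {"speech": generate_speech(text, driving_affect),
            "gesture": generate_gesture(driving_affect),
            "face": generate_face(driving_affect)}

print(generate_behavior("Hello!", Affect(valence=0.8, arousal=0.7)))
```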