
    Assessing the perceived realism of agent crowd behaviour within virtual urban environments using psychophysics

    Inhabited virtual environments feature in a growing number of graphical applications. Simulated crowds are employed for different purposes, ranging from the evaluation of evacuation procedures to driving interactable elements in video games. For many applications, it is important that the displayed crowd behaviour is perceptually plausible to the intended viewers. Crowd behaviour is inherently in flux, often depending upon many different variables such as location, situation and crowd composition. Researchers have long attempted to understand and reason about crowd behaviour, going back as far as famous psychologists such as Gustave Le Bon and Sigmund Freud, who applied theories of mob psychology with varying results. Since then, various other methods, from artificial intelligence to simple heuristics, have been tried for crowd simulation. Even though research into methods for simulating crowds has a long history, evaluating such simulations has received less attention and, as this thesis will show, increased complexity and high-fidelity recreation of recorded behaviours do not guarantee improved plausibility for a human observer. Actual crowd data is not always perceived as more real than simulation, making it difficult to identify a gold standard, or ground truth. This thesis presents new work on the use of psychophysics for the perceptual evaluation of crowd simulation in order to develop methods and metrics for tailoring crowd behaviour to target applications. Psychophysics is a branch of psychology dedicated to studying the relationship between a given stimulus and how it is perceived. A three-stage methodology of analysis, synthesis and perception is employed, in which crowd data is gathered from the analysis of real instances of crowd behaviour and then used to synthesise behavioural features for simulation before being perceptually evaluated using psychophysics. Perceptual thresholds are calculated based on the psychometric function, and key configurations are identified that appear the most perceptually plausible to human viewers. The method is shown to be useful for the initial application, and it is expected to be applicable to a wide range of simulation problems in which human perception and acceptance is the ultimate measure of success.
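
    The thresholds mentioned above come from fitting a psychometric function to plausibility judgements. As a minimal sketch of that step, the following fits a logistic function to hypothetical response rates with SciPy; the stimulus levels, response proportions and logistic form are illustrative assumptions, not the thesis's actual data or model.

```python
# Sketch: fitting a logistic psychometric function to hypothetical
# "judged plausible" rates and reading off the 50% threshold.
# Stimulus levels, proportions and the logistic form are illustrative
# assumptions, not the thesis's actual data or model.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    """Logistic psychometric function; alpha is the 50% threshold, beta the slope."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

# Hypothetical simulation-parameter settings and the proportion of trials
# at each setting that observers judged perceptually plausible.
levels = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
p_plausible = np.array([0.05, 0.10, 0.20, 0.45, 0.60, 0.80, 0.90, 0.95])

(alpha, beta), _ = curve_fit(psychometric, levels, p_plausible, p0=[0.4, 10.0])
print(f"estimated perceptual threshold: {alpha:.3f}, slope: {beta:.2f}")
```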

    The Plausibility of a String Quartet Performance in Virtual Reality

    We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. ‘Plausibility’ refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians either ignored the participant or sometimes looked towards and followed the participant’s movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, in which 20 participants first assessed their feeling of plausibility in the environment with each of the four features at its highest setting. Then, in five subsequent trials, participants started from a low setting on all features and made transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with the probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
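
    As a sketch of the matching analysis, the following estimates a Markov transition matrix from a log of configuration-to-configuration transitions. The state labels and transition log are invented for illustration; the study's actual states are combinations of the four feature settings.

```python
# Sketch: estimating a Markov transition matrix from logged transitions
# between feature configurations. The states and the transition log are
# invented; the study's states are combinations of Gaze, Spatialization,
# Auralization and Environment settings.
import numpy as np

states = ["all_low", "env_up", "gaze_up", "match"]   # hypothetical configurations
index = {s: i for i, s in enumerate(states)}

# Hypothetical (from_state, to_state) transitions pooled over participants.
transitions = [("all_low", "env_up"), ("env_up", "gaze_up"), ("gaze_up", "match"),
               ("all_low", "gaze_up"), ("gaze_up", "env_up"), ("env_up", "match")]

counts = np.zeros((len(states), len(states)))
for src, dst in transitions:
    counts[index[src], index[dst]] += 1

# Row-normalise counts into transition probabilities (all-zero rows stay zero).
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(P)
```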

    Interaction between auditory and visual perceptions on distance estimations in a virtual environment

    Navigation in virtual environments relies on accurate spatial rendering. A virtual object is localized according to its position in the environment, which is usually defined by three coordinates: azimuth, elevation and distance. Even though several studies have investigated the perception of auditory and visual cues in azimuth and elevation, little work has addressed the distance dimension. This study investigates the way humans estimate the visual and auditory egocentric distances of virtual objects. Subjects were asked to estimate the egocentric distance of objects 2–20 m away in three contexts: auditory perception alone, visual perception alone, and a combination of both (with coherent and incoherent visual and auditory cues). Even though egocentric distance was under-estimated in all contexts, the results showed a stronger influence of visual information than auditory information on the perceived distance. Specifically, the bimodal incoherent condition gave perceived distances equivalent to those in the visual-only condition only when the visual target was closer to the subject than the auditory target.
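
    The under-estimation reported here is often quantified by fitting a compressive power function to perceived versus actual distance. The sketch below shows that kind of fit on invented data; the power-law model is a common convention in the distance-perception literature, not something stated in this abstract.

```python
# Sketch: fitting a compressive power function  d_hat = k * d**a  to distance
# estimates, a standard way to quantify under-estimation of egocentric
# distance. The data points are invented; the model choice is an assumption,
# not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def power_law(d, k, a):
    return k * np.power(d, a)

actual = np.array([2, 5, 8, 11, 14, 17, 20], dtype=float)       # metres
perceived = np.array([1.9, 4.2, 6.1, 7.8, 9.3, 10.5, 11.6])     # hypothetical estimates

(k, a), _ = curve_fit(power_law, actual, perceived, p0=[1.0, 1.0])
print(f"k = {k:.2f}, exponent a = {a:.2f}  (a < 1 indicates compression)")
```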

    Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. Auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, can also affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of sound level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss. The research was supported by MRC grant G0701870 and the Vision and Eye Research Unit (VERU), Postgraduate Medical Institute at Anglia Ruskin University. This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.3758/s13414-015-1015-
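
    As a worked illustration of the sound-level cue discussed above, the free-field inverse-square law implies a drop of about 6 dB per doubling of distance. The sketch below computes this with illustrative reference values; the review itself covers the cue qualitatively.

```python
# Sketch: the inverse-square-law level cue for auditory distance in a free
# field, where level drops ~6 dB per doubling of distance. Reverberation
# complicates this indoors. The reference distance and level are illustrative.
import math

def level_at_distance(level_ref_db, d_ref, d):
    """Free-field SPL at distance d, given SPL level_ref_db measured at d_ref."""
    return level_ref_db - 20.0 * math.log10(d / d_ref)

ref_db, ref_d = 70.0, 1.0          # 70 dB SPL at 1 m (illustrative)
for d in (1.0, 2.0, 4.0, 8.0):
    print(f"{d:>4.1f} m: {level_at_distance(ref_db, ref_d, d):5.1f} dB SPL")
# Each doubling of distance reduces the level by 20*log10(2) ~= 6.02 dB.
```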

    Using Behavioral Realism to Estimate Presence: A Study of the Utility of Postural Responses to Motion Stimuli

    We recently reported that direct subjective ratings of the sense of presence are potentially unstable and can be biased by previous judgments of the same stimuli (Freeman et al., 1999). Objective measures of the behavioral realism elicited by a display offer an alternative to subjective ratings. Behavioral measures and presence are linked by the premise that, when observers experience a mediated environment (VE or broadcast) that makes them feel present, they will respond to stimuli within the environment as they would to stimuli in the real world. The experiment presented here measured postural responses to a video sequence filmed from the hood of a car traversing a rally track, using stereoscopic and monoscopic presentation. Results demonstrated a positive effect of stereoscopic presentation on the magnitude of postural responses elicited. Posttest subjective ratings of presence, vection, and involvement were also higher for stereoscopically presented stimuli. The postural and subjective measures were not significantly correlated, indicating that nonproprioceptive postural responses are unlikely to provide accurate estimates of presence. Such postural responses may prove useful for the evaluation of displays for specific applications and in the corroboration of group subjective ratings of presence, but cannot be taken as a substitute for subjective ratings.
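
    The reported null result is a correlation between postural response magnitude and subjective presence ratings. A minimal sketch of that comparison follows, on invented data; the study's actual analysis details may differ.

```python
# Sketch: testing whether postural response magnitude tracks subjective
# presence ratings, the comparison the study reports as non-significant.
# The data below are invented.
import numpy as np
from scipy.stats import pearsonr

postural = np.array([0.8, 1.2, 0.5, 1.9, 1.1, 0.7, 1.4, 0.9])   # e.g. lateral sway (cm)
presence = np.array([3.0, 4.0, 3.5, 4.0, 2.5, 3.0, 4.5, 3.5])   # e.g. 1-5 rating

r, p = pearsonr(postural, presence)
print(f"r = {r:.2f}, p = {p:.3f}")   # a non-significant p would echo the reported result
```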

    Perceiving virtual geographic slant: action influences perception

    Four experiments varied the extent and nature of observer movement in a virtual environment to examine the influence of action on estimates of geographical slant. Previous slant studies demonstrated that people consciously overestimate hill slant but can still accurately guide an action toward the hill (Proffitt, Bhalla, Gossweiler & Midgett, 1995). Related studies (Bhalla & Proffitt, 1999) suggest that one's potential to act may influence perception of slant and that distinct representations may independently inform perceptual and motoric responses. We found that in all conditions, perceptual judgments overestimated slant while motoric adjustments were more accurate. The virtual environment allowed manipulation of the effort required to walk up simulated hills. Walking with the effort appropriate to the visual slant led to increased perceptual overestimation of slant compared to active walking with effort appropriate to level ground, while visually guided actions remained accurate.

    Audio-visual-olfactory resource allocation for tri-modal virtual environments

    © 2019 IEEE. Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications which require realistic representations of real-world environments, VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme to achieve an optimal perceptual experience within a given computational budget. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants (N=25) were asked, across a fixed number of budgets (M=5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality over the other sensory stimuli. However, as the budget size increases, users prefer a balanced distribution of resources, with an increased preference for having smell impulses in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.
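
    As a toy illustration of budget-constrained resource balancing, the sketch below exhaustively allocates a fixed budget across three modalities to maximise a predicted quality score. The additive log utility and its weights are invented stand-ins for the paper's learned quality prediction model.

```python
# Sketch: allocating a fixed computational budget across visual, auditory
# and olfactory quality levels to maximise a predicted perceptual score.
# The log utility and weights are invented, loosely echoing the finding
# that vision dominates at small budgets.
import itertools, math

WEIGHTS = {"visual": 0.6, "audio": 0.3, "smell": 0.1}   # assumed priorities
LEVELS = range(0, 11)                                   # quality level per modality

def perceived_quality(alloc):
    return sum(w * math.log1p(alloc[m]) for m, w in WEIGHTS.items())

def best_allocation(budget):
    best = None
    for v, a in itertools.product(LEVELS, LEVELS):
        s = budget - v - a                 # spend the remainder on smell
        if s < 0 or s > max(LEVELS):
            continue
        alloc = {"visual": v, "audio": a, "smell": s}
        score = perceived_quality(alloc)
        if best is None or score > best[0]:
            best = (score, alloc)
    return best

for budget in (5, 10, 20):
    score, alloc = best_allocation(budget)
    print(budget, alloc, f"score={score:.2f}")
```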

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
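
    The goal-as-attractor, obstacle-as-repeller interaction described here is in the spirit of behavioural steering dynamics (cf. Fajen and Warren). The sketch below is a discrete-time caricature of that idea, not the neural model itself; the gains, decay constant and scene layout are invented.

```python
# Sketch: steering with a goal that attracts heading and obstacles that
# repel it, weighted by proximity and angular alignment. Parameters and
# scene are illustrative, not taken from the paper.
import math

def steer(agent_xy, heading, goal_xy, obstacles,
          kg=0.5, ko=1.5, c=0.8, dt=0.1, speed=1.0):
    ax, ay = agent_xy

    def bearing(px, py):
        # Signed angle of a point relative to the current heading.
        return math.atan2(py - ay, px - ax) - heading

    # Goal acts as an attractor: turn rate grows with the angular error to it.
    turn = kg * bearing(*goal_xy)
    # Obstacles act as repellers: push heading away, more strongly when the
    # obstacle is close and nearly straight ahead.
    for ox, oy in obstacles:
        b = bearing(ox, oy)
        d = math.hypot(ox - ax, oy - ay)
        turn -= ko * b * math.exp(-abs(b)) * math.exp(-c * d)

    heading += turn * dt
    return (ax + speed * math.cos(heading) * dt,
            ay + speed * math.sin(heading) * dt), heading

pos, hd = (0.0, 0.0), 0.0
for _ in range(100):
    pos, hd = steer(pos, hd, goal_xy=(10.0, 0.0), obstacles=[(5.0, 0.3)])
print(f"final position: ({pos[0]:.2f}, {pos[1]:.2f}), heading: {hd:.2f} rad")
```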

    A Psychophysical Experiment Regarding Components of the Plausibility Illusion

    We report on the design and results of an experiment investigating factors influencing Slater’s Plausibility Illusion (Psi) in virtual environments. Slater proposed Psi and Place Illusion (PI) as orthogonal components of virtual experience which contribute to realistic response in a VE. PI corresponds to the traditional conception of presence as “being there,” so there exists a substantial body of previous research relating to PI, but very little relating to Psi. We developed this experiment to investigate the components of the plausibility illusion using subjective matching techniques similar to those used in color science. Twenty-one participants each experienced a scenario with the highest level of coherence (the extent to which a scenario matches user expectations and is internally consistent), then in eight different trials chose transitions from lower-coherence to higher-coherence scenarios with the goal of matching the level of Psi they felt in the highest-coherence scenario. At each transition, participants could change one of the following coherence characteristics: the behavior of the other virtual humans in the environment, the behavior of their own body, the physical behavior of objects, or the appearance of the environment. Participants tended to choose improvements to the virtual body before any other improvements. This indicates that having an accurate and well-behaved representation of oneself in the virtual environment is the most important of these coherence characteristics.
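
    A simple way to surface the reported preference ordering is to tally which coherence characteristic participants chose to upgrade first in each trial. The sketch below does so over an invented log; it is illustrative, not the paper's analysis.

```python
# Sketch: tallying the first characteristic upgraded per matching trial,
# the kind of summary behind the reported preference for virtual-body
# improvements. The trial log is invented.
from collections import Counter

# Each entry: the first characteristic a participant upgraded in one trial.
first_choices = ["body", "body", "environment", "body", "objects",
                 "body", "humans", "body", "environment", "body"]

tally = Counter(first_choices)
total = sum(tally.values())
for characteristic, n in tally.most_common():
    print(f"{characteristic:>12}: {n}/{total} trials ({100 * n / total:.0f}%)")
```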