
    Velocity-Based LOD Reduction in Virtual Reality: A Psychometric Approach

    Virtual Reality headsets enable users to explore an environment through self-induced movement. The retinal velocity produced by such motion reduces the visual system's ability to resolve fine detail. We measured the impact of self-induced head rotations on the ability to detect quality changes in a realistic 3D model within an immersive virtual reality environment. We varied the Level-of-Detail (LOD) as a function of rotational head velocity with differing degrees of severity. Using a psychophysical method, we asked 17 participants to identify which of two presented intervals contained the higher-quality model under two maximum-velocity conditions. After fitting psychometric functions to data relating the percentage of correct responses to the aggressiveness of the LOD manipulation, we identified the threshold severity at which participants could reliably (75%) detect the lower-LOD model. Participants accepted an approximately four-fold LOD reduction even in the low maximum-velocity condition without a significant impact on perceived quality, which suggests considerable potential for optimisation while users are moving (an increased range of perceptual uncertainty). Moreover, LOD could be degraded significantly further in the high maximum-velocity condition, suggesting these effects are indeed speed-dependent.
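The threshold procedure described above can be sketched as follows. This is a minimal illustration, not the paper's analysis code: it assumes a logistic psychometric function over LOD-reduction severity in a two-interval forced-choice task (chance level 50%), with the `threshold` and `slope` parameters standing in for values that would be fitted to participant data.

```python
import math

def psychometric(severity, threshold, slope):
    # Two-interval forced choice: performance rises from the 50% chance
    # level toward 100% as the LOD reduction becomes more severe.
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (severity - threshold)))

def severity_at_criterion(threshold, slope, criterion=0.75):
    # Invert the logistic to find the severity at which observers reach
    # the criterion rate of correct detections (75% in the study).
    p = (criterion - 0.5) / 0.5
    return threshold + math.log(p / (1.0 - p)) / slope
```

At the 75% criterion the inverted logistic returns the fitted threshold itself, which is how a "reliably detectable" severity would be read off the curve.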

    Latency Aware Foveated Rendering in Unreal Engine 4


    Hybrid image-/model-based gaze-contingent rendering


    Perceptually Modulated Level of Detail for Virtual Environments

    Institute for Computing Systems Architecture

    This thesis presents a generic and principled solution for optimising the visual complexity of any arbitrary computer-generated virtual environment (VE). This is performed with the ultimate goal of reducing the inherent latencies of current virtual reality (VR) technology. Effectively, we wish to remove extraneous detail from an environment which the user cannot perceive, and thus modulate the graphical complexity of a VE with little or no perceptual artifacts. The work proceeds by investigating contemporary models and theories of visual perception and then applying these to the field of real-time computer graphics. Subsequently, a technique is devised to assess the perceptual content of a computer-generated image in terms of spatial frequency (c/deg), and a model of contrast sensitivity is formulated to describe a user's ability to perceive detail under various conditions in terms of this metric. This allows us to base the level of detail (LOD) of each object in a VE on a measure of the degree of spatial detail which the user can perceive at any instant (taking into consideration the size of an object, its angular velocity, and the degree to which it lies in the peripheral field). Additionally, a generic polygon simplification framework is presented to complement the use of perceptually modulated LOD. The efficient implementation of this perceptual model is discussed, and a prototype system is evaluated through a suite of experiments. These include a number of low-level psychophysical studies (to evaluate the accuracy of the model), a task performance study (to evaluate the effects of the model on the user), and an analysis of system performance gain (to evaluate the effects of the model on the system). The results show that, for the test application chosen, the frame rate of the simulation was manifestly improved (by four- to five-fold) with no perceivable drop in image fidelity. As a result, users were able to perform the given wayfinding task more proficiently and rapidly. Finally, conclusions are drawn on the application and utility of perceptually-based optimisations, both in reference to this work and in the wider context.
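The core idea of basing each object's LOD on the highest spatial frequency the user can resolve can be sketched as below. The falloff constants and the assumption that each coarser LOD halves an object's peak spatial frequency are illustrative stand-ins, not the thesis's fitted contrast-sensitivity model.

```python
def max_perceivable_frequency(eccentricity_deg, velocity_deg_s,
                              peak_cpd=60.0, ecc_halving=2.3, vel_halving=30.0):
    # Illustrative acuity model: the highest resolvable spatial frequency
    # (cycles/degree) falls off with retinal eccentricity and with the
    # object's angular velocity. Constants here are assumptions.
    ecc_factor = 1.0 / (1.0 + eccentricity_deg / ecc_halving)
    vel_factor = 1.0 / (1.0 + velocity_deg_s / vel_halving)
    return peak_cpd * ecc_factor * vel_factor

def select_lod(peak_frequency_cpd, eccentricity_deg, velocity_deg_s, num_lods=5):
    # Pick the coarsest LOD whose peak spatial frequency still exceeds the
    # perceivable limit; LOD 0 is the full-detail model. We assume each
    # coarser level halves the model's peak spatial frequency.
    limit = max_perceivable_frequency(eccentricity_deg, velocity_deg_s)
    for lod in range(num_lods - 1, -1, -1):
        if peak_frequency_cpd / (2 ** lod) >= limit:
            return lod
    return 0
```

A foveated, stationary object keeps full detail, while a fast-moving peripheral object drops to the coarsest level, which is the mechanism behind the reported frame-rate gain.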

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
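The vector-space view of color mentioned above treats a device-independent conversion as a plain matrix-vector product. As a concrete instance, the standard 3x3 matrix from the sRGB specification maps linear-RGB values (gamma already removed) to CIE XYZ under the D65 white point:

```python
# Linear sRGB -> CIE XYZ (D65), per the sRGB specification.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def srgb_linear_to_xyz(rgb):
    # A color conversion as a linear map: matrix times RGB vector.
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in SRGB_TO_XYZ)
```

Reference white `(1, 1, 1)` maps to the D65 white point with luminance Y = 1, which is a quick sanity check on any such matrix.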

    Visual Perception in Simulated Reality


    BlickpunktabhÀngige Computergraphik (Gaze-Contingent Computer Graphics)

    Contemporary digital displays feature multi-million pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradations. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive to remedy visible artifacts. This dissertation presents a variety of novel gaze-contingent algorithms and respective perceptual studies. Chapters 4 and 5 present methods to boost the perceived visual quality of conventional video footage when viewed on commodity monitors or projectors. In Chapter 6, a novel head-mounted display with real-time gaze tracking is described. The device enables a large variety of applications in the context of Virtual Reality and Augmented Reality. Using the gaze-tracking VR headset, a novel gaze-contingent rendering method is described in Chapter 7. The gaze-aware approach greatly reduces the computational effort for shading virtual worlds. The described methods and studies show that gaze-contingent algorithms are able to improve the quality of displayed images and videos, or to reduce the computational effort for image generation, while the display quality perceived by the user does not change.
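The gaze-contingent shading idea can be sketched as follows: shade at full rate around the tracked gaze point and progressively coarser with eccentricity. The foveal radius, the pixels-per-degree mapping, and the power-of-two falloff are illustrative assumptions, not the dissertation's per-pixel perceptual model.

```python
import math

def eccentricity_deg(gaze_px, pixel_px, px_per_degree):
    # Angular distance between the tracked gaze point and a pixel,
    # assuming an approximately linear pixels-per-degree mapping.
    dx = pixel_px[0] - gaze_px[0]
    dy = pixel_px[1] - gaze_px[1]
    return math.hypot(dx, dy) / px_per_degree

def shading_rate(ecc_deg, foveal_radius_deg=5.0, coarsest_level=4):
    # Full shading (one shade per pixel) inside the foveal region, then
    # one shade per 2x2, 4x4, ... pixel block further out, capped at a
    # coarsest rate. Radius and falloff are illustrative.
    if ecc_deg <= foveal_radius_deg:
        return 1
    level = int(ecc_deg // foveal_radius_deg)
    return min(2 ** level, 2 ** coarsest_level)
```

Because the shading budget concentrates where the eye actually looks, most of the frame can be shaded at a fraction of full rate without a perceived quality change, which is the saving the abstract describes.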
