
    The Douglas-Peucker algorithm for line simplification: Re-evaluation through visualization

    The primary aim of this paper is to illustrate the value of visualization in cartography and to indicate that tools for the generation and manipulation of realistic images are of limited value within this application. The paper demonstrates this value within one problem in cartography, namely the generalisation of lines, and reports on the evaluation of the Douglas-Peucker algorithm for line simplification. Visualization of the simplification process and of its results suggests that the mathematical measures of performance proposed by some other researchers are inappropriate, misleading and questionable.
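    For reference, a minimal Python sketch of the Douglas-Peucker simplification step itself (not the visualization tooling evaluated in the paper) might look like this:

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Keep the point farthest from the chord if it exceeds the tolerance,
    then recurse on the two sub-polylines; otherwise keep only the endpoints."""
    if len(points) < 3:
        return points
    dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > tolerance:
        left = douglas_peucker(points[:idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right
    return [points[0], points[-1]]
```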

    A Programmable Display-Layer Architecture for Virtual-Reality Applications

    Two important technical objectives of virtual-reality systems are to provide compelling visuals and effective 3D user interaction. In this respect, modern virtual-reality system architectures suffer from a number of shortcomings. The reduction of end-to-end latency, crosstalk and judder are especially difficult challenges, each of which negatively affects visual quality or user interaction. In order to provide higher-quality visuals, complex scenes consisting of large models are often used. Rendering such a complex scene is a time-consuming process resulting in high end-to-end latency, thereby hampering user interaction. Classic virtual-reality architectures cannot adequately address these challenges due to their inherent design principles. In particular, the tight coupling between input devices, the rendering loop and the display system prevents these systems from addressing all the aforementioned challenges simultaneously.

    In this thesis, a virtual-reality architecture design is introduced that is based on the addition of a new logical layer: the Programmable Display Layer (PDL). The governing idea is that an extra layer is inserted between the rendering system and the display. In this way, the display can be updated at a fast rate and in a custom manner, independently of the other components in the architecture, including the rendering system. To generate intermediate display updates at a fast rate, the PDL performs per-pixel depth-image warping by utilizing the application data. Image warping is the process of computing a new image by transforming individual depth-pixels from a closely matching previous image to their updated locations. The PDL architecture can be used for a range of algorithms and to solve problems that are not easily solved using classic architectures. In particular, techniques to reduce crosstalk, judder and latency are examined using algorithms implemented on top of the PDL.

    Concerning user interaction techniques, several six-degrees-of-freedom input methods exist, of which optical tracking is a popular option. However, optical tracking methods also introduce several constraints that depend on the camera setup, such as line-of-sight requirements, the volume of the interaction space and the achieved tracking accuracy. These constraints generally cause a decline in the effectiveness of user interaction. To investigate the effectiveness of optical tracking methods, an optical tracker simulation framework has been developed, including a novel optical tracker to test this framework. In this way, different optical tracking algorithms can be simulated and quantitatively evaluated under a wide range of conditions.

    A common approach in virtual reality is to implement an algorithm and then to evaluate its efficacy by either subjective, qualitative metrics or quantitative user experiments, after which an updated version of the algorithm may be implemented and the cycle repeated. A different approach is followed here. Throughout this thesis, an attempt is made to automatically detect and quantify errors using completely objective and automated quantitative methods, and subsequently to resolve these errors dynamically.
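    The thesis's PDL implementation is not reproduced here, but the per-pixel depth-image warping it relies on can be sketched. The NumPy fragment below is an illustrative CPU reference only (function and parameter names are assumptions, not the thesis's API): each pixel of a previously rendered frame is unprojected using its depth, transformed to the new camera pose, and re-projected with a simple z-buffer.

```python
import numpy as np

def warp_depth_image(color, depth, K, old_cam_to_world, new_world_to_cam):
    """Forward-warp a rendered frame to a new camera pose using per-pixel depth.
    color: (H, W, 3) image, depth: (H, W) metric depth, K: 3x3 intrinsics,
    old_cam_to_world / new_world_to_cam: 4x4 homogeneous transforms."""
    H, W = depth.shape
    K_inv = np.linalg.inv(K)

    # Unproject every pixel of the source frame into world space.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T     # 3 x N
    cam_pts = (K_inv @ pix) * depth.reshape(1, -1)                        # 3 x N
    cam_h = np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])          # 4 x N
    world = old_cam_to_world @ cam_h

    # Re-project into the new view and splat to the nearest pixel, z-buffered.
    new_cam = new_world_to_cam @ world
    proj = K @ new_cam[:3]
    z = proj[2]
    valid = z > 1e-6
    u2 = np.round(proj[0, valid] / z[valid]).astype(int)
    v2 = np.round(proj[1, valid] / z[valid]).astype(int)
    src = np.flatnonzero(valid)

    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    flat_color = color.reshape(-1, 3)
    inside = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    for s, x, y in zip(src[inside], u2[inside], v2[inside]):
        if z[s] < zbuf[y, x]:                  # keep the closest surface
            zbuf[y, x] = z[s]
            out[y, x] = flat_color[s]
    return out
```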

    Human sound localisation cues and their relation to morphology

    Binaural soundfield reproduction has the potential to create realistic three-dimensional sound scenes using only a pair of normal headphones. Possible applications for binaural audio abound in, for example, the music, mobile communications and games industries. A problem exists, however, in that the head-related transfer functions (HRTFs) which inform our spatial perception of sound are affected by variations in human morphology, particularly in the shape of the external ear. It has been observed that HRTFs based simply on some kind of average head shape generally result in poor elevation perception, weak externalisation and spectrally distorted sound images. Hence, HRTFs are needed which accommodate these individual differences. Direct acoustic measurement and acoustic simulations based on morphological measurements are obvious means of obtaining individualised HRTFs, but both methods suffer from high cost and practical difficulties. The lack of a viable measurement method is currently hindering the widespread adoption of binaural technologies. There have been many attempts to estimate individualised HRTFs effectively and cheaply using easily obtainable morphological descriptors, but due to an inadequate understanding of the complex acoustic effects created in particular by the external ear, success has been limited.

    The work presented in this thesis strengthens current understanding in several ways and provides a promising route towards improved HRTF estimation. The way HRTFs vary as a function of direction is compared with localisation acuity to help pinpoint spectral features which contribute to spatial perception. Fifty subjects have been scanned using magnetic resonance imaging to capture their head and pinna morphologies, and HRTFs for the same group have been measured acoustically. To make analysis of this extensive data tractable, and so reveal the mapping between the morphological and acoustic domains, a parametric method for efficiently describing head morphology has been developed. Finally, a novel technique, referred to as morphoacoustic perturbation analysis (MPA), is described. We demonstrate how MPA allows the morphological origin of a variety of HRTF spectral features to be identified.
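    For context, the basic binaural rendering operation that individualised HRTFs feed into is a convolution of a mono signal with a left/right head-related impulse response pair. The sketch below is a minimal illustration of that step only, not the measurement or estimation methods described in the thesis; names are placeholders.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialise a mono signal for one direction by convolving it with the
    head-related impulse responses (HRIRs) for the left and right ears.
    Returns a (samples, 2) stereo buffer suitable for headphone playback."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)
```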

    Real-Time Global Illumination for VR Applications

    Real-time global illumination in VR systems enhances scene realism by incorporating soft shadows, reflections of objects in the scene, and color bleeding. The Virtual Light Field (VLF) method enables real-time global illumination rendering in VR. The VLF has been integrated with the Extreme VR system for real-time GPU-based rendering in a Cave Automatic Virtual Environment.

    Perceptually Modulated Level of Detail for Virtual Environments

    Institute for Computing Systems Architecture
    This thesis presents a generic and principled solution for optimising the visual complexity of any arbitrary computer-generated virtual environment (VE). This is performed with the ultimate goal of reducing the inherent latencies of current virtual reality (VR) technology. Effectively, we wish to remove extraneous detail from an environment which the user cannot perceive, and thus modulate the graphical complexity of a VE with few or no perceptual artifacts.

    The work proceeds by investigating contemporary models and theories of visual perception and then applying these to the field of real-time computer graphics. Subsequently, a technique is devised to assess the perceptual content of a computer-generated image in terms of spatial frequency (c/deg), and a model of contrast sensitivity is formulated to describe a user's ability to perceive detail under various conditions in terms of this metric. This allows us to base the level of detail (LOD) of each object in a VE on a measure of the degree of spatial detail which the user can perceive at any instant, taking into consideration the size of an object, its angular velocity, and the degree to which it lies in the peripheral field. Additionally, a generic polygon simplification framework is presented to complement the use of perceptually modulated LOD.

    The efficient implementation of this perceptual model is discussed and a prototype system is evaluated through a suite of experiments. These include a number of low-level psychophysical studies (to evaluate the accuracy of the model), a task performance study (to evaluate the effects of the model on the user), and an analysis of system performance gain (to evaluate the effects of the model on the system). The results show that, for the test application chosen, the frame rate of the simulation was manifestly improved (by four- to five-fold) with no perceivable drop in image fidelity. As a result, users were able to perform the given wayfinding task more proficiently and rapidly. Finally, conclusions are drawn on the application and utility of perceptually based optimisations, both in reference to this work and in the wider context.
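    As an illustration of the kind of decision such a perceptual model drives, the sketch below picks the coarsest level of detail whose preserved spatial frequency still exceeds an acuity limit that falls off with retinal eccentricity and angular velocity. The constants and falloff functions are placeholders for illustration only, not the contrast-sensitivity model calibrated in the thesis.

```python
def highest_perceptible_frequency(eccentricity_deg, velocity_deg_per_s):
    """Illustrative acuity limit in cycles/degree: acuity falls off with
    eccentricity and with the object's angular velocity.
    The constants are placeholders, not the thesis's calibrated model."""
    peak_cpd = 60.0                                    # foveal, static upper bound
    ecc_falloff = 1.0 / (1.0 + 0.3 * eccentricity_deg)
    vel_falloff = 1.0 / (1.0 + 0.1 * velocity_deg_per_s)
    return peak_cpd * ecc_falloff * vel_falloff

def select_lod(lods, eccentricity_deg, velocity_deg_per_s):
    """Pick the coarsest LOD whose preserved spatial detail still exceeds what
    the viewer can resolve. Each LOD is (mesh, highest preserved frequency in c/deg)."""
    limit = highest_perceptible_frequency(eccentricity_deg, velocity_deg_per_s)
    for mesh, preserved_cpd in sorted(lods, key=lambda l: l[1]):   # coarsest first
        if preserved_cpd >= limit:
            return mesh
    return max(lods, key=lambda l: l[1])[0]   # fall back to the most detailed mesh
```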

    Information theory tools for viewpoint selection, mesh saliency and geometry simplification

    In this chapter we review the use of an information channel as a unified framework for viewpoint selection, mesh saliency and geometry simplification. Taking the viewpoint distribution as input and the object's mesh polygons as output, the channel is given by the projected areas of the polygons over the different viewpoints. From this channel, viewpoint entropy and viewpoint mutual information can be defined in a natural way. Reversing the channel yields polygonal mutual information, which is interpreted as an ambient-occlusion-like quantity, and from the variation of this polygonal mutual information mesh saliency is defined. Viewpoint entropy, viewpoint Kullback-Leibler distance, and viewpoint mutual information are then applied to mesh simplification, and shown to compare well with a classical geometric simplification method.
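    A minimal sketch of viewpoint entropy as defined from this channel (the normalised projected-area distribution seen from each viewpoint) is given below; the mutual-information and saliency terms are omitted, and the function names are illustrative only.

```python
import numpy as np

def viewpoint_entropy(projected_areas):
    """Entropy of the projected-area distribution seen from one viewpoint.
    projected_areas: 1-D array with one visible-area value per polygon
    (the background can be included as an extra entry)."""
    p = np.asarray(projected_areas, dtype=float)
    p = p / p.sum()              # normalise to a probability distribution
    p = p[p > 0]                 # treat 0 * log 0 as 0
    return -np.sum(p * np.log2(p))

def best_viewpoint(area_matrix):
    """area_matrix[v, j] = projected area of polygon j from viewpoint v.
    Returns the index of the viewpoint with maximum viewpoint entropy."""
    return max(range(area_matrix.shape[0]),
               key=lambda v: viewpoint_entropy(area_matrix[v]))
```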