
    Selective rendering for efficient ray traced stereoscopic images

    Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments, the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process, since many more rays need to be traced. We first investigated whether we could exploit the human binocular fusing ability to significantly reduce the resolution of one image of each pair and yet retain high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well. Avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition with a combined rendering time for the image pair lower than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by selective rendering that exploits inherent features of human stereo vision.
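
    A minimal sketch of the resolution-reduction idea under stereo viewing, assuming a generic per-pixel ray tracer exposed as a trace(camera, x, y) callable (an illustrative placeholder, not code from the paper): one eye is rendered at full resolution, the other at a fraction of the pixel count and upsampled before display, so the fused percept stays sharp while the total ray count drops.

        # Sketch: selective rendering of a stereo pair. The trace() callable
        # stands in for any per-pixel ray tracer; all names here are
        # illustrative placeholders, not the paper's implementation.
        import numpy as np

        def render(camera, width, height, trace):
            """Render an image by tracing one primary ray per pixel."""
            img = np.zeros((height, width, 3))
            for y in range(height):
                for x in range(width):
                    img[y, x] = trace(camera, (x + 0.5) / width, (y + 0.5) / height)
            return img

        def render_stereo_pair(left_cam, right_cam, width, height, trace, factor=2):
            """Render the left eye at full resolution and the right eye at a
            reduced resolution, then upsample the right image so both can be
            fused under stereo viewing. Assumes width and height are divisible
            by factor."""
            left = render(left_cam, width, height, trace)
            low = render(right_cam, width // factor, height // factor, trace)
            # Nearest-neighbour upsampling back to the display resolution.
            right = low.repeat(factor, axis=0).repeat(factor, axis=1)
            return left, right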

    Interactive global illumination on the CPU

    Computing realistic, physically-based global illumination in real time remains one of the major goals in the fields of rendering and visualisation; one that has not yet been achieved due to its inherent computational complexity. This thesis focuses on CPU-based interactive global illumination approaches, with the aim of developing generalisable, hardware-agnostic algorithms. Interactive ray tracing relies on spatial and cache coherency to achieve interactive rates, which conflicts with the needs of global illumination solutions, which require a large number of incoherent secondary rays to be computed. Methods that reduce the total number of rays that need to be processed, such as selective rendering, were investigated to determine how best they can be utilised. The impact that selective rendering has on interactive ray tracing was analysed and quantified, and two novel global illumination algorithms were developed, with the structured methodology used presented as a framework. Adaptive Interleaved Sampling is a generalisable approach that combines interleaved sampling with an adaptive approach that uses efficient component-specific adaptive guidance methods to drive the computation. Results of up to 11 frames per second were demonstrated for multiple components, including participating media. Temporal Instant Caching is a caching scheme for accelerating the computation of diffuse interreflections to interactive rates; this approach achieved frame rates exceeding 9 frames per second for the majority of scenes. Validation of the results for both approaches showed little perceptual difference when comparing against a gold-standard path-traced image. Further research into caching led to the development of a new wait-free data access control mechanism for sharing the irradiance cache among multiple rendering threads on a shared-memory parallel system. By not serialising accesses to the shared data structure, the irradiance values were shared among all the threads without any overhead or contention, even when reading and writing simultaneously. This new approach achieved efficiencies between 77% and 92% for 8 threads when calculating static images and animations. This work demonstrates that, due to the flexibility of the CPU, CPU-based algorithms remain a valid and competitive choice for achieving global illumination interactively, and an alternative to the generally brute-force GPU-centric algorithms.
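
    The interleaved-sampling building block behind Adaptive Interleaved Sampling can be sketched as follows: each pixel in an n x n tile draws from a different subset of the samples (for example, a different subset of light samples), and a small neighbourhood filter later blends adjacent pixels so that every pixel effectively benefits from all subsets. The sketch below shows only this generic pattern, not the thesis's adaptive, component-specific guidance.

        # Sketch of plain interleaved sampling: pixels inside each n x n tile
        # use different sample subsets, and a neighbourhood filter later blends
        # them. Illustrative only -- the thesis adds adaptive, component-
        # specific guidance on top of this basic pattern.
        import numpy as np

        def pattern_index(x, y, n=3):
            """Index of the sample subset used by pixel (x, y) with n x n tiles."""
            return (y % n) * n + (x % n)

        def split_samples(samples, n=3):
            """Partition a sample list (e.g. light samples) into n*n interleaved
            subsets, one per pattern index."""
            return [samples[i::n * n] for i in range(n * n)]

        def blend_tiles(image, n=3):
            """Crude n x n box filter that blends the interleaved estimates;
            real implementations restrict blending to pixels with similar
            geometry so that edges are not blurred."""
            h, w, _ = image.shape
            out = np.zeros_like(image)
            for y in range(h):
                for x in range(w):
                    y0, y1 = max(0, y - n // 2), min(h, y + n // 2 + 1)
                    x0, x1 = max(0, x - n // 2), min(w, x + n // 2 + 1)
                    out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
            return out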

    A hierarchical approach for modelling X-ray beamlines. Application to a coherent beamline

    We consider different approaches to simulating a modern X-ray beamline. Several methodologies of increasing complexity are applied to discuss the relevant parameters that quantify the beamline performance. Parameters such as flux, the dimensions and intensity distribution of the focused beam, and coherence properties are obtained by methods ranging from simple analytical calculations to sophisticated computer simulations using ray-tracing and wave-optics techniques. A latest-generation X-ray nanofocusing beamline for coherent applications (ID16A at the ESRF) has been chosen to study in detail the issues related to highly demagnifying synchrotron sources and exploiting the beam coherence. The performance of the beamline is studied for two storage rings: the old ESRF-1 (emittance 4000 pm) and the new ESRF-EBS (emittance 150 pm). In addition to traditional results in terms of flux and beam sizes, an innovative study of the partial coherence properties, based on the propagation of coherent modes, is presented. The different algorithms and methodologies are implemented in the software suite OASYS and are discussed with emphasis on the benefits and limitations of each.
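
    The simplest analytical level of such a hierarchy is a thin-lens estimate of the focused beam size: the geometric image of the source, demagnified by q/p, combined in quadrature with the diffraction limit of the focusing optic. A small sketch of that estimate, using placeholder numbers rather than ID16A parameters:

        # Back-of-the-envelope focus estimate: geometric demagnification of the
        # source combined in quadrature with the diffraction limit of the optic.
        # All numbers below are placeholders for illustration, not ID16A values.
        import math

        def focused_fwhm(source_fwhm, p, q, wavelength, aperture):
            """Approximate FWHM of the focal spot.

            source_fwhm : source size (m)
            p, q        : source-to-optic and optic-to-focus distances (m)
            wavelength  : photon wavelength (m)
            aperture    : effective aperture of the focusing optic (m)
            """
            geometric = source_fwhm * q / p               # demagnified source image
            na = aperture / (2.0 * q)                     # numerical aperture (small angles)
            diffraction = 0.88 * wavelength / (2.0 * na)  # slit-like diffraction FWHM
            return math.hypot(geometric, diffraction)

        # Example: a 30 um source 180 m from the optic, focused 5 cm downstream
        # through a 200 um aperture at a 0.07 nm wavelength (~18 keV).
        print(focused_fwhm(30e-6, 180.0, 0.05, 0.07e-9, 200e-6))  # ~1.8e-8 m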

    Instant global illumination on the GPU using OptiX

    OptiX, a programmable ray tracing engine, has recently been made available by NVIDIA, relieving rendering researchers from the idiosyncrasies of efficient ray tracing programming and allowing them to concentrate on higher-level algorithms, such as interactive global illumination. This paper evaluates the performance of the Instant Global Illumination algorithm on OptiX, as well as the impact of three different optimization techniques: imperfect visibility, downsampling and interleaved sampling. Results show that interactive frame rates are indeed achievable, although the combination of all optimization techniques leads to the appearance of artifacts that compromise image quality. Suggestions are presented on possible ways to overcome these limitations.
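
    Instant Global Illumination approximates indirect light by depositing virtual point lights (VPLs) along traced light paths and then, at every shaded point, summing each VPL's contribution gated by a shadow ray; "imperfect visibility" cheapens that shadow test, while interleaved sampling and downsampling reduce how many VPLs or pixels are evaluated. A sketch of the gathering step (Vpl and trace_shadow_ray are illustrative placeholders, not OptiX API calls):

        # Sketch of the VPL gathering step in an instant-radiosity style
        # renderer. All types and callables here (Vpl, trace_shadow_ray) are
        # illustrative placeholders, not the OptiX API used in the paper.
        import numpy as np
        from dataclasses import dataclass

        @dataclass
        class Vpl:
            position: np.ndarray   # world-space position
            normal: np.ndarray     # orientation of the emitting surface
            flux: np.ndarray       # RGB power carried by this virtual light

        def gather_vpls(point, normal, albedo, vpls, trace_shadow_ray, clamp=1e-2):
            """Diffuse radiance at `point` as a sum over virtual point lights.

            trace_shadow_ray(a, b) should return True if the segment a->b is
            unoccluded; 'imperfect visibility' swaps in a cheaper, approximate
            test for this call."""
            radiance = np.zeros(3)
            for vpl in vpls:
                d = vpl.position - point
                dist2 = max(float(d @ d), clamp)   # clamping avoids VPL spikes
                w = d / np.sqrt(dist2)
                cos_r = max(0.0, float(normal @ w))
                cos_l = max(0.0, float(vpl.normal @ -w))
                if cos_r <= 0.0 or cos_l <= 0.0:
                    continue
                if not trace_shadow_ray(point, vpl.position):
                    continue
                radiance += vpl.flux * cos_r * cos_l / (np.pi * dist2)
            return albedo / np.pi * radiance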

    Parallel interactive ray tracing and exploiting spatial coherence

    Master's dissertation in Informatics Engineering. Ray tracing is a rendering technique that allows simulating a wide range of light transport phenomena, resulting in highly realistic computer-generated images. Ray tracing is, however, computationally very demanding compared to other techniques such as rasterization, which achieves shorter rendering times by greatly simplifying the physics of light propagation, at the cost of less realistic images. The complexity of the ray tracing algorithm makes it unusable for interactive applications on machines without dedicated hardware, such as GPUs. The extremely task-independent nature of the algorithm offers great potential for parallel processing, increasing the available computational power by using additional resources. This thesis studies different approaches to, and enhancements of, workload decomposition and load balancing in a distributed shared memory cluster in order to achieve interactive frame rates. This thesis also studies approaches to enhance the ray tracing algorithm by reducing the computational demand without decreasing the quality of the results. To achieve this goal, optimizations that depend on the rays' processing order were implemented. An alternative to the traditional scan-line traversal order of the image plane, based on space-filling curves, is studied. Results have shown linear speed-ups of the ray tracer used on a distributed shared memory cluster. They have also shown that spatial coherence can be used to increase the performance of the ray tracing algorithm and that the improvement depends on the traversal order of the image plane.
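
    One common space-filling curve for image-plane traversal is the Morton (Z-order) curve, which keeps consecutively traced rays close together on screen and therefore more coherent in the scene's acceleration structure. A small sketch of Morton-ordered tile traversal (illustrative; the dissertation's actual curve and tiling may differ):

        # Sketch: traverse image tiles in Morton (Z-order) order instead of
        # scan-line order, so consecutive rays stay spatially coherent.
        # Illustrative only; the dissertation may use a different curve/tiling.

        def part1by1(n):
            """Spread the bits of a 16-bit integer apart (... b2 0 b1 0 b0)."""
            n &= 0xFFFF
            n = (n | (n << 8)) & 0x00FF00FF
            n = (n | (n << 4)) & 0x0F0F0F0F
            n = (n | (n << 2)) & 0x33333333
            n = (n | (n << 1)) & 0x55555555
            return n

        def morton_key(x, y):
            """Interleave the bits of x and y into a single Z-order key."""
            return part1by1(x) | (part1by1(y) << 1)

        def morton_tile_order(tiles_x, tiles_y):
            """All (tx, ty) tile coordinates sorted along the Z-order curve."""
            tiles = [(tx, ty) for ty in range(tiles_y) for tx in range(tiles_x)]
            return sorted(tiles, key=lambda t: morton_key(t[0], t[1]))

        # Example: the order in which an 8x8 grid of tiles would be rendered.
        print(morton_tile_order(8, 8)[:8])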

    Mapping Atomic Motions with Electrons: Toward the Quantum Limit to Imaging Chemistry

    Recent advances in ultrafast electron and X-ray diffraction have pushed imaging of structural dynamics into the femtosecond time domain, that is, the fundamental time scale of atomic motion. New physics can be reached beyond the scope of traditional diffraction or reciprocal-space imaging. By exploiting the high time resolution, it has been possible to directly observe the collapse of nearly innumerable possible nuclear motions to a few key reaction modes that direct chemistry. It is this reduction in dimensionality in the transition-state region that makes chemistry a transferable concept, with the same class of reactions being applicable to synthetic strategies of nearly arbitrary levels of complexity. The ability to image the underlying key reaction modes has been achieved with relative changes in atomic positions resolved to better than 0.01 Å, that is, comparable to thermal motions. We have effectively reached the fundamental space-time limit with respect to the reaction energetics and imaging the acting forces. In the process of measuring ensemble-averaged structural changes, however, we have missed the quantum aspects of chemistry. This Perspective reviews the current state of the art in imaging chemistry in action and poses the challenge of accessing quantum information on the dynamics. With the present ultrabright electron and X-ray sources there is the possibility, at least in principle, of performing tomographic reconstruction of quantum states in the form of a Wigner function and density matrix for the vibrational, rotational, and electronic degrees of freedom. Accessing this quantum information constitutes the ultimate demand on the spatial and temporal resolution of reciprocal-space imaging of chemistry. Given the much shorter wavelength, and correspondingly intrinsically higher spatial resolution, of current electron sources compared with X-rays, this Perspective will focus on electrons to provide an overview of the challenge, on both the theoretical and experimental fronts, of extracting the quantum aspects of molecular dynamics.
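
    For reference, the phase-space object that such a tomographic reconstruction targets is the Wigner function; for a pure vibrational state psi(x) it takes the standard textbook form (quoted here for context, not derived in the Perspective):

        % Standard Wigner quasiprobability distribution for a pure state \psi(x);
        % for a mixed state, \psi^{*}(x+y)\,\psi(x-y) is replaced by the density
        % matrix element \rho(x+y,\, x-y).
        W(x, p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
                  \psi^{*}(x + y)\, \psi(x - y)\, e^{2 i p y / \hbar}\, \mathrm{d}y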