390 research outputs found

    A perceptual approach for stereoscopic rendering optimization

    Cataloged from PDF version of article. The traditional way of stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually based approach for accelerating stereoscopic rendering. This optimization approach is based on the Binocular Suppression Theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high-quality view has more intensity contrast. For this reason, we performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and thus can be used for stereoscopic rendering. (C) 2009 Elsevier Ltd. All rights reserved
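    The feasibility test the abstract describes, degrading one view only when the change lowers intensity contrast so that the unmodified view dominates the fused percept, can be sketched as a simple check. This is a toy illustration; `rms_contrast` and the threshold rule are assumptions, not the paper's actual metric:

```python
import numpy as np

def rms_contrast(gray):
    """Root-mean-square intensity contrast of a grayscale region."""
    mean = gray.mean()
    if mean == 0:
        return 0.0
    return gray.std() / mean

def simplification_is_safe(original_view, simplified_view):
    """Heuristic from the abstract: a single-view modification tends to be
    imperceptible in the fused stereo percept when it lowers intensity
    contrast, so the unmodified (higher-contrast) view dominates."""
    return rms_contrast(simplified_view) <= rms_contrast(original_view)
```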

    Visual Importance-Biased Image Synthesis Animation

    Present-day ray tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work dealt with the development of an overall approach to the application of visual attention to progressive and adaptive ray tracing techniques. The approach facilitates large computational savings by modulating the supersampling rates in an image according to the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as it is expected that further efficiency savings can be reaped for animated scenes. Applications for this approach include entertainment, visualisation and simulation
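    The core idea of modulating supersampling rates by visual importance can be sketched as follows. The sample counts and the linear ramp are illustrative assumptions, not the paper's model:

```python
import numpy as np

def samples_per_pixel(importance, min_spp=1, max_spp=16):
    """Modulate the supersampling rate by visual importance: perceptually
    salient regions receive up to max_spp rays per pixel, unimportant
    regions as few as min_spp. `importance` is a per-pixel map in [0, 1]."""
    imp = np.clip(importance, 0.0, 1.0)
    return np.rint(min_spp + imp * (max_spp - min_spp)).astype(int)
```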

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that will allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of the rendering budgets
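    The traditional ray marching algorithm the thesis takes as its starting point can be sketched in its simplest emission-absorption form. Names are illustrative and emission is taken as unit for brevity; the thesis's optimized variants are more involved:

```python
import math

def ray_march(densities, step, sigma_t=1.0):
    """Minimal emission-absorption ray marcher. `densities` are medium
    samples taken at fixed intervals of length `step` along the ray;
    returns accumulated (radiance, transmittance)."""
    transmittance = 1.0
    radiance = 0.0
    for d in densities:
        # attenuation over this segment (Beer-Lambert law)
        segment_t = math.exp(-sigma_t * d * step)
        # light emitted by this segment, attenuated by the medium in front
        radiance += transmittance * (1.0 - segment_t)  # unit emission
        transmittance *= segment_t
    return radiance, transmittance
```

A vacuum ray keeps full transmittance and gathers no emission, while an optically thick ray saturates toward unit radiance.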

    Doctor of Philosophy

    Dissertation. I present a new migration algorithm denoted as generalized diffraction-stack migration (GDM). Unlike traditional diffraction-stack migration, it accounts for all arrivals in the wavefield, including two-way primaries and multiple arrivals, and it is not subject to the high-frequency approximation of ray tracing. It is as accurate as reverse-time migration (RTM), but, unlike RTM, filtering and muting can be easily applied to the migration operator to reduce artifacts due to aliasing and unwanted events such as multiples. Unlike RTM, GDM can be applied to common offset gathers. The main drawback of GDM is that it can be more than an order of magnitude more computationally expensive than RTM, and requires much more memory for efficient use. To mitigate some of these disadvantages, I present a multisource least-squares GDM method with phase-encoding. There are six chapters presented after the introduction. Chapter 2 derives the GDM equation by reformulating the standard RTM equation, and shows how GDM is related to traditional diffraction-stack migration. Chapter 3 shows how the GDM kernel can be filtered to eliminate coherent noise in the migration image. This precise filtering of the migration operator cannot be done with the standard RTM approach, but it can now be performed with the GDM method. In Chapter 4, I develop an antialiasing filter for GDM. This idea is adapted from the traditional antialiasing strategy for Kirchhoff migration, except GDM antialiasing accounts for both primary and multiple reflection events. This is a novel antialiasing filter that can be used for filtering the RTM-like imaging operator. In Chapter 5, I show how to mute or filter the GDM operator to emphasize multiple reflection events. I split the GDM operator into two separate parts, the primary migration operator and the multiple migration operator. By computing the dot-product of the migration operators with the data, followed by an optimal stack of the primary-only image and the multiple-only image, a higher resolution in the migration image can be achieved. An additional benefit is that cross-talk between primary and multiple scattered arrivals, often seen in conventional RTM images, is greatly attenuated. Finally, Chapter 6 presents an efficient implementation of least-squares GDM with supergathers. The supergather consists of a blend of many encoded shot gathers, each one with a unique encoding function that mitigates crosstalk in the migration image. A unique feature of GDM is that the Green's functions (computed by a finite-difference solution to the wave equation) can be reused at each iteration. Unlike conventional least-squares RTM, no new finite-difference simulations are needed to get the updated migration image. This can result in almost two orders of magnitude reduction in cost for iterative least-squares migration. Furthermore, when least-squares GDM is combined with phase-encoded multisource technology, the cost savings are even greater. This is a subject that is discussed in Chapter 7. The main challenge with GDM is that it demands much more memory and I/O cost than the standard RTM algorithm. As a partial remedy, Appendix A describes how to efficiently compute the migration operators either in a target-oriented mode or by using wave-equation wavefront modeling. In addition, the intensive I/O and storage costs can be partly, though not fully, mitigated by applying a wavelet transform with compression, where a compression ratio of at least an order of magnitude can be achieved with a small loss of accuracy. This topic is addressed in Appendix B
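    The dot-product imaging and primary/multiple stacking step described above can be sketched in linear-algebra form. This is a toy illustration; the operator shapes, equal weights and names are assumptions, not the dissertation's implementation or stacking criterion:

```python
import numpy as np

def gdm_image(primary_ops, multiple_ops, data, w_primary=0.5, w_multiple=0.5):
    """Migrate the data with the primary-only and multiple-only operators
    separately (one inner product per image point), then stack the two
    partial images. `primary_ops`/`multiple_ops` have one row of operator
    samples per image point; `data` is the recorded trace samples."""
    primary_image = primary_ops @ data    # dot-product imaging, primaries
    multiple_image = multiple_ops @ data  # dot-product imaging, multiples
    return w_primary * primary_image + w_multiple * multiple_image
```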

    Decoupled Sampling for Real-Time Graphics Pipelines

    We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled Sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates
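    The memoization at the heart of decoupled sampling can be sketched as follows. This is a toy illustration; `shade` and `shading_key` stand in for the pipeline's shader and its many-to-one visibility-to-shading mapping:

```python
def shade_visibility_samples(visibility_samples, shade, shading_key):
    """Map each visibility sample to a shading sample via a many-to-one
    key, and memoize shading results so each shading sample is shaded
    once and reused across the visibility samples that map to it."""
    memo = {}      # the memoization buffer
    results = []
    for v in visibility_samples:
        key = shading_key(v)        # many-to-one: visibility -> shading sample
        if key not in memo:
            memo[key] = shade(key)  # shade only on first use
        results.append(memo[key])
    # one color per visibility sample, plus the shader invocation count
    return results, len(memo)
```

With motion blur or depth-of-field, many visibility samples collapse onto one shading sample, so `len(memo)` stays well below the visibility sample count.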

    Acceleration Techniques for Photo Realistic Computer Generated Integral Images

    The research work presented in this thesis has approached the task of accelerating the generation of photo-realistic integral images produced by integral ray tracing. Ray tracing is a computationally exhaustive algorithm, which spawns one or more rays through each pixel of the image into the space containing the scene. Ray tracing integral images consumes more processing time than normal images. The unique characteristics of the 3D integral camera model have been analysed, and it has been shown that different coherency aspects than in normal ray tracing can be investigated in order to accelerate the generation of photo-realistic integral images. The image-space coherence has been analysed, describing the relation between rays and projected shadows in the rendered scene. The shadow cache algorithm has been adapted in order to minimise shadow intersection tests in integral ray tracing; shadow intersection tests make up the majority of the intersection tests in ray tracing. Novel pixel-tracing styles are developed uniquely for integral ray tracing to improve the image-space coherence and the performance of the shadow cache algorithm. Acceleration of photo-realistic integral image generation using the image-space coherence information between shadows and rays in integral ray tracing has been achieved with up to 41% time saving. Also, it has been proven that applying the new styles of pixel-tracing does not affect the scalability of integral ray tracing running over parallel computers. A novel integral reprojection algorithm has been developed uniquely through geometrical analysis of the generation of integral images in order to use the temporo-spatial coherence information within the integral frames. A new derivation of the integral projection matrix for projecting points through an axial model of a lenticular lens has been established. Rapid generation of 3D photo-realistic integral frames has been achieved at a speed four times faster than the normal generation
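    The shadow cache idea adapted above exploits the coherence between adjacent shadow rays: the last object found to occlude a light is tested first, skipping a full scene traversal on a hit. This is an illustrative sketch, not the thesis's implementation; `Sphere` is a stand-in occluder:

```python
class ShadowCache:
    """Per-light cache of the most recent occluder for shadow rays."""
    def __init__(self):
        self.last_occluder = {}   # light id -> cached occluder (or None)

    def in_shadow(self, light_id, shadow_ray, scene_objects):
        cached = self.last_occluder.get(light_id)
        if cached is not None and cached.intersects(shadow_ray):
            return True           # cache hit: one test, no scene traversal
        for obj in scene_objects: # cache miss: fall back to full testing
            if obj is not cached and obj.intersects(shadow_ray):
                self.last_occluder[light_id] = obj
                return True
        self.last_occluder[light_id] = None
        return False

class Sphere:
    """Minimal stand-in occluder; a real tracer intersects actual geometry."""
    def __init__(self, blocks):
        self.blocks = blocks
    def intersects(self, shadow_ray):
        return self.blocks
```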

    Evaluation of optimisation techniques for multiscopic rendering

    A thesis submitted to the University of Bedfordshire in fulfilment of the requirements for the degree of Master of Science by Research. This project evaluates different performance optimisation techniques applied to stereoscopic and multiscopic rendering for interactive applications. The artefact features a robust plug-in package for the Unity game engine. The thesis provides background information for the performance optimisations, outlines all the findings, evaluates the optimisations and provides suggestions for future work. The Scrum development methodology is used to develop the artefact, and a quantitative research methodology is used to evaluate the findings by measuring performance. This project concludes that each performance optimisation has specific use-case scenarios in which it benefits performance. Foveated rendering provides the greatest performance increase for both stereoscopic and multiscopic rendering, but is also more demanding to deploy as it requires an eye-tracking solution. Dynamic resolution is very beneficial when overall frame-rate smoothness is needed and frame drops are present. Depth optimisation is beneficial for vast open environments but can lead to decreased performance if used inappropriately
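    The foveated-rendering trade-off the thesis evaluates can be sketched as a shading-rate falloff around the tracked gaze point. The radius, falloff shape and floor are illustrative assumptions, not the artefact's settings:

```python
import math

def foveated_scale(pixel, gaze, fovea_radius=100.0, min_scale=0.25):
    """Shading-rate scale for a pixel: full rate (1.0) inside the foveal
    region around the tracked gaze point, falling off with eccentricity
    down to `min_scale` in the periphery."""
    ecc = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if ecc <= fovea_radius:
        return 1.0
    return max(min_scale, fovea_radius / ecc)
```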

    In vivo visualization and analysis of 3-D hemodynamics in cerebral aneurysms with flow-sensitized 4-D MR imaging at 3T

    Introduction: Blood-flow patterns and wall shear stress (WSS) are considered to play a major role in the development and rupture of cerebral aneurysms. These hemodynamic aspects have been extensively studied in vitro using geometrically realistic aneurysm models. The purpose of this study was to evaluate the feasibility of in vivo flow-sensitized 4-D MR imaging for analysis of intraaneurysmal hemodynamics. Methods: Five cerebral aneurysms were examined using ECG-gated, flow-sensitized 4-D MR imaging at 3T in three patients. Postprocessing included quantification of flow velocities, visualization of time-resolved 2-D vector graphs and 3-D particle traces, vortical flow analysis, and estimation of WSS. Flow patterns were analyzed in relation to aneurysm geometry and aspect ratio. Results: Magnitude, spatial and temporal evolution of vortical flow differed markedly among the aneurysms. Particularly unstable vortical flow was demonstrated in a wide-necked parophthalmic ICA aneurysm (high aspect ratio). Relatively stable vortical flow was observed in aneurysms with a lower aspect ratio. Except for a wide-necked cavernous ICA aneurysm (low aspect ratio), WSS was reduced in all aneurysms and showed a high spatial variation. Conclusion: In vivo flow-sensitized 4-D MR imaging can be applied to analyze complex patterns of intraaneurysmal flow. Flow patterns, distribution of flow velocities, and WSS seem to be determined by the vascular geometry of the aneurysm. Temporal and spatial averaging effects are drawbacks of the MR-based analysis of flow patterns as well as the estimation of WSS, particularly in small aneurysms. Further studies are needed to establish a direct link between definitive flow patterns and different aneurysm geometries

    Visual attention models and applications to 3D computer graphics

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.) -- Bilkent University, 2012. Includes bibliographical references. 3D computer graphics, with the increasing technological and computational opportunities, has advanced to such high levels that it is possible to generate very realistic computer-generated scenes in real time for games and other interactive environments. However, we cannot claim that computer graphics research has reached its limits. Rendering photo-realistic scenes still cannot be achieved in real time, and improving visual quality and decreasing computational costs are still research areas of great interest. Recent efforts in computer graphics have been directed towards exploiting principles of human visual perception to increase the visual quality of rendering. This is natural since in computer graphics, the main source of evaluation is the judgment of people, which is based on their perception. In this thesis, our aim is to extend the use of perceptual principles in computer graphics. Our contribution is two-fold: first, we present several models to determine the visually important, salient, regions in a 3D scene. Secondly, we contribute to the definition and use of saliency metrics in computer graphics. Human visual attention is composed of two components: the first is stimuli-oriented, bottom-up visual attention, and the second is task-oriented, top-down visual attention. The main difference between these components is the role of the user. In the top-down component, the viewer's intention and task affect perception of the visual scene, as opposed to the bottom-up component. We mostly investigate the bottom-up component, where saliency resides. We define saliency computation metrics for two types of graphical content. Our first metric is applicable to 3D mesh models that are possibly animating, and it extracts saliency values for each vertex of the mesh models. The second metric we propose is applicable to animating objects and finds visually important objects due to their motion behaviours. In a third model, we present how to adapt the second metric for animated 3D meshes. Along with the saliency metrics, we also present possible application areas and a perceptual method to accelerate stereoscopic rendering, which is based on binocular vision principles and makes use of saliency information in a stereoscopic rendering scene. Each of the proposed models is evaluated with formal experiments. The proposed saliency metrics are evaluated via eye-tracker-based experiments, and the computationally salient regions are found to attract more attention in practice too. For the stereoscopic optimization part, we have performed a detailed experiment and verified our model of optimization. In conclusion, this thesis extends the use of human visual system principles in 3D computer graphics, especially in terms of saliency. Bülbül, Muhammed Abdullah. Ph.D.
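    The bottom-up, center-surround notion of saliency underlying these metrics can be sketched in one dimension: an element is salient when its feature value differs from its local surround. This is a toy illustration; the thesis's metrics operate on mesh curvature and motion features, not a 1-D array:

```python
import numpy as np

def center_surround_saliency(values, surround=3):
    """Saliency of each element as the absolute difference between its
    feature value (e.g. curvature, speed) and the mean of its surround."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    sal = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - surround), min(n, i + surround + 1)
        neighborhood = np.r_[values[lo:i], values[i + 1:hi]]
        sal[i] = abs(values[i] - neighborhood.mean()) if len(neighborhood) else 0.0
    return sal
```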

    Reflection seismic waveform tomography
