
    Depth of field guided visualisation on light field displays

    Light field displays are capable of realistic visualization of arbitrary 3D content. However, due to the finite number of light rays reproduced by the display, its bandwidth is limited in terms of angular and spatial resolution. Consequently, 3D content that falls outside that bandwidth causes aliasing during visualization, so a light field must be properly preprocessed before it is shown. In this thesis, we propose three methods that filter the parts of the input light field that would cause aliasing. The first method is based on a 2D FIR circular filter applied over the 4D light field. The second method exploits the structured nature of the epipolar plane images that represent the light field. The third method adopts real-time multi-layer depth-of-field rendering using tiled splatting. We also establish a connection between the lens parameters of the proposed depth-of-field rendering and the display's bandwidth in order to determine the optimal amount of blur. Because the light field is prepared for a light field display, a stage is added to the proposed real-time rendering pipeline that renders adjacent views simultaneously. The rendering performance of the proposed methods is demonstrated on Holografika's Holovizio 722RC projection-based light field display.
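    The link drawn above between lens parameters and the amount of blur can be illustrated with the standard thin-lens circle-of-confusion formula. The sketch below is only a generic illustration of that relationship, not the thesis's derivation, and all parameter values are invented:

```python
# Hedged sketch: thin-lens circle-of-confusion (CoC) as a proxy for the
# depth-of-field blur amount discussed above. Parameter values are invented
# for illustration; the thesis derives the blur from the display bandwidth.
import numpy as np

def coc_diameter(depth, focus_dist, focal_len, f_number):
    """CoC diameter on the sensor (same length unit as the inputs)."""
    aperture = focal_len / f_number                       # aperture diameter
    return aperture * (focal_len / (focus_dist - focal_len)) \
                    * np.abs(depth - focus_dist) / depth

depths = np.array([0.5, 1.0, 2.0, 4.0, 8.0])              # scene depths in metres
coc = coc_diameter(depths, focus_dist=2.0, focal_len=0.05, f_number=2.8)
for d, c in zip(depths, coc):
    print(f"depth {d:4.1f} m -> CoC {c * 1000:.2f} mm")
```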

    Transition Contour Synthesis with Dynamic Patch Transitions

    In this article, we present a novel approach for modulating the shape of transitions between terrain materials to produce detailed and varied contours where blend resolution is limited. Whereas texture splatting and blend mapping add detail to transitions at the texel level, our approach addresses the broader shape of the transition by introducing intermittency and irregularity. Our results show that enriched detail of the blend contour can be achieved at a performance competitive with existing approaches, without additional texture or geometry resources or asset preprocessing. We achieve this by compositing blend masks on the fly, subdividing texture space into differently sized patches to produce irregular contours from minimal artistic input. Our approach is of particular importance for applications where GPU resources or artistic input are limited or impractical.
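    To make the patch-subdivision idea above concrete, the following minimal sketch decides a blend mask per patch rather than per texel; the patch sizes and the probabilistic keep rule are my own assumptions, not the article's shader:

```python
# Minimal sketch of a patch-based transition mask: texture space is cut into
# patches of two sizes, and each patch is kept or discarded by comparing the
# blend weight at its centre against a per-patch pseudo-random threshold.
# This only mimics the idea described above; it is not the article's method.
import numpy as np

def patch_transition_mask(blend, patch_sizes=(16, 8), seed=0):
    h, w = blend.shape
    mask = np.zeros((h, w), dtype=bool)
    for layer, size in enumerate(patch_sizes):
        rng = np.random.default_rng(seed + layer)
        for y0 in range(0, h, size):
            for x0 in range(0, w, size):
                centre = blend[min(y0 + size // 2, h - 1),
                               min(x0 + size // 2, w - 1)]
                if centre > rng.random():                 # probabilistic keep
                    mask[y0:y0 + size, x0:x0 + size] = True
    return mask

# Horizontal blend gradient: 0 = material A, 1 = material B.
blend = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
mask = patch_transition_mask(blend)
print("fraction of texels assigned to material B:", mask.mean())
```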

    Real-time transition texture synthesis for terrains.

    Depicting the transitions where differing material textures meet on a terrain surface presents a unique set of challenges in the field of real-time rendering. Natural landscapes are inherently irregular and composed of complex interactions between many different material types of effectively endless detail and variation. Although consumer-grade graphics hardware becomes ever more powerful with each successive generation, terrain texturing remains a trade-off between realism and the computational resources available. Technological constraints aside, there is still the challenge of generating the texture resources to represent terrain surfaces that can span many hundreds or even thousands of square kilometres. Producing such textures by hand is often impractical when operating on a restricted budget of time and funding. This thesis presents two novel algorithms for generating texture transitions in real time using automated processes. The first algorithm, Feature-Based Probability Blending (FBPB), automates the task of generating transitions between material textures containing salient features. As such features protrude through the terrain surface, FBPB ensures that their topography is maintained at transitions in a realistic manner. The transitions themselves are generated using a probabilistic process that also dynamically adds wear and tear, introducing high-frequency detail and irregularity at the transition contour. The second algorithm, Dynamic Patch Transitions (DPT), extends FBPB by applying the probabilistic transition approach to material textures that contain no salient features. By breaking texture space into a series of layered patches that are either rendered or discarded on a probabilistic basis, the contour of the transition is greatly increased in resolution and irregularity. When used in conjunction with high-frequency detail techniques, such as alpha masking, DPT can produce endless, detailed, irregular transitions without the need for artistic input.
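    As a rough illustration of the probabilistic blending idea behind FBPB, the sketch below lets a heightmap stand in for the salient features and uses plain uniform noise for the wear-and-tear term; it is a simplification under my own assumptions, not the thesis's implementation:

```python
# Rough sketch of feature-aware probabilistic blending: texels whose feature
# height rises above the local blend level keep material A, and a noisy
# threshold roughens the contour. Not the thesis's FBPB implementation.
import numpy as np

def probabilistic_blend(feature_height, blend_weight, noise_amp=0.15, seed=1):
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-noise_amp, noise_amp, feature_height.shape)
    # Material B wins where the blend weight (plus wear-and-tear noise)
    # exceeds the protruding feature height.
    return (blend_weight + noise) > feature_height

h = w = 256
yy, xx = np.mgrid[0:h, 0:w] / h
feature_height = 0.5 + 0.3 * np.sin(8 * np.pi * xx) * np.sin(8 * np.pi * yy)
blend_weight = yy                                    # transition along one axis
selects_b = probabilistic_blend(feature_height, blend_weight)
print("texels using material B:", int(selects_b.sum()))
```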

    A Framework for Dynamic Terrain with Application in Off-road Ground Vehicle Simulations

    This dissertation develops a framework for the visualization of dynamic terrain in interactive, real-time 3D systems. Terrain visualization techniques may be classified as either static or dynamic. Static terrain solutions simulate rigid surface types exclusively, whereas dynamic solutions can also represent non-rigid surfaces. Systems that employ a static terrain approach lack realism because of their rigid nature; disregarding accurate terrain surface interaction is usually rationalized by the inherent difficulty of providing run-time dynamism. Dynamic terrain systems are nonetheless the more faithful solution, because they allow the terrain database to be modified at run time in order to deform the surface. Many established terrain visualization techniques rely on invalid assumptions and weak computational models that hinder the use of dynamic terrain, and many do not exploit the capabilities offered by current computer hardware. In this research, we present a component framework for terrain visualization that is useful in research, entertainment, and simulation systems, together with a novel method for deforming the terrain in real-time, interactive systems. The component framework unifies disparate works under a single architecture; its high-level nature makes it flexible and adaptable for developing a variety of systems, independent of whether the solution is static or dynamic. Currently, only a handful of deformation techniques are documented and, in particular, none make explicit use of graphics hardware. The approach developed in this research offloads the extra work to the graphics processing unit in an effort to alleviate the overhead associated with deforming the terrain. Off-road ground vehicle simulation is used as an application domain to demonstrate the practical nature of the framework and the deformation technique. To realistically simulate terrain surface interaction with the vehicle, the solution balances visual fidelity and speed. Accurately depicting terrain surface interaction in off-road ground vehicle simulations improves visual realism, thereby increasing the significance and worth of the application. Systems in academia, government, and commercial institutions can use these research findings to achieve real-time display of interactive terrain surfaces.
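    A toy, CPU-side version of run-time terrain deformation is sketched below; the circular footprint and volume-conserving rim are my own simplifications, whereas the dissertation offloads this work to the graphics hardware:

```python
# Toy dynamic-terrain deformation: push a circular footprint into a heightmap
# and pile the displaced material into a rim, so the surface can change at
# run time. A CPU stand-in, not the GPU-based technique described above.
import numpy as np

def deform(heights, cx, cy, radius, depth):
    h, w = heights.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    inside = dist < radius
    rim = (dist >= radius) & (dist < 1.5 * radius)
    removed = depth * (1.0 - dist[inside] / radius)      # deepest at the centre
    heights[inside] -= removed
    if rim.any():                                        # conserve material volume
        heights[rim] += removed.sum() / rim.sum()
    return heights

terrain = np.zeros((64, 64))
deform(terrain, cx=32, cy=32, radius=6, depth=0.2)
print("min/max height after one wheel pass:", terrain.min(), terrain.max())
```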

    Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures

    Improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects, such as optimisations, adapting existing offline methods to real-time constraints, and adding effects that were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendered in real time. Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for a good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation that trades off some quality for a 2–3× improvement in rendering time. The tracing of all the photons, especially when long paths are needed, had become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure that allows reusing as many of them as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach that leverages ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects. Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics but not as efficient for direct lighting estimation. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast, but at the cost of introducing bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it as a two-level system, making it possible to update only the parts containing moving lights, and to do so more efficiently. We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
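    The light-picking idea, choosing one light per shading point with probability proportional to an importance estimate and dividing by that probability to stay unbiased, can be sketched as follows; the flat importance table and the distance-based heuristic are placeholders for the hierarchical acceleration structure the thesis actually builds:

```python
# Sketch of unbiased stochastic light selection: sample one light with
# probability proportional to a cheap importance estimate and weight its
# contribution by 1/pdf. The flat table below stands in for the hierarchical
# acceleration structure described above.
import numpy as np

rng = np.random.default_rng(42)
light_pos = rng.uniform(-10, 10, size=(1000, 3))         # 1000 point lights
light_power = rng.uniform(1.0, 50.0, size=1000)

def sample_light(shading_point):
    d2 = np.sum((light_pos - shading_point) ** 2, axis=1)
    importance = light_power / np.maximum(d2, 1e-4)      # unshadowed estimate
    pdf = importance / importance.sum()
    idx = rng.choice(len(pdf), p=pdf)
    contribution = light_power[idx] / max(d2[idx], 1e-4)
    return contribution / pdf[idx]                       # unbiased estimate

samples = [sample_light(np.array([0.0, 0.0, 0.0])) for _ in range(64)]
print("estimated direct lighting (arbitrary units):", np.mean(samples))
```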

    Dr.Bokeh: DiffeRentiable Occlusion-aware Bokeh Rendering

    Bokeh is widely used in photography to draw attention to the subject while effectively isolating distractions in the background. Computational methods simulate bokeh effects without relying on a physical camera lens, but digital bokeh synthesis faces two main challenges: color bleeding and partial occlusion at object boundaries. Our primary goal is to overcome these two challenges using the physical principles that define bokeh formation. To achieve this, we propose a novel and accurate filtering-based bokeh rendering equation and a physically based, occlusion-aware bokeh renderer, dubbed Dr.Bokeh, which addresses these challenges during the rendering stage without the need for post-processing or data-driven approaches. Our rendering algorithm first preprocesses the input RGB-D data to obtain a layered scene representation. Dr.Bokeh then takes the layered representation and user-defined lens parameters to render photo-realistic lens blur. By softening non-differentiable operations, we make Dr.Bokeh differentiable so that it can be plugged into a machine-learning framework. We perform quantitative and qualitative evaluations on synthetic and real-world images to validate the rendering quality and the differentiability of our method. We show that Dr.Bokeh not only outperforms state-of-the-art bokeh rendering algorithms in terms of photo-realism but also improves depth quality in depth-from-defocus.
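    A heavily reduced form of layered lens-blur compositing is sketched below, assuming just two layers, a uniform disk kernel per layer, and no colour-bleeding handling, so it falls well short of what Dr.Bokeh does; it only illustrates the layered-representation idea:

```python
# Greatly simplified layered lens blur: split an RGB-D image into a far and a
# near layer, blur each with a disk kernel sized by its circle of confusion,
# and composite front over back using the blurred alpha. This omits the
# occlusion-aware weighting that Dr.Bokeh introduces.
import numpy as np
from scipy.ndimage import convolve

def disk_kernel(radius):
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    k = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

def layered_bokeh(rgb, depth, focus_depth, split_depth, max_radius=6.0):
    near = depth < split_depth                           # binary layer split
    out = np.zeros_like(rgb)
    # Composite back-to-front: far layer first, then near layer over it.
    for layer_mask in (~near, near):
        mean_d = depth[layer_mask].mean()
        radius = max(1.0, max_radius * abs(mean_d - focus_depth))
        k = disk_kernel(radius)
        alpha = convolve(layer_mask.astype(float), k, mode="nearest")
        colour = np.stack([convolve(rgb[..., c] * layer_mask, k, mode="nearest")
                           for c in range(3)], axis=-1)
        colour = colour / np.maximum(alpha, 1e-6)[..., None]
        out = colour * alpha[..., None] + out * (1.0 - alpha[..., None])
    return np.clip(out, 0.0, 1.0)

rgb = np.random.default_rng(0).random((64, 64, 3))
depth = np.tile(np.linspace(0.2, 1.0, 64), (64, 1))       # synthetic depth ramp
result = layered_bokeh(rgb, depth, focus_depth=1.0, split_depth=0.6)
print(result.shape)
```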

    Parallel Rendering and Large Data Visualization

    We are living in the big data age: an ever-increasing amount of data is being produced through data acquisition and computer simulations. While large-scale analysis and simulation have received significant attention in cloud and high-performance computing, software to efficiently visualise large data sets is struggling to keep up. Visualization has proven to be an efficient tool for understanding data; in particular, visual analysis is a powerful way to gain intuitive insight into the spatial structure and relations of 3D data sets. Large-scale visualization setups are becoming ever more affordable, and high-resolution tiled display walls are within reach even for small institutions. Virtual reality has arrived in the consumer space, making it accessible to a large audience. This thesis addresses these developments by advancing the field of parallel rendering. We formalise the design of system software for large data visualization through parallel rendering, provide a reference implementation of a parallel rendering framework, introduce novel algorithms to accelerate the rendering of large amounts of data, and validate this research and development with new applications for large data visualization. Applications built using our framework enable domain scientists and large data engineers to better extract meaning from their data, making it feasible to explore more data and enabling the use of high-fidelity visualization installations to see more detail of the data. Comment: PhD thesis.
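    One of the core operations in parallel rendering, depth compositing of partial framebuffers in a sort-last setup, reduces to a per-pixel depth test across nodes. The sketch below is my own bare-bones illustration, not the framework's API:

```python
# Bare-bones sort-last compositing: each render node produces a colour and a
# depth buffer for its share of the data, and the final image keeps, per
# pixel, the fragment closest to the camera. A real parallel rendering
# framework does this (and much more) across machines.
import numpy as np

def depth_composite(colours, depths):
    """colours: (n_nodes, H, W, 3); depths: (n_nodes, H, W)."""
    nearest = np.argmin(depths, axis=0)                  # winning node per pixel
    h, w = nearest.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return colours[nearest, yy, xx]

rng = np.random.default_rng(7)
n_nodes, h, w = 4, 32, 32
colours = rng.random((n_nodes, h, w, 3))
depths = rng.random((n_nodes, h, w))
image = depth_composite(colours, depths)
print(image.shape)                                       # (32, 32, 3)
```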

    Efficient Geometry and Illumination Representations for Interactive Protein Visualization

    This dissertation explores techniques for interactive simulation and visualization of large protein datasets. My thesis is that using efficient representations for geometric and illumination data can help in developing algorithms that achieve better interactivity for visual and computational proteomics. I show this by developing new algorithms for the computation and visualization of proteins. I also show that the same insights that resulted in better algorithms for visual proteomics can be turned around and used for more efficient graphics rendering. Molecular electrostatics is important for studying the structures and interactions of proteins, and is vital in many computational biology applications, such as protein folding and rational drug design. We have developed a system to efficiently solve the non-linear Poisson-Boltzmann equation (PBE) governing molecular electrostatics. Our system simultaneously improves the accuracy and the efficiency of the solution by adaptively refining the computational grid near the solute-solvent interface. In addition, we have explored the possibility of mapping the PBE solution onto GPUs. We use pre-computed accumulation of transparency with spherical-harmonics-based compression to accelerate volume rendering of molecular electrostatics. We have also designed a time- and memory-efficient algorithm for interactive visualization of large dynamic molecules. With view-dependent precision control and memory-bandwidth reduction, we have achieved real-time visualization of dynamic molecular datasets with tens of thousands of atoms. Our algorithm is linearly scalable in the size of the molecular datasets. In addition, we present a compact mathematical model to efficiently represent the six-dimensional integrals of bidirectional surface scattering reflectance distribution functions (BSSRDFs) so that scattering effects in translucent materials can be rendered interactively. Our analysis first reduces the complexity and dimensionality of the problem by decomposing the reflectance field into non-scattered and subsurface-scattered reflectance fields. While the non-scattered reflectance field can be described by 4D bidirectional reflectance distribution functions (BRDFs), we show that the scattered reflectance field can also be represented by a 4D field through pre-processing of the neighborhood scattering radiance transfer integrals. We use a novel reference-points scheme to compactly represent the pre-computed integrals using a hierarchical and progressive spherical harmonics representation. Our algorithm scales linearly with the number of mesh vertices.
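    For the electrostatics part, a deliberately crude point of reference: the sketch below runs Jacobi iterations for the linearized Poisson-Boltzmann equation on a uniform grid with a single point charge, whereas the dissertation solves the non-linear equation on an adaptively refined grid; the grid spacing, inverse Debye length, and permittivity values are made up:

```python
# Crude sketch: Jacobi iterations for the *linearized* Poisson-Boltzmann
# equation  laplacian(phi) - kappa^2 * phi = -rho / eps  on a uniform grid,
# with a single point charge. The dissertation instead solves the non-linear
# equation with adaptive refinement near the solute-solvent interface.
import numpy as np

n, h_grid = 33, 0.5          # grid points per axis, spacing (arbitrary units)
kappa, eps = 1.0, 1.0        # inverse Debye length, permittivity (made up)
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0 / h_grid**3             # unit point charge
phi = np.zeros_like(rho)

for _ in range(200):                                      # fixed iteration budget
    neighbours = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
                  np.roll(phi, 1, 2) + np.roll(phi, -1, 2))
    phi = (neighbours + h_grid**2 * rho / eps) / (6.0 + (kappa * h_grid)**2)
    phi[0, :, :] = phi[-1, :, :] = 0.0                    # zero Dirichlet boundary
    phi[:, 0, :] = phi[:, -1, :] = 0.0
    phi[:, :, 0] = phi[:, :, -1] = 0.0

print("potential at the charge:", phi[n // 2, n // 2, n // 2])
```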

    Game Environment Texturing: Texture Blending and Other Texturing Techniques

    This thesis demonstrates how real-time game environments can be textured smoothly and efficiently and identifies which methods can be used for this purpose. I point out the limitations of traditional texturing methods and propose more efficient alternatives to them. I have gained a thorough understanding of many texturing methods by studying several sources and by using the methods both at work and in my free time, and I discuss their properties as well as their pros and cons with reference to real-world examples. The thesis reports on my research into the use of blend maps in real-time texturing, together with an in-depth look at how I developed a real-time blend shader in Blender for this purpose for the Bugbear Entertainment Environment Art team. Finally, I present some other alternative texturing methods, such as modular texturing, that I have researched and used in my line of work, and I also highlight some promising methods that I have studied in my free time but have not yet tried out myself. I have gained insight into the methods discussed in the thesis through my work experience at the Bugbear Entertainment and Remedy Entertainment game companies. I mainly draw on knowledge gained during the development of Bugbear Entertainment's Next Car Game: Wreckfest and an unreleased Remedy Entertainment mobile game: the blend material techniques used in the thesis are primarily based on my work on Wreckfest, and the modular texturing and modeling techniques were mainly acquired during the development of the Remedy Entertainment mobile game project. I rely on this work experience and on information received from other industry professionals, while backing this knowledge with references to several sources. I believe that my research into different texturing methods can be useful for anyone interested in real-time graphics. I would have liked to develop my shader further to bring it even more in line with the one used at Bugbear, but even in its current state I feel it can be of great help. The shader is freely downloadable from the internet and can be used inside Blender without restriction. Traditional texture blending remains a viable method for modern texturing, even though texture blending has evolved in several ways and many new alternative texturing methods have arisen.
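    As a small companion to the blend-map discussion, the sketch below shows a generic height-based blend of two materials in NumPy; it is a common technique rather than the Blender shader built in the thesis, and all inputs are synthetic:

```python
# Generic height-based texture blending, a common alternative to a plain
# linear blend: each material contributes where its heightmap plus the blend
# weight wins, and a sharpness term controls how crisp the contour is. This
# is a NumPy stand-in, not the Blender shader described in the thesis.
import numpy as np

def height_blend(tex_a, tex_b, height_a, height_b, blend, sharpness=0.2):
    # Higher "score" wins; sharpness softens the transition band.
    score_a = height_a + (1.0 - blend)
    score_b = height_b + blend
    start = np.maximum(score_a, score_b) - sharpness
    w_a = np.maximum(score_a - start, 0.0)
    w_b = np.maximum(score_b - start, 0.0)
    w = (w_b / (w_a + w_b))[..., None]
    return tex_a * (1.0 - w) + tex_b * w

rng = np.random.default_rng(3)
size = (128, 128)
tex_a, tex_b = rng.random(size + (3,)), rng.random(size + (3,))
height_a, height_b = rng.random(size), rng.random(size)
blend = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))    # simple blend map
blended = height_blend(tex_a, tex_b, height_a, height_b, blend)
print(blended.shape)
```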