4,111 research outputs found
Mobile graphics: SIGGRAPH Asia 2017 course
Peer reviewed postprint (published version)
Doctor of Philosophy dissertation
Real-time global illumination is the next frontier in real-time rendering. In an attempt to generate realistic images, games have followed the film industry into physically based shading and will soon begin integrating global illumination techniques. Traditional methods require too much memory and too much computation time for real-time use. With Modular and Delta Radiance Transfer we precompute a scene-independent, low-frequency basis that lets us evaluate complex indirect lighting in a much lower-dimensional subspace, with a reduced memory footprint and real-time execution. The results are then applied as a light map on many different scenes. To improve the low-frequency results, we also introduce a novel screen-space ambient occlusion technique that produces a smoother result with fewer samples. Used together, these low- and high-frequency techniques provide a viable indirect lighting solution that runs in milliseconds on today's hardware, offering a useful new approach to indirect lighting in real-time graphics.
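The core runtime idea described above, shading from a precomputed low-frequency basis, can be sketched in a few lines. This is an illustrative stand-in (random arrays in place of real precomputed transfer data and real lighting coefficients), not the dissertation's actual Modular/Delta Radiance Transfer implementation:

```python
import numpy as np

# Hypothetical sizes: 4 surface points, 9 basis coefficients (e.g. 3-band SH).
rng = np.random.default_rng(0)
n_points, n_basis = 4, 9

# Precomputed transfer matrix: how each basis lighting function contributes
# indirect light at each surface point (the expensive offline step).
T = rng.uniform(0.0, 0.1, size=(n_points, n_basis))

# Distant lighting projected into the same low-frequency basis (runtime input).
light_coeffs = rng.uniform(0.0, 1.0, size=n_basis)

# Runtime shading collapses to one small matrix-vector product per frame,
# which is what makes the reduced-dimension subspace cheap to evaluate.
indirect = T @ light_coeffs
print(indirect.shape)  # (4,)
```

The point of the subspace formulation is visible in the shapes: however complex the scene's light transport, the per-frame cost is a `n_points x n_basis` multiply.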
Deep-learning the Latent Space of Light Transport
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image by a dedicated 3D-2D network in a second step. We show that our approach improves quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.
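The geometric part of the second stage, projecting latent-featured 3D points into a 2D image, can be illustrated with a minimal z-buffered point splat. The `splat_points` function and toy camera intrinsics below are assumptions for illustration, not the paper's learned 3D-2D network:

```python
import numpy as np

def splat_points(points, features, K, img_size):
    """Project 3D points (camera space) to a 2D feature image,
    keeping the nearest point per pixel (a simple z-buffer)."""
    h, w = img_size
    image = np.zeros((h, w, features.shape[1]))
    depth = np.full((h, w), np.inf)
    for p, f in zip(points, features):
        if p[2] <= 0:                # behind the camera
            continue
        uvw = K @ p                  # pinhole projection
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= v < h and 0 <= u < w and p[2] < depth[v, u]:
            depth[v, u] = p[2]
            image[v, u] = f          # nearest point wins the pixel
    return image

K = np.array([[50.0, 0, 32], [0, 50.0, 32], [0, 0, 1]])  # toy intrinsics
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 4.0]])        # two points on the axis
feats = np.array([[1.0], [0.5]])                          # 1-D "latent" features
img = splat_points(pts, feats, K, (64, 64))
print(img[32, 32, 0])  # the nearer point's feature: 1.0
```

In the paper, the feature vectors would be latent codes produced by the 3D network, and a learned network rather than nearest-point selection would resolve the projected features into the final shaded image.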
Frequency Based Radiance Cache for Rendering Animations
We propose a method to render animation sequences with direct distant lighting that shades only a fraction of the total pixels. We leverage frequency-based analyses of light transport to determine shading and image sampling rates across an animation using a samples cache. To do so, we derive frequency bandwidths that account for the complexity of distant lights, visibility, BRDF, and temporal coherence during animation. We finally apply a cross-bilateral filter when rendering our final images from sparse sets of shading points placed according to our frequency-based oracles (generally < 25% of the pixels per frame).
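The final reconstruction step can be sketched as a cross-bilateral filter in 1D, assuming a per-pixel guide signal (e.g. depth) and a mask marking which pixels were actually shaded. This is a minimal illustration of the filtering idea, not the paper's bandwidth-driven sampling oracle:

```python
import numpy as np

def cross_bilateral_1d(shading, guide, mask, sigma_s=2.0, sigma_g=0.1):
    """Fill unshaded pixels from sparse shaded samples, weighting neighbours
    by spatial distance and by similarity in a guide signal (e.g. depth),
    so the interpolation does not bleed across edges."""
    n = len(shading)
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = (np.exp(-(idx - i) ** 2 / (2 * sigma_s ** 2))      # spatial weight
             * np.exp(-(guide - guide[i]) ** 2 / (2 * sigma_g ** 2))  # guide weight
             * mask)                                           # shaded samples only
        out[i] = np.sum(w * shading) / np.sum(w)
    return out

guide = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # a depth edge at index 3
shade = np.array([0.2, 0.0, 0.2, 0.0, 0.8, 0.8])   # zeros = unshaded pixels
mask  = np.array([1, 0, 1, 0, 1, 1], dtype=float)  # which pixels were shaded
print(cross_bilateral_1d(shade, guide, mask))
```

Pixels 1 and 3 were never shaded, yet each is reconstructed from samples on its own side of the depth edge (≈0.2 and ≈0.8 respectively), which is exactly why a cross-bilateral filter tolerates such sparse shading.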
Gaussian Shadow Casting for Neural Characters
Neural character models can now reconstruct detailed geometry and texture
from video, but they lack explicit shadows and shading, leading to artifacts
when generating novel views and poses or during relighting. It is particularly
difficult to include shadows as they are a global effect and the required
casting of secondary rays is costly. We propose a new shadow model using a
Gaussian density proxy that replaces sampling with a simple analytic formula.
It supports dynamic motion and is tailored for shadow computation, thereby
avoiding the affine projection approximation and sorting required by the
closely related Gaussian splatting. Combined with a deferred neural rendering
model, our Gaussian shadows enable Lambertian shading and shadow casting with
minimal overhead. We demonstrate improved reconstructions, with better
separation of albedo, shading, and shadows in challenging outdoor scenes with
direct sunlight and hard shadows. Our method is able to optimize the light
direction without any input from the user. As a result, novel poses have fewer
shadow artifacts, and relighting in novel scenes is more realistic than with
state-of-the-art methods, providing new ways to pose neural characters in
novel environments and increasing their applicability.
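The "simple analytic formula" that replaces sampling is not spelled out in the abstract, but the flavour of sampling-free Gaussian shadowing can be illustrated with the closed-form line integral of a single isotropic 3D Gaussian density, which reduces to an error-function expression. This is an illustrative single-Gaussian sketch, not the paper's full shadow model:

```python
import math

def gaussian_line_integral(o, d, mu, sigma, amplitude):
    """Closed-form integral of the isotropic 3D Gaussian density
    a * exp(-|x - mu|^2 / (2 sigma^2)) along the ray o + t*d, t >= 0
    (d must be a unit vector). No ray sampling is needed."""
    om = [o[i] - mu[i] for i in range(3)]
    b = sum(d[i] * om[i] for i in range(3))   # d . (o - mu)
    c = sum(v * v for v in om)                # |o - mu|^2
    # |o + t*d - mu|^2 = (t + b)^2 + (c - b^2), so the ray integral is a
    # Gaussian tail, expressible with the complementary error function.
    peak = amplitude * math.exp(-(c - b * b) / (2 * sigma ** 2))
    return peak * sigma * math.sqrt(math.pi / 2) * math.erfc(b / (sigma * math.sqrt(2)))

def transmittance(o, d, mu, sigma, amplitude):
    """Beer-Lambert transmittance through the Gaussian density proxy."""
    return math.exp(-gaussian_line_integral(o, d, mu, sigma, amplitude))

# A shadow ray passing straight through the Gaussian centre from far away:
# the optical depth approaches a * sigma * sqrt(2*pi) ~ 2.5066 for a = sigma = 1.
tau = gaussian_line_integral((-10.0, 0, 0), (1.0, 0, 0), (0.0, 0, 0), 1.0, 1.0)
print(round(tau, 4))
```

Because each secondary ray costs one `erfc` evaluation per Gaussian instead of many density samples, shadow casting against such a proxy stays cheap even for dynamic characters.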
CurriculumLoc: Enhancing Cross-Domain Geolocalization through Multi-Stage Refinement
Visual geolocalization is a cost-effective and scalable task that involves
matching one or more query images, taken at some unknown location, to a set of
geo-tagged reference images. Existing methods, devoted to semantic feature
representation, have evolved towards robustness to a wide variety of differences
between query and reference images, including illumination and viewpoint changes
as well as scale and seasonal variations. However, practical visual
geolocalization approaches need to be robust under appearance changes and
extreme viewpoint variation while providing accurate global location estimates.
Therefore, inspired by curriculum design, in which humans learn general
knowledge first and then delve into professional expertise, we first recognize
the semantic scene and then measure the geometric structure. Our approach,
termed CurriculumLoc, involves a carefully designed multi-stage refinement
pipeline and a novel keypoint detection and description scheme with global
semantic awareness and local geometric verification. We rerank candidates and
solve a particular cross-domain perspective-n-point (PnP) problem based on these
keypoints and their corresponding descriptors; position refinement occurs
incrementally. Extensive experimental results on our collected dataset,
TerraTrack, and a benchmark dataset, ALTO, demonstrate that our approach
exhibits the desirable characteristics of a practical visual geolocalization
solution. Additionally, we achieve new high recall@1 scores of 62.6% and 94.5%
on ALTO with two different distance metrics, respectively. Dataset, code, and
trained models are publicly available at https://github.com/npupilab/CurriculumLoc.
Ambient occlusion and shadows for molecular graphics
Computer-based visualisations of molecules have been produced since as early as the 1950s to aid researchers in their understanding of biomolecular structures. An important consideration for Molecular Graphics software is the ability to visualise the 3D structure of the molecule clearly.
Recent advancements in computer graphics have led to improved rendering capabilities in visualisation tools. The capabilities of current shading languages allow the inclusion of advanced graphical effects, such as ambient occlusion and shadows, that
greatly improve the comprehension of the 3D shapes of molecules.
This thesis focuses on finding improved solutions for the real-time rendering of Molecular Graphics on modern-day computers. Methods for calculating ambient occlusion and both hard and soft shadows are examined and implemented to give the user a more complete experience when navigating large molecular structures.
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.

What you'll learn:
- The latest ray tracing techniques for developing real-time applications in multiple domains
- Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
- How to implement high-performance graphics for interactive visualizations, games, simulations, and more

Who this book is for:
- Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
- Students looking to learn about best practices in these areas
- Enthusiasts who want to understand and experiment with their new GPU
QRsens: dual-purpose quick response code with built-in colorimetric sensors
QRsens represents a family of Quick Response (QR) sensing codes for in-situ air analysis with a customized smartphone application to simultaneously read the QR code and the colorimetric sensors. Five colorimetric sensors (temperature, relative humidity (RH), and three gas sensors (CO2, NH3, and H2S)) were designed with the aim of proposing two end-use applications for ambient analysis, i.e., enclosed-space monitoring and smart packaging. Both the QR code and the colorimetric sensing inks were deposited by standard screen printing on white paper. To ensure minimal ambient light dependence of QRsens during real-time analysis, the smartphone application was programmed with an effective colour correction procedure based on black and white references for three standard illumination temperatures (3000, 4000, and 5000 K). Depending on the type of sensor being analysed, this integration achieved a reduction of approximately 71–87% in QRsens's dependence on the light temperature. After the illumination colour correction, the colorimetric gas sensors exhibited detection ranges of 0.7–4.1%, 0.7–7.5 ppm, and 0.13–0.7 ppm for CO2, NH3, and H2S, respectively. In summary, the study presents an affordable built-in multi-sensing platform in the form of QRsens for in-situ monitoring, with potential in different types of ambient air analysis applications.
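The black/white-reference colour correction described above can be sketched as a per-channel normalisation; the paper's exact procedure may differ, and the `correct_colour` helper and numbers below assume a simple linear illumination model:

```python
import numpy as np

def correct_colour(raw_rgb, black_ref, white_ref):
    """Normalise a sensor patch colour against black and white reference
    patches captured under the same illumination, per channel:
    corrected = (raw - black) / (white - black), clipped to [0, 1]."""
    raw = np.asarray(raw_rgb, dtype=float)
    black = np.asarray(black_ref, dtype=float)
    white = np.asarray(white_ref, dtype=float)
    return np.clip((raw - black) / (white - black), 0.0, 1.0)

# A warm (~3000 K) light source biases every patch, references included;
# normalising against the co-captured references cancels that shared bias.
print(correct_colour([180, 100, 60], [30, 20, 20], [230, 220, 220]))
```

Because the references sit on the same printed code as the sensors, both see the same illuminant, which is what makes a simple per-channel rescaling effective across colour temperatures.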