A directional occlusion shading model for interactive direct volume rendering
Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues to aid in understanding the structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive exploration of volumetric datasets: the shading model parameters or (multi-dimensional) transfer functions can be edited on the fly, and modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
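The image-space occlusion idea can be illustrated with a minimal slice-compositing sketch: each slice is lit by whatever the slices in front of it have not yet blocked, and each slice's opacity is spread into an occlusion buffer with a small blur standing in for the forward-peaked phase function. This is a hypothetical NumPy formulation, not the paper's exact derivation from the radiative transport equation.

```python
import numpy as np

def render_with_directional_occlusion(alpha_slices, sigma=1):
    """Composite view-aligned opacity slices front-to-back while
    accumulating an image-space occlusion buffer.  `sigma` is a
    hypothetical blur radius standing in for the phase-function cone."""
    h, w = alpha_slices[0].shape
    occlusion = np.zeros((h, w))       # light blocked by slices in front
    color = np.zeros((h, w))
    transmittance = np.ones((h, w))
    for alpha in alpha_slices:
        # Each sample is lit by the fraction of light not yet occluded.
        lit = alpha * (1.0 - occlusion)
        color += transmittance * lit
        transmittance *= (1.0 - alpha)
        # Spread this slice's opacity into the occlusion buffer with a
        # box blur, approximating a cone of incoming light directions.
        k = 2 * sigma + 1
        padded = np.pad(alpha, sigma, mode='edge')
        blurred = sum(padded[i:i + h, j:j + w]
                      for i in range(k) for j in range(k)) / k ** 2
        occlusion = np.clip(occlusion + blurred, 0.0, 1.0)
    return color
```

With two uniform half-opaque slices, the back slice receives only half the light, so the result is darker than plain front-to-back compositing would give.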
NaRPA: Navigation and Rendering Pipeline for Astronautics
This paper presents Navigation and Rendering Pipeline for Astronautics
(NaRPA) - a novel ray-tracing-based computer graphics engine to model and
simulate light transport for space-borne imaging. NaRPA incorporates lighting
models with attention to atmospheric and shading effects for the synthesis of
space-to-space and ground-to-space virtual observations. In addition to image
rendering, the engine also possesses point cloud, depth, and contour map
generation capabilities to simulate passive and active vision-based sensors and
to facilitate the design, testing, and verification of visual navigation
algorithms. Physically based rendering capabilities of NaRPA and the efficacy
of the proposed rendering algorithm are demonstrated using applications in
representative space-based environments. A key demonstration uses NaRPA
as a tool for generating stereo imagery, with application to 3D
coordinate estimation via triangulation. Another prominent application
is a novel differentiable rendering approach for image-based attitude
estimation, which highlights the efficacy of the NaRPA engine for
simulating vision-based navigation and guidance operations.
Comment: 49 pages, 22 figures
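The stereo demonstration rests on textbook two-view triangulation. A minimal linear (DLT) sketch is below; the camera matrices and pixel coordinates are illustrative, and NaRPA's actual pipeline may differ in detail.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    projection matrices and matched normalized pixel coordinates."""
    # Each observation x = P X contributes two homogeneous constraints.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the null vector of A: last right singular vector.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]    # de-homogenize
```

For example, with an identity-intrinsics stereo pair separated by a unit baseline, a point at depth 5 on the first camera's axis is recovered exactly.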
Image synthesis based on a model of human vision
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading.
However, psychophysical experiments have revealed that viewers attend to only certain informative regions within a presented image. Furthermore, these visually important regions have been shown to contain low-level visual feature differences that attract the viewer's attention.
This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions according to their visual importance. Efficiency gains are therefore reaped without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: first, the design of an appropriate region-based model of visual importance; and second, the modification of progressive rendering techniques to effect an importance-based rendering approach.
A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures.
A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering.
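The importance-guided refinement described above amounts to spending the ray budget where it matters. A simple budget allocator might look like this (a hypothetical sketch; in the thesis the importance values come from the fuzzy region model):

```python
import numpy as np

def allocate_samples(importance, total_samples):
    """Distribute a ray budget over image regions in proportion to
    their relative visual importance."""
    importance = np.asarray(importance, dtype=float)
    weights = importance / importance.sum()
    samples = np.floor(weights * total_samples).astype(int)
    # Hand out any rounding remainder to the most important regions.
    for i in np.argsort(-importance)[: total_samples - samples.sum()]:
        samples[i] += 1
    return samples
```

A region twice as important as its neighbours receives roughly twice the samples, so progressive refinement sharpens it first.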
This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
Gradient Domain Methods for Image-based Reconstruction and Rendering
This thesis describes new approaches to image-based 3D reconstruction and rendering. In contrast to previous work, our algorithms focus on image gradients instead of pixel values, which allows us to avoid many of the disadvantages of traditional techniques. A single pixel carries only very local information about the image content. A gradient, on the other hand, reveals the magnitude and direction in which the image content changes. Our techniques use this additional information to adapt dynamically to the image content. Especially in image regions without strong gradients, we can employ more suitable reconstruction models and render images with fewer artifacts. Overall, we present more accurate and robust results (both 3D models and renderings) compared to previous methods.
First, we present a multi-view stereo algorithm that combines traditional stereo reconstruction and shading-based reconstruction models in a single optimization scheme. By defining a gradient-based trade-off, our model removes the need for explicit regularization and can handle shading information without an explicit albedo model. This effectively combines the strengths of both reconstruction approaches and cancels out their weaknesses.
Our second method is an image-based rendering technique that directly renders gradients instead of pixels. The final image is then generated by integrating over the rendered gradients. We present a detailed description on how gradients can be moved directly in the image during rendering which allows us to create a fast approximation that improves the quality and speed of the integration step. Our method also handles occlusions and compared to traditional approaches we can achieve better results that are especially robust for scenes with reflective or textureless areas.
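The final integration step can be illustrated with a minimal sketch. Assuming a consistent (curl-free) forward-difference gradient field, an image is recovered up to an additive constant by cumulative summation; the thesis instead solves a least-squares (Poisson) problem, which also tolerates inconsistent gradients.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Reconstruct an image (up to a constant) from forward differences
    gx[i, j] = I[i, j+1] - I[i, j]  (shape h x (w-1)) and
    gy[i, j] = I[i+1, j] - I[i, j]  (shape (h-1) x w)."""
    h, w = gy.shape[0] + 1, gx.shape[1] + 1
    img = np.zeros((h, w))
    img[0, 1:] = np.cumsum(gx[0])                 # first row from x-gradients
    img[1:, :] = img[0] + np.cumsum(gy, axis=0)   # each column from y-gradients
    return img
```

Differentiating a known image and re-integrating reproduces it exactly, which makes the round trip easy to verify.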
Finally, we also present a new model for image warping. Here we apply different types of regularization constraints based on the gradients in the image. Especially when used for direct real-time rendering, this can handle larger distortions than traditional methods that use only a single type of regularization.
Overall, the results of this thesis show how shifting the focus from image pixels to image gradients can improve various aspects of image-based reconstruction and rendering. Some of the most challenging aspects, such as textureless areas in rendering and spatially varying albedo in reconstruction, are handled implicitly by our formulations, which also leads to more effective algorithms.
CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering
Intrinsic image decomposition is a challenging, long-standing computer vision
problem for which ground truth data is very difficult to acquire. We
explore the use of synthetic data for training CNN-based intrinsic image
decomposition models, which are then applied to real-world images. To
that end, we present CGIntrinsics, a new, large-scale dataset of
physically-based rendered images
of scenes with full ground truth decompositions. The rendering process we use
is carefully designed to yield high-quality, realistic images, which we find to
be crucial for this problem domain. We also propose a new end-to-end training
method that learns better decompositions by leveraging CGIntrinsics, and optionally IIW
and SAW, two recent datasets of sparse annotations on real-world images.
Surprisingly, we find that a decomposition network trained solely on our
synthetic data outperforms the state-of-the-art on both IIW and SAW, and
performance improves even further when IIW and SAW data is added during
training. Our work demonstrates the surprising effectiveness of
carefully rendered synthetic data for the intrinsic images task.
Comment: Paper for 'CGIntrinsics: Better Intrinsic Image Decomposition through
Physically-Based Rendering' published in ECCV, 2018
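Mixing dense synthetic supervision with sparse real-image annotations can be sketched as a combined objective. The hinge form and margin below are hypothetical illustrations of IIW-style ordinal supervision, not the paper's exact loss.

```python
import numpy as np

def ordinal_hinge(r_a, r_b, judgement, margin=0.12):
    """Penalty for one sparse human comparison of reflectance:
    judgement is +1 if point A is darker than B, -1 if lighter,
    0 if about equal.  The margin value is a hypothetical choice."""
    diff = np.log(r_a) - np.log(r_b)
    if judgement == 0:
        return max(0.0, abs(diff) - margin)
    return max(0.0, margin + judgement * diff)

def decomposition_loss(pred_albedo, gt_albedo=None, iiw_pairs=()):
    """Dense supervision where rendered ground truth exists, plus
    sparse ordinal terms on real images (unweighted sketch)."""
    loss = 0.0
    if gt_albedo is not None:
        loss += float(np.mean((pred_albedo - gt_albedo) ** 2))
    for a, b, judgement in iiw_pairs:
        loss += ordinal_hinge(pred_albedo[a], pred_albedo[b], judgement)
    return loss
```

A prediction consistent with a "darker" judgement incurs zero sparse penalty, while a contradicting one is penalized in proportion to the log-reflectance gap.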
SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild
We present SfSNet, an end-to-end learning framework for producing an accurate
decomposition of an unconstrained human face image into shape, reflectance and
illuminance. SfSNet is designed to reflect a physical Lambertian rendering
model. SfSNet learns from a mixture of labeled synthetic and unlabeled real
world images. This allows the network to capture low frequency variations from
synthetic and high frequency details from real images through the photometric
reconstruction loss. SfSNet consists of a new decomposition architecture with
residual blocks that learns a complete separation of albedo and normal. This is
used along with the original image to predict lighting. SfSNet produces
significantly better quantitative and qualitative results than state-of-the-art
methods for inverse rendering and independent normal and illumination
estimation.
Comment: Accepted to CVPR 2018 (Spotlight)
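The physical image-formation model such decompositions typically assume is multiplicative Lambertian shading under low-order spherical-harmonics lighting. A per-pixel sketch, using the standard second-order SH irradiance constants (the exact model SfSNet uses may differ in detail):

```python
import numpy as np

def sh_shading(normal, light):
    """Irradiance on a Lambertian surface with unit normal (x, y, z)
    under 9-coefficient second-order spherical-harmonics lighting,
    using the standard irradiance-formula constants."""
    x, y, z = normal
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    basis = np.array([
        c4,
        2 * c2 * y, 2 * c2 * z, 2 * c2 * x,
        2 * c1 * x * y, 2 * c1 * y * z,
        c3 * z * z - c5,
        2 * c1 * x * z,
        c1 * (x * x - y * y),
    ])
    return float(basis @ light)

def render_pixel(albedo, normal, light):
    # Multiplicative Lambertian image formation: I = albedo * shading.
    return albedo * sh_shading(normal, light)
```

Under a purely ambient light (only the first SH coefficient set), shading is constant and the rendered pixel is just a scaled albedo.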
Progressive refinement rendering of implicit surfaces
The visualisation of implicit surfaces can be an inefficient task when such surfaces are complex and highly detailed. Visualising a surface by first converting it to a
polygon mesh may lead to an excessive polygon count. Visualising a surface by direct ray casting is often a slow procedure. In this paper we present a progressive refinement renderer for implicit surfaces that are Lipschitz continuous. The renderer first displays a low resolution estimate of what the final image is going to be and, as the computation progresses, increases the quality of this estimate at an interactive frame rate. This renderer provides a quick previewing facility that significantly reduces the design cycle of a new and complex implicit surface. The renderer is also capable of completing an image faster than a conventional implicit surface rendering algorithm based on ray casting.
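The Lipschitz condition is what makes safe ray marching possible: if f has Lipschitz constant L, the zero set cannot lie closer than |f(p)|/L, so that distance is always a safe step. A minimal sphere-tracing sketch of this guarantee (the paper's progressive renderer builds on the same bound, but its refinement scheme is more involved):

```python
def sphere_trace(f, origin, direction, lipschitz=1.0, t_max=100.0, eps=1e-6):
    """March a ray toward the level set f = 0, taking steps of
    |f(p)| / L, which the Lipschitz bound guarantees cannot overshoot
    the surface.  `direction` is assumed to be a unit vector."""
    t = 0.0
    while t < t_max:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = f(p)
        if dist < eps:
            return t               # hit: parameter along the ray
        t += dist / lipschitz      # safe step from the Lipschitz bound
    return None                    # miss
```

For the unit sphere's signed distance (Lipschitz constant 1), a ray fired at it from distance 3 converges to the intersection at t = 2, while a ray that misses walks off to t_max and returns None.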
Neural Face Editing with Intrinsic Image Disentangling
Traditional face editing methods often require a number of sophisticated and
task specific algorithms to be applied one after the other --- a process that
is tedious, fragile, and computationally intensive. In this paper, we propose
an end-to-end generative adversarial network that infers a face-specific
disentangled representation of intrinsic face properties, including shape (i.e.
normals), albedo, and lighting, and an alpha matte. We show that this network
can be trained on "in-the-wild" images by incorporating an in-network
physically-based image formation module and appropriate loss functions. Our
disentangling latent representation allows for semantically relevant edits,
where one aspect of facial appearance can be manipulated while keeping
orthogonal properties fixed, and we demonstrate its use for a number of facial
editing applications.
Comment: CVPR 2017 oral