Interactive Vegetation Rendering with Slicing and Blending
Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification exploits the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the nearest two slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with perceptually similar quality to the original model.
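The core viewpoint-dependent step described above — picking the two slicings nearest to the camera azimuth and blending between them — can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function name, the assumption of evenly spaced slicings, and the linear blend weights are all my own choices based on the abstract's figures of 6 slicings around the tree.

```python
import math

# Hypothetical sketch: 6 slicings evenly spaced around the tree,
# i.e. one every 60 degrees of azimuth.
NUM_SLICINGS = 6
STEP = 2 * math.pi / NUM_SLICINGS  # angular spacing between slicings

def blend_slicings(view_azimuth: float):
    """Return the indices of the two slicings nearest to the given
    view azimuth (radians) and their blend weights (summing to 1)."""
    a = view_azimuth % (2 * math.pi)
    lower = int(a // STEP) % NUM_SLICINGS  # slicing "behind" the viewpoint
    upper = (lower + 1) % NUM_SLICINGS     # slicing "ahead" of the viewpoint
    t = (a - lower * STEP) / STEP          # fractional position in [0, 1)
    # Each slicing would be drawn with its weight as the alpha factor.
    return (lower, 1.0 - t), (upper, t)
```

A renderer would draw both slicings each frame, modulating their alpha by the returned weights, so the transition between slicings is smooth as the observer moves around the tree.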
MVF-Net: Multi-View 3D Face Morphable Model Regression
We address the problem of recovering the 3D geometry of a human face from a
set of facial images in multiple views. While recent studies have shown
impressive progress in 3D Morphable Model (3DMM) based facial reconstruction,
the settings are mostly restricted to a single view. There is an inherent
drawback in the single-view setting: the lack of reliable 3D constraints can
cause unresolvable ambiguities. We in this paper explore 3DMM-based shape
recovery in a different setting, where a set of multi-view facial images are
given as input. A novel approach is proposed to regress 3DMM parameters from
multi-view inputs with an end-to-end trainable Convolutional Neural Network
(CNN). Multi-view geometric constraints are incorporated into the network by
establishing dense correspondences between different views, leveraging a novel
self-supervised view alignment loss. The main ingredient of the view alignment
loss is a differentiable dense optical flow estimator that can backpropagate
the alignment errors between an input view and a synthetic rendering from
another input view, which is projected to the target view through the 3D shape
to be inferred. Through minimizing the view alignment loss, better 3D shapes
can be recovered such that the synthetic projections from one view to another
can better align with the observed image. Extensive experiments demonstrate the
superiority of the proposed method over other 3DMM methods.
Comment: 2019 Conference on Computer Vision and Pattern Recognition
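The view alignment loss described above can be illustrated with a minimal photometric sketch: project the inferred 3D shape into two views, sample the first image at its projections, and compare with the second. This is a hypothetical simplification, not the paper's implementation — the paper uses a differentiable dense optical flow estimator, whereas the nearest-neighbour sampling, `project` helper, and pinhole parameters below are stand-ins for illustration only.

```python
import numpy as np

# Illustrative sketch (not MVF-Net's code): photometric alignment of two
# views through a shared 3D shape. Focal length and image center below
# are arbitrary assumed values.

def project(points, pose, focal=500.0, center=128.0):
    """Pinhole projection of Nx3 points under a 3x4 [R|t] camera pose."""
    cam = points @ pose[:, :3].T + pose[:, 3]       # world -> camera frame
    uv = focal * cam[:, :2] / cam[:, 2:3] + center  # perspective divide
    return uv

def view_alignment_loss(img_a, img_b, shape, pose_a, pose_b):
    """Sample view A at the shape's projections and compare with view B."""
    uv_a = np.round(project(shape, pose_a)).astype(int)
    uv_b = np.round(project(shape, pose_b)).astype(int)
    # Nearest-neighbour sampling for brevity; a real implementation would
    # use differentiable bilinear sampling so gradients reach the shape.
    colors_a = img_a[uv_a[:, 1], uv_a[:, 0]]
    colors_b = img_b[uv_b[:, 1], uv_b[:, 0]]
    return np.abs(colors_a - colors_b).mean()
```

When the inferred shape is correct, corresponding projections land on matching pixels and the loss is small; an incorrect shape misaligns the projections and the loss grows, which is the signal the network minimizes.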
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
In this paper, we investigate deep image synthesis guided by sketch, color,
and texture. Previous image synthesis methods can be controlled by sketch and
color strokes but we are the first to examine texture control. We allow a user
to place a texture patch on a sketch at arbitrary locations and scales to
control the desired output texture. Our generative network learns to synthesize
objects consistent with these texture suggestions. To achieve this, we develop
a local texture loss in addition to adversarial and content loss to train the
generative network. We conduct experiments using sketches generated from real
images and textures sampled from a separate texture database and results show
that our proposed algorithm is able to generate plausible images that are
faithful to user controls. Ablation studies show that our proposed pipeline can
generate more realistic images than adapting existing methods directly.
Comment: CVPR 2018 spotlight
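One common way to compare texture statistics of an output region against a user-supplied patch, as the local texture loss above does, is via Gram matrices of feature maps. The sketch below is a hypothetical illustration of that idea, not TextureGAN's actual loss, whose formulation may differ in detail.

```python
import numpy as np

# Illustrative sketch of a Gram-matrix texture statistic; the function
# names and normalization are assumptions, not taken from the paper.

def gram(features):
    """Gram matrix of a (channels, height, width) feature patch."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def local_texture_loss(feat_out, feat_patch):
    """Compare texture statistics of the generated region against the
    region covered by the user's texture patch."""
    return float(np.mean((gram(feat_out) - gram(feat_patch)) ** 2))
```

In training, such a term would be added to the adversarial and content losses, pushing the generator to reproduce the patch's texture statistics only where the user placed the patch.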
Using Different Data Sources for New Findings in Visualization of Highly Detailed Urban Data
Measurement of infrastructure has evolved considerably in recent years. Scanning systems have become more
precise, and many methods have been found to add and improve content created for the analysis of buildings
and landscapes. As a result, the sheer amount of data has increased significantly, and new algorithms had to
be found to visualize these data for further exploration. Additionally, many data types and formats originate
from different sources, such as Dibit's hybrid scanning systems, which deliver laser-scanned point clouds and
photogrammetric texture images. These are usually analyzed separately. Combinations of different types of
data are not widely used but might lead to new findings and improved data exploration.
In our work we use different data formats such as meshes, unprocessed point clouds, and polylines in tunnel
visualization to give experts a tool to explore existing datasets in depth with a wide variety of possibilities.
The diverse creation of datasets leads to new challenges for preprocessing, out-of-core rendering, and
efficient fusion of this varying information. Interactive analysis of different data formats also requires
several approaches and is usually difficult to merge into one application.
In this paper we describe the challenges and advantages of the combination of different data sources in
tunnel visualization. Large meshes with high resolution textures are merged with dense point clouds and
additional measurements. Interactive analysis can also create additional information, which has to be
integrated precisely to prevent errors and misinterpretation. We present the basic algorithms used for
heterogeneous data formats, how we combined them, and what advantages our methods create.
Several datasets evolve over time. This dynamic is also considered in our visualization and analysis methods
to enable change detection. For tunnel monitoring, this allows investigating the entire history of the
construction project and helps to make better-informed decisions in subsequent construction phases or for
repairs.
Several methods are merged, like the data they are based on, enabling new ways of data exploration. In
analyzing this new approach to heterogeneous datasets, we conclude that the combination of different
sources leads to a better solution than the sum of its parts.
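A minimal way to picture the fusion described above is a single scene container whose layers carry heterogeneous data (meshes, point clouds, polylines) tagged with a capture date, so change detection can compare epochs. All names below are hypothetical illustrations, not taken from the paper's system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a heterogeneous tunnel scene; the paper's
# actual data model and out-of-core machinery are far more involved.

@dataclass
class Layer:
    name: str
    kind: str           # "mesh", "point_cloud", or "polyline"
    epoch: str          # capture date, e.g. "2014-03"
    data: object = None # payload, e.g. vertices or points

@dataclass
class TunnelScene:
    layers: list = field(default_factory=list)

    def add(self, layer: Layer):
        self.layers.append(layer)

    def epochs(self):
        """Distinct capture dates, for browsing the construction history."""
        return sorted({layer.epoch for layer in self.layers})
```

Keeping every dataset behind one interface like this is what lets a viewer overlay a high-resolution textured mesh with a dense point cloud and measurement polylines, and step through capture epochs for change detection.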